WorldWideScience

Sample records for source model derived

  1. Computing Pathways in Bio-Models Derived from Bio-Science Text Sources

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik; Nilsson, Jørgen Fischer

    2015-01-01

    This paper outlines a system, OntoScape, serving to accomplish complex inference tasks on knowledge bases and bio-models derived from life-science text corpora. The system applies so-called natural logic, a form of logic which is readable for humans. This logic affords ontological representations ... of complex terms appearing in the text sources. Along with logical propositions, the system applies a semantic graph representation facilitating calculation of bio-pathways. More generally, the system affords means of query answering appealing to general and domain-specific inference rules.

  2. Modelling of H.264 MPEG2 TS Traffic Source

    Directory of Open Access Journals (Sweden)

    Stanislav Klucik

    2013-01-01

    This paper deals with IPTV traffic source modelling. Traffic sources are used for simulation, emulation and real network testing. The model is derived from known recorded traffic sources that are analysed and statistically processed. As the results show, the proposed model produces network traffic parameters very similar to those of the known traffic source when used in a simulated network.

  3. Quasi-homogeneous partial coherent source modeling of multimode optical fiber output using the elementary source method

    Science.gov (United States)

    Fathy, Alaa; Sabry, Yasser M.; Khalil, Diaa A.

    2017-10-01

    Multimode fibers (MMF) have many applications in illumination, spectroscopy, sensing and even in optical communication systems. In this work, we present a model for the MMF output field assuming the fiber end as a quasi-homogeneous source. The fiber end is modeled by a group of partially coherent elementary sources, spatially shifted and uncorrelated with each other. The elementary source distribution is derived from the far-field intensity measurement, while the weighting function of the sources is derived from the fiber-end intensity measurement. The model is compared with practical measurements for fibers with different core/cladding diameters at different propagation distances and for different input excitations: laser, white light and LED. The obtained results show a normalized root mean square error of less than 8% in the intensity profile in most cases, even when the fiber end surface is not perfectly cleaved. Also, the comparison with the Gaussian-Schell model results shows better agreement with the measurements. In addition, the complex degree of coherence derived from the model results is compared with the theoretical predictions of the modified Van Zernike equation, showing very good agreement, which strongly supports the assumption that the large-core MMF can be considered a quasi-homogeneous source.

  4. Pathway computation in models derived from bio-science text sources

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik; Jensen, Per Anker

    2017-01-01

    This paper outlines a system, OntoScape, serving to accomplish complex inference tasks on knowledge bases and bio-models derived from life-science text corpora. The system applies so-called natural logic, a form of logic which is readable for humans. This logic affords ontological representations...

  5. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models ...

  6. A statistical model for deriving probability distributions of contamination for accidental releases

    International Nuclear Information System (INIS)

    ApSimon, H.M.; Davison, A.C.

    1986-01-01

    Results generated from a detailed long-range transport model, MESOS, simulating dispersal of a large number of hypothetical releases of radionuclides in a variety of meteorological situations over Western Europe have been used to derive a simpler statistical model, MESOSTAT. This model may be used to generate probability distributions of different levels of contamination at a receptor point 100-1000 km or so from the source (for example, across a frontier in another country) without considering individual release and dispersal scenarios. The model is embodied in a series of equations involving parameters which are determined from such factors as distance between source and receptor, nuclide decay and deposition characteristics, release duration, and geostrophic windrose at the source. Suitable geostrophic windrose data have been derived for source locations covering Western Europe. Special attention has been paid to the relatively improbable extreme values of contamination at the top end of the distribution. The MESOSTAT model and its development are described, with illustrations of its use and comparison with the original more detailed modelling techniques. (author)

  7. The Commercial Open Source Business Model

    Science.gov (United States)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  8. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
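
    The construction described above can be illustrated with a minimal sketch: draw a random slip distribution whose covariance matrix encodes target 1-point statistics (mean, standard deviation) and 2-point statistics (an auto-correlation). The 1-D fault, the exponential correlation model and all parameter values below are illustrative assumptions, not those of the paper.

        import numpy as np

        n, dx = 64, 0.5                 # subfaults along strike, spacing (km)
        corr_len = 5.0                  # assumed auto-correlation length (km)
        mean_slip, std_slip = 1.0, 0.5  # assumed 1-point statistics (m)

        x = np.arange(n) * dx
        # covariance matrix built from the target 2-point statistics
        C = std_slip**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

        L = np.linalg.cholesky(C + 1e-10 * np.eye(n))  # jitter for stability
        slip = mean_slip + L @ np.random.standard_normal(n)
        slip = np.clip(slip, 0.0, None)                # slip is non-negative

    For a 2-D fault and several source parameters (slip, rupture velocity, rise time), the covariance matrix would be assembled blockwise from the target auto- and cross-correlations in the same way.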

  9. Derivative interactions and perturbative UV contributions in N Higgs doublet models

    Energy Technology Data Exchange (ETDEWEB)

    Kikuta, Yohei [KEK Theory Center, KEK, Tsukuba (Japan); The Graduate University for Advanced Studies, Department of Particle and Nuclear Physics, Tsukuba (Japan); Yamamoto, Yasuhiro [Universidad de Granada, Departamento de Fisica Teorica y del Cosmos, Facultad de Ciencias and CAFPE, Granada (Spain)

    2016-05-15

    We study the Higgs derivative interactions in models including an arbitrary number of Higgs doublets. These interactions are generated in two ways: one is higher-order corrections in composite Higgs models, and the other is the integration out of heavy scalars and vectors. In the latter case, three-point couplings between the Higgs doublets and these heavy states are the sources of the derivative interactions, and their representations are constrained by the requirement that they couple to the doublets. We explicitly calculate all derivative interactions generated by integrating out these heavy states. Their degrees of freedom and the conditions for imposing custodial symmetry are discussed. We also study vector boson scattering processes in a couple of two Higgs doublet models to see experimental signals of the derivative interactions; they are affected differently by each heavy field. (orig.)

  10. Xiphoid Process-Derived Chondrocytes: A Novel Cell Source for Elastic Cartilage Regeneration

    Science.gov (United States)

    Nam, Seungwoo; Cho, Wheemoon; Cho, Hyunji; Lee, Jungsun

    2014-01-01

    Reconstruction of elastic cartilage requires a source of chondrocytes that display a reliable differentiation tendency. Predetermined tissue progenitor cells are ideal candidates for meeting this need; however, it is difficult to obtain donor elastic cartilage tissue because most elastic cartilage serves important functions or forms external structures, making these tissues indispensable. We found vestigial cartilage tissue in xiphoid processes and characterized it as hyaline cartilage in the proximal region and elastic cartilage in the distal region. Xiphoid process-derived chondrocytes (XCs) showed superb in vitro expansion ability based on colony-forming unit fibroblast assays, cell yield, and cumulative cell growth. On induction of differentiation into mesenchymal lineages, XCs showed a strong tendency toward chondrogenic differentiation. An examination of the tissue-specific regeneration capacity of XCs in a subcutaneous-transplantation model and autologous chondrocyte implantation model confirmed reliable regeneration of elastic cartilage regardless of the implantation environment. On the basis of these observations, we conclude that xiphoid process cartilage, the only elastic cartilage tissue source that can be obtained without destroying external shape or function, is a source of elastic chondrocytes that show superb in vitro expansion and reliable differentiation capacity. These findings indicate that XCs could be a valuable cell source for reconstruction of elastic cartilage. PMID:25205841

  11. Distinct transmissibility features of TSE sources derived from ruminant prion diseases by the oral route in a transgenic mouse model (TgOvPrP4) overexpressing the ovine prion protein

    Directory of Open Access Journals (Sweden)

    Jean-Noël Arsac

    Transmissible spongiform encephalopathies (TSEs) are a group of fatal neurodegenerative diseases associated with a misfolded form of host-encoded prion protein (PrP). Some of them, such as classical bovine spongiform encephalopathy in cattle (BSE), transmissible mink encephalopathy (TME), kuru and variant Creutzfeldt-Jakob disease in humans, are acquired by oral-route exposure to infected tissues. We investigated the possible transmission by the oral route of a panel of strains derived from ruminant prion diseases in a transgenic mouse model (TgOvPrP4) overexpressing the ovine prion protein (A136R154Q171) under the control of the neuron-specific enolase promoter. Sources derived from Nor98, CH1641 or 87V scrapie sources, as well as sources derived from L-type BSE or cattle-passaged TME, failed to transmit by the oral route, whereas those derived from classical BSE and classical scrapie were successfully transmitted. Apart from a possible effect of the passage history of the TSE agent in the inocula, this implied the occurrence of subtle molecular changes in the protease-resistant prion protein (PrPres) following oral transmission, which raises concerns about our ability to correctly identify sheep that might be orally infected by the BSE agent in the field. Our results provide proof of principle that transgenic mouse models can be used to examine the transmissibility of TSE agents by the oral route, providing novel insights regarding the pathogenesis of prion diseases.

  12. Coda-derived source spectra, moment magnitudes and energy-moment scaling in the western Alps

    Science.gov (United States)

    Morasca, P.; Mayeda, K.; Malagnini, L.; Walter, William R.

    2005-01-01

    A stable estimate of the earthquake source spectra in the western Alps is obtained using an empirical method based on coda envelope amplitude measurements described by Mayeda et al. for events ranging between MW ~1.0 and ~5.0. Path corrections for consecutive narrow frequency bands ranging between 0.3 and 25.0 Hz were included using a simple 1-D model for five three-component stations of the Regional Seismic network of Northwestern Italy (RSNI). The 1-D assumption performs well, even though the region is characterized by a complex structural setting involving strong lateral variations in the Moho depth. For frequencies less than 1.0 Hz, we tied our dimensionless, distance-corrected coda amplitudes to an absolute scale in units of dyne cm by using independent moment magnitudes from long-period waveform modelling for three moderate magnitude events in the region. For the higher frequencies, we used small events as empirical Green's functions, with corner frequencies above 25.0 Hz. For each station, the procedure yields frequency-dependent corrections that account for site effects, including those related to fmax, as well as for S-to-coda transfer function effects. After the calibration was completed, the corrections were applied to the entire data set composed of 957 events. Our findings using the coda-derived source spectra are summarized as follows: (i) we derived stable estimates of seismic moment, M0 (and hence MW), as well as radiated S-wave energy, ES, from waveforms recorded by as few as one station, for events that were too small to be waveform modelled (i.e. events less than MW ~3.5); (ii) the source spectra were used to derive an equivalent local magnitude, ML(coda), that is in excellent agreement with the network-averaged values using direct S waves; (iii) the scaled energy, ER/M0, where ER is the radiated seismic energy, is comparable to results from other tectonically active regions (e.g. western USA, Japan) and supports the idea that there is a fundamental ...
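
    As a worked detail, the moment magnitudes quoted above follow from seismic moments in dyne cm through the standard Hanks-Kanamori relation; a minimal sketch:

        import numpy as np

        def moment_magnitude(M0_dyne_cm):
            # Hanks & Kanamori (1979): MW = (2/3)*log10(M0) - 10.7,
            # with M0 in dyne cm (the unit used in this abstract)
            return (2.0 / 3.0) * np.log10(M0_dyne_cm) - 10.7

        print(moment_magnitude(2e21))  # ~3.5, the waveform-modelling limit cited above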

  13. Heuristic derivation of the Rossi-alpha formula for a pulsed neutron source

    International Nuclear Information System (INIS)

    Baeten, P.

    2004-01-01

    Expressions for the Rossi-alpha distribution for a pulsed neutron source were derived using a heuristic derivation based on the method of joint detection probability. This heuristic technique was chosen over the more rigorous master equation method due to its simplicity and the complementarity of the two techniques. The derived equations also take into account the presence of delayed neutrons and intrinsic neutron sources, which often cannot be neglected in source-driven subcritical cores. The obtained expressions showed that the ratio of the correlated to the uncorrelated signal in the Rossi-Alpha distribution for a Pulsed Source (RAPS) is strongly increased compared to the case of a standard Rossi-alpha distribution for a continuous source. It was also demonstrated that the RAPS technique allows four independent measurement quantities to be determined, instead of three with the standard Rossi-alpha technique. Hence, it is no longer necessary to combine the Rossi-alpha technique with another method to measure the reactivity expressed in dollars. Both properties, the increased signal-to-noise ratio of the correlated signal and the measurement of a fourth quantity, make the RAPS technique an excellent candidate for the measurement of kinetic parameters in source-driven subcritical assemblies
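
    The pulsed-source (RAPS) expressions themselves are not reproduced here, but the classic continuous-source Rossi-alpha shape they generalize, a correlated exponential decay on top of an uncorrelated floor, can be fitted as in this sketch with synthetic numbers:

        import numpy as np
        from scipy.optimize import curve_fit

        def rossi_alpha(tau, A, alpha, C):
            # correlated term A*exp(-alpha*tau) plus uncorrelated floor C
            return A * np.exp(-alpha * tau) + C

        tau = np.linspace(0.0, 5e-3, 100)                 # time lags (s)
        counts = rossi_alpha(tau, 200.0, 1500.0, 50.0)    # synthetic histogram
        counts = counts + np.random.poisson(5, tau.size)  # counting noise
        (A, alpha, C), _ = curve_fit(rossi_alpha, tau, counts,
                                     p0=(100.0, 1000.0, 40.0))
        # alpha estimates the prompt-neutron decay constant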

  14. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Science.gov (United States)

    Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
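
    The inverse transform sampling step mentioned above can be sketched as follows; the binned toy spectrum and all numbers are assumptions for illustration only:

        import numpy as np

        def sample_from_histogram(edges, counts, n, rng):
            # build the CDF of the tabulated PDF and invert it with
            # uniform random numbers (inverse transform sampling)
            cdf = np.cumsum(counts, dtype=float)
            cdf /= cdf[-1]
            idx = np.searchsorted(cdf, rng.random(n))
            lo, hi = edges[idx], edges[idx + 1]
            return lo + (hi - lo) * rng.random(n)  # uniform within each bin

        rng = np.random.default_rng(0)
        edges = np.linspace(0.0, 6.0, 61)          # toy energy bins (MeV)
        counts = np.exp(-0.5 * ((edges[:-1] - 1.5) / 0.8) ** 2)
        energies = sample_from_histogram(edges, counts, 100000, rng)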

  15. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Directory of Open Access Journals (Sweden)

    Obioma Nwankwo

    To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.

  16. Quantification of source-term profiles from near-field geochemical models

    International Nuclear Information System (INIS)

    McKinley, I.G.

    1985-01-01

    A geochemical model of the near-field is described which quantitatively treats the processes of engineered barrier degradation, buffering of aqueous chemistry by solid phases, nuclide solubilization and transport through the near-field and release to the far-field. The radionuclide source-terms derived from this model are compared with those from a simpler model used for repository safety analysis. 10 refs., 2 figs., 2 tabs

  17. Heat source reconstruction from noisy temperature fields using an optimised derivative Gaussian filter

    Science.gov (United States)

    Delpueyo, D.; Balandraud, X.; Grédiac, M.

    2013-09-01

    The aim of this paper is to present a post-processing technique based on a derivative Gaussian filter to reconstruct heat source fields from temperature fields measured by infrared thermography. Heat sources can be deduced from temperature variations thanks to the heat diffusion equation. Filtering and differentiating are key issues, closely related here because the temperature fields being processed are unavoidably noisy. We focus only on the diffusion term because it is the most difficult term to estimate in the procedure, the reason being that it involves spatial second derivatives (a Laplacian for isotropic materials). This quantity can be reasonably estimated using a convolution of the temperature variation fields with second derivatives of a Gaussian function. The study is first based on synthetic temperature variation fields corrupted by added noise. The filter is optimised in order to best reconstruct the heat source fields. The influence of both the size and the level of a localised heat source is discussed. The results are also compared with another type of processing based on an averaging filter. The second part of this study presents an application to experimental temperature fields measured with an infrared camera on a thin aluminium-alloy plate. Heat sources are generated with an electric heating patch glued on the specimen surface. Heat source fields reconstructed from measured temperature fields are compared with the imposed heat sources. The results illustrate the relevance of the derivative Gaussian filter for reliably extracting heat sources from noisy temperature fields in the experimental thermomechanics of materials.
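
    The core operation, estimating the Laplacian of a noisy temperature field by convolution with second derivatives of a Gaussian, can be sketched with scipy's separable Gaussian filter; the synthetic field, noise level and filter width are illustrative assumptions:

        import numpy as np
        from scipy import ndimage

        def gaussian_laplacian(T, sigma_px):
            # second derivative along each axis of a Gaussian-smoothed
            # field, i.e. convolution with Gaussian second derivatives
            d2y = ndimage.gaussian_filter(T, sigma=sigma_px, order=(2, 0))
            d2x = ndimage.gaussian_filter(T, sigma=sigma_px, order=(0, 2))
            return d2x + d2y  # per pixel^2; divide by pixel_size**2 for K/m^2

        y, x = np.mgrid[0:128, 0:128]
        T = np.exp(-((x - 64)**2 + (y - 64)**2) / (2 * 15.0**2))  # hot spot
        T = T + 0.01 * np.random.standard_normal(T.shape)         # camera noise
        lap = gaussian_laplacian(T, sigma_px=6.0)

    The heat source field then follows from the diffusion equation, s = rho*c*dT/dt - k*(Laplacian of T), with the diffusion term supplied by the filtered estimate above.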

  18. Invariant models in the inversion of gravity and magnetic fields and their derivatives

    Science.gov (United States)

    Ialongo, Simone; Fedi, Maurizio; Florio, Giovanni

    2014-11-01

    In potential field inversion problems we usually solve underdetermined systems, and realistic solutions may be obtained by introducing a depth-weighting function in the objective function. The choice of the exponent of this power law is crucial. It has been suggested to determine it from the field decay due to a single source block; alternatively, it has been defined as the structural index of the investigated source distribution. In both cases, when k-order derivatives of the potential field are considered, the depth-weighting exponent has to be increased by k with respect to that of the potential field itself in order to obtain consistent source model distributions. We show instead that invariant and realistic source-distribution models are obtained using the same depth-weighting exponent for the magnetic field and for its k-order derivatives. A similar behavior also occurs in the gravity case. In practice we found that the depth-weighting exponent is invariant for a given source model and equal to that of the corresponding magnetic field, in the magnetic case, and of the 1st derivative of the gravity field, in the gravity case. In the case of the regularized inverse problem, with depth weighting and general constraints, the mathematical demonstration of such invariance is difficult, because of its non-linearity and of its variable form, due to the different constraints used. However, tests performed on a variety of synthetic cases seem to confirm the invariance of the depth-weighting exponent. A final consideration regards the role of the regularization parameter; we show that regularization can severely affect the depth to the source, because the estimated depth tends to increase proportionally with the size of the regularization parameter. Hence, some care is needed in handling the combined effect of the regularization parameter and depth weighting.
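
    For concreteness, the depth weighting in question is commonly written w(z) = (z + z0)^(-beta/2); a minimal sketch, with z0 and beta as illustrative values:

        import numpy as np

        def depth_weighting(z, z0, beta):
            # counteracts the decay of potential-field kernels with depth;
            # beta is the exponent whose invariance is discussed above
            return (z + z0) ** (-beta / 2.0)

        z = np.linspace(0.0, 5000.0, 51)           # cell depths (m)
        w = depth_weighting(z, z0=50.0, beta=3.0)  # e.g. a magnetic case

    The paper's point is that, for a given source model, the same beta can be kept when inverting the k-order derivatives of the field, rather than being increased by k.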

  19. On the derivation of approximations to cellular automata models and the assumption of independence.

    Science.gov (United States)

    Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V

    2014-07-01

    Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes; however, these models hold only under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model: the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model, and the assumption of independence between the states of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the inaccuracy between our approximation and the cellular automata is based entirely on the assumption of independence. Copyright © 2014 Elsevier Inc. All rights reserved.
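
    As a small example of the independence assumption at issue, the mean-field closure for a proliferation-only cellular automaton reduces the average site occupancy C(t) to a logistic equation; a sketch under that assumption:

        # mean-field (independence) closure for a proliferation-only CA:
        # dC/dt = p * C * (1 - C), integrated with forward Euler
        p, dt, steps = 0.1, 0.01, 5000
        C = 0.05
        trajectory = []
        for _ in range(steps):
            C += dt * p * C * (1.0 - C)
            trajectory.append(C)

    Discrepancies between such a trajectory and the occupancy averaged over many CA realizations are precisely the independence-assumption error the paper isolates.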

  20. Deriving profiles of incident and scattered neutrons for TOF experiments with the spallation sources

    International Nuclear Information System (INIS)

    Watanabe, Hidehiro

    1993-01-01

    A formula that closely matches the incident profile of epi-thermal and thermal neutrons for time of flight experiments carried out with a spallation neutron source and moderator scheme is derived based on the slowing-down and diffusing-out processes in a moderator. This analytical description also enables us to predict burst-function profiles; these profiles are verified by a comparison with a diffraction pattern. The limits of the analytical model are discussed through the predictable peak position shift brought about by the slowing-down process. (orig.)

  1. Effects of source shape on the numerical aperture factor with a geometrical-optics model.

    Science.gov (United States)

    Wan, Der-Shen; Schmit, Joanna; Novak, Erik

    2004-04-01

    We study the effects of an extended light source on the calibration of an interference microscope, also referred to as an optical profiler. Theoretical and experimental numerical aperture (NA) factors for circular and linear light sources along with collimated laser illumination demonstrate that the shape of the light source or effective aperture cone is critical for a correct NA factor calculation. In practice, more-accurate results for the NA factor are obtained when a linear approximation to the filament light source shape is used in a geometric model. We show that previously measured and derived NA factors show some discrepancies because a circular rather than linear approximation to the filament source was used in the modeling.

  2. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans.

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-07

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.
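
    The spectrum-derivation step can be sketched schematically: given precomputed depth-dose curves for a few spectral bins, Levenberg-Marquardt least squares recovers the bin weights from a measured PDD. The basis curves, attenuation values and noise level below are toy assumptions:

        import numpy as np
        from scipy.optimize import least_squares

        depths = np.linspace(0.0, 20.0, 41)            # depth in water (cm)
        mu = np.array([0.35, 0.28, 0.24, 0.21, 0.19])  # toy attenuation (1/cm)
        D_mono = np.exp(-np.outer(mu, depths))         # per-bin PDD curves
        w_true = np.array([0.10, 0.30, 0.35, 0.20, 0.05])
        pdd_meas = w_true @ D_mono + 0.002 * np.random.standard_normal(depths.size)

        def residual(w):
            return w @ D_mono - pdd_meas

        fit = least_squares(residual, x0=np.full(5, 0.2), method='lm')
        w_fit = fit.x / fit.x.sum()  # normalized spectral weights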

  3. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.

  4. Modeling Secondary Organic Aerosol Formation From Emissions of Combustion Sources

    Science.gov (United States)

    Jathar, Shantanu Hemant

    Atmospheric aerosols exert a large influence on the Earth's climate and cause adverse public health effects, reduced visibility and material degradation. Secondary organic aerosol (SOA), defined as the aerosol mass arising from the oxidation products of gas-phase organic species, accounts for a significant fraction of the submicron atmospheric aerosol mass. Yet, there are large uncertainties surrounding the sources, atmospheric evolution and properties of SOA. This thesis combines laboratory experiments, extensive data analysis and global modeling to investigate the contribution of semi-volatile and intermediate volatility organic compounds (SVOC and IVOC) from combustion sources to SOA formation. The goals are to quantify the contribution of these emissions to ambient PM and to evaluate and improve models to simulate its formation. To create a database for model development and evaluation, a series of smog chamber experiments were conducted on evaporated fuel, which served as a surrogate for real-world combustion emissions. Diesel formed the most SOA, followed by conventional jet fuel and jet fuel derived from natural gas, gasoline, and jet fuel derived from coal. The variability in SOA formation from actual combustion emissions can be partially explained by the composition of the fuel. Several models were developed and tested along with existing models using SOA data from smog chamber experiments conducted using evaporated fuel (this work: gasoline, Fischer-Tropsch fuel, jet fuel, diesel) and published data on dilute combustion emissions (aircraft, on- and off-road gasoline, on- and off-road diesel, wood burning, biomass burning). For all of the SOA data, existing models under-predicted SOA formation if SVOC/IVOC were not included. For the evaporated fuel experiments, when SVOC/IVOC were included, predictions using the existing SOA model were brought to within a factor of two of measurements with minor adjustments to model parameterizations. Further, a volatility ...

  5. Retrieving global aerosol sources from satellites using inverse modeling

    Directory of Open Access Journals (Sweden)

    O. Dubovik

    2008-01-01

    Understanding aerosol effects on global climate requires knowing the global distribution of tropospheric aerosols. By accounting for aerosol sources, transports, and removal processes, chemical transport models simulate the global aerosol distribution using archived meteorological fields. We develop an algorithm for retrieving global aerosol sources from satellite observations of aerosol distribution by inverting the GOCART aerosol transport model.

    The inversion is based on a generalized, multi-term least-squares-type fitting, allowing flexible selection and refinement of a priori algorithm constraints. For example, limitations can be placed on retrieved quantity partial derivatives, to constrain global aerosol emission space and time variability in the results. Similarities and differences between commonly used inverse modeling and remote sensing techniques are analyzed. To retain the high space and time resolution of long-period, global observational records, the algorithm is expressed using adjoint operators.

    Successful global aerosol emission retrievals at 2°×2.5° resolution were obtained by inverting GOCART aerosol transport model output, assuming constant emissions over the diurnal cycle and neglecting aerosol compositional differences. In addition, fine and coarse mode aerosol emission sources were inverted separately from MODIS fine and coarse mode aerosol optical thickness data, respectively. These assumptions are justified, based on observational coverage and accuracy limitations, producing valuable aerosol source locations and emission strengths. From two weeks of daily MODIS observations during August 2000, the global placement of fine mode aerosol sources agreed with available independent knowledge, even though the inverse method did not use any a priori information about aerosol sources and was initialized with a "zero aerosol emission" assumption. Retrieving coarse mode aerosol emissions was less successful ...

  6. BRIEF COMMENTS REGARDING THE INDIRECT (OR DERIVED) SOURCES OF LABOR LAW

    OpenAIRE

    Brîndușa Vartolomei

    2015-01-01

    In the field of the law governing legal work relations, one of the features that contributes to defining the autonomy of labor law is the existence of specific sources of law, consisting of regulations on the functioning of the employer, internal regulations, collective labor agreements, and instructions regarding security and health at work. In addition, in the practical field of labor relations, some indirect (or derived) sources of law were also pointed out ...

  7. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Smidts, C.; Sova, D.

    1999-01-01

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional level as well as at the systems level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified

  8. Modeling Volcanic Eruption Parameters by Near-Source Internal Gravity Waves.

    Science.gov (United States)

    Ripepe, M; Barfucci, G; De Angelis, S; Delle Donne, D; Lacanna, G; Marchetti, E

    2016-11-10

    Volcanic explosions release large amounts of hot gas and ash into the atmosphere, forming plumes that rise several kilometers above eruptive vents and can pose serious risks to human health and aviation, even several thousand kilometers from the volcanic source. However, even the most sophisticated atmospheric and eruptive plume dynamics models require input parameters such as the duration of the ejection phase and the total mass erupted to constrain the quantity of ash dispersed in the atmosphere and to efficiently evaluate the related hazard. The sudden ejection of this large quantity of ash can perturb the equilibrium of the whole atmosphere, triggering oscillations well below the frequencies of acoustic waves, down to the much longer periods typical of gravity waves. We show that atmospheric gravity oscillations induced by volcanic eruptions and recorded by pressure sensors can be modeled as a compact source representing the rate of erupted volcanic mass. We demonstrate the feasibility of using gravity waves to derive eruption source parameters such as duration of the injection and total erupted mass, with direct application in constraining plume and ash dispersal models.

  9. Marine-derived fungi as a source of proteases

    Digital Repository Service at National Institute of Oceanography (India)

    Kamat, T.; Rodrigues, C.; Naik, C.G.

    ... of marine-derived fungi in order to identify the potential sources. Sponges and corals were collected by SCUBA diving, from a depth of 8 to 10 m, from the coastal waters of Mandapam, Tamil Nadu (9°16' N; 79°11' E). The samples comprised a soft coral Sinularia... pieces of approximately 2x2 cm were cut out aseptically. These fourteen pieces of each organism were subjected to two different treatments. In the first case, seven pieces were vortexed four times, for 20 seconds each, with sterile seawater, while...

  10. Modeling of negative ion extraction from a magnetized plasma source: Derivation of scaling laws and description of the origins of aberrations in the ion beam

    Science.gov (United States)

    Fubiani, G.; Garrigues, L.; Boeuf, J. P.

    2018-02-01

    We model the extraction of negative ions from a high brightness high power magnetized negative ion source. The model is a Particle-In-Cell (PIC) algorithm with Monte-Carlo Collisions. The negative ions are generated only on the plasma grid surface (which separates the plasma from the electrostatic accelerator downstream). The scope of this work is to derive scaling laws for the negative ion beam properties versus the extraction voltage (potential of the first grid of the accelerator) and plasma density and investigate the origins of aberrations on the ion beam. We show that a given value of the negative ion beam perveance correlates rather well with the beam profile on the extraction grid independent of the simulated plasma density. Furthermore, the extracted beam current may be scaled to any value of the plasma density. The scaling factor must be derived numerically but the overall gain of computational cost compared to performing a PIC simulation at the real plasma density is significant. Aberrations appear for a meniscus curvature radius of the order of the radius of the grid aperture. These aberrations cannot be cancelled out by switching to a chamfered grid aperture (as in the case of positive ions).
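
    For reference, the perveance around which the scaling laws are organized is the extracted current normalized by the 3/2 power of the extraction voltage; a minimal sketch with illustrative numbers:

        def perveance(current_a, voltage_v):
            # P = I / V**1.5; beams with equal P have similar optics, which
            # is why profiles correlate with P independent of plasma density
            return current_a / voltage_v ** 1.5

        print(perveance(20e-3, 9e3))  # ~2.3e-8 A/V^1.5 (illustrative values)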

  11. Sci—Thur AM: YIS - 09: Validation of a General Empirically-Based Beam Model for kV X-ray Sources

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Y. [CancerCare Manitoba (Canada); University of Calgary (Canada); Sommerville, M.; Johnstone, C.D. [San Diego State University (United States); Gräfe, J.; Nygren, I.; Jacso, F. [Tom Baker Cancer Centre (Canada); Khan, R.; Villareal-Barajas, J.E. [University of Calgary (Canada); Tom Baker Cancer Centre (Canada); Tambasco, M. [University of Calgary (Canada); San Diego State University (United States)

    2014-08-15

    Purpose: To present an empirically-based beam model for computing dose deposited by kilovoltage (kV) x-rays and validate it for radiographic, CT, CBCT, superficial, and orthovoltage kV sources. Method and Materials: We modeled a wide variety of imaging (radiographic, CT, CBCT) and therapeutic (superficial, orthovoltage) kV x-ray sources. The model characterizes spatial variations of the fluence and spectrum independently. The spectrum is derived by matching measured values of the half value layer (HVL) and nominal peak potential (kVp) to computationally-derived spectra, while the fluence is derived from in-air relative dose measurements. This model relies only on empirical values and requires no knowledge of proprietary source specifications or other theoretical aspects of the kV x-ray source. To validate the model, we compared measured doses to values computed using our previously validated in-house kV dose computation software, kVDoseCalc. The dose was measured in homogeneous and anthropomorphic phantoms using ionization chambers and LiF thermoluminescent detectors (TLDs), respectively. Results: The maximum difference between measured and computed dose measurements was within 2.6%, 3.6%, 2.0%, 4.8%, and 4.0% for the modeled radiographic, CT, CBCT, superficial, and orthovoltage sources, respectively. In the anthropomorphic phantom, the computed CBCT dose generally agreed with TLD measurements, with an average difference and standard deviation ranging from 2.4 ± 6.0% to 5.7 ± 10.3% depending on the imaging technique. Most (42/62) measured TLD doses were within 10% of computed values. Conclusions: The proposed model can be used to accurately characterize a wide variety of kV x-ray sources using only empirical values.
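
    The HVL-matching ingredient can be sketched as follows: for a candidate spectrum, the first half-value layer is the aluminium thickness that halves the transmitted fluence, found here by bisection. The spectral weights and attenuation coefficients are placeholders standing in for tabulated data:

        import numpy as np

        def hvl_mm_al(weights, mu_al_per_mm):
            # transmission of the weighted spectrum through t mm of Al
            def transmission(t):
                return np.sum(weights * np.exp(-mu_al_per_mm * t)) / weights.sum()
            lo, hi = 0.0, 50.0
            for _ in range(60):  # bisection on the monotone transmission
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if transmission(mid) > 0.5 else (lo, mid)
            return 0.5 * (lo + hi)

        w = np.array([0.2, 0.5, 0.3])      # toy spectral weights
        mu = np.array([0.45, 0.30, 0.22])  # toy Al attenuation (1/mm)
        print(hvl_mm_al(w, mu))            # HVL of the candidate spectrum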

  12. Derivation of the source term, dose results and associated radiological consequences for the Greek Research Reactor – 1

    Energy Technology Data Exchange (ETDEWEB)

    Pappas, Charalampos, E-mail: chpappas@ipta.demokritos.gr; Ikonomopoulos, Andreas; Sfetsos, Athanasios; Andronopoulos, Spyros; Varvayanni, Melpomeni; Catsaros, Nicolas

    2014-07-01

    Highlights:
    • Source term derivation of postulated accident sequences in a research reactor.
    • Various containment ventilation scenarios considered for source term calculations.
    • Source term parametric analysis performed in case of lack of ventilation.
    • JRODOS employed for dose calculations under eighteen modeled scenarios.
    • Estimation of radiological consequences during typical and adverse weather scenarios.
    Abstract: The estimated source term, dose results and radiological consequences of selected accident sequences in the Greek Research Reactor – 1 are presented and discussed. A systematic approach has been adopted to perform the necessary calculations in accordance with the latest computational developments and IAEA recommendations. Loss-of-coolant, reactivity insertion and fuel channel blockage accident sequences have been selected to derive the associated source terms under three distinct containment ventilation scenarios. Core damage has been conservatively assessed for each accident sequence while the ventilation has been assumed to function within the efficiency limits defined at the Safety Analysis Report. In case of lack of ventilation a parametric analysis is also performed to examine the dependency of the source term on the containment leakage rate. A typical as well as an adverse meteorological scenario have been defined in the JRODOS computational platform in order to predict the effective, lung and thyroid doses within a region defined by a 15 km radius downwind from the reactor building. The radiological consequences of the eighteen scenarios associated with the accident sequences are presented and discussed.

  13. Assessing Model Characterization of Single Source ...

    Science.gov (United States)

    Aircraft measurements made downwind from specific coal fired power plants during the 2013 Southeast Nexus field campaign provide a unique opportunity to evaluate single source photochemical model predictions of both O3 and secondary PM2.5 species. The model did well at predicting downwind plume placement. The model shows similar patterns of an increasing fraction of PM2.5 sulfate ion to the sum of SO2 and PM2.5 sulfate ion by distance from the source compared with ambient based estimates. The model was less consistent in capturing downwind ambient based trends in conversion of NOX to NOY from these sources. Source sensitivity approaches capture near-source O3 titration by fresh NO emissions, in particular subgrid plume treatment. However, capturing this near-source chemical feature did not translate into better downwind peak estimates of single source O3 impacts. The model estimated O3 production from these sources but was often lower than ambient based source production. The downwind transect ambient measurements, in particular secondary PM2.5 and O3, have some level of contribution from other sources, which makes direct comparison with model source contribution challenging. Model source attribution results suggest contribution to secondary pollutants from multiple sources even where primary pollutants indicate the presence of a single source. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, deci...

  14. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    Science.gov (United States)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors have aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now data error propagation is widely implemented, and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, InSAR-derived surface displacements and seismological waveforms are combined more regularly, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences in geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join in the community efforts with the particular goal of improving crustal earthquake source inferences in generally not well instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, more recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimation as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are exchanged for simple planar finite ...

  15. Bone marrow-derived stromal cells are more beneficial cell sources for tooth regeneration compared with adipose-derived stromal cells.

    Science.gov (United States)

    Ye, Lanfeng; Chen, Lin; Feng, Fan; Cui, Junhui; Li, Kaide; Li, Zhiyong; Liu, Lei

    2015-10-01

    Tooth loss is presently a global epidemic and tooth regeneration is thought to be a feasible and ideal treatment approach. Choice of cell source is a primary concern in tooth regeneration. In this study, the odontogenic differentiation potential of two non-dental-derived stem cells, adipose-derived stromal cells (ADSCs) and bone marrow-derived stromal cells (BMSCs), were evaluated both in vitro and in vivo. ADSCs and BMSCs were induced in vitro in the presence of tooth germ cell-conditioned medium (TGC-CM) prior to implantation into the omentum majus of rats, in combination with inactivated dentin matrix (IDM). Real-time quantitative polymerase chain reaction (RT-qPCR) was used to detect the mRNA expression levels of odontogenic-related genes. Immunofluorescence and immunohistochemical assays were used to detect the protein levels of odontogenic-specific genes, such as DSP and DMP-1 both in vitro and in vivo. The results suggest that both ADSCs and BMSCs have odontogenic differentiation potential. However, the odontogenic potential of BMSCs was greater compared with ADSCs, showing that BMSCs are a more appropriate cell source for tooth regeneration. © 2015 International Federation for Cell Biology.

  16. Sediment delivery estimates in water quality models altered by resolution and source of topographic data.

    Science.gov (United States)

    Beeson, Peter C; Sadeghi, Ali M; Lang, Megan W; Tomer, Mark D; Daughtry, Craig S T

    2014-01-01

    Moderate-resolution (30-m) digital elevation models (DEMs) are normally used to estimate slope for the parameterization of non-point source, process-based water quality models. These models, such as the Soil and Water Assessment Tool (SWAT), use the Universal Soil Loss Equation (USLE) and Modified USLE to estimate sediment loss. The slope length and steepness factor, a critical parameter in USLE, significantly affects sediment loss estimates. Depending on slope range, a twofold difference in slope estimation potentially results in as little as 50% change or as much as 250% change in the LS factor and subsequent sediment estimation. Recently, the availability of much finer-resolution (∼3 m) DEMs derived from Light Detection and Ranging (LiDAR) data has increased. However, the use of these data may not always be appropriate because slope values derived from fine spatial resolution DEMs are usually significantly higher than slopes derived from coarser DEMs. This increased slope results in considerable variability in modeled sediment output. This paper addresses the implications of parameterizing models using slope values calculated from DEMs with different spatial resolutions (90, 30, 10, and 3 m) and sources. Overall, we observed over a 2.5-fold increase in slope when using a 3-m instead of a 90-m DEM, which increased modeled soil loss using the USLE calculation by 130%. Care should be taken when using LiDAR-derived DEMs to parameterize water quality models because doing so can result in significantly higher slopes, which considerably alter modeled sediment loss. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
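
    To make the resolution sensitivity concrete, this sketch computes percent slope from a DEM by central differences and feeds it to the Wischmeier-Smith LS expression; the random DEM, cell sizes and slope length are toy assumptions:

        import numpy as np

        def slope_percent(dem, cell_size_m):
            dzdy, dzdx = np.gradient(dem, cell_size_m)
            return 100.0 * np.hypot(dzdx, dzdy)

        def ls_factor(s_pct, slope_len_m):
            # Wischmeier & Smith LS with exponent m = 0.5 (slopes >= 5%)
            return (slope_len_m / 22.13) ** 0.5 * (
                0.065 + 0.045 * s_pct + 0.0065 * s_pct ** 2)

        dem = np.random.default_rng(1).random((50, 50)) * 10.0  # toy DEM (m)
        ls_coarse = ls_factor(slope_percent(dem, 30.0), slope_len_m=60.0)
        ls_fine = ls_factor(slope_percent(dem, 3.0), slope_len_m=60.0)
        # the same surface read on a 3 m grid yields ~10x steeper local
        # slopes, and hence a much larger LS, than on a 30 m grid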

  17. Comparison of analytic source models for head scatter factor calculation and planar dose calculation for IMRT

    International Nuclear Information System (INIS)

    Yan Guanghua; Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G

    2008-01-01

    The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, in-air fluence distribution was derived from each of the head scatter source models while considering the combination of jaw and MLC opening. Fluence perturbations due to tongue-and-groove effect, rounded leaf end and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similar good agreement could be achieved when monitor back scatter effect was incorporated explicitly. All the three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than the three-source model due to their simplicity.
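
    The final dose step shared by all three source models can be illustrated with a toy sketch: an in-air fluence map (a flat field plus a small extra-focal halo standing in for head scatter) convolved with a pencil-beam kernel. The Gaussian kernel and all widths are illustrative, not the experimentally determined kernel of the paper:

        import numpy as np
        from scipy.signal import fftconvolve

        n, mm = 301, 1.0                 # 301 x 301 grid, 1 mm spacing
        ax = (np.arange(n) - n // 2) * mm
        X, Y = np.meshgrid(ax, ax)

        focal = ((np.abs(X) <= 50) & (np.abs(Y) <= 50)).astype(float)  # 10x10 cm
        halo = fftconvolve(focal, np.exp(-(X**2 + Y**2) / (2 * 20.0**2)),
                           mode='same')
        fluence = focal + 0.05 * halo / halo.max()  # crude extra-focal term

        kernel = np.exp(-(X**2 + Y**2) / (2 * 3.0**2))  # toy pencil-beam kernel
        kernel /= kernel.sum()
        dose = fftconvolve(fluence, kernel, mode='same')  # planar dose estimate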

  18. Comparison of analytic source models for head scatter factor calculation and planar dose calculation for IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Yan Guanghua [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G [Department of Radiation Oncology, University of Florida, Gainesville, FL 32610-0385 (United States)

    2008-04-21

    The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, in-air fluence distribution was derived from each of the head scatter source models while considering the combination of Jaw and MLC opening. Fluence perturbations due to tongue-and-groove effect, rounded leaf end and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similar good agreement could be achieved when monitor back scatter effect was incorporated explicitly. All the three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than the three-source model due to their simplicity.

  19. Source modelling in seismic risk analysis for nuclear power plants

    International Nuclear Information System (INIS)

    Yucemen, M.S.

    1978-12-01

    The proposed probabilistic procedure provides a consistent method for the modelling, analysis and updating of the uncertainties that are involved in seismic risk analysis for nuclear power plants. The potential earthquake activity zones are idealized as point, line or area sources. For these seismic source types, expressions to evaluate their contribution to seismic risk are derived, considering all possible site-source configurations. The seismic risk at a site is found to depend not only on the inherent randomness of earthquake occurrences with respect to magnitude, time and space, but also on the uncertainties associated with the predicted values of the seismic and geometric parameters, as well as the uncertainty in the attenuation model. The uncertainty due to the attenuation equation is incorporated into the analysis through the use of random correction factors. The influence of the uncertainty resulting from insufficient information on the seismic parameters and source geometry is introduced into the analysis by computing a mean risk curve averaged over the various alternative assumptions on the parameters and source geometry. Seismic risk analysis is carried out for the city of Denizli, which is located in the most seismically active zone of Turkey. The second analysis is for Akkuyu
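
    For a single idealized point source, the risk computation described above reduces to integrating an attenuation exceedance probability over the magnitude distribution, with the random correction factor entering as lognormal scatter. A hedged Python sketch with invented parameter values (the attenuation relation below is hypothetical, not the one used in the report):

```python
import numpy as np
from scipy.stats import norm

nu = 0.2                        # annual rate of events with M >= m_min (invented)
beta = 2.0                      # Gutenberg-Richter decay, b * ln(10)
m_min, m_max = 4.0, 7.5
r_km = 30.0                     # site-to-source distance
sigma = 0.6                     # attenuation scatter (the random correction factor)

def median_ln_a(m, r):          # hypothetical attenuation relation
    return -3.5 + 0.9 * m - 1.2 * np.log(r)

m = np.linspace(m_min, m_max, 1000)
# truncated-exponential magnitude density from the Gutenberg-Richter law
pdf = beta * np.exp(-beta * (m - m_min)) / (1.0 - np.exp(-beta * (m_max - m_min)))
ln_a_target = np.log(0.1)       # target intensity level, arbitrary units
p_exceed = 1.0 - norm.cdf((ln_a_target - median_ln_a(m, r_km)) / sigma)
annual_rate = nu * np.trapz(p_exceed * pdf, m)   # mean annual exceedance rate
print(annual_rate)
```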

  20. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    Science.gov (United States)

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Zou Kuzyk, Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pascal; Semmens, Brice

    2016-04-01

    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven the creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined the advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound-specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling

  1. Inflationary models with non-minimally derivative coupling

    International Nuclear Information System (INIS)

    Yang, Nan; Fei, Qin; Gong, Yungui; Gao, Qing

    2016-01-01

    We derive the general formulae for the scalar and tensor spectral tilts to second order for inflationary models with non-minimal derivative coupling, without taking the high friction limit. The non-minimal kinetic coupling to the Einstein tensor brings the energy scale in these inflationary models down to sub-Planckian values. In the high friction limit, the Lyth bound is modified with an extra suppression factor, so that the field excursion of the inflaton is sub-Planckian. The inflationary models with non-minimal derivative coupling are more consistent with observations in the high friction limit. In particular, with the help of the non-minimal derivative coupling, the quartic power-law potential is consistent with the observational constraints at 95% CL. (paper)
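
    For orientation, the action usually written for this class of models couples the inflaton's kinetic term to the Einstein tensor; a sketch in standard notation, not reproduced from the paper itself (M is the coupling scale, and the high friction limit corresponds to H^2 >> M^2):

```latex
S=\int \mathrm{d}^4x\,\sqrt{-g}\left[\frac{M_{\mathrm{Pl}}^2}{2}R
  -\frac{1}{2}\left(g^{\mu\nu}-\frac{G^{\mu\nu}}{M^2}\right)
  \partial_\mu\phi\,\partial_\nu\phi-V(\phi)\right].
```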

  2. Evaluation of Stem Cell-Derived Red Blood Cells as a Transfusion Product Using a Novel Animal Model.

    Science.gov (United States)

    Shah, Sandeep N; Gelderman, Monique P; Lewis, Emily M A; Farrel, John; Wood, Francine; Strader, Michael Brad; Alayash, Abdu I; Vostal, Jaroslav G

    2016-01-01

    Reliance on volunteer blood donors can lead to transfusion product shortages, and current liquid storage of red blood cells (RBCs) is associated with biochemical changes over time, known as 'the storage lesion'. Thus, there is a need for alternative sources of transfusable RBCs to supplement conventional blood donations. Extracorporeal production of stem cell-derived RBCs (stemRBCs) is a potential and yet untapped source of fresh, transfusable RBCs. A number of groups have attempted RBC differentiation from CD34+ cells. However, it is still unclear whether these stemRBCs could eventually be effective substitutes for traditional RBCs due to potential differences in oxygen carrying capacity, viability, deformability, and other critical parameters. We have generated ex vivo stemRBCs from primary human cord blood CD34+ cells and compared them to donor-derived RBCs based on a number of in vitro parameters. In vivo, we assessed stemRBC circulation kinetics in an animal model of transfusion and oxygen delivery in a mouse model of exercise performance. Our novel, chronically anemic, SCID mouse model can evaluate the potential of stemRBCs to deliver oxygen to tissues (muscle) under resting and exercise-induced hypoxic conditions. Based on our data, stem cell-derived RBCs have a similar biochemical profile compared to donor-derived RBCs. While certain key differences remain between donor-derived RBCs and stemRBCs, the ability of stemRBCs to deliver oxygen in a living organism provides support for further development as a transfusion product.

  3. Evaluation of Stem Cell-Derived Red Blood Cells as a Transfusion Product Using a Novel Animal Model.

    Directory of Open Access Journals (Sweden)

    Sandeep N Shah

    Full Text Available Reliance on volunteer blood donors can lead to transfusion product shortages, and current liquid storage of red blood cells (RBCs) is associated with biochemical changes over time, known as 'the storage lesion'. Thus, there is a need for alternative sources of transfusable RBCs to supplement conventional blood donations. Extracorporeal production of stem cell-derived RBCs (stemRBCs) is a potential and yet untapped source of fresh, transfusable RBCs. A number of groups have attempted RBC differentiation from CD34+ cells. However, it is still unclear whether these stemRBCs could eventually be effective substitutes for traditional RBCs due to potential differences in oxygen carrying capacity, viability, deformability, and other critical parameters. We have generated ex vivo stemRBCs from primary human cord blood CD34+ cells and compared them to donor-derived RBCs based on a number of in vitro parameters. In vivo, we assessed stemRBC circulation kinetics in an animal model of transfusion and oxygen delivery in a mouse model of exercise performance. Our novel, chronically anemic, SCID mouse model can evaluate the potential of stemRBCs to deliver oxygen to tissues (muscle) under resting and exercise-induced hypoxic conditions. Based on our data, stem cell-derived RBCs have a similar biochemical profile compared to donor-derived RBCs. While certain key differences remain between donor-derived RBCs and stemRBCs, the ability of stemRBCs to deliver oxygen in a living organism provides support for further development as a transfusion product.

  4. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    Science.gov (United States)

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries increasingly suffer severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point- and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results at the provincial level. Cross-checks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.

  5. Source model for the 1997 Zirkuh earthquake (Mw = 7.2) in Iran derived from JERS and ERS InSAR observations

    KAUST Repository

    Sudhaus, Henriette

    2011-05-01

    We present the first detailed source model of the 1997 M7.2 Zirkuh earthquake that ruptured the entire Abiz fault in East Iran, producing a 125 km long, bent and segmented fault trace. Using SAR data from the ERS and JERS-1 satellites we first determined a multisegment fault model for this predominantly strike-slip earthquake by estimating fault-segment dip, slip, and rake values using an evolutionary optimization algorithm. We then inverted the InSAR data for variable slip and rake in more detail along the multisegment fault plane. We complement our optimization with importance sampling of the model parameter space to ensure that the derived optimum model has a high likelihood, to detect correlations or trade-offs between model parameters, and to image the model resolution. Our results are in agreement with field observations showing that this predominantly strike-slip earthquake had a clear change in style of faulting along its rupture. In the north, thrust faulting on a westerly dipping fault accompanies the strike-slip motion, which changes to thrust faulting on an eastward-dipping fault plane in the south. The centre part of the fault is vertical and has almost pure dextral strike-slip. The heterogeneous fault slip distribution shows two regions of low slip near significant fault step-overs of the Abiz fault, and these fault complexities therefore appear to reduce the fault slip. Furthermore, shallow fault slip is generally reduced with respect to slip at depth. This shallow slip deficit varies along the Zirkuh fault from a small deficit in the north to a much larger deficit along the central part of the fault, a variation that is possibly related to different interseismic repose times.

  6. Evaluation of bias associated with capture maps derived from nonlinear groundwater flow models

    Science.gov (United States)

    Nadler, Cara; Allander, Kip K.; Pohll, Greg; Morway, Eric D.; Naranjo, Ramon C.; Huntington, Justin

    2018-01-01

    The impact of groundwater withdrawal on surface water is a concern of water users and water managers, particularly in the arid western United States. Capture maps are useful tools to spatially assess the impact of groundwater pumping on water sources (e.g., streamflow depletion) and are being used more frequently for conjunctive management of surface water and groundwater. Capture maps have been derived using linear groundwater flow models and rely on the principle of superposition to demonstrate the effects of pumping in various locations on resources of interest. However, nonlinear models are often necessary to simulate head-dependent boundary conditions and unconfined aquifers. Capture maps developed using nonlinear models with the principle of superposition may over- or underestimate capture magnitude and spatial extent. This paper presents new methods for generating capture difference maps, which assess spatial effects of model nonlinearity on capture fraction sensitivity to pumping rate, and for calculating the bias associated with capture maps. The sensitivity of capture map bias to selected parameters related to model design and conceptualization for the arid western United States is explored. This study finds that the simulation of stream continuity, pumping rates, stream incision, well proximity to capture sources, aquifer hydraulic conductivity, and groundwater evapotranspiration extinction depth substantially affect capture map bias. Capture difference maps demonstrate that regions with large capture fraction differences are indicative of greater potential capture map bias. Understanding both spatial and temporal bias in capture maps derived from nonlinear groundwater flow models improves their utility and defensibility as conjunctive-use management tools.
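
    The bias mechanism is straightforward to demonstrate: under a linear model, superposition makes the capture fraction independent of pumping rate, while a nonlinear response makes it rate-dependent. A toy Python check with an invented stand-in response function (not output from an actual groundwater model):

```python
# stand-in for a groundwater model's streamflow-depletion response to pumping;
# the quadratic term mimics a nonlinearity such as a stream segment drying up
def stream_depletion(q_pump):
    return 0.8 * q_pump - 2.0e-5 * q_pump**2   # hypothetical, invented numbers

for q in (100.0, 1000.0, 5000.0):              # pumping rates, arbitrary units
    print(q, stream_depletion(q) / q)          # capture fraction per rate
# the fraction drifts with rate (0.798, 0.78, 0.70), so a capture map built
# by superposition at one pumping rate is biased when applied at another
```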

  7. Automatic landslide detection from LiDAR DTM derivatives by geographic-object-based image analysis based on open-source software

    Science.gov (United States)

    Knevels, Raphael; Leopold, Philip; Petschko, Helene

    2017-04-01

    With high-resolution airborne Light Detection and Ranging (LiDAR) data more commonly available, many studies have been performed to exploit the detailed information these data provide on the Earth's surface and to analyse their limitations. Specifically in the field of natural hazards, digital terrain models (DTM) have been used to map hazardous processes such as landslides, mainly by visual interpretation of LiDAR DTM derivatives. However, new approaches are striving towards automatic detection of landslides to speed up the process of generating landslide inventories. These studies usually use a combination of optical imagery and terrain data, and are designed in commercial software packages such as ESRI ArcGIS, Definiens eCognition, or MathWorks MATLAB. The objective of this study was to investigate the potential of open-source software for automatic landslide detection based only on high-resolution LiDAR DTM derivatives in a study area within the federal state of Burgenland, Austria. The study area is very prone to landslides, which have been mapped with different methodologies in recent years. The free development environment R was used to integrate open-source geographic information system (GIS) software, such as SAGA (System for Automated Geoscientific Analyses), GRASS (Geographic Resources Analysis Support System), or TauDEM (Terrain Analysis Using Digital Elevation Models). The implemented geographic-object-based image analysis (GEOBIA) consisted of (1) derivation of land surface parameters, such as slope, surface roughness, curvature, or flow direction, (2) finding the optimal scale parameter by the use of an objective function, (3) multi-scale segmentation, (4) classification of landslide parts (main scarp, body, flanks) by k-means thresholding, (5) assessment of the classification performance using a pre-existing landslide inventory, and (6) post-processing analysis for further use in landslide inventories. The results of the developed open-source approach demonstrated good

  8. Evaluation of model-simulated source contributions to tropospheric ozone with aircraft observations in the factor-projected space

    Directory of Open Access Journals (Sweden)

    Y. Yoshida

    2008-03-01

    Full Text Available Trace gas measurements of the TOPSE and TRACE-P experiments and corresponding global GEOS-Chem model simulations are analyzed with the Positive Matrix Factorization (PMF) method for model evaluation purposes. Specifically, we evaluate the model-simulated contributions to O3 variability from stratospheric transport, intercontinental transport, and production from urban/industry and biomass burning/biogenic sources. We select a suite of relatively long-lived tracers, including 7 chemicals (O3, NOy, PAN, CO, C3H8, CH3Cl, and 7Be) and 1 dynamic tracer (potential temperature). The largest discrepancy is found in the stratospheric contribution to 7Be. The model underestimates this contribution by a factor of 2–3, corresponding well to a reduction of the 7Be source by the same magnitude in the default setup of the standard GEOS-Chem model. In contrast, we find that the simulated O3 contributions from stratospheric transport are in reasonable agreement with those derived from the measurements. However, the springtime increasing trend over North America derived from the measurements is largely underestimated in the model, indicating that the magnitude of the simulated stratospheric O3 source is reasonable but the temporal distribution needs improvement. The simulated O3 contributions from long-range transport and production from urban/industry and biomass burning/biogenic emissions are also in reasonable agreement with those derived from the measurements, although significant discrepancies are found for some regions.
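
    PMF factors a non-negative data matrix into source profiles and contributions. As a sketch, scikit-learn's NMF is a reasonable stand-in (true PMF additionally weights residuals by measurement uncertainty, which NMF does not); the data here are synthetic:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(1.0, 0.3, size=(500, 8)))   # samples x tracers, synthetic

model = NMF(n_components=4, init="nndsvda", max_iter=500)
G = model.fit_transform(X)    # factor contributions per observation
F = model.components_         # factor profiles across the 8 tracers
# model-vs-measurement evaluation would then compare factors term by term
```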

  9. Modeling, analysis, and design of stationary reference frame droop controlled parallel three-phase voltage source inverters

    DEFF Research Database (Denmark)

    Vasquez, Juan Carlos; Guerrero, Josep M.; Savaghebi, Mehdi

    2013-01-01

    Power electronics based MicroGrids consist of a number of voltage source inverters (VSIs) operating in parallel. In this paper, the modeling, control design, and stability analysis of parallel connected three-phase VSIs are derived. The proposed voltage and current inner control loops ... control restores the frequency and amplitude deviations produced by the primary control. Also, a synchronization algorithm is presented in order to connect the MicroGrid to the grid. Experimental results are provided to validate the performance and robustness of the parallel VSI system control...
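
    The primary control referred to above is conventionally a P-f / Q-E droop; a minimal sketch with invented ratings and slopes, where the df/dE terms stand for the restoration action of the secondary control:

```python
# conventional droop laws for a voltage source inverter (values invented)
f_nom, E_nom = 50.0, 311.0        # nominal frequency (Hz), voltage amplitude (V)
m_p, n_q = 1.0e-4, 5.0e-4         # droop slopes

def primary_droop(P, Q, df=0.0, dE=0.0):
    """P-f and Q-E droop; df/dE are the corrections a secondary control
    layer would inject to cancel the steady-state deviations."""
    f = f_nom - m_p * P + df      # frequency droops with active power
    E = E_nom - n_q * Q + dE      # amplitude droops with reactive power
    return f, E

print(primary_droop(P=5000.0, Q=1000.0))   # droops to 49.5 Hz, 310.5 V
```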

  10. Modelling ocean-colour-derived chlorophyll a

    Directory of Open Access Journals (Sweden)

    S. Dutkiewicz

    2018-01-01

    Full Text Available This article provides a proof of concept for using a biogeochemical/ecosystem/optical model with a radiative transfer component as a laboratory to explore aspects of ocean colour. We focus here on the satellite ocean colour chlorophyll a (Chl a) product provided by the often-used blue/green reflectance ratio algorithm. The model produces output that can be compared directly to the real-world ocean colour remotely sensed reflectance. This model output can then be used to produce an ocean colour satellite-like Chl a product using an algorithm linking the blue versus green reflectance similar to that used for the real world. Given that the model includes complete knowledge of the (model) water constituents, optics and reflectance, we can explore uncertainties and their causes in this proxy for Chl a (called derived Chl a in this paper). We compare the derived Chl a to the actual model Chl a field. In the model we find that the mean absolute bias due to the algorithm is 22 % between derived and actual Chl a. The real-world algorithm is found using concurrent in situ measurement of Chl a and radiometry. We ask whether increased in situ measurements to train the algorithm would improve the algorithm, and find a mixed result. There is a global overall improvement, but at the expense of some regions, especially in lower latitudes where the biases increase. Not surprisingly, we find that region-specific algorithms provide a significant improvement, at least in the annual mean. However, in the model, we find that no matter how the algorithm coefficients are found there can be a temporal mismatch between the derived Chl a and the actual Chl a. These mismatches stem from temporal decoupling between Chl a and other optically important water constituents (such as coloured dissolved organic matter and detrital matter). The degree of decoupling differs regionally and over time. For example, in many highly seasonal regions, the timing of initiation
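
    Operationally, blue/green reflectance ratio algorithms of this kind are low-order polynomials in the log of the band ratio (the OCx family). A sketch with OC4-like illustrative coefficients, not the values fitted in this study:

```python
import numpy as np

# OCx-style band-ratio polynomial; coefficients are OC4-like illustrative
# values, not the algorithm coefficients derived in the paper
A = [0.3272, -2.9940, 2.7218, -1.2259, -0.5683]

def derived_chl(rrs_blue, rrs_green):
    x = np.log10(rrs_blue / rrs_green)
    return 10.0 ** sum(a * x**i for i, a in enumerate(A))

print(derived_chl(0.008, 0.004))   # ~0.4 mg m^-3 for a band ratio of 2
```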

  11. Computational model of Amersham I-125 source model 6711 and Prosper Pd-103 source model MED3633 using MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Menezes, Artur F.; Reis Junior, Juraci P.; Silva, Ademir X., E-mail: ademir@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear; Rosa, Luiz A.R. da, E-mail: lrosa@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Facure, Alessandro [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil); Cardoso, Simone C., E-mail: Simone@if.ufrj.b [Universidade Federal do Rio de Janeiro (IF/UFRJ), RJ (Brazil). Inst. de Fisica. Dept. de Fisica Nuclear

    2011-07-01

    Brachytherapy is used in cancer treatment at short distances through the use of small encapsulated sources of ionizing radiation. In such treatment, a radiation source is positioned directly into or near the target volume to be treated. In this study the Monte Carlo based MCNP code was used to model and simulate the I-125 Amersham Health source model 6711 and the Pd-103 Prospera source model MED3633 in order to obtain the dosimetric parameter dose rate constant (Λ). The source geometries were modeled and implemented in the MCNPX code. The dose rate constant is an important parameter in LDR prostate brachytherapy treatment planning. This study was based on the American Association of Physicists in Medicine (AAPM) recommendations produced by its Task Group 43. The results obtained were 0.941 and 0.65 for the dose rate constants of the I-125 and Pd-103 sources, respectively. They are in good agreement with literature values based on different Monte Carlo codes. (author)

  12. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    Science.gov (United States)

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
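
    The iterative framework amounts to alternately predicting scatter from the current primary-signal estimate and subtracting it from the measurement. A skeleton in Python, where scatter_model is a hypothetical stand-in for the paper's analytic forward/cross-scatter physics model:

```python
import numpy as np

def scatter_model(primary):
    """Placeholder for the analytic physics model that predicts forward and
    cross scatter from a primary-signal estimate (hypothetical stand-in)."""
    return 0.1 * primary.mean() * np.ones_like(primary)

def iterative_scatter_correction(measured, n_iter=3):
    primary = measured.copy()           # start from the raw projections
    for _ in range(n_iter):
        scatter = scatter_model(primary)
        primary = measured - scatter    # refine the primary estimate
    return primary
```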

  13. The algorithms for calculating synthetic seismograms from a dipole source using the derivatives of Green's function

    Science.gov (United States)

    Pavlov, V. M.

    2017-07-01

    The problem of calculating complete synthetic seismograms from a point dipole with an arbitrary seismic moment tensor in a plane-parallel medium composed of homogeneous elastic isotropic layers is considered. It is established that the solutions of the system of ordinary differential equations for the motion-stress vector have a reciprocity property, which allows obtaining a compact formula for the derivative of the motion vector with respect to the source depth. The reciprocity theorem for Green's functions with respect to the interchange of the source and receiver is obtained for a medium with a cylindrical boundary. Differentiation of Green's functions with respect to the coordinates of the source leads to the same calculation formulas as the algorithm developed in the previous work (Pavlov, 2013). A new algorithm appears when the derivatives with respect to the horizontal coordinates of the source are replaced by the derivatives with respect to the horizontal coordinates of the receiver (with the minus sign). This algorithm is more transparent, compact, and economical than the previous one. It requires calculating the wavenumbers associated with the roots of Bessel functions of orders 0 and 1, whereas the previous algorithm additionally requires the roots of order 2.

  14. Open source molecular modeling.

    Science.gov (United States)

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io.

  15. Novel sources of Flavor Changing Neutral Currents in the 331RHN model

    International Nuclear Information System (INIS)

    Cogollo, D.; Vital de Andrade, A.; Queiroz, F.S.; Teles, P.R.

    2012-01-01

    Sources of Flavor Changing Neutral Currents (FCNC) emerge naturally from a well motivated framework called the 3-3-1 model with right-handed neutrinos (331 RHN for short), mediated by an extra neutral gauge boson Z′. Following previous work we calculate these sources and in addition derive new ones coming from the CP-even and CP-odd neutral scalars, which appear due to their non-diagonal interactions with the physical standard quarks. Furthermore, by using four texture zeros for the quark mass matrices, we derive the mass difference terms for the neutral meson systems K0–anti-K0, D0–anti-D0 and B0–anti-B0 and show that, though the Z′ contribution is the most relevant one for meson oscillations, scalar contributions also play a role in these processes; hence it is worthwhile to investigate them and derive new bounds on the parameter space. In particular, studying the B0–anti-B0 system we set the bounds M_Z′ ≳ 4.2 TeV and M_S2, M_I3 ≳ 7.5 TeV in order to be consistent with current measurements. (orig.)

  16. Conceptual model for deriving the repository source term

    International Nuclear Information System (INIS)

    Alexander, D.H.; Apted, M.J.; Liebetrau, A.M.; Doctor, P.G.; Williford, R.E.; Van Luik, A.E.

    1984-11-01

    Part of a strategy for evaluating the compliance of geologic repositories with federal regulations is a modeling approach that would provide realistic release estimates for a particular configuration of the engineered-barrier system. The objective is to avoid worst-case bounding assumptions that are physically impossible or excessively conservative and to obtain probabilistic estimates of (1) the penetration time for metal barriers and (2) radionuclide-release rates for individually simulated waste packages after penetration has occurred. The conceptual model described in this paper will assume that release rates are explicitly related to such time-dependent processes as mass transfer, dissolution and precipitation, radionuclide decay, and variations in the geochemical environment. The conceptual model will take into account the reduction in the rates of waste-form dissolution and metal corrosion due to a buildup of chemical reaction products. The sorptive properties of the metal-barrier corrosion products in proximity to the waste form surface will also be included. Cumulative releases from the engineered-barrier system will be calculated by summing the releases from a probabilistically generated population of individual waste packages. 14 refs., 7 figs

  17. Conceptual model for deriving the repository source term

    International Nuclear Information System (INIS)

    Alexander, D.H.; Apted, M.J.; Liebetrau, A.M.; Van Luik, A.E.; Williford, R.E.; Doctor, P.G.; Pacific Northwest Lab., Richland, WA; Roy F. Weston, Inc./Rogers and Assoc. Engineering Corp., Rockville, MD)

    1984-01-01

    Part of a strategy for evaluating the compliance of geologic repositories with Federal regulations is a modeling approach that would provide realistic release estimates for a particular configuration of the engineered-barrier system. The objective is to avoid worst-case bounding assumptions that are physically impossible or excessively conservative and to obtain probabilistic estimates of (1) the penetration time for metal barriers and (2) radionuclide-release rates for individually simulated waste packages after penetration has occurred. The conceptual model described in this paper will assume that release rates are explicitly related to such time-dependent processes as mass transfer, dissolution and precipitation, radionuclide decay, and variations in the geochemical environment. The conceptual model will take into account the reduction in the rates of waste-form dissolution and metal corrosion due to a buildup of chemical reaction products. The sorptive properties of the metal-barrier corrosion products in proximity to the waste form surface will also be included. Cumulative releases from the engineered-barrier system will be calculated by summing the releases from a probabilistically generated population of individual waste packages. 14 refs., 7 figs

  18. A spatial structural derivative model for ultraslow diffusion

    Directory of Open Access Journals (Sweden)

    Xu Wei

    2017-01-01

    Full Text Available This study investigates ultraslow diffusion by a spatial structural derivative, in which the exponential function e^x is selected as the structural function to construct the local structural derivative diffusion equation model. The analytical solution of the diffusion equation is a form of the biexponential distribution. Its corresponding mean squared displacement is numerically calculated, and increases more slowly than the logarithmic function of time. The local structural derivative diffusion equation with the structural function e^x in space is an alternative physical and mathematical model to characterize a kind of ultraslow diffusion.

  19. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    Science.gov (United States)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported on an improved explicit model, referred to as the "Virtual Source" (VS) diffuse approximation (DA), to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and further applied in the image reconstruction of the Laminar Optical Tomography system.
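
    In an infinite-medium reading of the model, each virtual source contributes an isotropic diffusion-approximation Green's function, so the 2VS fluence is a weighted pair of them. A hedged Python sketch that omits boundary/image terms; the weights and depths are illustrative, whereas the paper fits them to near-field reflectance:

```python
import numpy as np

def da_point_fluence(r, mu_a, mu_s_prime):
    """Infinite-medium diffusion-approximation fluence of an isotropic
    point source (boundary/image terms omitted for brevity)."""
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))
    mu_eff = np.sqrt(mu_a / D)
    return np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

mu_a, mu_sp = 0.1, 5.0                  # mm^-1, a fairly low-albedo medium
z1, w1 = 0.2, 0.7                       # VS depths (mm) and weights:
z2, w2 = 1.0, 0.3                       # illustrative, normally fitted

rho = np.linspace(0.5, 5.0, 10)         # source-detector separations (mm)
phi = (w1 * da_point_fluence(np.hypot(rho, z1), mu_a, mu_sp)
       + w2 * da_point_fluence(np.hypot(rho, z2), mu_a, mu_sp))
```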

  20. Analyzing Korean consumers’ latent preferences for electricity generation sources with a hierarchical Bayesian logit model in a discrete choice experiment

    International Nuclear Information System (INIS)

    Byun, Hyunsuk; Lee, Chul-Yong

    2017-01-01

    Generally, consumers use electricity without considering the source from which it was generated. Since different energy sources exert varying effects on society, it is necessary to analyze consumers’ latent preferences for electricity generation sources. The present study estimates Korean consumers’ marginal utility, and an appropriate generation mix is derived, using a hierarchical Bayesian logit model in a discrete choice experiment. The results show that consumers consider the danger posed by the source of electricity as the most important factor among the effects of electricity generation sources. Additionally, Korean consumers wish to reduce the contribution of nuclear power from the existing 32% to 11%, and increase that of renewable energy from the existing 4% to 32%. - Highlights: • We derive an electricity mix reflecting Korean consumers’ latent preferences. • We use the discrete choice experiment and hierarchical Bayesian logit model. • The danger posed by the generation source is the most important attribute. • The consumers wish to increase the renewable energy proportion from 4.3% to 32.8%. • Korea's cost-oriented energy supply policy and consumers’ preference differ markedly.
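
    At the level of a single respondent, the discrete choice model reduces to multinomial logit choice probabilities over the generation alternatives; the hierarchical Bayesian layer then lets the taste weights vary across respondents. A sketch with invented attributes and weights:

```python
import numpy as np

# alternatives described by (danger score, price); all numbers invented
X = np.array([[4.0,  60.0],     # nuclear
              [1.0, 110.0],     # renewables
              [2.0,  80.0]])    # gas
beta = np.array([-0.8, -0.02])  # marginal (dis)utilities of danger and price

v = X @ beta                    # systematic utilities
p = np.exp(v - v.max())
p /= p.sum()                    # multinomial logit choice probabilities
print(p)
```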

  1. Optimum load distribution between heat sources based on the Cournot model

    Science.gov (United States)

    Penkovskii, A. V.; Stennikov, V. A.; Khamisov, O. V.

    2015-08-01

    One of the widespread models of heat supply to consumers, represented in the "Single buyer" format, is considered. The proposed methodological basis for its description and investigation draws on principles of game theory, basic propositions of microeconomics, and models and methods of the theory of hydraulic circuits. The original mathematical model of the heat supply system operating under the conditions of the "Single buyer" organizational structure yields a solution satisfying the Nash market equilibrium. A distinctive feature of the developed mathematical model is that, along with the problems traditionally solved within bilateral relations between heat energy sources and heat consumers, it considers a network component, with the physicotechnical properties inherent to the heat network and the business factors connected with the costs of producing and transporting heat energy. This approach makes it possible to determine optimal load levels of the heat energy sources that meet the given heat energy demand of consumers while maximizing the profit of the heat energy sources and keeping heat network costs at a minimum for a specified time. The practical search for the market equilibrium is considered by the example of a heat supply system with two heat energy sources operating on integrated heat networks. The mathematical approach to the solution search is represented graphically and illustrates computations based on the stepwise iteration procedure for optimizing the load levels of the heat energy sources (Cournot's groping procedure), with the corresponding computation of the heat energy price for consumers.
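
    The stepwise iteration (Cournot's groping procedure) can be sketched as alternating best responses. The linear inverse demand and constant marginal costs below are invented stand-ins for the paper's network-aware cost terms:

```python
# Cournot groping iteration for two heat sources facing inverse demand p = a - b*Q
a, b = 100.0, 0.5          # demand intercept and slope (invented)
c1, c2 = 20.0, 30.0        # marginal costs of sources 1 and 2 (invented)

q1 = q2 = 0.0
for _ in range(50):        # iterate best responses until they settle
    q1 = max(0.0, (a - c1 - b * q2) / (2.0 * b))
    q2 = max(0.0, (a - c2 - b * q1) / (2.0 * b))

price = a - b * (q1 + q2)
print(q1, q2, price)       # converges to the Cournot-Nash loads and price
```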

  2. Source-based neurofeedback methods using EEG recordings: training altered brain activity in a functional brain source derived from blind source separation

    Science.gov (United States)

    White, David J.; Congedo, Marco; Ciorciari, Joseph

    2014-01-01

    A developing literature explores the use of neurofeedback in the treatment of a range of clinical conditions, particularly ADHD and epilepsy, whilst neurofeedback also provides an experimental tool for studying the functional significance of endogenous brain activity. A critical component of any neurofeedback method is the underlying physiological signal which forms the basis for the feedback. While the past decade has seen the emergence of fMRI-based protocols training spatially confined BOLD activity, traditional neurofeedback has utilized a small number of electrode sites on the scalp. As scalp EEG at a given electrode site reflects a linear mixture of activity from multiple brain sources and artifacts, efforts to successfully acquire some level of control over the signal may be confounded by these extraneous sources. Further, in the event of successful training, these traditional neurofeedback methods are likely influencing multiple brain regions and processes. The present work describes the use of source-based signal processing methods in EEG neurofeedback. The feasibility and potential utility of such methods were explored in an experiment training increased theta oscillatory activity in a source derived from Blind Source Separation (BSS) of EEG data obtained during completion of a complex cognitive task (spatial navigation). Learned increases in theta activity were observed in two of the four participants to complete 20 sessions of neurofeedback targeting this individually defined functional brain source. Source-based EEG neurofeedback methods using BSS may offer important advantages over traditional neurofeedback, by targeting the desired physiological signal in a more functionally and spatially specific manner. Having provided preliminary evidence of the feasibility of these methods, future work may study a range of clinically and experimentally relevant brain processes where individual brain sources may be targeted by source-based EEG neurofeedback.

  3. Structure-activity relationships of quinoxalin-2-one derivatives as platelet-derived growth factor-beta receptor (PDGFβR) inhibitors, derived from molecular modeling.

    Science.gov (United States)

    Mori, Yoshikazu; Hirokawa, Takatsugu; Aoki, Katsuyuki; Satomi, Hisanori; Takeda, Shuichi; Aburada, Masaki; Miyamoto, Ken-ichi

    2008-05-01

    We previously reported a quinoxalin-2-one compound (Compound 1) that had inhibitory activity equivalent to existing platelet-derived growth factor-beta receptor (PDGFβR) inhibitors. Lead optimization of Compound 1 to increase its activity and selectivity, using structural information regarding PDGFβR-ligand interactions, is urgently needed. Here we present models of the PDGFβR kinase domain complexed with quinoxalin-2-one derivatives. The models were constructed using comparative modeling, molecular dynamics (MD) and ligand docking. In particular, conformations derived from MD, and ligand binding site information presented by alpha-spheres in the pre-docking processing, allowed us to identify optimal protein structures for docking of target ligands. By carrying out molecular modeling and MD of PDGFβR in its inactive state, we obtained two structural models having good Compound 1 binding potentials. In order to distinguish the optimal candidate, we evaluated the structure-activity relationships (SAR) between the ligand-binding free energies and inhibitory activity values (IC50 values) for available quinoxalin-2-one derivatives. Consequently, a final model with a high SAR was identified. This model included a molecular interaction between the hydrophobic pocket behind the ATP binding site and the substitution region of the quinoxalin-2-one derivatives. These findings should prove useful in lead optimization of quinoxalin-2-one derivatives as PDGFβR inhibitors.

  4. A variable-order fractal derivative model for anomalous diffusion

    Directory of Open Access Journals (Sweden)

    Liu Xiaoting

    2017-01-01

    Full Text Available This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the index of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages of the new model in description and physical interpretation are explored by numerical simulation. Differences between the new model and the variable-order fractional derivative model, such as computational efficiency, diffusion behavior and heavy-tail phenomena, are also discussed.
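
    For orientation, the constant-order fractal derivative underlying this family of models is commonly defined as follows, with the variable-order model replacing α by α(x) or α(t) (standard notation, not copied from the paper):

```latex
\frac{\partial u}{\partial t^{\alpha}}
  = \lim_{t_1 \to t} \frac{u(t_1) - u(t)}{t_1^{\alpha} - t^{\alpha}},
\qquad \alpha \;\to\; \alpha(x,t) \ \text{in the variable-order model.}
```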

  5. Developing seismogenic source models based on geologic fault data

    Science.gov (United States)

    Haller, Kathleen M.; Basili, Roberto

    2011-01-01

    Calculating seismic hazard usually requires input that includes seismicity associated with known faults, historical earthquake catalogs, geodesy, and models of ground shaking. This paper will address the input generally derived from geologic studies that augment the short historical catalog to predict ground shaking at time scales of tens, hundreds, or thousands of years (e.g., SSHAC 1997). A seismogenic source model, terminology we adopt here for a fault source model, includes explicit three-dimensional faults deemed capable of generating ground motions of engineering significance within a specified time frame of interest. In tectonically active regions of the world, such as near plate boundaries, multiple seismic cycles span a few hundred to a few thousand years. In contrast, in less active regions hundreds of kilometers from the nearest plate boundary, seismic cycles generally are thousands to tens of thousands of years long. Therefore, one should include sources having both longer recurrence intervals and possibly older times of most recent rupture in less active regions of the world rather than restricting the model to include only Holocene faults (i.e., those with evidence of large-magnitude earthquakes in the past 11,500 years) as is the practice in tectonically active regions with high deformation rates. During the past 15 years, our institutions independently developed databases to characterize seismogenic sources based on geologic data at a national scale. Our goal here is to compare the content of these two publicly available seismogenic source models compiled for the primary purpose of supporting seismic hazard calculations by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the U.S. Geological Survey (USGS); hereinafter we refer to the two seismogenic source models as INGV and USGS, respectively. This comparison is timely because new initiatives are emerging to characterize seismogenic sources at the continental scale (e.g., SHARE in the

  6. A 'simple' hybrid model for power derivatives

    International Nuclear Information System (INIS)

    Lyle, Matthew R.; Elliott, Robert J.

    2009-01-01

    This paper presents a method for valuing power derivatives using a supply-demand approach. Our method extends work in the field by incorporating randomness into the base load portion of the supply stack function and equating it with a noisy demand process. We obtain closed form solutions for European option prices written on average spot prices considering two different supply models: a mean-reverting model and a Markov chain model. The results are extensions of the classic Black-Scholes equation. The model provides a relatively simple approach to describe the complicated price behaviour observed in electricity spot markets and also allows for computationally efficient derivatives pricing. (author)
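
    As a quick plausibility check on closed-form prices of this kind, a Monte Carlo valuation of a European call on the average spot under a mean-reverting (Ornstein-Uhlenbeck) spot model takes only a few lines; all parameter values below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

kappa, theta, sigma = 2.0, 40.0, 8.0    # mean reversion, long-run level, vol
S0, K, T, r = 45.0, 42.0, 0.25, 0.03    # spot, strike, maturity (yr), rate
n_steps, n_paths = 90, 100_000
dt = T / n_steps

S = np.full(n_paths, S0)
avg = np.zeros(n_paths)
for _ in range(n_steps):                # Euler scheme for the OU spot
    S += kappa * (theta - S) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    avg += S / n_steps

price = np.exp(-r * T) * np.maximum(avg - K, 0.0).mean()
print(price)                            # compare against the closed form
```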

  7. Virtual-source diffusion approximation for enhanced near-field modeling of photon-migration in low-albedo medium.

    Science.gov (United States)

    Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng

    2015-01-26

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we herein report on an improved explicit model for a semi-infinite geometry, referred to as the "Virtual Source" (VS) diffuse approximation (DA), suited to low-albedo media and short source-detector separations. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. This parameterized scheme is proved to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. The superiority of the proposed VS-DA method over established ones is demonstrated in comparison with Monte Carlo simulations over wide ranges of the source-detector separation and the medium optical properties.

  8. Novel family of quasi-Z-source DC/DC converters derived from current-fed push-pull converters

    DEFF Research Database (Denmark)

    Chub, Andrii; Husev, Oleksandr; Vinnikov, Dmitri

    2014-01-01

    This paper is devoted to the step-up quasi-Z-source dc/dc push-pull converter family. The topologies in the family are derived from the isolated boost converter family by replacing input inductors with the quasi-Z-source network. Two new topologies are proposed, analyzed and compared. Theoretical...

  9. Precision Orbit Derived Atmospheric Density: Development and Performance

    Science.gov (United States)

    McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.

    2012-09-01

    Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer-derived densities, taking ballistic coefficient estimation results into account. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE-derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-00 model densities when comparing cross correlation and RMS with accelerometer-derived densities. Drag is the largest error source for estimating and predicting orbits of low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities, in particular catalog maintenance. Generally, density is the largest error source in satellite drag calculations, and current empirical density models such as Jacchia 71 and NRLMSISE-00 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models, and accelerometer-derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy, and accelerometer-derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available from Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE-derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer
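
    Whatever the source of the along-track acceleration, whether an accelerometer or POE-based orbit fitting, the density estimate itself comes from inverting the drag equation; a sketch with hypothetical spacecraft values:

```python
def density_from_drag(a_drag, v_rel, cd=2.2, area=0.9, mass=500.0):
    """Invert a_drag = 0.5 * rho * (cd*area/mass) * v_rel**2 for rho.
    cd, area (m^2) and mass (kg) are hypothetical spacecraft values."""
    return 2.0 * abs(a_drag) / ((cd * area / mass) * v_rel**2)

# drag acceleration in m/s^2, relative velocity in m/s
rho = density_from_drag(a_drag=2.0e-7, v_rel=7600.0)
print(rho)   # kg/m^3, order 1e-12 at CHAMP/GRACE-like altitudes
```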

  10. United States‐Mexican border watershed assessment: Modeling nonpoint source pollution in Ambos Nogales

    Science.gov (United States)

    Norman, Laura M.

    2007-01-01

    Ecological considerations need to be interwoven with economic policy and planning along the United States-Mexican border. Non-point source pollution can have significant implications for the availability of potable water and the continued health of borderland ecosystems in arid lands. However, environmental assessments in this region present a host of unique issues and problems. A common obstacle to the solution of these problems is the integration of data with different resolutions, naming conventions, and quality to create a consistent database across the binational study area. This report presents a simple modeling approach to predict nonpoint source pollution that can be used for border watersheds. The modeling approach links a hillslope-scale erosion-prediction model and a spatially derived sediment-delivery model within a geographic information system to estimate erosion, sediment yield, and sediment deposition across the Ambos Nogales watershed in Sonora, Mexico, and Arizona. This paper discusses the procedures used for creating a watershed database to apply the models and presents an example of the modeling approach applied to a conservation-planning problem.

  11. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    Science.gov (United States)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends, and we rigorously assessed the statistical relevance of the resulting fraction estimates. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
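
    The propagate-and-weight core of a BMC mixing model fits in a few lines: draw candidate source fractions from a flat Dirichlet prior, draw end-member compositions from their uncertainty distributions, and weight each draw by its likelihood against the measured mixture. A two-tracer, three-source sketch with synthetic numbers (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# end-member means/sds for two tracers, three sources (synthetic)
src_mu = np.array([[1.0, 10.0], [5.0, 2.0], [9.0, 6.0]])
src_sd = np.array([[0.5, 1.0], [0.5, 0.5], [1.0, 1.0]])
mix, mix_sd = np.array([4.0, 6.0]), np.array([0.5, 0.8])

f = rng.dirichlet(np.ones(3), size=n)               # candidate fractions
src = rng.normal(src_mu, src_sd, size=(n, 3, 2))    # end-member draws
pred = np.einsum("ns,nst->nt", f, src)              # predicted mixtures
logw = -0.5 * (((mix - pred) / mix_sd) ** 2).sum(axis=1)
w = np.exp(logw - logw.max())
w /= w.sum()                                        # importance weights

print((f * w[:, None]).sum(axis=0))  # posterior-mean source fractions
```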

  12. Impact of external sources of infection on the dynamics of bovine tuberculosis in modelled badger populations

    Directory of Open Access Journals (Sweden)

    Hardstaff Joanne L

    2012-06-01

    Full Text Available Background: The persistence of bovine TB (bTB) in various countries throughout the world is enhanced by the existence of wildlife hosts for the infection. In Britain and Ireland, the principal wildlife host for bTB is the badger (Meles meles). The objective of our study was to examine the dynamics of bTB in badgers in relation to both badger-derived infection from within the population and externally-derived, trickle-type infection, such as could occur from other species or environmental sources, using a spatial stochastic simulation model. Results: The presence of external sources of infection can increase mean prevalence and reduce the threshold group size for disease persistence. Above the threshold equilibrium group size of 6–8 individuals predicted by the model for bTB persistence in badgers based on internal infection alone, external sources of infection have relatively little impact on the persistence or level of disease. However, within a critical range of group sizes just below this threshold level, external infection becomes much more important in determining disease dynamics. Within this critical range, external infection increases the ratio of intra- to inter-group infections due to the greater probability of external infections entering fully-susceptible groups. The effect is to enable bTB persistence and increase bTB prevalence in badger populations which would not be able to maintain bTB based on internal infection alone. Conclusions: External sources of bTB infection can contribute to the persistence of bTB in badger populations. In high-density badger populations, internal badger-derived infections occur at a sufficient rate that the additional effect of external sources in exacerbating disease is minimal. However, in lower-density populations, external sources of infection are much more important in enhancing bTB prevalence and persistence. In such circumstances, it is particularly important that control strategies to

  13. Impact of external sources of infection on the dynamics of bovine tuberculosis in modelled badger populations.

    Science.gov (United States)

    Hardstaff, Joanne L; Bulling, Mark T; Marion, Glenn; Hutchings, Michael R; White, Piran C L

    2012-06-27

    The persistence of bovine TB (bTB) in various countries throughout the world is enhanced by the existence of wildlife hosts for the infection. In Britain and Ireland, the principal wildlife host for bTB is the badger (Meles meles). The objective of our study was to examine the dynamics of bTB in badgers in relation to both badger-derived infection from within the population and externally-derived, trickle-type, infection, such as could occur from other species or environmental sources, using a spatial stochastic simulation model. The presence of external sources of infection can increase mean prevalence and reduce the threshold group size for disease persistence. Above the threshold equilibrium group size of 6-8 individuals predicted by the model for bTB persistence in badgers based on internal infection alone, external sources of infection have relatively little impact on the persistence or level of disease. However, within a critical range of group sizes just below this threshold level, external infection becomes much more important in determining disease dynamics. Within this critical range, external infection increases the ratio of intra- to inter-group infections due to the greater probability of external infections entering fully-susceptible groups. The effect is to enable bTB persistence and increase bTB prevalence in badger populations which would not be able to maintain bTB based on internal infection alone. External sources of bTB infection can contribute to the persistence of bTB in badger populations. In high-density badger populations, internal badger-derived infections occur at a sufficient rate that the additional effect of external sources in exacerbating disease is minimal. However, in lower-density populations, external sources of infection are much more important in enhancing bTB prevalence and persistence. In such circumstances, it is particularly important that control strategies to reduce bTB in badgers include efforts to minimise such
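
    A minimal discrete-time sketch of the modelled effect, assuming illustrative transmission and removal rates rather than the paper's calibrated spatial model: it compares final prevalence with and without an external trickle-infection hazard across group sizes.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(group_size, n_groups=50, beta_in=0.05, beta_out=0.005,
             external=0.001, years=50, steps_per_year=12):
    """Toy susceptible-infected model on a set of badger groups.

    beta_in: within-group hazard per infected groupmate
    beta_out: between-group hazard per infected animal elsewhere
    external: per-capita trickle hazard from other species/environment
    All parameter values are illustrative only.
    """
    I = np.zeros(n_groups, int)
    I[0] = 1                                     # seed one infection
    for _ in range(years * steps_per_year):
        total_I = I.sum()
        for g in range(n_groups):
            S = group_size - I[g]
            hazard = beta_in * I[g] + beta_out * (total_I - I[g]) + external
            p = 1.0 - np.exp(-hazard)
            I[g] += rng.binomial(S, p)           # new infections
            I[g] -= rng.binomial(I[g], 0.02)     # death removes infecteds
    return I.sum() / (n_groups * group_size)     # final prevalence

for gs in (4, 6, 8, 10):
    prev_internal = simulate(gs, external=0.0)
    prev_external = simulate(gs, external=0.001)
    print(f"group size {gs}: prevalence {prev_internal:.3f} (internal only), "
          f"{prev_external:.3f} (with external source)")
```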

  14. Learning models for multi-source integration

    Energy Technology Data Exchange (ETDEWEB)

    Tejada, S.; Knoblock, C.A.; Minton, S. [Univ. of Southern California/ISI, Marina del Rey, CA (United States)

    1996-12-31

    Because of the growing number of information sources available through the internet there are many cases in which information needed to solve a problem or answer a question is spread across several information sources. For example, when given two sources, one about comic books and the other about super heroes, you might want to ask the question "Is Spiderman a Marvel Super Hero?" This query accesses both sources; therefore, it is necessary to have information about the relationships of the data within each source and between sources to properly access and integrate the data retrieved. The SIMS information broker captures this type of information in the form of a model. All the information sources map into the model providing the user a single interface to multiple sources.

  15. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family…
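
    The style of model averaging the abstract builds on can be sketched as follows; the Akaike weights and the model-averaged standard error below are the common Burnham-Anderson textbook approximation, not the asymptotically exact method the paper proposes. The candidate models and derived parameter are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 40)
y = 1.0 + 0.5 * x + 0.05 * x**2 + rng.normal(0, 1, x.size)

# Two candidate mean structures: linear and quadratic.
designs = [sm.add_constant(np.column_stack([x])),
           sm.add_constant(np.column_stack([x, x**2]))]
fits = [sm.OLS(y, X).fit() for X in designs]

# Akaike weights from AIC differences.
aic = np.array([f.aic for f in fits])
d = aic - aic.min()
w = np.exp(-0.5 * d) / np.exp(-0.5 * d).sum()

# Derived parameter (after-fitting step): predicted mean response at x0 = 7.
x0 = 7.0
rows = [np.array([1.0, x0]), np.array([1.0, x0, x0**2])]
est = np.array([r @ f.params for r, f in zip(rows, fits)])
var = np.array([r @ f.cov_params() @ r for r, f in zip(rows, fits)])

theta = w @ est
# Model-averaged SE combining within-model variance and between-model spread.
se = w @ np.sqrt(var + (est - theta) ** 2)
print(f"model-averaged estimate {theta:.3f} +/- {se:.3f}")
```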

  16. Source Rupture Process of the 2016 Kumamoto Prefecture, Japan, Earthquake Derived from Near-Source Strong-Motion Records

    Science.gov (United States)

    Zheng, A.; Zhang, W.

    2016-12-01

    On 15 April 2016, a great earthquake of magnitude Mw 7.1 occurred in Kumamoto prefecture, Japan. The focal mechanism solution released by F-net located the hypocenter at 130.7630°E, 32.7545°N, at a depth of 12.45 km, and the strike, dip, and rake angle of the fault were N226°E, 84° and -142°, respectively. The epicenter distribution and focal mechanisms of aftershocks implied that the mechanism of the mainshock might have changed during the source rupture process, so a single focal mechanism was not enough to explain the observed data adequately. In this study, based on the inversion result of GNSS and InSAR surface deformation with active structures for reference, we construct a finite fault model with focal mechanism changes, and derive the source rupture process by a multi-time-window linear waveform inversion method using the strong-motion data (0.05–1.0 Hz) obtained by K-NET and KiK-net of Japan. Our result shows that the Kumamoto earthquake was a right-lateral strike-slip rupture event along the Futagawa-Hinagu fault zone, and that the seismogenic fault is divided into a northern segment and a southern one. The strike and dip of the northern segment are N235°E and 60°, respectively; for the southern one, they are N205°E and 72°. The depth range of the fault model is consistent with the depth distribution of aftershocks, and the slip on the fault plane is concentrated mainly on the northern segment, where the maximum slip is about 7.9 m. The rupture process of the whole fault lasts approximately 18 s, and the total seismic moment released is 5.47 × 10^19 N·m (Mw 7.1). In addition, the essential features of the PGV and PGA distributions synthesized from the inversion result are similar to those of the observed PGA and seismic intensity.

  17. Modeling of heat conduction via fractional derivatives

    Science.gov (United States)

    Fabrizio, Mauro; Giorgi, Claudio; Morro, Angelo

    2017-09-01

    The modeling of heat conduction is considered by letting the time derivative, in the Cattaneo-Maxwell equation, be replaced by a derivative of fractional order. The purpose of this new approach is to overcome some drawbacks of the Cattaneo-Maxwell equation, for instance possible fluctuations which violate the non-negativity of the absolute temperature. Consistency with thermodynamics is shown to hold for a suitable free energy potential, that is in fact a functional of the summed history of the heat flux, subject to a suitable restriction on the set of admissible histories. Compatibility with wave propagation at a finite speed is investigated in connection with temperature-rate waves. It follows that though, as expected, this is the case for the Cattaneo-Maxwell equation, the model involving the fractional derivative does not allow the propagation at a finite speed. Nevertheless, this new model provides a good description of wave-like profiles in thermal propagation phenomena, whereas Fourier's law does not.
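
    To make the fractional-order time derivative concrete, here is the standard L1 finite-difference approximation of a Caputo derivative of order 0 < γ < 1, checked against the known closed form for f(t) = t². This is a generic numerical scheme for fractional derivatives, not code from the paper.

```python
import numpy as np
from math import gamma

def caputo_l1(f_vals, dt, gam):
    """L1 finite-difference approximation of the Caputo derivative of
    order 0 < gam < 1 at each grid point, as used when a time derivative
    (e.g. in a Cattaneo-Maxwell-type equation) is replaced by a
    fractional-order one."""
    n = len(f_vals)
    c = dt ** (-gam) / gamma(2.0 - gam)
    b = (np.arange(1, n) ** (1 - gam)) - (np.arange(0, n - 1) ** (1 - gam))
    out = np.zeros(n)
    df = np.diff(f_vals)
    for i in range(1, n):
        out[i] = c * np.dot(b[:i][::-1], df[:i])   # weight recent steps most
    return out

t = np.linspace(0, 2, 201)
f = t ** 2
for gam in (0.3, 0.7, 0.99):
    d = caputo_l1(f, t[1] - t[0], gam)
    exact = 2 * t ** (2 - gam) / gamma(3 - gam)    # exact Caputo derivative of t^2
    print(f"gamma={gam}: max error {np.abs(d - exact).max():.2e}")
```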

  18. How organic carbon derived from multiple sources contributes to carbon sequestration processes in a shallow coastal system?

    Science.gov (United States)

    Watanabe, Kenta; Kuwae, Tomohiro

    2015-04-16

    Carbon captured by marine organisms helps sequester atmospheric CO2, especially in shallow coastal ecosystems, where rates of primary production and burial of organic carbon (OC) from multiple sources are high. However, linkages between the dynamics of OC derived from multiple sources and carbon sequestration are poorly understood. We investigated the origin (terrestrial, phytobenthos derived, and phytoplankton derived) of particulate OC (POC) and dissolved OC (DOC) in the water column and sedimentary OC using elemental, isotopic, and optical signatures in Furen Lagoon, Japan. Based on these data analysis, we explored how OC from multiple sources contributes to sequestration via storage in sediments, water column sequestration, and air-sea CO2 exchanges, and analyzed how the contributions vary with salinity in a shallow seagrass meadow as well. The relative contribution of terrestrial POC in the water column decreased with increasing salinity, whereas autochthonous POC increased in the salinity range 10-30. Phytoplankton-derived POC dominated the water column POC (65-95%) within this salinity range; however, it was minor in the sediments (3-29%). In contrast, terrestrial and phytobenthos-derived POC were relatively minor contributors in the water column but were major contributors in the sediments (49-78% and 19-36%, respectively), indicating that terrestrial and phytobenthos-derived POC were selectively stored in the sediments. Autochthonous DOC, part of which can contribute to long-term carbon sequestration in the water column, accounted for >25% of the total water column DOC pool in the salinity range 15-30. Autochthonous OC production decreased the concentration of dissolved inorganic carbon in the water column and thereby contributed to atmospheric CO2 uptake, except in the low-salinity zone. Our results indicate that shallow coastal ecosystems function not only as transition zones between land and ocean but also as carbon sequestration filters. They

  19. Rapid Automatic Lighting Control of a Mixed Light Source for Image Acquisition using Derivative Optimum Search Methods

    Directory of Open Access Journals (Sweden)

    Kim HyungTae

    2015-01-01

    Full Text Available Automatic lighting (auto-lighting) is a function that maximizes the image quality of a vision inspection system by adjusting the light intensity and color. In most inspection systems, a single color light source is used, and an equal-step search is employed to determine the maximum image quality. However, when a mixed light source is used, the number of iterations becomes large, and therefore a rapid search method must be applied to reduce their number. Derivative optimum search methods follow the tangential direction of a function and are usually faster than other methods. In this study, multi-dimensional forms of derivative optimum search methods are applied to obtain the maximum image quality considering a mixed light source. The auto-lighting algorithms were derived from the steepest descent and conjugate gradient methods, which have N-size inputs of driving voltage and one output of image quality. Experiments in which the proposed algorithm was applied to semiconductor patterns showed that a reduced number of iterations is required to determine the locally maximized image quality.
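
    A sketch of the multi-dimensional steepest-ascent search described above, with a smooth stand-in for the measured image-quality score (a real system would acquire one image per probe); the optimum location, step size, and channel count are hypothetical.

```python
import numpy as np

def image_quality(v):
    """Surrogate for the measured image-quality score at driving voltages v.
    A real inspection system would trigger an acquisition and return a
    contrast/sharpness metric; here a smooth concave stand-in is used."""
    v_star = np.array([3.2, 1.1, 2.7])           # hypothetical optimum (R, G, B)
    return -np.sum((v - v_star) ** 2)

def auto_light(v0, step=0.2, h=0.05, iters=100, tol=1e-4):
    """Steepest ascent over N light channels, gradient estimated by central
    finite differences (each probe costs one image acquisition)."""
    v = np.asarray(v0, float)
    acquisitions = 0
    for _ in range(iters):
        g = np.zeros_like(v)
        for i in range(v.size):
            e = np.zeros_like(v)
            e[i] = h
            g[i] = (image_quality(v + e) - image_quality(v - e)) / (2 * h)
            acquisitions += 2
        if np.linalg.norm(g) < tol:
            break
        v = v + step * g
    return v, acquisitions

v, n = auto_light([1.0, 1.0, 1.0])
print(f"converged to {np.round(v, 3)} after {n} image acquisitions")
```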

  20. Identifying the source, transport path and sinks of sewage derived organic matter

    International Nuclear Information System (INIS)

    Mudge, Stephen M.; Duce, Caroline E.

    2005-01-01

    Since sewage discharges can significantly contribute to the contaminant loadings in coastal areas, it is important to identify sources, pathways and environmental sinks. Sterol and fatty alcohol biomarkers were quantified in source materials, suspended sediments and settling matter from the Ria Formosa Lagoon. Simple ratios between key biomarkers including 5β-coprostanol, cholesterol and epi-coprostanol were able to identify the sewage sources and affected deposition sites. Multivariate methods (PCA) were used to identify co-varying sites. PLS analysis using the sewage discharge as the signature indicated that ∼25% of the variance in the sites could be predicted by the sewage signature. A new source of sewage-derived organic matter was found with a high sewage-predictable signature. The suspended sediments had relatively low sewage signatures, as the material was diluted with other organic matter from in situ production. From a management viewpoint, PLS provides a useful tool for identifying the pathways and accumulation sites for such contaminants. - Multivariate statistical analysis was used to identify pathways and accumulation sites for contaminants in coastal waters

  1. Integrated source-risk model for radon: A definition study

    International Nuclear Information System (INIS)

    Laheij, G.M.H.; Aldenkamp, F.J.; Stoop, P.

    1993-10-01

    The purpose of a source-risk model is to support policy making on radon mitigation by comparing the effects of various policy options and to enable optimization of countermeasures applied to different parts of the source-risk chain. There are several advantages to developing and using a source-risk model: risk calculations are standardized; the effects of measures applied to different parts of the source-risk chain can be better compared because interactions are included; and sensitivity analyses can be used to determine the most important parameters within the total source-risk chain. After an inventory of processes and sources to be included in the source-risk chain, the models presently available in the Netherlands are investigated. The models were screened for completeness, validation and operational status. The investigation made clear that, by choosing the most convenient model for each part of the source-risk chain, a source-risk chain model for radon may be realized. However, the calculation of dose from the radon concentrations and the status of the validation of most models should be improved. Calculations with the proposed source-risk model will at present give estimates with a large uncertainty. For further development of the source-risk model, an interaction between the source-risk model and experimental research is recommended. Organisational forms of the source-risk model are discussed. A source-risk model in which only simple models are included is also recommended. The other models are operated and administrated by the model owners. The model owners execute their models for a combination of input parameters. The output of the models is stored in a database which will be used for calculations with the source-risk model. 5 figs., 15 tabs., 7 appendices, 14 refs

  2. Characterization and modeling of the heat source

    Energy Technology Data Exchange (ETDEWEB)

    Glickstein, S.S.; Friedman, E.

    1993-10-01

    A description of the input energy source is basic to any numerical modeling formulation designed to predict the outcome of the welding process. The source is fundamental and unique to each joining process. The resultant output of any numerical model will be affected by the initial description of both the magnitude and distribution of the input energy of the heat source. Thus, calculated weld shape, residual stresses, weld distortion, cooling rates, metallurgical structure, material changes due to excessive temperatures and potential weld defects are all influenced by the initial characterization of the heat source. Understandings of both the physics and the mathematical formulation of these sources are essential for describing the input energy distribution. This section provides a brief review of the physical phenomena that influence the input energy distributions and discusses several different models of heat sources that have been used in simulating arc welding, high energy density welding and resistance welding processes. Both simplified and detailed models of the heat source are discussed.
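
    As one example of the detailed heat-source models this section alludes to, the widely used Goldak double-ellipsoid distribution for arc welding has a closed-form power density. The function below implements its front half under assumed semi-axes; the rear half has the same form with rear-half parameters substituted. All numeric values are illustrative.

```python
import numpy as np

def goldak_front(x, y, z, Q=1500.0, a=2e-3, b=2e-3, cf=3e-3, ff=0.6):
    """Front half of the Goldak double-ellipsoid heat source (W/m^3).
    Q: absorbed arc power (W); a, b, cf: ellipsoid semi-axes (m);
    ff: fraction of heat deposited in the front half (ff + fr = 2).
    Coordinates are relative to the moving arc center."""
    coef = 6.0 * np.sqrt(3.0) * ff * Q / (a * b * cf * np.pi ** 1.5)
    return coef * np.exp(-3 * (x / a) ** 2
                         - 3 * (y / b) ** 2
                         - 3 * (z / cf) ** 2)

print(f"peak power density: {goldak_front(0.0, 0.0, 0.0):.3e} W/m^3")
```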

  3. EchoSeed Model 6733 Iodine-125 brachytherapy source: Improved dosimetric characterization using the MCNP5 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)

    2012-08-15

    This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 125I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published 125I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGy h⁻¹ U⁻¹ (±1.73%) and 0.965 cGy h⁻¹ U⁻¹ (±1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within ±4%) than MCNP4c2 and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.
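
    The dosimetric parameters reported above plug into the AAPM TG-43 dose-rate formalism. A minimal sketch with the line-source geometry function follows, using the quoted MCNP5 dose-rate constant and unity stand-ins for the tabulated radial dose and anisotropy functions (real use requires the paper's tables); the active length is an assumed value.

```python
import numpy as np

def G_line(r, theta_deg, L=0.3):
    """TG-43 line-source geometry function G_L(r, theta) in cm^-2;
    L is the active length (cm), r the distance (cm)."""
    th = np.radians(theta_deg)
    t, d = r * np.cos(th), r * np.sin(th)   # along-axis and perpendicular parts
    if np.isclose(d, 0.0):
        return 1.0 / (r**2 - L**2 / 4.0)
    beta = np.arctan2(t + L / 2, d) - np.arctan2(t - L / 2, d)
    return beta / (L * d)

def dose_rate(S_K, r, theta_deg, Lam=0.965, g=lambda r: 1.0,
              F=lambda r, th: 1.0, L=0.3):
    """2D TG-43: D(r,theta) = S_K * Lambda * G/G(1 cm, 90 deg) * g(r) * F.
    Lambda defaults to the MCNP5 value quoted in the abstract; g and F
    default to unity stand-ins for the tabulated functions."""
    return (S_K * Lam * G_line(r, theta_deg, L) / G_line(1.0, 90.0, L)
            * g(r) * F(r, theta_deg))

print(f"{dose_rate(S_K=1.0, r=2.0, theta_deg=90.0):.4f} cGy/h per U at 2 cm")
```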

  4. Gaussian Plume Model Parameters for Ground-Level and Elevated Sources Derived from the Atmospheric Diffusion Equation in the Neutral and Stable Conditions

    International Nuclear Information System (INIS)

    Essa, K.S.M.

    2009-01-01

    The analytical solution of the atmospheric diffusion equation for a point source gives the ground-level concentration profiles. It depends on the wind speed u and the vertical dispersion coefficient σz, both expressed by Pasquill power laws. Both σz and u are functions of downwind distance, stability and source elevation, while for ground-level emission u is constant. In the neutral and stable conditions, the Gaussian plume model and finite-difference numerical methods, with the wind speed following a power law and the vertical dispersion coefficient an exponential law, are evaluated. This work shows that the ground-level concentrations estimated by the Gaussian model for an elevated source and by the numerical finite-difference method agree closely with the observed ground-level concentrations of the Gaussian model
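
    The Gaussian plume expression being evaluated can be written down directly. The sketch below uses a power-law wind speed and illustrative power-law dispersion coefficients (not the author's fitted values) and includes the usual ground-reflection term.

```python
import numpy as np

def plume_conc(Q, y, z, H, u, sigma_y, sigma_z):
    """Gaussian plume concentration (g/m^3) for a continuous point source
    of strength Q (g/s) at effective height H (m), with ground reflection."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + H)**2 / (2 * sigma_z**2)))   # image-source term
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

x = 1000.0                                          # m downwind
sigma_y = 0.08 * x**0.92                            # illustrative power laws
sigma_z = 0.06 * x**0.91
u = 2.5 * 50.0**0.15                                # power-law wind at stack height
c = plume_conc(Q=10.0, y=0.0, z=0.0, H=50.0, u=u,
               sigma_y=sigma_y, sigma_z=sigma_z)
print(f"ground-level centreline concentration at {x:.0f} m: {c:.3e} g/m^3")
```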

  5. Price models for oil derivates in Slovenia

    International Nuclear Information System (INIS)

    Nemac, F.; Saver, A.

    1995-01-01

    In Slovenia, a law is currently applied according to which any change in the price of oil derivatives is subject to Governmental approval. With the goal of moving closer to the European Union, the need has arisen to find ways of introducing liberalization or an automated approach to price adjustment, depending on oscillations of oil derivative prices on the world market and the exchange rate of the American dollar. For this reason, the Agency for Energy Restructuring carried out a study on this issue for the Ministry of Economic Affairs and Development. We analysed possible models for the formation of oil derivative prices in Slovenia. Based on an assessment of the experiences of primarily the west European countries, we proposed three models for price formation in Slovenia. The Government of the Republic of Slovenia is expected to select one of the proposed models, to be followed by enforcement of price liberalization. The paper presents two representative models for price formation as used in Austria and Portugal. The authors then analyse the application of the three models that they find suitable for use in Slovenia. (author)

  6. Discrete-Time Domain Modelling of Voltage Source Inverters in Standalone Applications

    DEFF Research Database (Denmark)

    Federico, de Bosio; de Sousa Ribeiro, Luiz Antonio; Freijedo Fernandez, Francisco Daniel

    2017-01-01

    The decoupling of the capacitor voltage and inductor current has been shown to improve significantly the dynamic performance of voltage source inverters in standalone applications. However, the computation and PWM delays still limit the achievable bandwidth. In this paper a discrete-time domain modelling of the LC plant with consideration of delay and sample-and-hold effects on the state feedback cross-coupling decoupling is derived. From this plant formulation, current controllers with wide bandwidth and good relative stability properties are developed. Two controllers based on lead compensation…
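
    A common way to obtain the discrete-time LC plant referred to above is zero-order-hold discretization via the matrix exponential, with the one-sample computation delay handled by state augmentation. Filter values and sampling rate below are illustrative, not the paper's design.

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time LC-filter plant: states x = [iL, vC], input = inverter
# voltage; the load-current disturbance is omitted for brevity.
Lf, Cf = 2.4e-3, 15e-6                      # illustrative filter values
A = np.array([[0.0, -1.0 / Lf],
              [1.0 / Cf, 0.0]])
B = np.array([[1.0 / Lf], [0.0]])

Ts = 1.0 / 10e3                             # 10 kHz sampling / PWM
# Zero-order-hold discretization via expm of the augmented [[A, B], [0, 0]].
M = expm(np.block([[A, B], [np.zeros((1, 3))]]) * Ts)
Ad, Bd = M[:2, :2], M[:2, 2:]

# One-sample computation delay: augment the state with the previous input,
# so x[k+1] = Ad x[k] + Bd u[k-1].
Aaug = np.block([[Ad, Bd], [np.zeros((1, 3))]])
Baug = np.vstack([np.zeros((2, 1)), [[1.0]]])

# Undamped LC dynamics put the plant eigenvalues on the unit circle.
print("discrete plant eigenvalues:", np.round(np.linalg.eigvals(Ad), 4))
```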

  7. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    Science.gov (United States)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

    Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, challenges include (a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and (b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong. We apply the CASCADE modeling framework (Schmitt et al., 2016), which calculates transport capacities and sediment fluxes for multiple grain sizes on the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to sparse available sedimentary records. Only 1% of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could be coupled to more detailed models of hillslope processes in future to derive integrated models

  8. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    Science.gov (United States)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  9. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA.

    Science.gov (United States)

    Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M

    2017-10-01

    Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  10. Simple model for deriving sdg interacting boson model Hamiltonians: 150Nd example

    Science.gov (United States)

    Devi, Y. D.; Kota, V. K. B.

    1993-07-01

    A simple and yet useful model for deriving sdg interacting boson model (IBM) Hamiltonians is to assume that single-boson energies derive from identical particle (pp and nn) interactions and proton, neutron single-particle energies, and that the two-body matrix elements for bosons derive from pn interaction, with an IBM-2 to IBM-1 projection of the resulting p-n sdg IBM Hamiltonian. The applicability of this model in generating sdg IBM Hamiltonians is demonstrated, using a single-j-shell Otsuka-Arima-Iachello mapping of the quadrupole and hexadecupole operators in proton and neutron spaces separately and constructing a quadrupole-quadrupole plus hexadecupole-hexadecupole Hamiltonian in the analysis of the spectra, B(E2)'s, and E4 strength distribution in the example of 150Nd.

  11. Simple model for deriving sdg interacting boson model Hamiltonians: 150Nd example

    International Nuclear Information System (INIS)

    Devi, Y.D.; Kota, V.K.B.

    1993-01-01

    A simple and yet useful model for deriving sdg interacting boson model (IBM) Hamiltonians is to assume that single-boson energies derive from identical particle (pp and nn) interactions and proton, neutron single-particle energies, and that the two-body matrix elements for bosons derive from pn interaction, with an IBM-2 to IBM-1 projection of the resulting p-n sdg IBM Hamiltonian. The applicability of this model in generating sdg IBM Hamiltonians is demonstrated, using a single-j-shell Otsuka-Arima-Iachello mapping of the quadrupole and hexadecupole operators in proton and neutron spaces separately and constructing a quadrupole-quadrupole plus hexadecupole-hexadecupole Hamiltonian in the analysis of the spectra, B(E2)'s, and E4 strength distribution in the example of 150Nd

  12. A behavioral choice model of the use of car-sharing and ride-sourcing services

    Energy Technology Data Exchange (ETDEWEB)

    Dias, Felipe F.; Lavieri, Patrícia S.; Garikapati, Venu M.; Astroza, Sebastian; Pendyala, Ram M.; Bhat, Chandra R.

    2017-07-26

    There are a number of disruptive mobility services that are increasingly finding their way into the marketplace. Two key examples of such services are car-sharing services and ride-sourcing services. In an effort to better understand the influence of various exogenous socio-economic and demographic variables on the frequency of use of ride-sourcing and car-sharing services, this paper presents a bivariate ordered probit model estimated on a survey data set derived from the 2014-2015 Puget Sound Regional Travel Study. Model estimation results show that users of these services tend to be young, well-educated, higher-income, working individuals residing in higher-density areas. There are significant interaction effects reflecting the influence of children and the built environment on disruptive mobility service usage. The model developed in this paper provides key insights into factors affecting market penetration of these services, and can be integrated in larger travel forecasting model systems to better predict the adoption and use of mobility-on-demand services.
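
    The paper estimates a bivariate ordered probit; the sketch below fits a univariate ordered probit by maximum likelihood on synthetic data to show the likelihood machinery (ordered categories standing in for usage-frequency classes; covariates and coefficients are hypothetical).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([rng.normal(size=n),            # e.g. standardized income
                     rng.integers(0, 2, n)])        # e.g. high-density residence
beta_true = np.array([0.8, 0.6])
ystar = X @ beta_true + rng.normal(size=n)          # latent propensity
cuts_true = np.array([-0.5, 0.7, 1.8])
y = np.digitize(ystar, cuts_true)                   # 0 = never ... 3 = frequent

def negloglik(params):
    beta, c = params[:2], np.sort(params[2:])       # enforce ordered cutpoints
    lo = np.concatenate([[-np.inf], c])[y]
    hi = np.concatenate([c, [np.inf]])[y]
    xb = X @ beta
    p = norm.cdf(hi - xb) - norm.cdf(lo - xb)       # category probabilities
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

res = minimize(negloglik, x0=np.array([0.0, 0.0, -1.0, 0.5, 1.5]), method="BFGS")
print("beta:", np.round(res.x[:2], 3),
      "cutpoints:", np.round(np.sort(res.x[2:]), 3))
```

    The bivariate version adds a second latent equation and a correlation parameter, so the two service-use frequencies are modeled jointly.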

  13. State-Space Modelling of Loudspeakers using Fractional Derivatives

    DEFF Research Database (Denmark)

    King, Alexander Weider; Agerkvist, Finn T.

    2015-01-01

    This work investigates the use of fractional order derivatives in modeling moving-coil loudspeakers. A fractional order state-space solution is developed, leading the way towards incorporating nonlinearities into a fractional order system. The method is used to calculate the response of a fractional harmonic oscillator, representing the mechanical part of a loudspeaker, showing the effect of the fractional derivative and its relationship to viscoelasticity. Finally, a loudspeaker model with a fractional order viscoelastic suspension and fractional order voice coil is fit to measurement data…

  14. Analysis of Drude model using fractional derivatives without singular kernels

    Directory of Open Access Journals (Sweden)

    Jiménez Leonardo Martínez

    2017-11-01

    Full Text Available We report a study exploring the fractional Drude model in the time domain, using fractional derivatives without singular kernels, Caputo-Fabrizio (CF), and fractional derivatives with a stretched Mittag-Leffler function. It is shown that the velocity and current density of electrons moving through a metal depend on both the time and the fractional order 0 < γ ≤ 1. Due to non-singular fractional kernels, it is possible to consider complete memory effects in the model, which appear neither in the ordinary model nor in the fractional Drude model with the Caputo fractional derivative. A comparison is also made between these two representations of the fractional derivatives, resulting in a considerable difference when γ < 0.8.

  15. Large deflection of viscoelastic beams using fractional derivative model

    International Nuclear Information System (INIS)

    Bahranini, Seyed Masoud Sotoodeh; Eghtesad, Mohammad; Ghavanloo, Esmaeal; Farid, Mehrdad

    2013-01-01

    This paper deals with large deflection of viscoelastic beams using a fractional derivative model. For this purpose, a nonlinear finite element formulation of viscoelastic beams in conjunction with fractional derivative constitutive equations has been developed. The four-parameter fractional derivative model has been used to describe the constitutive equations. The deflected configuration for a uniform beam with different boundary conditions and loads is presented. The effect of the order of the fractional derivative on the large deflection of the cantilever viscoelastic beam is investigated after 10, 100, and 1000 hours. The main contribution of this paper is a finite element implementation for nonlinear analysis of the fractional viscoelastic model using the storage of both strain and stress histories. The validity of the present analysis is confirmed by comparing the results with those found in the literature.

  16. Balmorel open source energy system model

    DEFF Research Database (Denmark)

    Wiese, Frauke; Bramstoft, Rasmus; Koduvere, Hardi

    2018-01-01

    As the world progresses towards a cleaner energy future with more variable renewable energy sources, energy system models are required to deal with new challenges. This article describes the design, development and applications of the open source energy system model Balmorel, which is the result of a long and fruitful cooperation between public and private institutions within energy system research and analysis. The purpose of the article is to explain the modelling approach, to highlight strengths and challenges of the chosen approach, to create awareness about the possible applications of Balmorel as well as to inspire new model developments and encourage new users to join the community. Some of the key strengths of the model are the flexible handling of the time and space dimensions and the combination of operation and investment optimisation. Its open source character enables diverse…

  17. Evaluation of the influence of source and spatial resolution of DEMs on derivative products used in landslide mapping

    Directory of Open Access Journals (Sweden)

    Rubini Mahalingam

    2016-11-01

    Full Text Available Landslides are a major geohazard, which result in significant human, infrastructure, and economic losses. Landslide susceptibility mapping can help communities plan and prepare for these damaging events. Digital elevation models (DEMs) are one of the most important data-sets used in landslide hazard assessment. Despite their frequent use, limited research has been completed to date on how the DEM source and spatial resolution can influence the accuracy of the produced landslide susceptibility maps. The aim of this paper is to analyse the influence of the spatial resolution and source of DEMs on landslide susceptibility mapping. For this purpose, Advanced Spaceborne Thermal Emission and Reflection (ASTER), National Elevation Dataset (NED), and Light Detection and Ranging (LiDAR) DEMs were obtained for two study sections of approximately 140 km2 in north-west Oregon. Each DEM was resampled to 10, 30, and 50 m, and slope and aspect grids were derived for each resolution. A set of nine spatial databases was constructed using geoinformation science (GIS) for each spatial resolution and source. Additional factors such as distance-to-river and fault maps were included. An analytical hierarchical process (AHP), a fuzzy logic model, and a likelihood ratio-AHP, representing qualitative, quantitative, and hybrid landslide mapping techniques, were used for generating landslide susceptibility maps. The results from each of the techniques were verified with the Cohen's kappa index, a confusion matrix, and a validation index based on agreement with detailed landslide inventory maps. The 10 m resolution derived from the LiDAR data-set showed higher predictive accuracy in all three techniques used for producing landslide susceptibility maps. At a resolution of 10 m, the output maps based on NED and ASTER had higher misclassification compared to the LiDAR-based outputs. Further, the 30-m LiDAR output showed improved results over the 10-m NED and 10-m
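
    The AHP step used above reduces to extracting the principal eigenvector of a pairwise-comparison matrix and checking its consistency ratio; the factor set and judgments below are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix (Saaty 1-9 scale) for four
# landslide factors: slope, aspect, distance-to-river, distance-to-fault.
A = np.array([[1.0, 3.0, 5.0, 5.0],
              [1/3, 1.0, 3.0, 3.0],
              [1/5, 1/3, 1.0, 2.0],
              [1/5, 1/3, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # AHP factor weights

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)          # consistency index
RI = 0.90                                     # Saaty random index for n = 4
print("weights:", np.round(w, 3), f"CR = {CI / RI:.3f} (should be < 0.1)")
```

    The susceptibility map is then the weighted sum of the reclassified factor rasters using these weights.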

  18. The SSI TOOLBOX Source Term Model SOSIM - Screening for important radionuclides and parameter sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Avila Moreno, R.; Barrdahl, R.; Haegg, C.

    1995-05-01

    The main objective of the present study was to carry out a screening and a sensitivity analysis of the SSI TOOLBOX source term model SOSIM. This model is part of the SSI TOOLBOX for radiological impact assessment of the Swedish disposal concept for high-level waste, KBS-3. The outputs of interest for this purpose were: the total released fraction, the time of total release, the time and value of the maximum release rate, and the dose rates after direct releases to the biosphere. The source term equations were derived, and simple equations and methods were proposed for their calculation. A literature survey was performed in order to determine a characteristic variation range and a nominal value for each model parameter. In order to reduce the model uncertainties, the authors recommend a change in the initial boundary condition for the solution of the diffusion equation for highly soluble nuclides. 13 refs.

  19. Assessment of source-receptor relationships of aerosols: An integrated forward and backward modeling approach

    Science.gov (United States)

    Kulkarni, Sarika

    This dissertation presents a scientific framework that facilitates enhanced understanding of aerosol source-receptor (S/R) relationships and their impact on local, regional and global air quality by employing a complementary suite of modeling methods. The receptor-oriented Positive Matrix Factorization (PMF) technique is combined with the Potential Source Contribution Function (PSCF), a trajectory ensemble model, to characterize sources influencing the aerosols measured at Gosan, Korea during spring 2001. It is found that episodic dust events originating from desert regions in East Asia (EA), which mix with pollution along the transit path, have a significant and pervasive impact on the air quality of Gosan. The intercontinental and hemispheric transport of aerosols is analyzed by a series of emission perturbation simulations with the Sulfur Transport and dEposition Model (STEM), a regional scale Chemical Transport Model (CTM), evaluated with observations from the 2008 NASA ARCTAS field campaign. This modeling study shows that pollution transport from regions outside North America (NA) contributed ∼30% and 20% to NA surface sulfate and BC concentrations, respectively. This study also identifies aerosols transported from the Europe, NA and EA regions as significant contributors to springtime Arctic sulfate and BC. Trajectory ensemble models are combined with source-region-tagged tracer model output to identify the source regions and possible instances of quasi-lagrangian sampled air masses during the 2006 NASA INTEX-B field campaign. The impact of specific emission sectors from Asia during the INTEX-B period is studied with the STEM model, identifying the residential sector as a potential target for emission reduction to combat global warming. The output from the STEM model, constrained with satellite-derived aerosol optical depth and ground-based measurements of single scattering albedo via an optimal interpolation assimilation scheme, is combined with the PMF technique to

  20. Average stopping powers for electron and photon sources for radiobiological modeling and microdosimetric applications

    Science.gov (United States)

    Vassiliev, Oleg N.; Kry, Stephen F.; Grosshans, David R.; Mohan, Radhe

    2018-03-01

    This study concerns calculation of the average electronic stopping power for photon and electron sources. It addresses two problems that have not yet been fully resolved. The first is defining the electron spectrum used for averaging in a way that is most suitable for radiobiological modeling. We define it as the spectrum of electrons entering the radiation-sensitive volume (SV) within the cell nucleus, at the moment they enter the SV. For this spectrum we derive a formula that combines linearly the fluence spectrum and the source spectrum. The latter is the distribution of initial energies of electrons produced by a source. Previous studies used either the fluence or source spectra, but not both, thereby neglecting a part of the complete spectrum. Our derived formula reduces to these two prior methods in the cases of high- and low-energy sources, respectively. The second problem is extending electron spectra to low energies. Previous studies used an energy cut-off on the order of 1 keV. However, as we show, even for high-energy sources such as 60Co, electrons with energies below 1 keV contribute about 30% of the dose. In this study all the spectra were calculated with the Geant4-DNA code and a cut-off energy of only 11 eV. We present formulas for calculating frequency- and dose-average stopping powers, numerical results for several important electron and photon sources, and tables with all the data needed to use our formulas for arbitrary electron and photon sources producing electrons with initial energies up to ∼1 MeV.
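
    The averaging itself is a weighted integral over the combined spectrum. The sketch below uses stand-in spectra, a toy stopping power, and a placeholder combination weight w; the paper derives the actual linear-combination coefficients, which are not reproduced here.

```python
import numpy as np

E = np.linspace(0.011, 300.0, 2000)     # keV grid; 11 eV cut-off as in the study
dE = E[1] - E[0]
phi = E * np.exp(-E / 50.0)             # stand-in fluence spectrum
src = np.exp(-E / 100.0)                # stand-in source spectrum
S = 1.0 + 100.0 / E                     # stand-in stopping power (arbitrary units)

w = 0.5                                 # placeholder weight, not the derived one
combined = (w * phi / (phi.sum() * dE)
            + (1 - w) * src / (src.sum() * dE))   # normalized linear combination

S_freq = (combined * S).sum() / combined.sum()            # frequency average
S_dose = (combined * S * S).sum() / (combined * S).sum()  # dose average
print(f"frequency-average S = {S_freq:.2f}, dose-average S = {S_dose:.2f}")
```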

  1. Turbulence modeling with fractional derivatives: Derivation from first principles and initial results

    Science.gov (United States)

    Epps, Brenden; Cushman-Roisin, Benoit

    2017-11-01

    Fluid turbulence is an outstanding unsolved problem in classical physics, despite 120+ years of sustained effort. Given this history, we assert that a new mathematical framework is needed to make a transformative breakthrough. This talk offers one such framework, based upon kinetic theory tied to the statistics of turbulent transport. Starting from the Boltzmann equation and "Lévy α-stable distributions", we derive a turbulence model that expresses the turbulent stresses in the form of a fractional derivative, where the fractional order is tied to the transport behavior of the flow. Initial results are presented herein for the cases of Couette-Poiseuille flow and 2D boundary layers. Among other results, our model is able to reproduce the logarithmic Law of the Wall in shear turbulence.

  2. Path spectra derived from inversion of source and site spectra for earthquakes in Southern California

    Science.gov (United States)

    Klimasewski, A.; Sahakian, V. J.; Baltay, A.; Boatwright, J.; Fletcher, J. B.; Baker, L. M.

    2017-12-01

    A large source of epistemic uncertainty in Ground Motion Prediction Equations (GMPEs) derives from the path term, currently represented as a simple geometric spreading and intrinsic attenuation term. Including additional physical relationships between path properties and predicted ground motions would produce more accurate and precise, region-specific GMPEs by reclassifying some of the random, aleatory uncertainty as epistemic. This study focuses on regions of Southern California, using data from the Anza network and the Southern California Seismic Network to create a catalog of events of magnitude 2.5 and larger from 1998 to 2016. The catalog encompasses regions of varying geology and therefore varying path and site attenuation. Within this catalog of events, we investigate several collections of event-region-to-station pairs, each of which share similar origin locations and stations so that all events have similar paths. Compared with a simple regional GMPE, these paths consistently have high or low residuals. By working with events that have the same path, we can isolate source and site effects and focus on the remaining residual as path effects. We decompose the recordings into source and site spectra for each unique event and site in our greater Southern California regional database using the inversion method of Andrews (1986). This model represents each natural-log record spectrum as the sum of its natural-log event and site spectra, while constraining each record to a reference site or Brune source spectrum. We estimate a regional, path-specific anelastic attenuation (Q) and site attenuation (t*) from the inversion site spectra, and corner frequency from the inversion event spectra. We then compute the residuals between the observed record data and the inversion model prediction (event*site spectra). This residual is representative of path effects, likely anelastic attenuation along the path that varies from the regional median attenuation. We examine the
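
    The Andrews (1986) decomposition is, per frequency bin, a linear least-squares problem with a reference-site constraint. A synthetic one-bin sketch (event and site counts, noise level, and reference choice are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n_ev, n_st = 6, 4
true_ev = rng.normal(2.0, 0.5, n_ev)       # log event spectra at this frequency
true_st = rng.normal(0.0, 0.3, n_st)       # log site spectra
true_st[0] = 0.0                           # station 0 is the reference site

rows, data = [], []
for e in range(n_ev):
    for s in range(n_st):
        r = np.zeros(n_ev + n_st)
        r[e] = 1.0                         # log record = log event + log site
        r[n_ev + s] = 1.0
        rows.append(r)
        data.append(true_ev[e] + true_st[s] + rng.normal(0, 0.05))

# Constraint row: reference site term fixed to zero (in log units).
rows.append(np.eye(n_ev + n_st)[n_ev])
data.append(0.0)

G, d = np.array(rows), np.array(data)
m, *_ = np.linalg.lstsq(G, d, rcond=None)
print("event terms:", np.round(m[:n_ev], 2))
print("site terms:", np.round(m[n_ev:], 2))
```

    Repeating this per frequency bin yields the full event and site spectra; the misfit of each record to event*site is the path residual discussed above.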

  3. Vulnerable Derivatives and Good Deal Bounds: A Structural Model

    DEFF Research Database (Denmark)

    Murgoci, Agatha

    2013-01-01

    We price vulnerable derivatives -- i.e. derivatives where the counterparty may default. These are basically the derivatives traded on the over-the-counter (OTC) markets. Default is modeled in a structural framework. The technique employed for pricing is good deal bounds (GDBs). The method imposes...

  4. Comparison of the landslide susceptibility models in Taipei Water Source Domain, Taiwan

    Science.gov (United States)

    WU, C. Y.; Yeh, Y. C.; Chou, T. H.

    2017-12-01

    Taipei Water Source Domain, located to the southeast of the Taipei Metropolis, is the main source of water in this region. Recently, downstream turbidity has often soared significantly during typhoon periods because of upstream landslides. Landslide susceptibilities should be analysed to assess the zones influenced by different rainfall events, and to ensure the ability of this domain to supply sufficient, high-quality water. Generally, landslide susceptibility models can be established based on either a long-term landslide inventory or a specified landslide event. Sometimes there is no long-term landslide inventory for an area, so event-based landslide susceptibility models are established widely. However, the inventory-based and event-based landslide susceptibility models may result in dissimilar susceptibility maps for the same area. The purposes of this study were therefore to compare the landslide susceptibility maps derived from inventory-based and event-based models, and to interpret how to select a representative event to be included in the susceptibility model. The landslide inventory from Typhoon Tim in July 1994 and Typhoon Soudelor in August 2015 was collected and used to establish the inventory-based landslide susceptibility model. The landslides caused by Typhoon Nari and rainfall data were used to establish the event-based model. The results indicated that the high-susceptibility slope units were located in the middle and upstream Nan-Shih Stream basin.

  5. Derivation and characterization of human fetal MSCs: an alternative cell source for large-scale production of cardioprotective microparticles.

    Science.gov (United States)

    Lai, Ruenn Chai; Arslan, Fatih; Tan, Soon Sim; Tan, Betty; Choo, Andre; Lee, May May; Chen, Tian Sheng; Teh, Bao Ju; Eng, John Kun Long; Sidik, Harwin; Tanavde, Vivek; Hwang, Wei Sek; Lee, Chuen Neng; El Oakley, Reida Menshawe; Pasterkamp, Gerard; de Kleijn, Dominique P V; Tan, Kok Hian; Lim, Sai Kiang

    2010-06-01

    The therapeutic effects of mesenchymal stem cell (MSC) transplantation are increasingly thought to be mediated by MSC secretion. We have previously demonstrated that human ESC-derived MSCs (hESC-MSCs) produce cardioprotective microparticles in a pig model of myocardial ischemia/reperfusion (MI/R) injury. As the safety and availability of clinical grade human ESCs remain a concern, MSCs from fetal tissue sources were evaluated as alternatives. Here we derived five MSC cultures from limb, kidney and liver tissues of three first-trimester aborted fetuses and, like our previously described hESC-derived MSCs, they were highly expandable and had similar telomerase activities. Each line has the potential to generate at least 10^16-10^19 cells, or 10^7-10^10 doses of cardioprotective secretion for a pig model of MI/R injury. Unlike previously described fetal MSCs, they did not express pluripotency-associated markers such as Oct4, Nanog or Tra1-60. They displayed a typical MSC surface antigen profile and differentiated into adipocytes, osteocytes and chondrocytes in vitro. Global gene expression analysis by microarray and qRT-PCR revealed a typical MSC gene expression profile that was highly correlated among the five fetal MSC cultures and with that of hESC-MSCs (r^2 > 0.90). Like hESC-MSCs, they produced secretion that was cardioprotective in a mouse model of MI/R injury. HPLC analysis of the secretion revealed the presence of a population of microparticles with a hydrodynamic radius of 50-65 nm. This purified population of microparticles was cardioprotective at approximately 1/10 the dosage of the crude secretion.

  6. Tissue Source and Cell Expansion Condition Influence Phenotypic Changes of Adipose-Derived Stem Cells

    Science.gov (United States)

    Mangum, Lauren H.; Stone, Randolph; Wrice, Nicole L.; Larson, David A.; Florell, Kyle F.; Christy, Barbara A.; Herzig, Maryanne C.; Cap, Andrew P.

    2017-01-01

    Stem cells derived from the subcutaneous adipose tissue of debrided burned skin represent an appealing source of adipose-derived stem cells (ASCs) for regenerative medicine. Traditional tissue culture uses fetal bovine serum (FBS), which complicates utilization of ASCs in human medicine. Human platelet lysate (hPL) is one potential xeno-free, alternative supplement for use in ASC culture. In this study, adipogenic and osteogenic differentiation in media supplemented with 10% FBS or 10% hPL was compared in human ASCs derived from abdominoplasty (HAP) or from adipose associated with debrided burned skin (BH). Most (95–99%) cells cultured in FBS were stained positive for CD73, CD90, CD105, and CD142. FBS supplementation was associated with increased triglyceride content and expression of adipogenic genes. Culture in hPL significantly decreased surface staining of CD105 by 31% and 48% and CD142 by 27% and 35% in HAP and BH, respectively (p < 0.05). Culture of BH-ASCs in hPL also increased expression of markers of osteogenesis and increased ALP activity. These data indicate that application of ASCs for wound healing may be influenced by ASC source as well as culture conditions used to expand them. As such, these factors must be taken into consideration before ASCs are used for regenerative purposes. PMID:29138638

  7. Tissue Source and Cell Expansion Condition Influence Phenotypic Changes of Adipose-Derived Stem Cells

    Directory of Open Access Journals (Sweden)

    Lauren H. Mangum

    2017-01-01

    Full Text Available Stem cells derived from the subcutaneous adipose tissue of debrided burned skin represent an appealing source of adipose-derived stem cells (ASCs) for regenerative medicine. Traditional tissue culture uses fetal bovine serum (FBS), which complicates utilization of ASCs in human medicine. Human platelet lysate (hPL) is one potential xeno-free, alternative supplement for use in ASC culture. In this study, adipogenic and osteogenic differentiation in media supplemented with 10% FBS or 10% hPL was compared in human ASCs derived from abdominoplasty (HAP) or from adipose associated with debrided burned skin (BH). Most (95–99%) cells cultured in FBS stained positive for CD73, CD90, CD105, and CD142. FBS supplementation was associated with increased triglyceride content and expression of adipogenic genes. Culture in hPL significantly decreased surface staining of CD105 by 31% and 48% and CD142 by 27% and 35% in HAP and BH, respectively (p < 0.05). Culture of BH-ASCs in hPL also increased expression of markers of osteogenesis and increased ALP activity. These data indicate that application of ASCs for wound healing may be influenced by ASC source as well as the culture conditions used to expand them. As such, these factors must be taken into consideration before ASCs are used for regenerative purposes.

  8. Induced pluripotent stem cells (iPSCs) derived from different cell sources and their potential for regenerative and personalized medicine.

    Science.gov (United States)

    Shtrichman, R; Germanguz, I; Itskovitz-Eldor, J

    2013-06-01

    Human induced pluripotent stem cells (hiPSCs) have great potential as a robust source of progenitors for regenerative medicine. The novel technology also enables the derivation of patient-specific cells for applications to personalized medicine, such as for personal drug screening and toxicology. However, the biological characteristics of iPSCs are not yet fully understood and their similarity to human embryonic stem cells (hESCs) is still unresolved. Variations among iPSCs, resulting from their original tissue or cell source, and from the experimental protocols used for their derivation, significantly affect epigenetic properties and differentiation potential. Here we review the potential of iPSCs for regenerative and personalized medicine, and assess their expression pattern, epigenetic memory and differentiation capabilities in relation to their parental tissue source. We also summarize the patient-specific iPSCs that have been derived for applications in biological research and drug discovery; and review risks that must be overcome in order to use iPSC technology for clinical applications.

  9. Constraints on equivalent elastic source models from near-source data

    International Nuclear Information System (INIS)

    Stump, B.

    1993-01-01

    A phenomenologically based seismic source model is important in quantifying the important physical processes that affect the observed seismic radiation in the linear-elastic regime. Representations such as these were used to assess yield effects on seismic waves under a Threshold Test Ban Treaty and to help transfer seismic coupling experience from one test site to another. In a non-proliferation environment, these same characterizations find applications in understanding the generation of the different types of body and surface waves from nuclear explosions, single chemical explosions, arrays of chemical explosions used in mining, rock bursts and earthquakes. Seismologists typically begin with an equivalent elastic representation of the source which, when convolved with the propagation path effects, produces a seismogram. The Representation Theorem replaces the true source with an equivalent set of body forces, boundary conditions or initial conditions. An extension of this representation shows the equivalence of the body forces, boundary conditions and initial conditions, and replaces the source with a set of force moments, the first-degree moment tensor for a point source representation. The difficulty with this formulation, which can completely describe the observed waveforms when the propagation path effects are known, is in the physical interpretation of the actual physical processes acting in the source volume. Observational data from within the source region, where processes are often nonlinear, linked to numerical models of the important physical processes in this region, are critical to a unique physical understanding of the equivalent elastic source function

  10. Urban nonpoint source pollution buildup and washoff models for simulating storm runoff quality in the Los Angeles County.

    Science.gov (United States)

    Wang, Long; Wei, Jiahua; Huang, Yuefei; Wang, Guangqian; Maqsood, Imran

    2011-07-01

    Many urban nonpoint source pollution models utilize pollutant buildup and washoff functions to simulate storm runoff quality of urban catchments. In this paper, two urban pollutant washoff load models are derived using pollutant buildup and washoff functions. The first model assumes that there is no residual pollutant after a storm event while the second one assumes that there is always residual pollutant after each storm event. The developed models are calibrated and verified with observed data from an urban catchment in the Los Angeles County. The application results show that the developed model with consideration of residual pollutant is more capable of simulating nonpoint source pollution from urban storm runoff than that without consideration of residual pollutant. For the study area, residual pollutant should be considered in pollutant buildup and washoff functions for simulating urban nonpoint source pollution when the total runoff volume is less than 30 mm.
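
    Exponential buildup/washoff functions of the kind the paper derives can be sketched directly, with the residual pollutant carried between storms. Parameter values below are illustrative, not the calibrated Los Angeles County values.

```python
import numpy as np

def buildup(B0, Bmax, k_b, days):
    """Exponential pollutant buildup over dry days, starting from the
    residual load B0 and saturating at Bmax (kg/ha)."""
    return Bmax - (Bmax - B0) * np.exp(-k_b * days)

def washoff(B, k_w, runoff_mm):
    """Exponential washoff: load removed by a storm of given runoff depth;
    returns (washoff load, residual left on the catchment)."""
    load = B * (1.0 - np.exp(-k_w * runoff_mm))
    return load, B - load

B = 0.0                                   # start with a clean catchment
for dry_days, runoff in [(5, 12.0), (10, 35.0), (3, 8.0)]:
    B = buildup(B, Bmax=1.2, k_b=0.2, days=dry_days)
    load, B = washoff(B, k_w=0.18, runoff_mm=runoff)
    print(f"storm with {runoff:4.1f} mm runoff -> washoff {load:.3f} kg/ha, "
          f"residual {B:.3f} kg/ha")
```

    Setting the residual to zero after each storm recovers the paper's first model; carrying it forward, as here, is the second.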

  11. Double point source W-phase inversion: Real-time implementation and automated model selection

    Science.gov (United States)

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever-evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique which can be efficiently implemented in real time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single-source inversion followed by a double point source inversion, with centroid locations fixed at the single-source solution location, can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed, with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model, and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
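
    The model-selection step can be illustrated with the standard least-squares form of the AIC; the sample size, misfits, and parameter counts below are hypothetical, not values from the study. The double-source model is preferred only if its extra parameters buy a sufficiently large misfit reduction.

```python
import numpy as np

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit:
    n data points, k free parameters, rss = residual sum of squares."""
    return n * np.log(rss / n) + 2 * k

# Hypothetical misfits from single- and double-source W-phase inversions
n = 4000                          # number of waveform samples in the inversion
rss_single, k_single = 8.2, 6     # one point source (illustrative values)
rss_double, k_double = 6.9, 12    # two point sources

aic1 = aic(rss_single, n, k_single)
aic2 = aic(rss_double, n, k_double)
best = "double" if aic2 < aic1 else "single"
print(f"AIC single={aic1:.1f}, double={aic2:.1f} -> prefer {best}-source model")
```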

  12. Systems biology derived source-sink mechanism of BMP gradient formation.

    Science.gov (United States)

    Zinski, Joseph; Bu, Ye; Wang, Xu; Dou, Wei; Umulis, David; Mullins, Mary C

    2017-08-09

    A morphogen gradient of Bone Morphogenetic Protein (BMP) signaling patterns the dorsoventral embryonic axis of vertebrates and invertebrates. The prevailing view in vertebrates for BMP gradient formation is through a counter-gradient of BMP antagonists, often along with ligand shuttling to generate peak signaling levels. To delineate the mechanism in zebrafish, we precisely quantified the BMP activity gradient in wild-type and mutant embryos and combined these data with a mathematical model-based computational screen to test hypotheses for gradient formation. Our analysis ruled out a BMP shuttling mechanism and a bmp transcriptionally-informed gradient mechanism. Surprisingly, rather than supporting a counter-gradient mechanism, our analyses support a fourth model, a source-sink mechanism, which relies on a restricted BMP antagonist distribution acting as a sink that drives BMP flux dorsally and gradient formation. We measured Bmp2 diffusion and found that it supports the source-sink model, suggesting a new mechanism to shape BMP gradients during development.
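
    A minimal one-dimensional reaction-diffusion sketch of a source-sink mechanism is given below: BMP is produced in a restricted ventral region and degraded only where the antagonist sink sits dorsally, which drives a dorsally directed flux and a stable gradient. The grid size, rates, and diffusivity are illustrative placeholders, not the measured zebrafish values.

```python
import numpy as np

# Ventral (x=0) to dorsal (x=L) axis, explicit finite differences
nx, L = 101, 700.0                 # grid points, axis length (um)
dx = L / (nx - 1)
D = 4.4                            # BMP diffusivity (um^2/s), placeholder
prod = 1e-3                        # production rate in the ventral source region
k_sink = 1e-3                      # degradation rate where the antagonist acts (1/s)

x = np.linspace(0.0, L, nx)
source = np.where(x < 0.2 * L, prod, 0.0)     # restricted ventral BMP source
sink = np.where(x > 0.8 * L, k_sink, 0.0)     # restricted dorsal antagonist sink
c = np.zeros(nx)
dt = 0.2 * dx**2 / D                          # stable explicit time step
for _ in range(200_000):
    lap = np.empty(nx)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    lap[0] = 2.0 * (c[1] - c[0]) / dx**2      # no-flux boundaries
    lap[-1] = 2.0 * (c[-2] - c[-1]) / dx**2
    c += dt * (D * lap + source - sink * c)
print("ventral-to-dorsal BMP ratio:", c[0] / max(c[-1], 1e-12))
```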

  13. Hamiltonian derivation of the nonhydrostatic pressure-coordinate model

    Science.gov (United States)

    Salmon, Rick; Smith, Leslie M.

    1994-07-01

    In 1989, the Miller-Pearce (MP) model for nonhydrostatic fluid motion governed by equations written in pressure coordinates was extended by removing the prescribed reference temperature, T_s(p), while retaining the conservation laws and other desirable properties. It was speculated that this extension of the MP model had a Hamiltonian structure and that a slick derivation of the Ertel property could be constructed if the relevant Hamiltonian were known. In this note, the extended equations are derived using Hamilton's principle. The potential vorticity law arises from the usual particle-relabeling symmetry of the Lagrangian, and even the absence of sound waves is anticipated from the fact that the pressure inside the free energy G(p, theta) in the derived equation is hydrostatic and thus G is insensitive to local pressure fluctuations. The model extension is analogous to the semigeostrophic equations for nearly geostrophic flow, which do not incorporate a prescribed reference state, while the earlier MP model is analogous to the quasigeostrophic equations, which become highly inaccurate when the flow wanders from a prescribed state with nearly flat isothermal surfaces.

  14. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Bone marrow-derived versus parenchymal sources of inducible nitric oxide synthase in experimental autoimmune encephalomyelitis

    DEFF Research Database (Denmark)

    Zehntner, Simone P; Bourbonniere, Lyne; Hassan-Zahraee, Mina

    2004-01-01

    These discrepancies may reflect a balance between immunoregulatory and neurocytopathologic roles for NO. We investigated the selective effects of bone marrow-derived versus CNS parenchymal sources of iNOS in EAE in chimeric mice. Chimeras that selectively expressed or ablated iNOS in leukocytes both showed significant...

  16. Modeling neurodegenerative diseases with patient-derived induced pluripotent cells

    DEFF Research Database (Denmark)

    Poon, Anna; Zhang, Yu; Chandrasekaran, Abinaya

    2017-01-01

    patient-specific induced pluripotent stem cells (iPSCs) and isogenic controls generated using CRISPR-Cas9-mediated genome editing. The iPSCs are self-renewable and capable of being differentiated into the cell types affected by the diseases. These in vitro models based on patient-derived iPSCs provide...... the possibilities of generating three-dimensional (3D) models using the iPSC-derived cells and compare their advantages and disadvantages to conventional two-dimensional (2D) models....

  17. Deriving simulators for hybrid Chi models

    NARCIS (Netherlands)

    Beek, van D.A.; Man, K.L.; Reniers, M.A.; Rooda, J.E.; Schiffelers, R.R.H.

    2006-01-01

    The hybrid Chi language is a formalism for the modeling, simulation and verification of hybrid systems. The formal semantics of hybrid Chi allows the definition of provably correct implementations for simulation, verification and real-time control. This paper discusses the principles of deriving an

  18. Remarks on the microscopic derivation of the collective model

    International Nuclear Information System (INIS)

    Toyoda, T.; Wildermuth, K.

    1984-01-01

    The rotational part of the phenomenological collective model of Bohr and Mottelson and others is derived microscopically, starting with the Schrödinger equation written in projection form and introducing a new set of 'relative Euler angles'. In order to derive the local Schrödinger equation of the collective model, it is assumed that the intrinsic wave functions give strong peaking properties to the overlapping kernels.

  19. Modelling Choice of Information Sources

    Directory of Open Access Journals (Sweden)

    Agha Faisal Habib Pathan

    2013-04-01

    Full Text Available This paper addresses the significance of traveller information sources, including mono-modal and multimodal websites, for travel decisions. The research follows a decision paradigm developed earlier, involving an information acquisition process for travel choices, and identifies the abstract characteristics of new information sources that deserve further investigation (e.g. by incorporating these in models and studying their significance in model estimation). A Stated Preference experiment is developed and the utility functions are formulated by expanding the travellers' choice set to include different combinations of sources of information. In order to study the underlying choice mechanisms, the resulting variables are examined in models based on different behavioural strategies, including utility maximisation and minimising the regret associated with the foregone alternatives. This research confirmed that RRM (Random Regret Minimisation) theory can fruitfully be used and can provide important insights for behavioural studies. The study also analyses the properties of travel planning websites and establishes a link between travel choices and the content, provenance, design, presence of advertisements, and presentation of information. The results indicate that travellers give particular credence to government-owned sources and put more importance on their own previous experiences than on any other single source of information. Information from multimodal websites is more influential than that on train-only websites, which in turn is more influential than information from friends, while information from coach-only websites is the least influential. A website with less search time, specific information on users' own criteria, and real-time information is regarded as most attractive.
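
    As a concrete contrast to utility maximisation, random-regret choice probabilities can be sketched as below, following the classic RRM formulation in which each alternative is penalized by the regret accumulated against every foregone alternative on every attribute. The attributes and taste parameters are invented for illustration and are not estimates from this study.

```python
import numpy as np

# Alternatives described by two attributes: search time (s), info quality score
X = np.array([[30.0, 2.0],
              [20.0, 1.0],
              [40.0, 3.0]])
beta = np.array([-0.05, 0.8])   # hypothetical taste parameters per attribute

def regret(X, beta):
    """R_i = sum over j != i and attributes m of ln(1 + exp(beta_m*(x_jm - x_im)))."""
    n = len(X)
    R = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j != i:
                R[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))
    return R

R = regret(X, beta)
p = np.exp(-R) / np.exp(-R).sum()   # choose to minimize anticipated regret
print(p.round(3))
```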

  20. Hamiltonian derivation of a gyrofluid model for collisionless magnetic reconnection

    International Nuclear Information System (INIS)

    Tassi, E

    2014-01-01

    We consider a simple electromagnetic gyrokinetic model for collisionless plasmas and show that it possesses a Hamiltonian structure. Subsequently, from this model we derive a two-moment gyrofluid model by means of a procedure which guarantees that the resulting gyrofluid model is also Hamiltonian. The first step in the derivation consists of imposing a generic fluid closure in the Poisson bracket of the gyrokinetic model, after expressing such bracket in terms of the gyrofluid moments. The constraint of the Jacobi identity, which every Poisson bracket has to satisfy, then selects which closures can lead to a Hamiltonian gyrofluid system. For the case at hand, it turns out that the only closures (not involving integro-differential operators or an explicit dependence on the spatial coordinates) that lead to a valid Poisson bracket are those for which the second-order parallel moment, independently for each species, is proportional to the zero-order moment. In particular, if one chooses an isothermal closure based on the equilibrium temperatures and derives accordingly the Hamiltonian of the system from the Hamiltonian of the parent gyrokinetic model, one recovers a known Hamiltonian gyrofluid model for collisionless reconnection. The proposed procedure, in addition to yielding a gyrofluid model which automatically conserves the total energy, also provides, through the resulting Poisson bracket, a way to derive further conservation laws of the gyrofluid model, associated with the so-called Casimir invariants. We show that a relation exists between the Casimir invariants of the gyrofluid model and those of the gyrokinetic parent model. The application of this Hamiltonian derivation procedure to the two-moment gyrofluid model is a first step toward its application to more realistic, higher-order fluid or gyrofluid models for tokamaks. It also extends to the electromagnetic gyrokinetic case recent applications of the same procedure to Vlasov and drift-kinetic systems.

  1. 26 CFR 1.863-8 - Source of income derived from space and ocean activity under section 863(d).

    Science.gov (United States)

    2010-04-01

    ..., DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Regulations Applicable to Taxable... from sources without the United States to the extent the income, based on all the facts and... income derived by a CFC is income from sources without the United States to the extent the income, based...

  2. Delineating sources of groundwater recharge in an arsenic-affected Holocene aquifer in Cambodia using stable isotope-based mixing models

    Science.gov (United States)

    Richards, Laura A.; Magnone, Daniel; Boyce, Adrian J.; Casanueva-Marenco, Maria J.; van Dongen, Bart E.; Ballentine, Christopher J.; Polya, David A.

    2018-02-01

    Chronic exposure to arsenic (As) through the consumption of contaminated groundwaters is a major threat to public health in South and Southeast Asia. The source of As-affected groundwaters is important to the fundamental understanding of the controls on As mobilization and subsequent transport throughout shallow aquifers. Using the stable isotopes of hydrogen and oxygen, the source of groundwater and the interactions between various water bodies were investigated in Cambodia's Kandal Province, an area which is heavily affected by As and typical of many circum-Himalayan shallow aquifers. Two-point mixing models based on δD and δ18O allowed the relative extent of evaporation of groundwater sources to be estimated and various water bodies to be broadly distinguished within the aquifer system. Model limitations are discussed, including the spatial and temporal variation in end-member compositions. The conservative tracer Cl/Br is used to further discriminate between groundwater bodies. The stable isotopic signatures of groundwaters containing high As and/or high dissolved organic carbon plot both near the local meteoric water line and near more evaporative lines. The varying degrees of evaporation of high-As groundwater sources are indicative of differing recharge contributions (and thus indirectly inferred associated organic matter contributions). The presence of high-As groundwaters with recharge derived from both local precipitation and relatively evaporated surface water sources, such as ponds or flooded wetlands, is consistent with (but does not provide direct evidence for) models of a potential dual role of surface-derived and sedimentary organic matter in As mobilization.
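
    The arithmetic of a two-point mixing model is simple enough to state directly: for a conservative tracer, the fraction of one end member follows from linear interpolation between the two end-member signatures. The δ18O values below are illustrative, not the Kandal Province end members.

```python
def mixing_fraction(delta_sample, delta_end1, delta_end2):
    """Fraction of end member 1 in a two-point conservative mixing model,
    e.g. using d18O or dD of a groundwater sample vs. two recharge end members."""
    return (delta_sample - delta_end2) / (delta_end1 - delta_end2)

# Hypothetical end members: local precipitation vs. evaporated pond water
d18o_precip, d18o_pond = -7.0, -2.0      # per mil VSMOW (illustrative)
f_precip = mixing_fraction(-4.5, d18o_precip, d18o_pond)
print(f"precipitation-derived fraction: {f_precip:.0%}")   # -> 50%
```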

  3. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    Science.gov (United States)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of a groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source, in terms of its source characteristics, is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at an observation well and the time at which the source becomes active is not known. We developed a linked ANN-optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-optimization model are the source location and the release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs, and the lag time is obtained as the output. The performance of the proposed model is evaluated for the two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentrations.
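
    A stripped-down sketch of the optimization half of such a linked model is shown below, with a 1D analytical plume standing in for the groundwater transport simulation and the lag time absorbed into the observation times (in the paper, a trained ANN supplies the unknown lag). All parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def simulated_conc(params, obs_xt):
    """Hypothetical forward model: 1D advection-dispersion concentration at
    (x, t) from an instantaneous point source of mass m released at x0."""
    x0, m = params
    D, v = 1.0, 0.5                    # assumed dispersion coeff. and velocity
    x, t = obs_xt[:, 0], obs_xt[:, 1]
    return m / np.sqrt(4 * np.pi * D * t) * np.exp(-(x - x0 - v * t) ** 2 / (4 * D * t))

def objective(params, obs_xt, c_obs):
    """Least-squares misfit between observed and simulated concentrations."""
    return np.sum((c_obs - simulated_conc(params, obs_xt)) ** 2)

# Synthetic observations from a "true" source at x0 = 2.0 with mass m = 5.0
obs = np.array([[x, t] for x in (4.0, 6.0, 8.0) for t in (5.0, 10.0, 20.0)])
c_obs = simulated_conc((2.0, 5.0), obs)
fit = minimize(objective, x0=(0.0, 1.0), args=(obs, c_obs), method="Nelder-Mead")
print("recovered source location and mass:", fit.x.round(2))
```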

  4. Experimental validation of a kilovoltage x-ray source model for computing imaging dose

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Yannick, E-mail: yannick.poirier@cancercare.mb.ca [CancerCare Manitoba, 675 McDermot Ave, Winnipeg, Manitoba R3E 0V9 (Canada); Kouznetsov, Alexei; Koger, Brandon [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4 (Canada); Tambasco, Mauro, E-mail: mtambasco@mail.sdsu.edu [Department of Physics, San Diego State University, San Diego, California 92182-1233 and Department of Physics and Astronomy and Department of Oncology, University of Calgary, Calgary, Alberta T2N 1N4 (Canada)

    2014-04-15

    Purpose: To introduce and validate a kilovoltage (kV) x-ray source model and characterization method to compute absorbed dose accrued from kV x-rays. Methods: The authors propose a simplified virtual point source model and characterization method for a kV x-ray source. The source is modeled by: (1) characterizing the spatial spectral and fluence distributions of the photons at a plane at the isocenter, and (2) creating a virtual point source from which photons are generated to yield the derived spatial spectral and fluence distribution at the isocenter of an imaging system. The spatial photon distribution is determined by in-air relative dose measurements along the transverse (x) and radial (y) directions. The spectrum is characterized using transverse-axis half-value layer measurements and the nominal peak potential (kVp). This source modeling approach is used to characterize a Varian® on-board imager (OBI®) for four default cone-beam CT beam qualities: beams using a half bowtie filter (HBT) with 110 and 125 kVp, and a full bowtie filter (FBT) with 100 and 125 kVp. The source model and characterization method were validated by comparing dose computed by the authors' in-house software (kVDoseCalc) to relative dose measurements in a homogeneous and a heterogeneous block phantom comprised of tissue-, bone-, and lung-equivalent materials. Results: The characterized beam qualities and spatial photon distributions are comparable to reported values in the literature. Agreement between computed and measured percent depth-dose curves is ⩽2% in the homogeneous block phantom and ⩽2.5% in the heterogeneous block phantom. Transverse-axis profiles taken at depths of 2 and 6 cm in the homogeneous block phantom show an agreement within 4%. All transverse-axis dose profiles in water, in bone, and in lung-equivalent materials for beams using a HBT have an agreement within 5%. Measured profiles of FBT beams in bone and lung-equivalent materials were higher than their
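
    The role of the HVL measurement in fixing the spectrum can be illustrated numerically: given a candidate spectrum and aluminum attenuation coefficients, the first HVL is the filter thickness that halves the kerma-weighted transmission, so the spectrum's free parameters can be tuned until the computed HVL matches the measured one. The three-bin spectrum and coefficients below are crude placeholders.

```python
import numpy as np
from scipy.optimize import brentq

def transmission(t_mm_al, energies, fluence, mu_al_mm):
    """Kerma-weighted transmission of a spectrum through t mm of aluminum
    (weighting by energy*fluence is a rough stand-in for air kerma)."""
    w = fluence * energies
    return np.sum(w * np.exp(-mu_al_mm * t_mm_al)) / np.sum(w)

# Hypothetical 3-bin surrogate spectrum for a ~100 kVp beam
energies = np.array([40.0, 60.0, 80.0])      # keV
fluence = np.array([0.5, 0.3, 0.2])          # relative photon fluence
mu_al_mm = np.array([0.14, 0.08, 0.06])      # Al attenuation (1/mm), illustrative

# First half-value layer: thickness reducing the weighted signal to 50%
hvl = brentq(lambda t: transmission(t, energies, fluence, mu_al_mm) - 0.5, 0.0, 50.0)
print(f"HVL ~ {hvl:.1f} mm Al")
```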

  5. Development of High-Resolution Dynamic Dust Source Function - A Case Study with a Strong Dust Storm in a Regional Model

    Science.gov (United States)

    Kim, Dongchul; Chin, Mian; Kemp, Eric M.; Tao, Zhining; Peters-Lidard, Christa D.; Ginoux, Paul

    2017-01-01

    A high-resolution dynamic dust source has been developed in the NASA Unified-Weather Research and Forecasting (NU-WRF) model to improve the existing coarse static dust source. In the new dust source map, topographic depression is at 1-km resolution and surface bareness is derived using Normalized Difference Vegetation Index (NDVI) data from the Moderate Resolution Imaging Spectroradiometer (MODIS). The new dust source better resolves the complex topographic distribution over the Western United States, where its magnitude is higher than the existing, coarser-resolution static source. A case study is conducted with an extreme dust storm that occurred in Phoenix, Arizona in 02-03 UTC July 6, 2011. The NU-WRF model with the new high-resolution dynamic dust source is able to successfully capture the dust storm, which was not achieved with the old source identification. However, the case study also reveals several challenges in reproducing the time evolution of the short-lived, extreme dust storm events.

  6. Development of High-Resolution Dynamic Dust Source Function -A Case Study with a Strong Dust Storm in a Regional Model.

    Science.gov (United States)

    Kim, Dongchul; Chin, Mian; Kemp, Eric M; Tao, Zhining; Peters-Lidard, Christa D; Ginoux, Paul

    2017-06-01

    A high-resolution dynamic dust source has been developed in the NASA Unified-Weather Research and Forecasting (NU-WRF) model to improve the existing coarse static dust source. In the new dust source map, topographic depression is at 1-km resolution and surface bareness is derived using Normalized Difference Vegetation Index (NDVI) data from the Moderate Resolution Imaging Spectroradiometer (MODIS). The new dust source better resolves the complex topographic distribution over the Western United States, where its magnitude is higher than the existing, coarser-resolution static source. A case study is conducted with an extreme dust storm that occurred in Phoenix, Arizona in 02-03 UTC July 6, 2011. The NU-WRF model with the new high-resolution dynamic dust source is able to successfully capture the dust storm, which was not achieved with the old source identification. However, the case study also reveals several challenges in reproducing the time evolution of the short-lived, extreme dust storm events.
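
    The construction described in both records can be sketched as a gridded product of a topographic depression factor and an NDVI-derived bareness mask; the depression exponent and NDVI threshold below follow the commonly used Ginoux-style formulation and are assumptions, not the NU-WRF implementation's exact values.

```python
import numpy as np

def dust_source_strength(elevation, ndvi, bareness_thresh=0.15):
    """Hypothetical Ginoux-style dust source function: a topographic depression
    factor times a bareness mask derived from MODIS NDVI."""
    zmax, zmin = np.nanmax(elevation), np.nanmin(elevation)
    depression = ((zmax - elevation) / max(zmax - zmin, 1e-9)) ** 5   # favors basins
    bare = ndvi < bareness_thresh                                     # sparsely vegetated
    return depression * bare

# Toy 3x3 tile: a depression in the center, vegetation along the east edge
elev = np.array([[900., 850., 800.], [880., 700., 790.], [910., 860., 820.]])
ndvi = np.array([[0.10, 0.12, 0.30], [0.08, 0.05, 0.28], [0.11, 0.09, 0.31]])
print(dust_source_strength(elev, ndvi).round(2))
```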

  7. Infrapatellar Fat Pad: An Alternative Source of Adipose-Derived Mesenchymal Stem Cells

    Directory of Open Access Journals (Sweden)

    P. Tangchitphisut

    2016-01-01

    Full Text Available Introduction. The infrapatellar fat pad (IPFP) represents an emerging alternative source of adipose-derived mesenchymal stem cells (ASCs). We compared the characteristics and differentiation capacity of ASCs isolated from IPFP and SC. Materials and Methods. ASCs were harvested from either IPFP or SC. IPFPs were collected from patients undergoing total knee arthroplasty (TKA), whereas subcutaneous tissues were collected from patients undergoing lipoaspiration. Immunophenotypes of surface antigens were evaluated. Their ability to form colony-forming units (CFUs) and their differentiation potential were determined. The ASC karyotype was evaluated. Results. There was no difference in the number and size of CFUs between IPFP and SC sources. ASCs isolated from both sources had a normal karyotype. The expression of mesenchymal stem cell (MSC) markers on flow cytometry was equivalent. IPFP-ASCs demonstrated significantly higher expression of SOX-9 and RUNX-2 over ASCs isolated from SC (6.19 ± 5.56- vs. 0.47 ± 0.62-fold, p value = 0.047, and 17.33 ± 10.80- vs. 1.56 ± 1.31-fold, p value = 0.030, resp.). Discussion and Conclusion. The CFU assay of IPFP-ASCs and SC-ASCs harvested by the lipoaspiration technique was equivalent. The expression of key chondrogenic and osteogenic genes was increased in cells isolated from IPFP. IPFP should be considered a high-quality alternative source of ASCs.

  8. Modeling, analysis, and design of stationary reference frame droop controlled parallel three-phase voltage source inverters

    DEFF Research Database (Denmark)

    Vasquez, Juan Carlos; Guerrero, Josep M.; Savaghebi, Mehdi

    2011-01-01

    Power electronics based microgrids consist of a number of voltage source inverters (VSIs) operating in parallel. In this paper, the modeling, control design, and stability analysis of three-phase VSIs are derived. The proposed voltage and current inner control loops and the mathematical models...... and discussed. Experimental results are provided to validate the performance and robustness of the VSIs' functionality during islanded and grid-connected operations, allowing a seamless transition between these modes through control hierarchies by regulating frequency and voltage, and main-grid interactivity......

  9. The Arbitrage Pricing Model: A Pedagogic Derivation and a Spreadsheet-Based Illustration

    Directory of Open Access Journals (Sweden)

    Clarence C. Y. Kwan

    2016-05-01

    Full Text Available This paper derives, from a pedagogic perspective, the Arbitrage Pricing Model, which is an important asset pricing model in modern finance. The derivation is based on the idea that, if a self-financed investment has no risk exposures, the payoff from the investment can only be zero. Microsoft Excel plays an important pedagogic role in this paper. The Excel illustration not only helps students recognize more fully the various nuances in the model derivation, but also serves as a good starting point for students to explore on their own the relevance of the noise issue in the model derivation.

  10. A stochastic post-processing method for solar irradiance forecasts derived from NWPs models

    Science.gov (United States)

    Lara-Fanego, V.; Pozo-Vazquez, D.; Ruiz-Arias, J. A.; Santos-Alamillos, F. J.; Tovar-Pescador, J.

    2010-09-01

    Solar irradiance forecasting is an important area of research for the future of solar-based renewable energy systems. Numerical Weather Prediction models (NWPs) have proved to be a valuable tool for solar irradiance forecasting with lead times of up to a few days. Nevertheless, these models show low skill in forecasting the solar irradiance under cloudy conditions. Additionally, climatic (averaged over seasons) aerosol loadings are usually considered in these models, leading to considerable errors in the Direct Normal Irradiance (DNI) forecasts during high aerosol load conditions. In this work we propose a post-processing method for the Global Horizontal Irradiance (GHI) and DNI forecasts derived from NWPs. Particularly, the method is based on the use of Autoregressive Moving Average with External Explanatory Variables (ARMAX) stochastic models. These models are applied to the residuals of the NWP forecasts and use as external variables the measured cloud fraction and aerosol loading of the day previous to the forecast. The method is evaluated on a one-month set of three-day-ahead forecasts of the GHI and DNI, obtained with the WRF mesoscale atmospheric model, for several locations in Andalusia (Southern Spain). The cloud fraction is derived from MSG satellite estimates and the aerosol loading from MODIS platform estimates. Both sources of information are readily available at the time of the forecast. Results showed a considerable improvement in the forecasting skill of the WRF model using the proposed post-processing method. Particularly, the relative improvement (in terms of the RMSE) for the DNI during summer is about 20%. A similar value is obtained for the GHI during the winter.
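
    The post-processing idea — fit a stochastic model to the NWP residuals with cloud fraction and aerosol load as exogenous regressors, then subtract the predicted residual from the raw forecast — can be sketched with statsmodels' state-space implementation (SARIMAX with exogenous terms is an ARMAX when the seasonal and differencing orders are zero). The synthetic data and model order below are illustrative.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-ins: NWP GHI forecast residuals driven by the previous day's
# measured cloud fraction and aerosol optical depth
cloud = rng.uniform(0.0, 1.0, n)
aod = rng.uniform(0.0, 0.6, n)
resid = 80.0 * cloud + 40.0 * aod + rng.normal(0.0, 15.0, n)

exog = np.column_stack([cloud, aod])
model = SARIMAX(resid, exog=exog, order=(1, 0, 1)).fit(disp=False)  # ARMAX(1,1)

# Corrected forecast = raw NWP forecast minus the predicted residual
next_exog = np.array([[0.3, 0.2]])          # tomorrow's cloud fraction, aerosol
bias = model.forecast(steps=1, exog=next_exog)
print("predicted bias to subtract (W/m^2):", float(bias[0]))
```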

  11. Source characterization and dynamic fault modeling of induced seismicity

    Science.gov (United States)

    Lui, S. K. Y.; Young, R. P.

    2017-12-01

    In recent years there have been increasing concerns worldwide that industrial activities in the subsurface can cause or trigger damaging earthquakes. In order to effectively mitigate the damaging effects of induced seismicity, the key is to better understand the source physics of induced earthquakes, which still remains elusive at present. Furthermore, an improved understanding of induced earthquake physics is pivotal to assessing large-magnitude earthquake triggering. A better quantification of the possible causes of induced earthquakes can be achieved through numerical simulations. The fault model used in this study is governed by the empirically derived rate-and-state friction laws, featuring a velocity-weakening (VW) patch embedded in a large velocity-strengthening (VS) region. Outside of that, the fault slips at the background loading rate. The model is fully dynamic, with all wave effects resolved, and is able to resolve the spontaneous long-term slip history on a fault segment at all stages of seismic cycles. An earlier study using this model established that aseismic slip plays a major role in the triggering of small repeating earthquakes. This study presents a series of cases with earthquakes occurring on faults with different frictional properties and fluid-induced stress perturbations. The effects on both the overall seismicity rate and the fault slip behavior are investigated, and the causal relationship between the pre-slip pattern prior to the event and the induced source characteristics is discussed. Based on the simulation results, the subsequent step is to select specific cases for laboratory experiments, which allow well-controlled variables and fault parameters. Ultimately, the aim is to provide better constraints on important parameters for induced earthquakes based on numerical modeling and laboratory data, and hence to contribute to a physics-based induced earthquake hazard assessment.
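
    The empirically derived rate-and-state laws the model is built on can be made concrete with a velocity-step sketch using the aging form of the state evolution; the friction parameters below are generic laboratory-scale values, not those of the study's fault model. With b > a the patch is velocity-weakening, the condition for the embedded seismogenic patch.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rate-and-state friction with the aging law (parameter values illustrative):
# mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),  d(theta)/dt = 1 - V*theta/Dc
a, b, Dc, mu0, V0 = 0.010, 0.015, 1e-4, 0.6, 1e-6   # b > a: velocity weakening

def theta_dot(t, theta, V):
    return 1.0 - V * theta / Dc

# Response of the state variable to a step in slip speed from V0 to 10*V0
V_fast = 10 * V0
sol = solve_ivp(theta_dot, (0.0, 600.0), [Dc / V0], args=(V_fast,), max_step=1.0)
theta_ss = Dc / V_fast                              # steady state at the new speed
mu_ss = mu0 + (a - b) * np.log(V_fast / V0)         # steady-state friction drops
print(f"theta -> {sol.y[0, -1]:.2e} (theory {theta_ss:.2e}), mu_ss = {mu_ss:.4f}")
```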

  12. Wavefield dependency on virtual shifts in the source location

    KAUST Repository

    Alkhalifah, Tariq

    2011-01-01

    shape) to lateral perturbations in the source location depends explicitly on lateral derivatives of the velocity field. For velocity models that include lateral velocity discontinuities this is problematic as such derivatives in their classical

  13. Computerized dosimetry of I-125 sources model 6711

    International Nuclear Information System (INIS)

    Isturiz, J.

    2001-01-01

    It covers: the physical presentation of the sources; radiation protection; the mathematical model of the I-125 source model 6711; the data considered by the calculation program; experimental verification of the dose distribution; exposure rate and apparent activity; the techniques for the use of the I-125 sources; and the treatment planning systems [es]

  14. Comparison of human adipose-derived stem cells and bone marrow-derived stem cells in a myocardial infarction model

    DEFF Research Database (Denmark)

    Rasmussen, Jeppe; Frøbert, Ole; Holst-Hansen, Claus

    2014-01-01

    Background: Treatment of myocardial infarction with bone marrow-derived mesenchymal stem cells and, recently, also adipose-derived stem cells has shown promising results. In contrast to clinical trials and their use of autologous bone marrow-derived cells from the ischemic patient, the animal myocardial infarction models often use young donors and young, often immune-compromised, recipient animals. Our objective was to compare bone marrow-derived mesenchymal stem cells with adipose-derived stem cells from an elderly ischemic patient in the treatment of myocardial infarction, using a fully grown non-immune-compromised rat model. Methods: Mesenchymal stem cells were isolated from adipose tissue and bone marrow and compared with respect to surface markers and proliferative capability. To compare the regenerative potential of the two stem cell populations, male Sprague-Dawley rats were...

  15. Patient-derived xenograft models to improve targeted therapy in epithelial ovarian cancer treatment

    Directory of Open Access Journals (Sweden)

    Clare eScott

    2013-12-01

    Full Text Available Despite increasing evidence that precision therapy targeted to the molecular drivers of a cancer has the potential to improve clinical outcomes, high-grade epithelial ovarian cancer patients are currently treated without consideration of molecular phenotype, and predictive biomarkers that could better inform treatment remain unknown. Delivery of precision therapy requires improved integration of laboratory-based models and cutting-edge clinical research, with pre-clinical models predicting patient subsets that will benefit from a particular targeted therapeutic. Patient-derived xenografts (PDX) are renewable tumor models engrafted in mice, generated from fresh human tumors without prior in vitro exposure. PDX models allow an invaluable assessment of tumor evolution and adaptive response to therapy. PDX models have been applied to preclinical drug testing and biomarker identification in a number of cancers including ovarian, pancreatic, breast and prostate cancers. These models have been shown to be biologically stable and to accurately reflect the patient tumor with regards to histopathology, gene expression, genetic mutations and therapeutic response. However, pre-clinical analyses of molecularly annotated PDX models derived from high-grade serous ovarian cancer (HG-SOC) remain limited. In vivo response to conventional and/or targeted therapeutics has only been described for very small numbers of individual HG-SOC PDX, in conjunction with sparse molecular annotation and patient outcome data. Recently, two consecutive panels of epithelial ovarian cancer PDX correlated in vivo platinum response with molecular aberrations and source patient clinical outcomes. These studies underpin the value of PDX models to better direct chemotherapy and predict response to targeted therapy. Tumor heterogeneity, before and following treatment, as well as the importance of multiple molecular aberrations per individual tumor, underscore some of the important issues.

  16. Wavefield dependency on virtual shifts in the source location

    KAUST Repository

    Alkhalifah, Tariq

    2011-02-14

    The wavefield dependence on a virtual shift in the source location can provide information helpful in velocity estimation and interpolation. However, the second-order partial differential equation (PDE) that relates changes in the wavefield form (or shape) to lateral perturbations in the source location depends explicitly on lateral derivatives of the velocity field. For velocity models that include lateral velocity discontinuities this is problematic, as such derivatives in their classical definition do not exist. As a result, I derive perturbation partial differential wave equations that are independent of direct velocity derivatives and thus provide possibilities for wavefield shape extrapolation in complex media. These PDEs have the same structure as the wave equation, with a source function that depends on the background (original source) wavefield. The solutions of the perturbation equations provide the coefficients of a Taylor's-series-type expansion for the wavefield. The new formulas introduce changes to the background wavefield only in the presence of lateral velocity variation or, in general terms, velocity variations in the perturbation direction. The accuracy of the representation, as demonstrated on the Marmousi model, is generally good. © 2011 European Association of Geoscientists & Engineers.
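
    Schematically, the structure described here can be written as follows (a sketch; the paper's exact operators and notation differ):

```latex
% With L the wave operator, the background field u_0 and the first
% perturbation field u_1 satisfy equations of the same structure,
% where the source g of u_1 depends only on the background wavefield:
\[
  L\,u_0 = f, \qquad L\,u_1 = g(u_0),
\]
% and the wavefield for a source shifted by \Delta s is approximated by the
% Taylor-type series whose coefficients the perturbation equations supply:
\[
  u(\mathbf{x}, t;\, s_0 + \Delta s) \approx
  u_0 + \Delta s\, u_1 + \tfrac{1}{2}\,\Delta s^2\, u_2 + \cdots
\]
```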

  17. Using Dual Isotopes and a Bayesian Isotope Mixing Model to Evaluate Nitrate Sources of Surface Water in a Drinking Water Source Watershed, East China

    Directory of Open Access Journals (Sweden)

    Meng Wang

    2016-08-01

    Full Text Available A high concentration of nitrate (NO3−) in surface water threatens aquatic systems and human health. Revealing nitrate characteristics and identifying its sources are fundamental to making effective water management strategies. However, nitrate sources in multi-tributary and mixed land use watersheds remain unclear. In this study, based on 20 surface water sampling sites monitored for more than two years, from April 2012 to December 2014, water chemistry and dual isotopic approaches (δ15N-NO3− and δ18O-NO3−) were integrated for the first time to evaluate nitrate characteristics and sources in the Huashan watershed, Jianghuai hilly region, China. Nitrate-nitrogen concentrations (ranging from 0.02 to 8.57 mg/L) were spatially heterogeneous, influenced by hydrogeological and land use conditions. Proportional contributions of five potential nitrate sources (i.e., precipitation; manure and sewage, M & S; soil nitrogen, NS; nitrate fertilizer; nitrate derived from ammonia fertilizer and rainfall) were estimated by using a Bayesian isotope mixing model. The results showed that nitrate source contributions varied significantly among different rainfall conditions and land use types. For the whole watershed, M & S (manure and sewage) and NS (soil nitrogen) were the major nitrate sources in both wet and dry seasons (from 28% to 36% for manure and sewage and from 24% to 27% for soil nitrogen, respectively). Overall, combining the dual isotope method with a Bayesian isotope mixing model offered a useful and practical way to qualitatively analyze nitrate sources and transformations as well as quantitatively estimate the contributions of potential nitrate sources in drinking water source watersheds of the Jianghuai hilly region, eastern China.
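
    The proportional-contribution estimate can be sketched as a small importance sampler over source fractions: draw fractions from a flat Dirichlet prior, weight them by a Gaussian likelihood of the observed dual-isotope signature, and average. The source signatures, mixture value, and error scale below are invented for illustration; published tools such as SIAR/MixSIAR additionally handle fractionation and concentration dependence.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical (d15N, d18O) signatures of three nitrate sources, per mil
sources = {"M&S": (12.0, 2.0), "soil N": (5.0, 4.0), "fertilizer": (0.0, 22.0)}
names = list(sources)
S = np.array([sources[k] for k in names])        # (n_sources, 2)
mixture = np.array([7.5, 5.0])                   # measured mixture signature
sigma = 1.0                                      # assumed measurement/process sd

# Flat Dirichlet prior over proportions, Gaussian likelihood of the mixture
f = rng.dirichlet(np.ones(len(names)), size=200_000)   # (N, n_sources)
pred = f @ S                                           # predicted mixture signatures
loglik = -0.5 * np.sum(((pred - mixture) / sigma) ** 2, axis=1)
w = np.exp(loglik - loglik.max())
posterior_mean = (f * w[:, None]).sum(axis=0) / w.sum()
for name, p in zip(names, posterior_mean):
    print(f"{name}: {p:.0%}")
```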

  18. Nuisance Source Population Modeling for Radiation Detection System Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P; Lange, D; Nelson, K; Wheeler, R

    2009-10-05

    A major challenge facing the prospective deployment of radiation detection systems for homeland security applications is the discrimination of radiological or nuclear 'threat sources' from radioactive, but benign, 'nuisance sources'. Common examples of such nuisance sources include naturally occurring radioactive material (NORM), medical patients who have received radioactive drugs for either diagnostics or treatment, and industrial sources. A sensitive detector that cannot distinguish between 'threat' and 'benign' classes will generate false positives which, if sufficiently frequent, will preclude it from being operationally deployed. In this report, we describe a first-principles physics-based modeling approach that is used to approximate the physical properties and corresponding gamma ray spectral signatures of real nuisance sources. Specific models are proposed for the three nuisance source classes - NORM, medical and industrial. The models can be validated against measured data - that is, energy spectra generated with the model can be compared to actual nuisance source data. We show by example how this is done for NORM and medical sources, using data sets obtained from spectroscopic detector deployments for cargo container screening and urban area traffic screening, respectively. In addition to capturing the range of radioactive signatures of individual nuisance sources, a nuisance source population model must generate sources with a frequency of occurrence consistent with that found in actual movement of goods and people. Measured radiation detection data can indicate these frequencies, but, at present, such data are available only for a very limited set of locations and time periods. In this report, we make more general estimates of frequencies for NORM and medical sources using a range of data sources such as shipping manifests and medical treatment statistics. We also identify potential data sources for industrial

  19. Fast temperature optimization of multi-source hyperthermia applicators with reduced-order modeling of 'virtual sources'

    International Nuclear Information System (INIS)

    Cheng, K-S; Stakhursky, Vadim; Craciunescu, Oana I; Stauffer, Paul; Dewhirst, Mark; Das, Shiva K

    2008-01-01

    The goal of this work is to build the foundation for facilitating real-time magnetic resonance image guided patient treatment for heating systems with a large number of physical sources (e.g. antennas). Achieving this goal requires knowledge of how the temperature distribution will be affected by changing each source individually, which requires time expenditure on the order of the square of the number of sources. To reduce computation time, we propose a model reduction approach that combines a smaller number of predefined source configurations (fewer than the number of actual sources) that are most likely to heat tumor. The source configurations consist of magnitude and phase source excitation values for each actual source and may be computed from a CT scan based plan or a simplified generic model of the corresponding patient anatomy. Each pre-calculated source configuration is considered a 'virtual source'. We assume that the actual best source settings can be represented effectively as weighted combinations of the virtual sources. In the context of optimization, each source configuration is treated equivalently to one physical source. This model reduction approach is tested on a patient upper-leg tumor model (with and without temperature-dependent perfusion), heated using a 140 MHz ten-antenna cylindrical mini-annular phased array. Numerical simulations demonstrate that using only a few pre-defined source configurations can achieve temperature distributions that are comparable to those from full optimizations using all physical sources. The method yields close to optimal temperature distributions when using source configurations determined from a simplified model of the tumor, even when tumor position is erroneously assumed to be ∼2.0 cm away from the actual position as often happens in practical clinical application of pre-treatment planning. The method also appears to be robust under conditions of changing, nonlinear, temperature-dependent perfusion. The
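
    Because the electric fields of the physical antennas superpose linearly, each predefined source configuration ("virtual source") has a field that is a fixed complex-weighted sum of the antenna fields, and the optimizer then searches only over weights on the few virtual sources. The sketch below uses random numbers in place of precomputed EM solutions and is purely illustrative of the dimensionality reduction.

```python
import numpy as np

rng = np.random.default_rng(2)
n_antennas, n_voxels, n_virtual = 10, 5000, 3

# Complex E-field of each physical antenna at every voxel (stands in for a
# precomputed electromagnetic solve)
E = rng.standard_normal((n_antennas, n_voxels)) + 1j * rng.standard_normal((n_antennas, n_voxels))

# Each virtual source is a fixed complex excitation (magnitude and phase per
# antenna), e.g. a configuration pre-optimized to heat the tumor
V = rng.standard_normal((n_virtual, n_antennas)) + 1j * rng.standard_normal((n_virtual, n_antennas))
E_virtual = V @ E            # fields of the 3 virtual sources, by superposition

def power_deposition(weights):
    """Deposited power per voxel for a weighted combination of virtual sources;
    the optimizer now searches over 3 complex weights instead of 10."""
    field = weights @ E_virtual
    return np.abs(field) ** 2

print(power_deposition(np.array([1.0, 0.5j, 0.2])).shape)    # (5000,)
```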

  20. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    Science.gov (United States)

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design facilitates these sources to share the free parameters of the filter shape and be related to each other through the photon interaction in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4
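
    The primary/secondary split described here — primary photons attenuated along their path through the flattening filter, secondary photons fed by the attenuated decrement — can be sketched per energy bin as below; the spectrum, attenuation coefficients, and path length are placeholders, not the TrueBeam model parameters.

```python
import numpy as np

def split_spectrum(fluence0, mu_filter, path_length_cm):
    """Split bremsstrahlung fluence into a primary component (attenuated by the
    flattening filter) and a secondary component (the removed portion, which
    feeds a scatter source)."""
    primary = fluence0 * np.exp(-mu_filter * path_length_cm)
    secondary = fluence0 - primary        # photons that interacted in the filter
    return primary, secondary

# Hypothetical 3-bin spectrum and filter properties for one off-axis ray
fluence0 = np.array([1.0, 0.8, 0.4])      # relative bremsstrahlung fluence
mu_filter = np.array([0.50, 0.30, 0.22])  # filter attenuation (1/cm), illustrative
t = 1.8                                   # filter path length along this ray (cm)
primary, secondary = split_spectrum(fluence0, mu_filter, t)
print("primary:", primary.round(3), " secondary weight:", secondary.sum().round(3))
```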

  1. A fractal derivative constitutive model for three stages in granite creep

    Directory of Open Access Journals (Sweden)

    R. Wang

    Full Text Available In this paper, by replacing the Newtonian dashpot with a fractal dashpot and considering the damage effect, a new constitutive model is proposed in terms of the time fractal derivative to describe the full creep regions of granite. The analytic solutions of the fractal derivative creep constitutive equation are derived via scaling transform. Conventional triaxial compression creep tests are performed on an MTS 815 rock mechanics test system to verify the efficiency of the new model. The granite specimen is taken from the Beishan site, the most promising candidate area for China's high-level radioactive waste repository. It is shown that the proposed fractal model can characterize the creep behavior of granite, especially in the accelerating stage, which the classical models cannot predict. A parametric sensitivity analysis is also conducted to investigate the effects of the model parameters on the creep strain of granite. Keywords: Beishan granite, Fractal derivative, Damage evolution, Scaling transformation

  2. Unique effects and moderators of effects of sources on self-efficacy: A model-based meta-analysis.

    Science.gov (United States)

    Byars-Winston, Angela; Diestelmann, Jacob; Savoy, Julia N; Hoyt, William T

    2017-11-01

    Self-efficacy beliefs are strong predictors of academic pursuits, performance, and persistence, and in theory are developed and maintained by 4 classes of experiences Bandura (1986) referred to as sources: performance accomplishments (PA), vicarious learning (VL), social persuasion (SP), and affective arousal (AA). The effects of sources on self-efficacy vary by performance domain and individual difference factors. In this meta-analysis (k = 61 studies of academic self-efficacy; N = 8,965), we employed B. J. Becker's (2009) model-based approach to examine cumulative effects of the sources as a set and unique effects of each source, controlling for the others. Following Becker's recommendations, we used available data to create a correlation matrix for the 4 sources and self-efficacy, then used these meta-analytically derived correlations to test our path model. We further examined moderation of these associations by subject area (STEM vs. non-STEM), grade, sex, and ethnicity. PA showed by far the strongest unique association with self-efficacy beliefs. Subject area was a significant moderator, with sources collectively predicting self-efficacy more strongly in non-STEM (k = 14) compared with STEM (k = 47) subjects (R2 = .37 and .22, respectively). Within studies of STEM subjects, grade level was a significant moderator of the coefficients in our path model, as were 2 continuous study characteristics (percent non-White and percent female). Practical implications of the findings and future research directions are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
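
    Becker's model-based step — inverting a meta-analytically assembled correlation matrix to obtain each source's unique (partial) effect — reduces to standardized OLS on correlations. The matrix below is invented for illustration and is not the synthesized matrix from this meta-analysis.

```python
import numpy as np

# Hypothetical correlations among the four sources (PA, VL, SP, AA) and
# between each source and self-efficacy
R_xx = np.array([[ 1.00, 0.45, 0.50, -0.35],
                 [ 0.45, 1.00, 0.55, -0.20],
                 [ 0.50, 0.55, 1.00, -0.25],
                 [-0.35, -0.20, -0.25, 1.00]])
r_xy = np.array([0.55, 0.30, 0.35, -0.30])

beta = np.linalg.solve(R_xx, r_xy)    # unique (partial) effect of each source
r2 = float(r_xy @ beta)               # variance in self-efficacy explained
for src, coef in zip(["PA", "VL", "SP", "AA"], beta):
    print(f"{src}: beta = {coef:+.2f}")
print(f"R^2 = {r2:.2f}")
```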

  3. Modeling and Forecasting Average Temperature for Weather Derivative Pricing

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2015-01-01

    Full Text Available The main purpose of this paper is to present a feasible model for the daily average temperature in the area of Zhengzhou and apply it to weather derivatives pricing. We start by exploring the background of the weather derivatives market and then use 62 years of daily historical data to fit a mean-reverting Ornstein-Uhlenbeck process describing the evolution of the temperature. Finally, Monte Carlo simulations are used to price a heating degree day (HDD) call option for this city, and the slow convergence of the HDD call price can be observed over 100,000 simulations. The methods of this research provide a framework for modeling temperature and pricing weather derivatives in other similar places in China.
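
    A compact sketch of this pipeline — a mean-reverting Ornstein-Uhlenbeck daily temperature model around a seasonal mean, followed by Monte Carlo valuation of an HDD call — is given below. The seasonal parameters, mean-reversion speed, volatility, strike, and path count are all illustrative, not the Zhengzhou estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

def seasonal_mean(day):
    """Deterministic seasonal mean temperature (deg C); parameters illustrative."""
    return 14.0 + 11.0 * np.sin(2.0 * np.pi * (day - 105) / 365.25)

def simulate_temps(n_days, t0, kappa=0.25, sigma=2.5):
    """Daily Euler discretization of the mean-reverting (OU) temperature model:
    the temperature tracks the seasonal mean plus kappa-reverting noise."""
    days = np.arange(t0, t0 + n_days)
    T = np.empty(n_days)
    T[0] = seasonal_mean(days[0])
    for i in range(1, n_days):
        T[i] = (T[i - 1]
                + (seasonal_mean(days[i]) - seasonal_mean(days[i - 1]))
                + kappa * (seasonal_mean(days[i]) - T[i - 1])
                + sigma * rng.standard_normal())
    return T

# Monte Carlo price of an HDD call over a 90-day winter season (tick = 1 per HDD)
strike, n_paths, t0 = 1200.0, 10_000, 330
payoffs = np.empty(n_paths)
for j in range(n_paths):
    T = simulate_temps(90, t0)
    hdd = np.sum(np.maximum(18.0 - T, 0.0))     # heating degree days, 18 C base
    payoffs[j] = max(hdd - strike, 0.0)
print(f"HDD call price ~ {payoffs.mean():.1f} (discounting omitted)")
```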

  4. Non-Fourier conduction model with thermal source term of ultra short high power pulsed laser ablation and temperature evolvement before melting

    International Nuclear Information System (INIS)

    Zhang Duanming; Li, Li; Li Zhihua; Guan Li; Tan Xinyu

    2005-01-01

    A non-Fourier conduction model with a heat source term is presented to study the evolution of the target temperature when the target is irradiated by a high-power (laser intensity above 10⁹ W/cm²), ultra-short (pulse width less than 150 ps) pulsed laser. By Laplace transform, an analytical expression for the space- and time-dependence of the temperature is derived. Then, taking an aluminum target as an example, the temperature evolution is simulated. Compared with the results of the Fourier conduction model and of a non-Fourier model without a heat source term, it is found that the effect of non-Fourier conduction is notable and that the heat source plays an important role during non-Fourier conduction, making the surface temperature rise quickly with time. Meanwhile, the corresponding physical mechanism is analyzed theoretically.
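
    In standard form, a non-Fourier (hyperbolic) conduction model of this kind combines the Cattaneo-Vernotte flux law with an exponentially absorbed laser source; a schematic one-dimensional version (assumed notation, not the paper's exact equations) is:

```latex
% tau_0: heat-flux relaxation time, alpha: thermal diffusivity, R: surface
% reflectivity, beta: absorption coefficient, I(t): laser intensity
\[
  \tau_0 \frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t}
  = \alpha \frac{\partial^2 T}{\partial x^2}
  + \frac{1}{\rho c}\Bigl( S + \tau_0 \frac{\partial S}{\partial t} \Bigr),
  \qquad
  S(x,t) = (1 - R)\, I(t)\, \beta\, e^{-\beta x}.
\]
```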

  5. Probabilistic forward model for electroencephalography source analysis

    International Nuclear Information System (INIS)

    Plis, Sergey M; George, John S; Jun, Sung C; Ranken, Doug M; Volegov, Petr L; Schmidt, David M

    2007-01-01

    Source localization by electroencephalography (EEG) requires an accurate model of head geometry and tissue conductivity. The estimation of source time courses from EEG, or from EEG in conjunction with magnetoencephalography (MEG), requires a forward model consistent with true activity for the best outcome. Although MRI provides an excellent description of soft tissue anatomy, a high-resolution model of the skull (the dominant resistive component of the head) requires CT, which is not justified for routine physiological studies. Although a number of techniques have been employed to estimate tissue conductivity, no present techniques provide the noninvasive 3D tomographic mapping of conductivity that would be desirable. We introduce a formalism for probabilistic forward modeling that allows the propagation of uncertainties in model parameters into possible errors in source localization. We consider uncertainties in the conductivity profile of the skull, but the approach is general and can be extended to other kinds of uncertainties in the forward model. We and others have previously suggested the possibility of extracting the conductivity of the skull from measured electroencephalography data by simultaneously optimizing over dipole parameters and the conductivity values required by the forward model. Using Cramér-Rao bounds, we demonstrate that this approach does not improve localization results, nor does it produce reliable conductivity estimates. We conclude that the conductivity of the skull either has to be accurately measured by an independent technique, or that the uncertainties in the conductivity values should be reflected in uncertainty in the source location estimates.

  6. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    Science.gov (United States)

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source-location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulus experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
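
    The distinction from FOCUSS can be made concrete in a few lines: instead of reweighting each point by its own previous amplitude, reweight it by the maximum previous amplitude over the point and its neighbors. The toy leadfield and neighbor structure below are invented; this is a sketch of the weighting idea, not the published CMOSS implementation.

```python
import numpy as np

def neighbor_max_reweighted(L, b, neighbors, n_iter=20, eps=1e-12):
    """Iteratively reweighted minimum-norm sketch: unlike FOCUSS, each point's
    weight is the maximum of the previous solution over the point and its
    neighbors, giving a chance to correct local localization bias."""
    n = L.shape[1]
    s = np.ones(n)
    for _ in range(n_iter):
        w = np.array([np.max(np.abs(s[[i] + neighbors[i]])) for i in range(n)])
        W = np.diag(w + eps)
        # Weighted minimum-norm solution s = W L^T (L W L^T)^(-1) b
        s = W @ L.T @ np.linalg.solve(L @ W @ L.T + 1e-10 * np.eye(len(b)), b)
    return s

# Toy underdetermined problem: 4 sensors, 12 candidate sources on a line
rng = np.random.default_rng(4)
L = rng.standard_normal((4, 12))
s_true = np.zeros(12)
s_true[5] = 1.0
neighbors = [[max(i - 1, 0), min(i + 1, 11)] for i in range(12)]
s_hat = neighbor_max_reweighted(L, L @ s_true, neighbors)
print("true source at 5, recovered peak at", int(np.argmax(np.abs(s_hat))))
```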

  7. Some remarks on the small-distance derivative model

    International Nuclear Information System (INIS)

    Jannussis, A.

    1985-01-01

    In the present work the new expressions for the derivatives over small distances are investigated according to the Gonzalez-Diaz model. This model is noncanonical, is a particular case of the Lie-admissible formulation, and has applications for distance and time scales comparable with the Planck dimensions.

  8. Induced pluripotent stem cell-derived cardiomyocytes for cardiovascular disease modeling and drug screening.

    Science.gov (United States)

    Sharma, Arun; Wu, Joseph C; Wu, Sean M

    2013-12-24

    Human induced pluripotent stem cells (hiPSCs) have emerged as a novel tool for drug discovery and therapy in cardiovascular medicine. hiPSCs are functionally similar to human embryonic stem cells (hESCs) and can be derived autologously without the ethical challenges associated with hESCs. Given the limited regenerative capacity of the human heart following myocardial injury, cardiomyocytes derived from hiPSCs (hiPSC-CMs) have garnered significant attention from basic and translational scientists as a promising cell source for replacement therapy. However, ongoing issues such as cell immaturity, scale of production, inter-line variability, and cell purity will need to be resolved before human clinical trials can begin. Meanwhile, the use of hiPSCs to explore cellular mechanisms of cardiovascular diseases in vitro has proven to be extremely valuable. For example, hiPSC-CMs have been shown to recapitulate disease phenotypes from patients with monogenic cardiovascular disorders. Furthermore, patient-derived hiPSC-CMs are now providing new insights regarding drug efficacy and toxicity. This review will highlight recent advances in utilizing hiPSC-CMs for cardiac disease modeling in vitro and as a platform for drug validation. The advantages and disadvantages of using hiPSC-CMs for drug screening purposes will be explored as well.

  9. On the equivalence between the thirring model and a derivative coupling model

    International Nuclear Information System (INIS)

    Gomes, M.; Silva, A.J. da.

    1986-07-01

    The equivalence between the Thirring model and the fermionic sector of the theory of a Dirac field interacting via derivative coupling with two boson fields is analysed. For a certain choice of the parameters the two models have the same fermionic Green functions. (Author) [pt]

  10. Sources of mutagenic activity in urban fine particles

    International Nuclear Information System (INIS)

    Stevens, R.K.; Lewis, C.W.; Dzubay, T.G.; Cupitt, L.T.; Lewtas, J.

    1990-01-01

    Samples were collected during the winter of 1984-1985 in the cities of Albuquerque, NM and Raleigh, NC as part of a US Environmental Protection Agency study to evaluate methods for determining the emission sources contributing to the mutagenic properties of the extractable organic matter (EOM) present in fine particles. Data derived from the analysis of the composition of these fine particles served as input to a multi-linear regression (MLR) model used to calculate the relative contributions of wood burning and motor vehicle sources to the mutagenic activity observed in the extractable organic matter. At both sites the mutagenic potency of EOM was found to be greater (3-5 times) for mobile sources than for wood smoke extractable organics. Carbon-14 measurements, which give a direct determination of the amount of EOM originating from wood burning, were in close agreement with the source apportionment results derived from the MLR model.
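
    The MLR step can be sketched as an ordinary least-squares apportionment: regress the measured mutagenic activity of each sample on the source contributions to its EOM mass, so the fitted coefficients act as source-specific mutagenic potencies. All data below are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 40
# Hypothetical per-sample EOM mass contributions (ug/m^3) of the two sources
wood = rng.uniform(2.0, 20.0, n)
mobile = rng.uniform(1.0, 10.0, n)
# Synthetic mutagenic activity (revertants/m^3): mobile sources more potent
mutagenicity = 0.8 * wood + 3.2 * mobile + rng.normal(0.0, 2.0, n)

X = np.column_stack([wood, mobile])
potency, *_ = np.linalg.lstsq(X, mutagenicity, rcond=None)
contrib = X * potency                   # per-sample source contributions
share_mobile = contrib[:, 1].sum() / contrib.sum()
print("potency (rev/ug): wood=%.2f, mobile=%.2f" % tuple(potency))
print("mean mobile share of mutagenicity: %.0f%%" % (100 * share_mobile))
```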

  11. Source apportionment of PM10 and PM2.5 in major urban Greek agglomerations using a hybrid source-receptor modeling process.

    Science.gov (United States)

    Argyropoulos, G; Samara, C; Diapouli, E; Eleftheriadis, K; Papaoikonomou, K; Kungolos, A

    2017-12-01

    A hybrid source-receptor modeling process was assembled to apportion and infer source locations of PM10 and PM2.5 in three heavily-impacted urban areas of Greece during the warm period of 2011 and the cold period of 2012. The assembled process involved application of an advanced computational procedure, the so-called Robotic Chemical Mass Balance (RCMB) model. Source locations were inferred using two well-established probability functions: (a) the Conditional Probability Function (CPF), to correlate the output of RCMB with local wind directional data, and (b) the Potential Source Contribution Function (PSCF), to correlate the output of RCMB with 72-h air-mass back-trajectories arriving at the receptor sites during sampling. Regarding CPF, a higher-level conditional probability function was defined as well, from the common locus of CPF sectors derived for neighboring receptor sites. With respect to PSCF, a non-parametric bootstrapping method was applied to discriminate the statistically significant values. RCMB modeling showed that resuspended dust is actually one of the main barriers to attaining the European Union (EU) limit values in Mediterranean urban agglomerations, where the drier climate favors dust build-up. The shift in the energy mix of Greece (caused by the economic recession) was also evidenced, since biomass burning was found to contribute more significantly at the sampling sites belonging to the coldest climatic zone, particularly during the cold period. The CPF analysis showed that short-range transport of anthropogenic emissions from urban traffic to urban background sites was very likely to have occurred within all the examined urban agglomerations. The PSCF analysis confirmed that long-range transport of primary and/or secondary aerosols may indeed be possible, even from distances over 1000 km away from the study areas. Copyright © 2017 Elsevier B.V. All rights reserved.
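
    The CPF step mentioned above is simple enough to sketch directly; a minimal version (sector width and high-contribution threshold are assumptions of this sketch) computes, for each wind sector, the fraction of samples whose modeled source contribution exceeds a chosen quantile:

      import numpy as np

      def cpf(wind_dir_deg, contributions, sector_width=30.0, quantile=0.75):
          """CPF(theta) = m_theta / n_theta: among samples with wind from
          sector theta, the fraction whose contribution exceeds a threshold."""
          threshold = np.quantile(contributions, quantile)
          edges = np.arange(0.0, 360.0 + sector_width, sector_width)
          sectors = np.digitize(wind_dir_deg % 360.0, edges) - 1
          out = np.full(len(edges) - 1, np.nan)
          for s in range(len(edges) - 1):
              in_sector = sectors == s
              if in_sector.any():
                  out[s] = (contributions[in_sector] > threshold).mean()
          return out

      rng = np.random.default_rng(0)
      wd = rng.uniform(0, 360, 500)                     # wind directions
      c = rng.lognormal(0, 1, 500) * (1 + (np.abs(wd - 90) < 30))
      print(cpf(wd, c))                                 # peaks near 90 degrees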

  12. Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode

    Directory of Open Access Journals (Sweden)

    P. Seibert

    2004-01-01

    The possibility to calculate linear source-receptor relationships for the transport of atmospheric trace substances with a Lagrangian particle dispersion model (LPDM) running in backward mode is shown and demonstrated with many tests and examples. This mode requires only minor modifications of the forward LPDM. The derivation includes the action of sources and of any first-order processes (transformation with prescribed rates, dry and wet deposition, radioactive decay, etc.). The backward mode is computationally advantageous if the number of receptors is less than the number of sources considered. The combination of an LPDM with the backward (adjoint) methodology is especially attractive for the application to point measurements, which can be handled without artificial numerical diffusion. Practical hints are provided for source-receptor calculations with different settings, both in forward and backward mode. The equivalence of forward and backward calculations is shown in simple tests for release and sampling of particles, pure wet deposition, pure convective redistribution and realistic transport over a short distance. Furthermore, an application example explaining measurements of Cs-137 in Stockholm as transport from areas contaminated heavily in the Chernobyl disaster is included.
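
    The accounting behind the backward mode can be phrased as a matrix statement: receptor concentrations are c = M s, where each forward run fills one column of M (one source) and each backward run one row (one receptor). A toy numpy sketch with synthetic sensitivities shows why few receptors favor backward runs:

      import numpy as np

      n_sources, n_receptors = 1000, 3
      rng = np.random.default_rng(1)
      M = rng.random((n_receptors, n_sources)) * 1e-9   # toy sensitivities, s m^-3
      s = rng.random(n_sources) * 1e6                   # toy source strengths

      # Forward mode would need one run per source (1000 runs) to build M
      # column by column; backward mode needs one run per receptor (3 runs)
      # to build it row by row. Either way, concentrations follow as:
      c = M @ s
      print("receptor concentrations:", c)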

  13. Using an Altimeter-Derived Internal Tide Model to Remove Tides from in Situ Data

    Science.gov (United States)

    Zaron, Edward D.; Ray, Richard D.

    2017-01-01

    Internal waves at tidal frequencies, i.e., the internal tides, are a prominent source of variability in the ocean associated with significant vertical isopycnal displacements and currents. Because the isopycnal displacements are caused by ageostrophic dynamics, they contribute uncertainty to geostrophic transport inferred from vertical profiles in the ocean. Here it is demonstrated that a newly developed model of the main semidiurnal (M2) internal tide derived from satellite altimetry may be used to partially remove the tide from vertical profile data, as measured by the reduction of steric height variance inferred from the profiles. It is further demonstrated that the internal tide model can account for a component of the near-surface velocity as measured by drogued drifters. These comparisons represent a validation of the internal tide model using independent data and highlight its potential use in removing internal tide signals from in situ observations.

  14. Currents, HF Radio-derived, Monterey Bay, Normal Model, Zonal, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The data is the zonal component of ocean surface currents derived from High Frequency Radio-derived measurements, with missing values filled in by a normal model....

  15. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model, including the tissue conductivity distribution, the cortical surface, and electrode positions.

  16. Urban nonpoint source pollution buildup and washoff models for simulating storm runoff quality in the Los Angeles County

    International Nuclear Information System (INIS)

    Wang Long; Wei Jiahua; Huang Yuefei; Wang Guangqian; Maqsood, Imran

    2011-01-01

    Many urban nonpoint source pollution models utilize pollutant buildup and washoff functions to simulate the storm runoff quality of urban catchments. In this paper, two urban pollutant washoff load models are derived using pollutant buildup and washoff functions. The first model assumes that there is no residual pollutant after a storm event, while the second assumes that there is always residual pollutant after each storm event. The developed models are calibrated and verified with observed data from an urban catchment in Los Angeles County. The application results show that the model with consideration of residual pollutant is more capable of simulating nonpoint source pollution from urban storm runoff than the one without. For the study area, residual pollutant should be considered in the pollutant buildup and washoff functions for simulating urban nonpoint source pollution when the total runoff volume is less than 30 mm. - Highlights: → An improved urban NPS model was developed. → It performs well in areas where storm events have great temporal variation. → Threshold of total runoff volume for ignoring residual pollutant was determined.
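
    A common functional form for this structure (an assumption of this sketch, not the authors' exact equations) is exponential buildup over dry days followed by first-order washoff with runoff volume; carrying the residual forward corresponds to the second model variant:

      import numpy as np

      def buildup(b0, dry_days, b_max=120.0, k_b=0.4):
          """Exponential pollutant buildup (kg/ha) starting from residual b0."""
          return b_max - (b_max - b0) * np.exp(-k_b * dry_days)

      def washoff(b, runoff_mm, k_w=0.18):
          """First-order washoff: removed load grows with total runoff volume."""
          washed = b * (1.0 - np.exp(-k_w * runoff_mm))
          return washed, b - washed

      b = 0.0
      for dry_days, runoff in [(5, 12.0), (3, 45.0), (10, 8.0)]:
          b = buildup(b, dry_days)
          washed, b = washoff(b, runoff)   # keep residual (model 2)
          print(f"runoff={runoff:5.1f} mm  washed={washed:6.2f}  residual={b:6.2f}")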

  17. Passive Detection of Narrowband Sources Using a Sensor Array

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, D H; Candy, J V; Guidry, B L

    2007-10-24

    In this report we derive a model for a highly scattering medium, implemented as a set of MATLAB functions. This model is used to analyze an approach for using time-reversal to enhance the detection of a single frequency source in a highly scattering medium. The basic approach is to apply the singular value decomposition to the multistatic response matrix for a time-reversal array system. We then use the array in a purely passive mode, measuring the response to the presence of a source. The measured response is projected onto the singular vectors, creating a time-reversal pseudo-spectrum. We can then apply standard detection techniques to the pseudo-spectrum to determine the presence of a source. If the source is close to a particular scatterer in the medium, then we would expect an enhancement of the inner product between the array response to the source with the singular vector associated with that scatterer. In this note we begin by deriving the Foldy-Lax model of a highly scattering medium, calculate both the field emitted by the source and the multistatic response matrix of a time-reversal array system in the medium, then describe the initial analysis approach.
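
    The projection step is compact enough to sketch (everything here is synthetic; in the report the multistatic matrix comes from the Foldy-Lax model rather than random numbers):

      import numpy as np

      rng = np.random.default_rng(2)
      n = 16                                    # array elements
      # Stand-in multistatic response matrix of the time-reversal array.
      K = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
      U, sv, Vh = np.linalg.svd(K)

      # Passive measurement: a source near "scatterer 0" produces a response
      # aligned with the corresponding singular vector, plus noise.
      g = U[:, 0] + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))

      # Time-reversal pseudo-spectrum: inner products with singular vectors.
      pseudo_spectrum = np.abs(U.conj().T @ g)
      print("dominant component:", pseudo_spectrum.argmax())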

  18. Discussion of Source Reconstruction Models Using 3D MCG Data

    Science.gov (United States)

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed source reconstruction of magnetocardiographic signals generated by human heart activity in order to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector-component MCG data. The results show that a distributed source model gives the best accuracy in performing the source reconstructions, and that 3D MCG data allow smaller differences between the different source models to be resolved.

  19. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps out around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of its modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.

  20. Microscopic Derivation of the Ginzburg-Landau Model

    DEFF Research Database (Denmark)

    Frank, Rupert; Hainzl, Christian; Seiringer, Robert

    2014-01-01

    We present a summary of our recent rigorous derivation of the celebrated Ginzburg-Landau (GL) theory, starting from the microscopic Bardeen-Cooper-Schrieffer (BCS) model. Close to the critical temperature, GL arises as an effective theory on the macroscopic scale. The relevant scaling limit...

  1. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments, using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon the resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: An OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted. Bayesian models display high sensitivity to error assumptions and structural choices. Source apportionment results differ between Bayesian and frequentist approaches.
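
    The frequentist baseline referred to above can be written as a constrained least-squares un-mixing (tracer signatures below are synthetic; the Bayesian variants layer error models and priors on this same structure):

      import numpy as np
      from scipy.optimize import minimize

      # Mean tracer signatures (rows: sources, columns: tracers) and one
      # SPM sample -- all values invented for the sketch.
      S = np.array([[10.0, 2.0, 5.0],    # arable topsoil
                    [ 4.0, 8.0, 1.0],    # road verge
                    [ 1.0, 3.0, 9.0]])   # subsurface material
      spm = np.array([3.1, 4.2, 6.8])

      def misfit(p):
          return np.sum((spm - p @ S) ** 2)

      # Proportions constrained to be non-negative and sum to one.
      res = minimize(misfit, x0=np.full(3, 1 / 3),
                     bounds=[(0.0, 1.0)] * 3,
                     constraints={"type": "eq", "fun": lambda p: p.sum() - 1.0})
      print("apportioned proportions:", res.x.round(3))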

  2. Bioaccumulation of hydrocarbons derived from terrestrial and anthropogenic sources in the Asian clam, Potamocorbula amurensis, in San Francisco Bay estuary

    Science.gov (United States)

    Pereira, Wilfred E.; Hostettler, Frances D.; Rapp, John B.

    1992-01-01

    An assessment was made in Suisun Bay, California, of the distributions of hydrocarbons in estuarine bed and suspended sediments and in the recently introduced Asian clam, Potamocorbula amurensis. Sediments and clams were contaminated with hydrocarbons derived from petrogenic and pyrogenic sources. Distributions of alkanes and of hopane and sterane biomarkers in sediments and clams were similar, indicating that petroleum hydrocarbons associated with sediments are bioavailable to Potamocorbula amurensis. Polycyclic aromatic hydrocarbons in the sediments and clams were derived mainly from combustion sources. Potamocorbula amurensis is therefore a useful bioindicator of hydrocarbon contamination and may be used as a biomonitor of hydrocarbon pollution in San Francisco Bay.

  3. Unsteady Vibration Aerodynamic Modeling and Evaluation of Dynamic Derivatives Using Computational Fluid Dynamics

    Directory of Open Access Journals (Sweden)

    Xu Liu

    2015-01-01

    Unsteady aerodynamic system modeling is widely used to solve the dynamic stability problems encountered in aircraft design. In this paper, a single degree-of-freedom (SDF) vibration model and a forced simple harmonic motion (SHM) model for dynamic derivative prediction are developed on the basis of a modified Etkin model. In light of the characteristics of the SDF time-domain solution, the free-vibration identification methods for dynamic stability parameters are extended and applied to time-domain numerical simulations of blunted cone calibration model examples. The dynamic stability parameters obtained by numerical identification deviate by no more than 0.15% from those obtained by experimental simulation, confirming the correctness of the SDF vibration model. The acceleration derivatives, rotary derivatives, and combination derivatives of the Army-Navy Spinner Rocket are numerically identified by using the unsteady N-S equations and solving different SHM patterns. Comparison with the experimental results of the Army Ballistic Research Laboratories confirmed the correctness of the SHM model and the dynamic derivative identification. The calculation result of forced SHM is better than that obtained by the slender-body theory of engineering approximation. The SDF vibration model and SHM model for dynamic stability parameters provide a solution to the dynamic stability problems encountered in aircraft design.

  4. Source Water Protection Contaminant Sources

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — Simplified aggregation of potential contaminant sources used for Source Water Assessment and Protection. The data is derived from IDNR, IDALS, and US EPA program...

  5. Earthquake Source Spectral Study beyond the Omega-Square Model

    Science.gov (United States)

    Uchide, T.; Imanishi, K.

    2017-12-01

    Earthquake source spectra have been used to characterize earthquake source processes quantitatively and, at the same time, simply, so that we can analyze the source spectra of many earthquakes, especially small earthquakes, at once and compare them with each other. A standard model for the source spectra is the omega-square model, whose spectrum is flat at low frequencies and falls off in inverse proportion to the square of frequency at high frequencies, with the two regimes bordered by a corner frequency. The corner frequency has often been converted to the stress drop under the assumption of circular crack models. However, recent studies claimed the existence of another corner frequency [Denolle and Shearer, 2016; Uchide and Imanishi, 2016] thanks to the recent development of seismic networks. We have found that many earthquakes in areas other than the area studied by Uchide and Imanishi [2016] also have source spectra deviating from the omega-square model. Another part of the earthquake spectra we now focus on is the falloff rate at high frequencies, which will affect the seismic energy estimation [e.g., Hirano and Yagi, 2017]. In June 2016, we deployed seven velocity seismometers in the northern Ibaraki prefecture, where the shallow crustal seismicity, mainly normal-faulting events, was activated by the 2011 Tohoku-oki earthquake. We have recorded seismograms at 1000 samples per second and at short distances from the source, so that we can investigate the high-frequency components of the earthquake source spectra. Although we are still in the stage of discovery and confirmation of the deviation from the standard omega-square model, the update of the earthquake source spectrum model will help us systematically extract more information on the earthquake source process.
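
    For reference, the omega-square displacement spectrum and the circular-crack stress-drop conversion take the standard form (k is a rupture-model constant and beta the shear-wave speed; this is textbook background, not the authors' notation):

      \Omega(f) = \frac{\Omega_0}{1 + (f/f_c)^2}, \qquad
      \Delta\sigma = \frac{7}{16}\, M_0 \left(\frac{f_c}{k\,\beta}\right)^{3}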

  6. On a derivation of the Salam-Weinberg model

    International Nuclear Information System (INIS)

    Squires, E.J.

    1979-01-01

    It is shown how the graded Lie-algebra structure of a recent derivation of the Salam-Weinberg model might arise from the form of the allowed transformations on the lepton Lagrangian in a 6-dimensional space. The possibility that the model might allow two identically coupled leptonic sectors, and others in which the chiralities are reversed, is discussed. (Auth.)

  7. One loop beta functions and fixed points in higher derivative sigma models

    International Nuclear Information System (INIS)

    Percacci, Roberto; Zanusso, Omar

    2010-01-01

    We calculate the one loop beta functions of nonlinear sigma models in four dimensions containing general two- and four-derivative terms. In the O(N) model there are four such terms and nontrivial fixed points exist for all N≥4. In the chiral SU(N) models there are in general six couplings, but only five for N=3 and four for N=2; we find fixed points only for N=2, 3. In the approximation considered, the four-derivative couplings are asymptotically free but the coupling in the two-derivative term has a nonzero limit. These results support the hypothesis that certain sigma models may be asymptotically safe.

  8. Modeling Aerobic Carbon Source Degradation Processes using Titrimetric Data and Combined Respirometric-Titrimetric Data: Structural and Practical Identifiability

    DEFF Research Database (Denmark)

    Gernaey, Krist; Petersen, B.; Dochain, D.

    2002-01-01

    The structural and practical identifiability of a model for the description of respirometric-titrimetric data derived from aerobic batch substrate degradation experiments of a CxHyOz carbon source with activated sludge was evaluated. The model processes needed to describe titrimetric data included su… …the initial substrate concentration S_S(0) is known. The values found correspond to values reported in the literature but, interestingly, also seem able to reflect the occurrence of storage processes when pulses of acetate and dextrose are added. (C) 2002 Wiley Periodicals, Inc.

  9. Probabilistic blind deconvolution of non-stationary sources

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Hansen, Lars Kai

    2004-01-01

    We solve a class of blind signal separation problems using a constrained linear Gaussian model. The observed signal is modelled by a convolutive mixture of colored noise signals with additive white noise. We derive a time-domain EM algorithm `KaBSS' which estimates the source signals...

  10. Deriving consumer-facing disease concepts for family health histories using multi-source sampling.

    Science.gov (United States)

    Hulse, Nathan C; Wood, Grant M; Haug, Peter J; Williams, Marc S

    2010-10-01

    The family health history has long been recognized as an effective way of understanding individuals' susceptibility to familial disease; yet electronic tools to support the capture and use of these data have been characterized as inadequate. As part of an ongoing effort to build patient-facing tools for entering detailed family health histories, we have compiled a set of concepts specific to familial disease using multi-source sampling. These concepts were abstracted by analyzing family health history data patterns in our enterprise data warehouse, collection patterns of consumer personal health records, analyses from the local state health department, a healthcare data dictionary, and concepts derived from genetic-oriented consumer education materials. Collectively, these sources yielded a set of more than 500 unique disease concepts, represented by more than 2500 synonyms for supporting patients in entering coded family health histories. We expect that these concepts will be useful in providing meaningful data and education resources for patients and providers alike.

  11. Sources of present Chernobyl-derived caesium concentrations in surface air and deposition samples

    International Nuclear Information System (INIS)

    Hoetzl, H.; Rosner, G.; Winkler, R.; Gesellschaft fuer Strahlen- und Umweltforschung mbH Muenchen, Neuherberg

    1992-01-01

    The sources of Chernobyl-derived caesium concentrations in air and deposition samples collected from mid-1986 to end-1990 at Munich-Neuherberg, Germany, were investigated. Local resuspension has been found to be the main source. By comparison with deposition data from other locations it is estimated that, within a range from 20 Bq m⁻² to 60 kBq m⁻² of initially deposited ¹³⁷Cs activity, ∼2% is re-deposited by the process of local resuspension in Austria, Germany, Japan and the United Kingdom, while significantly higher total resuspension is to be expected for Denmark and Finland. The stratospheric contribution to present concentrations is shown to be negligible. This is confirmed by cross-correlation analysis between the time series of ¹³⁷Cs in air and precipitation before and after the Chernobyl accident and the respective time series of cosmogenic ⁷Be, which is an indicator of stratospheric input. Seasonal variations of caesium concentrations with maxima in winter were observed. (author). 32 refs.; 5 figs.; 1 tab

  12. Model predictive control for Z-source power converter

    DEFF Research Database (Denmark)

    Mo, W.; Loh, P.C.; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of an impedance-source (commonly known as Z-source) power converter. Output voltage control and current control for the Z-source inverter are analyzed and simulated. With MPC's ability to regulate multiple system variables, load current and voltage...

  13. An equilibrium pricing model for weather derivatives in a multi-commodity setting

    International Nuclear Information System (INIS)

    Lee, Yongheon; Oren, Shmuel S.

    2009-01-01

    Many industries are exposed to weather risk. Weather derivatives can play a key role in hedging and diversifying such risk because the uncertainty in a company's profit function can be correlated to weather conditions, which affect diverse industry sectors differently. Unfortunately, the weather derivatives market is a classical example of an incomplete market that is not amenable to the standard methodologies used for derivative pricing in complete markets. In this paper, we develop an equilibrium pricing model for weather derivatives in a multi-commodity setting. The model is constructed in the context of a stylized economy where agents optimize their hedging portfolios, which include weather derivatives that are issued in a fixed quantity by a financial underwriter. The supply and demand resulting from hedging activities and the supply by the underwriter are combined in an equilibrium pricing model under the assumption that all agents maximize some risk-averse utility function. We analyze the gains due to the inclusion of weather derivatives in hedging portfolios and examine the components of that gain attributable to hedging and to risk sharing. (author)
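
    As a toy stand-in for the paper's multi-agent equilibrium (which cannot be captured in a few lines), the incomplete-market flavor can be illustrated with an exponential-utility indifference price for a degree-day contract, computed by Monte Carlo; every number below is invented:

      import numpy as np

      rng = np.random.default_rng(3)
      hdd = rng.normal(1800, 150, 100_000)                 # heating degree days
      profit = 5.0e4 - 40.0 * np.maximum(hdd - 1800, 0)    # weather-exposed firm
      payoff = 25.0 * np.maximum(hdd - 1800, 0)            # HDD call option
      a = 1e-4                                             # risk aversion

      def ce(w):
          """Certainty equivalent under exponential utility."""
          return -np.log(np.mean(np.exp(-a * w))) / a

      # Indifference price: the premium p at which holding the hedge leaves
      # the buyer's certainty equivalent unchanged (found by bisection).
      lo, hi = 0.0, 2 * payoff.mean()
      for _ in range(60):
          p = 0.5 * (lo + hi)
          if ce(profit + payoff - p) > ce(profit):
              lo = p
          else:
              hi = p
      print(f"indifference price {p:.2f} vs expected payoff {payoff.mean():.2f}")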

  14. New Fokker-Planck derivation of heavy gas models for neutron thermalization

    International Nuclear Information System (INIS)

    Larsen, E.W.; Williams, M.M.R.

    1990-01-01

    This paper is concerned with the derivation of new generalized heavy gas models for the infinite medium neutron energy spectrum equation. Our approach is general and can be used to derive improved Fokker-Planck approximations for other types of kinetic equations. In this paper we obtain two distinct heavy gas models, together with estimates for the corresponding errors. The models are shown in a special case to reduce to modified heavy gas models proposed earlier by Corngold (1962). The error estimates show that both of the new models should be more accurate than Corngold's modified heavy gas model, and that the first of the two new models should generally be more accurate than the second. (author)

  15. Open source data assimilation framework for hydrological modeling

    Science.gov (United States)

    Ridler, Marc; Hummel, Stef; van Velzen, Nils; Katrine Falk, Anne; Madsen, Henrik

    2013-04-01

    An open-source data assimilation framework is proposed for hydrological modeling. Data assimilation (DA) in hydrodynamic and hydrological forecasting systems has great potential to improve predictions and model results. The basic principle is to incorporate measurement information into a model with the aim of improving model results by error minimization. Great strides have been made to assimilate traditional in-situ measurements such as discharge, soil moisture, hydraulic head and snowpack into hydrologic models. More recently, remotely sensed data retrievals of soil moisture, snow water equivalent or snow cover area, surface water elevation, terrestrial water storage and land surface temperature have been successfully assimilated in hydrological models. The assimilation algorithms have become increasingly sophisticated to manage measurement and model bias, non-linear systems, data sparsity (time & space) and undetermined system uncertainty. It is therefore useful to use a pre-existing DA toolbox such as OpenDA. OpenDA is an open interface standard for (and free implementation of) a set of tools to quickly implement DA and calibration for arbitrary numerical models. The basic design philosophy of OpenDA is to break down DA into a set of building blocks programmed in object-oriented languages. To implement DA, a model must interact with OpenDA to create model instances, propagate the model, get/set variables (or parameters) and free the model once DA is completed. An open-source interface for hydrological models exists that is capable of all these tasks: OpenMI. OpenMI is an open-source standard interface already adopted by key hydrological model providers. It defines a universal approach for interacting with hydrological models during simulation to exchange data during runtime, thus facilitating the interactions between models and data sources. The interface is flexible enough that models can interact even if they are coded in different languages.
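
    The building-block idea can be made concrete with a toy component in the OpenMI spirit; note that the class and method names below are illustrative only, not the actual OpenDA/OpenMI APIs:

      import numpy as np

      class ToyModel:
          """Minimal component exposing propagate/get/set so a DA scheme can
          drive it without knowing its internals."""
          def __init__(self, state, decay=0.9):
              self.state = np.asarray(state, dtype=float)
              self.decay = decay
          def propagate(self):
              self.state = self.decay * self.state + 1.0   # toy dynamics
          def get_values(self):
              return self.state.copy()
          def set_values(self, new_state):
              self.state = np.asarray(new_state, dtype=float)

      def nudge(model, obs, gain=0.5):
          """Simplest possible DA building block: relax state toward data."""
          x = model.get_values()
          model.set_values(x + gain * (obs - x))

      m = ToyModel([0.0])
      for obs in [5.0, 6.0, 5.5]:
          m.propagate()                 # forecast step
          nudge(m, np.array([obs]))     # analysis step
      print("analysed state:", m.get_values())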

  16. Derivative Geometric Modeling of Basic Rotational Solids on CATIA

    Institute of Scientific and Technical Information of China (English)

    MENG Xiang-bao; PAN Zi-jian; ZHU Yu-xiang; LI Jun

    2011-01-01

    Hybrid models derived from rotational solids such as cylinders, cones and spheres were implemented in CATIA software. Firstly, the isosceles triangular prism, cuboid, cylinder, cone and sphere, together with the prism with tangent conic and curved-triangle ends, the cuboid with tangent cylindrical and curved-rectangle ends, and the cylinder with tangent spherical and curved-circular ends, were used as basic Boolean difference units on the primary cylinders, cones and spheres under symmetrical and certain critical geometric conditions, forming a series of variant solid models. Secondly, the difference units above were used as basic union units on the main cylinders, cones and spheres accordingly, forming another set of solid models. Thirdly, the tangent ends of the union units were turned into oblique conic or cylindrical ends, or ends with revolved triangular-pyramid, quarter-cylinder and annulus features, applied as sketch-based features to the main cylinders, cones and spheres repeatedly, thus forming still another set of solid models. It is expected that these derivative models will be beneficial both in the structural design, hybrid modeling and finite element analysis of engineering components and in comprehensive training in the spatial configuration of engineering graphics.

  17. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  18. Bayesian mixture models for source separation in MEG

    International Nuclear Information System (INIS)

    Calvetti, Daniela; Homa, Laura; Somersalo, Erkki

    2011-01-01

    This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal. (paper)

  19. Microseismic imaging using a source-independent full-waveform inversion method

    KAUST Repository

    Wang, Hanchen

    2016-09-06

    Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, waveform inversion of microseismic events faces considerable nonlinearity due to the unknown source location (space) and source function (time). We develop a source-independent FWI of microseismic events to invert for the source image, the source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet along the z-axis is extracted to check the accuracy of the inverted source image and velocity model. The angle gather is also calculated to verify the velocity model. By inverting for the source image, source wavelet and velocity model together, the proposed method produces good estimates of the source location, ignition time and background velocity for part of the SEG overthrust model.
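
    The convolution trick is worth spelling out: if each data set is convolved with a reference trace from the other set, the unknown wavelet and ignition time appear on both sides of the comparison and cancel. A synthetic check (shared Green's functions stand in for a correct velocity model):

      import numpy as np

      rng = np.random.default_rng(4)
      nt, ntr, ref = 200, 8, 0
      green = rng.normal(size=(nt, ntr))        # stand-in Green's functions

      def shape(w):
          """Traces for a given source wavelet w (same medium for all)."""
          return np.apply_along_axis(lambda g: np.convolve(g, w)[:nt], 0, green)

      d_obs = shape(np.convolve(np.hanning(11), [1.0, -2.0, 1.0]))  # true wavelet
      d_syn = shape(np.array([0.0, 1.0, 0.5]))                      # wrong wavelet

      conv = lambda d, r: np.apply_along_axis(
          lambda t: np.convolve(t, r)[:nt], 0, d)
      lhs = conv(d_obs, d_syn[:, ref])          # observed * modeled reference
      rhs = conv(d_syn, d_obs[:, ref])          # modeled * observed reference
      # Near zero despite the wavelet mismatch: only errors in the Green's
      # functions (i.e., the velocity model) would register in this misfit.
      print("source-independent misfit:", np.sum((lhs - rhs) ** 2))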

  1. Impact of an extended source in laser ablation using pulsed digital holographic interferometry and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Amer, E., E-mail: eynas.amer@ltu.se [Lulea University of Technology, Department of Applied Physics and Mechanical Engineering, SE-971 87 Lulea (Sweden); Gren, P.; Kaplan, A.F.H.; Sjoedahl, M. [Lulea University of Technology, Department of Applied Physics and Mechanical Engineering, SE-971 87 Lulea (Sweden)

    2009-08-15

    Pulsed digital holographic interferometry has been used to study the effect of the laser spot diameter on the shock wave generated in the ablation process of an Nd:YAG laser pulse on a Zn target under atmospheric pressure. For different laser spot diameters and time delays, the propagation of the expanding vapour and of the shock wave was recorded in intensity maps calculated from the recorded digital holograms. From the latter, the phase maps, the refractive index and the density field can be derived. A model was developed that approximates the density distribution, in particular its ellipsoidal expansion characteristics. The induced shock wave has an ellipsoidal shape that approaches a sphere as the spot diameter decreases. The ellipsoidal shock waves have almost the same centre offset towards the laser beam and the same aspect ratio for different time steps. The model facilitates the derivation of the particle velocity field. The method provides valuable quantitative results that are discussed, in particular in comparison with the simpler point-source explosion theory.
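
    The chain from measured phase to density referred to above runs through the refractive index via the Gladstone-Dale relation (K is the species-specific Gladstone-Dale constant and the integral is along the optical path; these are standard relations, given here as an assumed reading of the method):

      \Delta\phi = \frac{2\pi}{\lambda} \int_L \bigl(n - n_0\bigr)\,\mathrm{d}l,
      \qquad n - 1 = K\rho \;\Rightarrow\; \rho = \rho_0 + \frac{n - n_0}{K}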

  2. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    Science.gov (United States)

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework by considering a CO2 industrial point source located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventory data were used as the reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, notably when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
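
    A numerical sketch of the aircraft mass-balance step (synthetic crosswind screen; the emission rate is the flux of above-background CO2 through a vertical plane downwind of the stack):

      import numpy as np

      y = np.linspace(-2000.0, 2000.0, 81)      # crosswind distance, m
      z = np.linspace(0.0, 500.0, 26)           # height, m
      Y, Z = np.meshgrid(y, z)

      c_bg = 1.8e-3                             # background CO2, kg m^-3
      c = c_bg + 4e-5 * np.exp(-(Y / 600.0)**2 - ((Z - 150.0) / 80.0)**2)
      u = 5.0                                   # wind normal to the plane, m/s

      # E = integral over the plane of (c - c_bg) * u, here as a Riemann sum.
      dy, dz = y[1] - y[0], z[1] - z[0]
      emission = ((c - c_bg) * u).sum() * dy * dz
      print(f"estimated emission rate: {emission:.1f} kg/s")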

  3. Can pancreatic duct-derived progenitors be a source of islet regeneration?

    International Nuclear Information System (INIS)

    Xia, Bing; Zhan, Xiao-Rong; Yi, Ran; Yang, Baofeng

    2009-01-01

    The regenerative process of the pancreas is of interest because the main pathogenesis of diabetes mellitus is an inadequate number of insulin-producing β-cells. The functional mass of β-cells is decreased in type 1 diabetes, so replacing missing β-cells or triggering their regeneration may allow for improved type 1 diabetes treatment. Therefore, expansion of the β-cell mass from endogenous sources, either in vivo or in vitro, represents an area of increasing interest. The mechanism of islet regeneration remains poorly understood, but the identification of islet progenitor sources is critical for understanding β-cell regeneration. One potential source is the islet proper, via the dedifferentiation, proliferation, and redifferentiation of facultative progenitors residing within the islet. Neogenesis, i.e., the derivation of new pancreatic islets from progenitor cells present within the ducts, has been reported, but the existence and identity of the progenitor cells have been debated. In this review, we focus on pancreatic ductal cells, which are islet progenitors capable of differentiating into islet β-cells. Islet neogenesis, seen as budding of hormone-positive cells from the ductal epithelium, is considered to be one mechanism for normal islet growth after birth and in regeneration, and has suggested the presence of pancreatic stem cells. Numerous results support the neogenesis hypothesis; the evidence for the hypothesis in the adult comes primarily from morphological studies that have in common the production of damage to all or part of the pancreas, with consequent inflammation and repair. Although numerous studies support a ductal origin for new islets after birth, lineage-tracing experiments are considered the 'gold standard' of proof. Lineage-tracing experiments show that pancreatic duct cells act as progenitors, giving rise to new islets after birth and after injury. The identification of differentiated pancreatic ductal cells as an in vivo progenitor for

  4. Can pancreatic duct-derived progenitors be a source of islet regeneration?

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Bing [Department of Endocrinology, First Hospital of Harbin Medical University, Harbin, Hei Long Jiang Province 150001 (China); Zhan, Xiao-Rong, E-mail: xiaorongzhan@sina.com [Department of Endocrinology, First Hospital of Harbin Medical University, Harbin, Hei Long Jiang Province 150001 (China); Yi, Ran [Department of Endocrinology, First Hospital of Harbin Medical University, Harbin, Hei Long Jiang Province 150001 (China); Yang, Baofeng [Department of Pharmacology, State Key Laboratory of Biomedicine and Pharmacology, Harbin Medical University, Harbin, Hei Long Jiang Province 150001 (China)

    2009-06-12

    The regenerative process of the pancreas is of interest because the main pathogenesis of diabetes mellitus is an inadequate number of insulin-producing β-cells. The functional mass of β-cells is decreased in type 1 diabetes, so replacing missing β-cells or triggering their regeneration may allow for improved type 1 diabetes treatment. Therefore, expansion of the β-cell mass from endogenous sources, either in vivo or in vitro, represents an area of increasing interest. The mechanism of islet regeneration remains poorly understood, but the identification of islet progenitor sources is critical for understanding β-cell regeneration. One potential source is the islet proper, via the dedifferentiation, proliferation, and redifferentiation of facultative progenitors residing within the islet. Neogenesis, i.e., the derivation of new pancreatic islets from progenitor cells present within the ducts, has been reported, but the existence and identity of the progenitor cells have been debated. In this review, we focus on pancreatic ductal cells, which are islet progenitors capable of differentiating into islet β-cells. Islet neogenesis, seen as budding of hormone-positive cells from the ductal epithelium, is considered to be one mechanism for normal islet growth after birth and in regeneration, and has suggested the presence of pancreatic stem cells. Numerous results support the neogenesis hypothesis; the evidence for the hypothesis in the adult comes primarily from morphological studies that have in common the production of damage to all or part of the pancreas, with consequent inflammation and repair. Although numerous studies support a ductal origin for new islets after birth, lineage-tracing experiments are considered the 'gold standard' of proof. Lineage-tracing experiments show that pancreatic duct cells act as progenitors, giving rise to new islets after birth and after injury. The identification of differentiated pancreatic ductal

  5. Source term derivation and radiological safety analysis for the TRICO II research reactor in Kinshasa

    International Nuclear Information System (INIS)

    Muswema, J.L.; Ekoko, G.B.; Lukanda, V.M.; Lobo, J.K.-K.; Darko, E.O.; Boafo, E.K.

    2015-01-01

    Highlights: • Atmospheric dispersion modeling for two credible accidents of the TRIGA Mark II research reactor in Kinshasa (TRICO II) was performed. • Radiological safety analysis after the postulated initiating events (PIE) was also carried out. • The Karlsruhe KORIGEN and the HotSpot Health Physics codes were used to achieve the objectives of this study. • All the values of effective dose obtained following the accident scenarios were below the regulatory limits for reactor staff members and the public, respectively. - Abstract: The source term of the 1 MW TRIGA Mark II research reactor core of the Democratic Republic of the Congo was derived in this study. Atmospheric dispersion modeling followed by radiation dose calculations was performed based on two postulated accident scenarios. The derivation was made from an inventory of peak radioisotope activities released in the core, computed with the Karlsruhe version of the isotope generation code KORIGEN. The atmospheric dispersion modeling was performed with the HotSpot code, and its application yielded the radiation dose profile around the site using meteorological parameters specific to the area under study. The two accident scenarios were drawn from possible accident analyses for TRIGA and TRIGA-fueled reactors: the destruction of the fuel element with the highest activity release, and a plane crash on the reactor building as the worst-case scenario. Deterministic effects of these scenarios are used to update the Safety Analysis Report (SAR) of the reactor; in its current version, these scenarios are not yet incorporated. Site-specific meteorological conditions were collected from two meteorological stations: one installed within the Atomic Energy Commission and another at the National Meteorological Agency (METTELSAT), which is not far from the site. Results show that in both accident scenarios, radiation doses remain within the limits, far below the recommended maximum effective dose.
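
    HotSpot-type codes are built around the Gaussian plume solution; a stripped-down, ground-reflected version of it (with a simple power-law stand-in for HotSpot's stability-class dispersion curves) looks like:

      import numpy as np

      def plume_conc(Q, u, x, y=0.0, z=0.0, H=30.0):
          """Concentration (Bq m^-3) at (x, y, z) downwind of a release of
          Q Bq/s at effective height H m in wind speed u m/s."""
          sy, sz = 0.08 * x**0.9, 0.06 * x**0.85   # assumed dispersion law
          lateral = np.exp(-y**2 / (2 * sy**2))
          vertical = (np.exp(-(z - H)**2 / (2 * sz**2))
                      + np.exp(-(z + H)**2 / (2 * sz**2)))  # ground reflection
          return Q / (2 * np.pi * u * sy * sz) * lateral * vertical

      for x in (100.0, 500.0, 1000.0, 3000.0):
          print(f"x = {x:6.0f} m   C/Q = {plume_conc(1.0, 3.0, x):.3e} s m^-3")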

  6. Source term derivation and radiological safety analysis for the TRICO II research reactor in Kinshasa

    Energy Technology Data Exchange (ETDEWEB)

    Muswema, J.L., E-mail: jeremie.muswem@unikin.ac.cd [Faculty of Science, University of Kinshasa, P.O. Box 190, KIN XI (Congo, The Democratic Republic of the); Ekoko, G.B. [Faculty of Science, University of Kinshasa, P.O. Box 190, KIN XI (Congo, The Democratic Republic of the); Lukanda, V.M. [Faculty of Science, University of Kinshasa, P.O. Box 190, KIN XI (Congo, The Democratic Republic of the); Democratic Republic of the Congo' s General Atomic Energy Commission, P.O. Box AE1 (Congo, The Democratic Republic of the); Lobo, J.K.-K. [Faculty of Science, University of Kinshasa, P.O. Box 190, KIN XI (Congo, The Democratic Republic of the); Darko, E.O. [Radiation Protection Institute, Ghana Atomic Energy Commission, P.O. Box LG 80, Legon, Accra (Ghana); Boafo, E.K. [University of Ontario Institute of Technology, 2000 Simcoe St. North, Oshawa, ONL1 H7K4 (Canada)

    2015-01-15

    Highlights: • Atmospheric dispersion modeling for two credible accidents of the TRIGA Mark II research reactor in Kinshasa (TRICO II) was performed. • Radiological safety analysis after the postulated initiating events (PIE) was also carried out. • The Karlsruhe KORIGEN and the HotSpot Health Physics codes were used to achieve the objectives of this study. • All the values of effective dose obtained following the accident scenarios were below the regulatory limits for reactor staff members and the public, respectively. - Abstract: The source term of the 1 MW TRIGA Mark II research reactor core of the Democratic Republic of the Congo was derived in this study. Atmospheric dispersion modeling followed by radiation dose calculations was performed based on two postulated accident scenarios. The derivation was made from an inventory of peak radioisotope activities released in the core, computed with the Karlsruhe version of the isotope generation code KORIGEN. The atmospheric dispersion modeling was performed with the HotSpot code, and its application yielded the radiation dose profile around the site using meteorological parameters specific to the area under study. The two accident scenarios were drawn from possible accident analyses for TRIGA and TRIGA-fueled reactors: the destruction of the fuel element with the highest activity release, and a plane crash on the reactor building as the worst-case scenario. Deterministic effects of these scenarios are used to update the Safety Analysis Report (SAR) of the reactor; in its current version, these scenarios are not yet incorporated. Site-specific meteorological conditions were collected from two meteorological stations: one installed within the Atomic Energy Commission and another at the National Meteorological Agency (METTELSAT), which is not far from the site. Results show that in both accident scenarios, radiation doses remain within the limits, far below the recommended maximum effective dose.

  7. An analytical threshold voltage model for a short-channel dual-metal-gate (DMG) recessed-source/drain (Re-S/D) SOI MOSFET

    Science.gov (United States)

    Saramekala, G. K.; Santra, Abirmoya; Dubey, Sarvesh; Jit, Satyabrata; Tiwari, Pramod Kumar

    2013-08-01

    In this paper, an analytical short-channel threshold voltage model is presented for a dual-metal-gate (DMG) fully depleted recessed-source/drain (Re-S/D) SOI MOSFET. For the first time, the advantages of the recessed source/drain (Re-S/D) and of the dual-metal-gate structure are incorporated simultaneously in a fully depleted SOI MOSFET. Analytical surface potential models at the Si-channel/SiO2 interface and the Si-channel/buried-oxide (BOX) interface have been developed by solving the 2-D Poisson equation in the channel region with appropriate boundary conditions, assuming a parabolic potential profile in the transverse direction of the channel. Thereupon, a threshold voltage model is derived from the minimum surface potential in the channel. The developed model is analyzed extensively for a variety of device parameters such as the oxide and silicon channel thicknesses, the thickness of the source/drain extension in the BOX, and the control-to-screen gate length ratio. The validity of the present 2D analytical model is verified against ATLAS™, a 2D device simulator from SILVACO Inc.
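
    The parabolic-profile assumption mentioned above takes the standard form (y measured into the silicon film; the coefficients c1, c2 follow from the oxide-field boundary conditions, and the threshold is defined at the point of minimum surface potential):

      \phi(x, y) = \phi_s(x) + c_1(x)\,y + c_2(x)\,y^2, \qquad
      V_{th}:\ \min_x \phi_s(x) = 2\phi_F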

  8. Induced pluripotent stem cells (iPSC)-derived retinal cells in disease modeling and regenerative medicine.

    Science.gov (United States)

    Rathod, Reena; Surendran, Harshini; Battu, Rajani; Desai, Jogin; Pal, Rajarshi

    2018-02-12

    Retinal degenerative disorders are a leading cause of inherited, irreversible and incurable vision loss. While various rodent model systems have provided crucial information in this direction, the lack of disease-relevant tissue availability and species-specific differences have proven to be a major roadblock. Human induced pluripotent stem cells (iPSC) have opened up a whole new avenue of possibilities, not just in understanding the disease mechanism but also in potential therapeutic approaches towards a cure. In this review, we have summarized recent advances in the methods of deriving retinal cell types from iPSCs, which can serve as a renewable source of disease-relevant cell populations for basic as well as translational studies. We also provide an overview of the ongoing efforts towards developing a suitable in vitro model for modeling retinal degenerative diseases. This basic understanding has in turn contributed to advances in translational goals such as drug screening and cell-replacement therapies. Furthermore, we discuss gene editing approaches for autologous repair of genetic disorders and allogeneic transplantation of stem cell-based retinal derivatives for degenerative disorders, with the ultimate goal of restoring vision. It is pertinent to note, however, that these exciting new developments throw up several challenges that need to be overcome before their full clinical potential can be realized. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Interactions of donor sources and media influence the histo-morphological quality of full-thickness skin models.

    Science.gov (United States)

    Lange, Julia; Weil, Frederik; Riegler, Christoph; Groeber, Florian; Rebhan, Silke; Kurdyn, Szymon; Alb, Miriam; Kneitz, Hermann; Gelbrich, Götz; Walles, Heike; Mielke, Stephan

    2016-10-01

    Human artificial skin models are increasingly employed as non-animal test platforms for research and medical purposes. However, the overall histopathological quality of such models may vary significantly. Therefore, the effects of manufacturing protocols and donor sources on the quality of skin models built up from fibroblasts and keratinocytes derived from juvenile foreskins are studied. Histo-morphological parameters such as epidermal thickness, number of epidermal cell layers, dermal thickness, dermo-epidermal adhesion and absence of cellular nuclei in the corneal layer were obtained and scored accordingly. In total, 144 full-thickness skin models derived from 16 different donors, built up in triplicate using three different culture conditions, were successfully generated. In univariate analysis, both media and donor age affected the quality of skin models significantly. Both parameters remained statistically significant in multivariate analyses. Performing general linear model analyses, we could show that individual medium-donor interactions influence the quality. These observations suggest that the optimal choice of media may differ from donor to donor, and they coincide with findings in which significant inter-individual variations of growth rates in keratinocytes and fibroblasts have been described. Thus, the consideration of individual medium-donor interactions may improve the overall quality of human organ models, thereby forming a reproducible test platform for sophisticated clinical research. Copyright © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. OSeMOSYS: The Open Source Energy Modeling System

    International Nuclear Information System (INIS)

    Howells, Mark; Rogner, Holger; Strachan, Neil; Heaps, Charles; Huntington, Hillard; Kypreos, Socrates; Hughes, Alison; Silveira, Semida; DeCarolis, Joe; Bazillian, Morgan; Roehrl, Alexander

    2011-01-01

    This paper discusses the design and development of the Open Source Energy Modeling System (OSeMOSYS). It describes the model's formulation in terms of a 'plain English' description, its algebraic formulation, its implementation in terms of its full source code, as well as a detailed description of the model inputs, parameters, and outputs. A key feature of the OSeMOSYS implementation is that it is contained in less than five pages of documented, easily accessible code. Other existing energy system models lack this emphasis on compactness and openness, which makes the barrier to entry for new users much higher and the addition of innovative new functionality very difficult. The paper begins by describing the rationale for the development of OSeMOSYS and its structure. The current preliminary implementation of the model is then demonstrated for a discrete example. Next, we explain how new development efforts will build on the existing OSeMOSYS codebase. The paper closes with thoughts regarding the organization of the OSeMOSYS community, associated capacity development efforts, and linkages to other open source efforts, including adding functionality to the LEAP model. - Highlights: → OSeMOSYS is a new free and open source energy systems model. → The model is written in a simple, open, flexible and transparent manner to support teaching. → OSeMOSYS is based on free software and optimizes using a free solver. → The model replicates the results of many popular tools, such as MARKAL. → A link between OSeMOSYS and LEAP has been developed.
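
    OSeMOSYS itself is written as an algebraic (GNU MathProg) optimization model; the following toy cost-minimizing dispatch is only a schematic analogue of that flavor, with invented numbers:

      from scipy.optimize import linprog

      cost = [30.0, 70.0]        # variable cost per MWh, two technologies
      demand = 120.0             # demand in a single time slice, MWh
      capacity = [100.0, 80.0]   # available output per technology, MWh

      # Minimize cost subject to g1 + g2 >= demand and capacity bounds.
      res = linprog(c=cost,
                    A_ub=[[-1.0, -1.0]], b_ub=[-demand],
                    bounds=list(zip([0.0, 0.0], capacity)))
      print("dispatch:", res.x, " total cost:", res.fun)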

  11. Monte Carlo modelling of large scale NORM sources using MCNP.

    Science.gov (United States)

    Wallace, J D

    2013-12-01

    The representative Monte Carlo modelling of large scale planar sources (for comparison to external environmental radiation fields) is undertaken using large-diameter, thin-profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source to detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source to detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
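
    As a rough plausibility check (not a substitute for the MCNP model), the sketch below integrates the uncollided flux reaching a detector 1 m above a uniform disk source with simple exponential attenuation in air; the attenuation coefficient and geometry are illustrative assumptions. It shows the logarithmically slow convergence of the direct flux with source radius, which is why such large source extents must be modelled; in the full MCNP model, scatter, soil attenuation and the detector response alter these fractions.

      import numpy as np
      from scipy.integrate import quad

      MU = 0.009    # assumed linear attenuation coefficient of air, 1/m
      H = 1.0       # detector height above the source plane, m

      def annulus_flux(rho):
          """Uncollided-flux contribution of a source annulus at radius rho."""
          r2 = rho**2 + H**2
          return np.exp(-MU * np.sqrt(r2)) / (4.0 * np.pi * r2) * 2.0 * np.pi * rho

      total, _ = quad(annulus_flux, 0.0, np.inf)    # quasi-infinite plane
      for radius in (5.0, 10.0, 20.0, 50.0):
          part, _ = quad(annulus_flux, 0.0, radius)
          print(f"within {radius:5.1f} m: {100.0 * part / total:5.1f} % of flux")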

  12. Response-surface models for deterministic effects of localized irradiation of the skin by discrete β/γ-emitting sources

    Energy Technology Data Exchange (ETDEWEB)

    Scott, B.R.

    1995-12-01

    Individuals who work at nuclear reactor facilities can be at risk for deterministic effects in the skin from exposure to discrete β- and γ-emitting (βγE) sources (e.g., βγE hot particles) on the skin or clothing. Deterministic effects are non-cancer effects that have a threshold and increase in severity as dose increases (e.g., ulcer in skin). Hot βγE particles are ⁶⁰Co- or nuclear fuel-derived particles with diameters > 10 μm and < 3 mm that contain at least 3.7 kBq (0.1 μCi) of radioactivity. For such βγE sources on the skin, it is the beta component of the dose that is most important. To develop exposure limitation systems that adequately control exposure of workers to discrete βγE sources, models are needed for evaluating the risk of deterministic effects of localized β irradiation of the skin. The purpose of this study was to develop dose-rate- and irradiated-area-dependent response-surface models for evaluating risks of significant deterministic effects of localized irradiation of the skin by discrete βγE sources, and to use the modeling results to recommend approaches to limiting occupational exposure to such sources. The significance of the research results is as follows: (1) response-surface models are now available for evaluating the risk of specific deterministic effects of localized irradiation of the skin; (2) modeling results have been used to recommend approaches to limiting occupational exposure of workers to β radiation from βγE sources on the skin or on clothing; and (3) the generic irradiated-volume, weighting-factor approach to limiting exposure can be applied to other organs, including the eye, the ear, and organs of the respiratory or gastrointestinal tract, and can be used for both deterministic and stochastic effects.

  13. Response-surface models for deterministic effects of localized irradiation of the skin by discrete β/γ-emitting sources

    International Nuclear Information System (INIS)

    Scott, B.R.

    1995-01-01

    Individuals who work at nuclear reactor facilities can be at risk for deterministic effects in the skin from exposure to discrete β- and γ-emitting (βγE) sources (e.g., βγE hot particles) on the skin or clothing. Deterministic effects are non-cancer effects that have a threshold and increase in severity as dose increases (e.g., ulcer in skin). Hot βγE particles are ⁶⁰Co- or nuclear fuel-derived particles with diameters > 10 μm and < 3 mm that contain at least 3.7 kBq (0.1 μCi) of radioactivity. For such βγE sources on the skin, it is the beta component of the dose that is most important. To develop exposure limitation systems that adequately control exposure of workers to discrete βγE sources, models are needed for evaluating the risk of deterministic effects of localized β irradiation of the skin. The purpose of this study was to develop dose-rate- and irradiated-area-dependent response-surface models for evaluating risks of significant deterministic effects of localized irradiation of the skin by discrete βγE sources, and to use the modeling results to recommend approaches to limiting occupational exposure to such sources. The significance of the research results is as follows: (1) response-surface models are now available for evaluating the risk of specific deterministic effects of localized irradiation of the skin; (2) modeling results have been used to recommend approaches to limiting occupational exposure of workers to β radiation from βγE sources on the skin or on clothing; and (3) the generic irradiated-volume, weighting-factor approach to limiting exposure can be applied to other organs, including the eye, the ear, and organs of the respiratory or gastrointestinal tract, and can be used for both deterministic and stochastic effects.

  14. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations are divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  15. The Chandra Source Catalog: Spectral Properties

    Science.gov (United States)

    Doe, Stephen; Siemiginowska, Aneta L.; Refsdal, Brian L.; Evans, Ian N.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Glotfelty, Kenny J.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Primini, Francis A.; Rots, Arnold H.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2009-09-01

    The first release of the Chandra Source Catalog (CSC) contains all sources identified from eight years' worth of publicly accessible observations. The vast majority of these sources have been observed with the ACIS detector and have spectral information in the 0.5-7 keV energy range. Here we describe the methods used to automatically derive spectral properties for each source detected by the standard processing pipeline and included in the final CSC. Hardness ratios were calculated for each source between pairs of energy bands (soft, medium and hard) using the Bayesian algorithm (BEHR, Park et al. 2006). The sources with high signal-to-noise ratio (exceeding 150 net counts) were fit in Sherpa (the modeling and fitting application from the Chandra Interactive Analysis of Observations package, developed by the Chandra X-ray Center; see Freeman et al. 2001). Two models were fit to each source: an absorbed power law and a blackbody. The fitted parameter values for the power-law and blackbody models were included in the catalog with the calculated flux for each model. The CSC also provides the source energy flux computed from the normalizations of predefined power-law and blackbody models needed to match the observed net X-ray counts. In addition, we provide access to data products for each source: a file with the source spectrum, the background spectrum, and the spectral response of the detector. This work is supported by NASA contract NAS8-03060 (CXC).
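
    For orientation, a fit of the kind described might look roughly like the following Sherpa sketch; the file name is a placeholder, and the XSPEC absorption model xsphabs assumes Sherpa was built with XSPEC model support.

      from sherpa.astro import ui

      ui.load_pha("source.pi")       # spectrum; background/responses via header
      ui.set_analysis("energy")
      ui.subtract()                  # subtract the background spectrum
      ui.notice(0.5, 7.0)            # restrict the fit to the 0.5-7 keV band

      # Absorbed power law, one of the two models fit per source
      ui.set_source(ui.xsphabs.gal * ui.powlaw1d.pl)
      ui.fit()

      print(ui.calc_energy_flux(0.5, 7.0))   # energy flux in the fitted band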

  16. Analytical Subthreshold Current and Subthreshold Swing Models for a Fully Depleted (FD) Recessed-Source/Drain (Re-S/D) SOI MOSFET with Back-Gate Control

    Science.gov (United States)

    Saramekala, Gopi Krishna; Tiwari, Pramod Kumar

    2017-08-01

    Two-dimensional (2D) analytical models for the subthreshold current and subthreshold swing of the back-gated fully depleted recessed-source/drain (Re-S/D) silicon-on-insulator (SOI) metal-oxide-semiconductor field-effect transistor (MOSFET) are presented. The surface potential is determined by solving the 2D Poisson equation in both channel and buried-oxide (BOX) regions, considering suitable boundary conditions. To derive closed-form expressions for the subthreshold characteristics, the virtual cathode potential expression has been derived in terms of the minimum of the front and back surface potentials. The effect of various device parameters such as gate oxide and Si film thicknesses, thickness of source/drain penetration into BOX, applied back-gate bias voltage, etc. on the subthreshold current and subthreshold swing has been analyzed. The validity of the proposed models is established using the Silvaco ATLAS™ 2D device simulator.

  17. Physics Mining of Multi-source Data Sets, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to implement novel physics mining algorithms with analytical capabilities to derive diagnostic and prognostic numerical models from multi-source...

  18. Sources of present Chernobyl-derived caesium concentrations in surface air and deposition samples

    Energy Technology Data Exchange (ETDEWEB)

    Hoetzl, H.; Rosner, G.; Winkler, R. (Gesellschaft fuer Strahlen- und Umweltforschung mbH Muenchen, Neuherberg (Germany). Inst. fuer Strahlenschutz)

    1992-06-01

    The sources of Chernobyl-derived caesium concentrations in air and deposition samples collected from mid-1986 to end-1990 at Munich-Neuherberg, Germany, were investigated. Local resuspension has been found to be the main source. By comparison with deposition data from other locations it is estimated that, within a range from 20 Bq m⁻² to 60 kBq m⁻² of initially deposited ¹³⁷Cs activity, ≈2% is re-deposited by the process of local resuspension in Austria, Germany, Japan and the United Kingdom, while significantly higher total resuspension is to be expected for Denmark and Finland. The stratospheric contribution to present concentrations is shown to be negligible. This is confirmed by cross-correlation analysis between the time series of ¹³⁷Cs in air and precipitation before and after the Chernobyl accident and the respective time series of cosmogenic ⁷Be, which is an indicator of stratospheric input. Seasonal variations of caesium concentrations with maxima in winter were observed. (author). 32 refs.; 5 figs.; 1 tab.

  19. Modeling the NPE with finite sources and empirical Green's functions

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.; Kasameyer, P.; Goldstein, P. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1994-12-31

    In order to better understand the source characteristics of both nuclear and chemical explosions for purposes of discrimination, we have modeled the NPE chemical explosion as a finite source and with empirical Green's functions. Seismograms are synthesized at four sites to test the validity of source models. We use a smaller chemical explosion detonated in the vicinity of the working point to obtain empirical Green's functions. Empirical Green's functions contain all the linear information of the geology along the propagation path and recording site, which are identical for chemical or nuclear explosions, and therefore reduce the variability in modeling the source of the larger event. We further constrain the solution to have the overall source duration obtained from point-source deconvolution results. In modeling the source, we consider both an elastic source on a spherical surface and an inelastic expanding spherical volume source. We found that the spherical volume solution provides better fits to observed seismograms. The potential to identify secondary sources was examined, but the resolution is too poor to be definitive.

  20. Added-value joint source modelling of seismic and geodetic data

    Science.gov (United States)

    Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank

    2013-04-01

    In tectonically active regions earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only at a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These required basis model parameters have to be given, assumed or inferred in a previous, separate and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The model implementation allows for fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g. the high definition of the source location from geodetic data and the sensitivity of the seismic data to moment release at greater depth. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with very limited a priori assumptions on the source. A vital part of our approach is rigorous data weighting based on the empirically estimated data errors. We construct full data error variance-covariance matrices for geodetic data to account for correlated data noise and also weight the seismic data based on their signal-to-noise ratio. The estimation of the data errors and the fast forward modelling open the door for Bayesian inferences of the source
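
    The rigorous data weighting described amounts to evaluating a generalized least-squares misfit r^T C^(-1) r with the full error variance-covariance matrix C; a minimal sketch with synthetic residuals and an assumed exponential correlation model:

      import numpy as np
      from scipy.linalg import cho_factor, cho_solve

      rng = np.random.default_rng(0)

      # Residuals between observed and predicted surface displacements (synthetic)
      r = rng.normal(scale=0.1, size=50)

      # Data error covariance with exponentially correlated noise (assumed form)
      idx = np.arange(50)
      C = 0.1**2 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)

      # Weighted misfit r^T C^(-1) r via a Cholesky solve (never forms C^(-1))
      chi2 = r @ cho_solve(cho_factor(C), r)
      print(chi2)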

  1. On (in)stabilities of perturbations in mimetic models with higher derivatives

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Yunlong; Shen, Liuyuan [Department of Physics, Nanjing University, Nanjing 210093 (China); Mou, Yicen; Li, Mingzhe, E-mail: zylakx@163.com, E-mail: sly12271103@163.com, E-mail: moinch@mail.ustc.edu.cn, E-mail: limz@ustc.edu.cn [Interdisciplinary Center for Theoretical Study, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2017-08-01

    Usually, when applying the mimetic model to the early universe, higher derivative terms are needed to promote the mimetic field to be dynamical. However, such models suffer from ghost and/or gradient instabilities, and simple extensions cannot cure this pathology. We point out in this paper that it is possible to overcome this difficulty by considering direct couplings of the higher derivatives of the mimetic field to the curvature of the spacetime.

  2. Assessing the impact of different sources of topographic data on 1-D hydraulic modelling of floods

    Science.gov (United States)

    Ali, A. Md; Solomatine, D. P.; Di Baldassarre, G.

    2015-01-01

    Topographic data, such as digital elevation models (DEMs), are essential input in flood inundation modelling. DEMs can be derived from several sources, either through remote sensing techniques (spaceborne or airborne imagery) or from traditional methods (ground survey). The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), the Shuttle Radar Topography Mission (SRTM), light detection and ranging (lidar), and topographic contour maps are some of the most commonly used sources of data for DEMs. These DEMs are characterized by different precision and accuracy. On the one hand, the spatial resolution of low-cost DEMs from satellite imagery, such as ASTER and SRTM, is rather coarse (around 30 to 90 m). On the other hand, the lidar technique is able to produce high-resolution DEMs (at around 1 m), but at a much higher cost. Lastly, contour mapping based on ground survey is time consuming, particularly at larger scales, and may not be possible for some remote areas. The use of these different sources of DEMs obviously affects the results of flood inundation models. This paper develops and compares a number of 1-D hydraulic models, using HEC-RAS as model code and the aforementioned sources of DEM as geometric input. To test model selection, the outcomes of the 1-D models were also compared, in terms of flood water levels, to the results of 2-D models (LISFLOOD-FP). The study was carried out on a reach of the Johor River, in Malaysia. The effect of the different sources of DEMs (and different resolutions) was investigated by considering the performance of the hydraulic models in simulating flood water levels as well as inundation maps. The outcomes of our study show that the use of different DEMs has serious implications for the results of hydraulic models. The outcomes also indicate that the loss of model accuracy due to re-sampling the highest resolution DEM (i.e. lidar 1 m) to lower resolution is much less than the loss of model accuracy due

  3. MATLAB-based algorithm to estimate depths of isolated thin dike-like sources using higher-order horizontal derivatives of magnetic anomalies.

    Science.gov (United States)

    Ekinci, Yunus Levent

    2016-01-01

    This paper presents an easy-to-use open source computer algorithm (code) for estimating the depths of isolated single thin dike-like source bodies by using numerical second-, third-, and fourth-order horizontal derivatives computed from observed magnetic anomalies. The approach does not require a priori information and uses some filters of successive graticule spacings. The computed higher-order horizontal derivative datasets are used to solve nonlinear equations for depth determination. The solutions are independent from the magnetization and ambient field directions. The practical usability of the developed code, designed in MATLAB R2012b (MathWorks Inc.), was successfully examined using some synthetic simulations with and without noise. The algorithm was then used to estimate the depths of some ore bodies buried in different regions (USA, Sweden, and Canada). Real data tests clearly indicated that the obtained depths are in good agreement with those of previous studies and drilling information. Additionally, a state-of-the-art inversion scheme based on particle swarm optimization produced comparable results to those of the higher-order horizontal derivative analyses in both synthetic and real anomaly cases. Accordingly, the proposed code is verified to be useful in interpreting isolated single thin dike-like magnetized bodies and may be an alternative processing technique. The open source code can be easily modified and adapted to suit the benefits of other researchers.
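
    The published code is in MATLAB; purely as a language-neutral illustration of the core ingredient (successive numerical higher-order horizontal derivatives of an anomaly profile), a numpy sketch follows. The synthetic dike-like anomaly and station spacing are invented, and the paper's depth solver is not reproduced here.

      import numpy as np

      dx = 10.0                        # station spacing, m (assumed)
      x = np.arange(-500.0, 500.0, dx)

      # Synthetic anomaly over a thin dike-like body (illustrative shape only)
      z0 = 40.0                        # depth used to build the synthetic, m
      anomaly = x / (x**2 + z0**2)

      # Successive horizontal derivatives of increasing order
      d2 = np.gradient(np.gradient(anomaly, dx), dx)
      d3 = np.gradient(d2, dx)
      d4 = np.gradient(d3, dx)
      print(d2.max(), d3.max(), d4.max())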

  4. Algorithm for Financial Derivatives Evaluation in a Generalized Multi-Heston Model

    Directory of Open Access Journals (Sweden)

    Dan Negura

    2013-02-01

    In this paper we show how a financial derivative could be estimated based on an assumed Multi-Heston model. Keywords: Euler-Maruyama discretization method, Monte Carlo simulation, Heston model, Double-Heston model, Multi-Heston model
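
    To illustrate the Euler-Maruyama/Monte Carlo machinery named in the abstract, the sketch below prices a European call under a single-factor Heston model (not the paper's multi-factor extension); all parameter values are invented.

      import numpy as np

      rng = np.random.default_rng(1)
      S0, v0, r = 100.0, 0.04, 0.02          # spot, initial variance, rate
      kappa, theta, xi, rho = 2.0, 0.04, 0.3, -0.7
      T, n_steps, n_paths, K = 1.0, 252, 100_000, 100.0
      dt = T / n_steps

      S = np.full(n_paths, S0)
      v = np.full(n_paths, v0)
      for _ in range(n_steps):
          z1 = rng.standard_normal(n_paths)
          z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
          vp = np.maximum(v, 0.0)            # full-truncation Euler scheme
          S *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
          v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2

      call = np.exp(-r * T) * np.maximum(S - K, 0.0).mean()
      print(f"European call estimate: {call:.3f}")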

  5. A model for superluminal radio sources

    International Nuclear Information System (INIS)

    Milgrom, M.; Bahcall, J.N.

    1977-01-01

    A geometrical model for superluminal radio sources is described. Six predictions that can be tested by observations are summarized. The results are in agreement with all the available observations. In this model, the Hubble constant is the only numerical parameter that is important in interpreting the observed rates of change of angular separations for small redshifts. The available observations imply that H0 is less than 55 km/s/Mpc if the model is correct. (author)
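
    In geometrical models of this kind the apparent transverse speed is beta_app = beta*sin(theta)/(1 - beta*cos(theta)) in units of c, which exceeds 1 for small viewing angles; a quick numerical check with illustrative values:

      import numpy as np

      def beta_apparent(beta, theta):
          """Apparent transverse speed (units of c) for true speed beta
          and viewing angle theta (radians) to the line of sight."""
          return beta * np.sin(theta) / (1.0 - beta * np.cos(theta))

      beta = 0.98
      for deg in (5, 10, 20, 45):
          th = np.radians(deg)
          print(f"theta = {deg:2d} deg -> beta_app = {beta_apparent(beta, th):.2f}")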

  6. Topographic filtering simulation model for sediment source apportionment

    Science.gov (United States)

    Cho, Se Jong; Wilcock, Peter; Hobbs, Benjamin

    2018-05-01

    We propose a Topographic Filtering simulation model (Topofilter) that can be used to identify those locations that are likely to contribute most of the sediment load delivered from a watershed. The reduced-complexity model links spatially distributed estimates of annual soil erosion, high-resolution topography, and observed sediment loading to determine the distribution of the sediment delivery ratio across a watershed. The model uses two simple two-parameter topographic transfer functions based on the distance and change in elevation from upland sources to the nearest stream channel and then down the stream network. The approach does not attempt to find a single best-calibrated solution of sediment delivery, but uses a model conditioning approach to develop a large number of possible solutions. For each model run, the locations that contribute 90% of the sediment loading are identified; the locations that appear in this set in most of the 10,000 model runs are identified as the sources most likely to contribute most of the sediment delivered to the watershed outlet. Because the underlying model is quite simple and strongly anchored by reliable information on soil erosion, topography, and sediment load, we believe that the ensemble of simulation outputs provides a useful basis for identifying the dominant sediment sources in the watershed.

  7. Modeling and reliability analysis of three phase z-source AC-AC converter

    Directory of Open Access Journals (Sweden)

    Prasad Hanuman

    2017-12-01

    This paper presents small-signal modeling, using the state-space averaging technique, and reliability analysis of a three-phase z-source ac-ac converter. By controlling the shoot-through duty ratio, it can operate in buck-boost mode and maintain the desired output voltage during voltage sag and surge conditions. It has a faster dynamic response and higher efficiency compared to the traditional voltage regulator. Small-signal analysis derives the various control transfer functions, which leads to the design of a suitable controller for the closed-loop system under supply voltage variation. The closed-loop system of the converter with a PID controller eliminates transients in the output voltage and provides a steady-state regulated output. The proposed model was designed in RT-LAB and executed on a field-programmable gate array (FPGA)-based real-time digital simulator at a fixed time step of 10 μs and a constant switching frequency of 10 kHz. The simulator was developed using the very high speed integrated circuit hardware description language (VHDL), making it versatile and portable. Hardware-in-the-loop (HIL) simulation results are presented to corroborate the MATLAB simulation results under supply voltage variation of the three-phase z-source ac-ac converter. A reliability analysis has been applied to the converter to determine the failure rates of its different components.

  8. Procedure to derive analytical models for microwave noise performances of Si/SiGe:C and InP/InGaAs heterojunction bipolar transistors

    International Nuclear Information System (INIS)

    Ramirez-Garcia, E; Enciso-Aguilar, M A; Aniel, F P; Zerounian, N

    2013-01-01

    We present a useful procedure to derive simplified expressions to model the minimum noise factor and the equivalent noise resistance of Si/SiGe:C and InP/InGaAs heterojunction bipolar transistors (HBTs). An acceptable agreement between models and measurements at operation frequencies up to 18 GHz and at several bias points is demonstrated. The development procedure includes all the significant microwave noise sources of the HBTs. These relations should be useful to model F_min and R_n for state-of-the-art IV-IV and III-V HBTs. The method is a first step towards deriving noise analysis formulas valid for operation frequencies near the unity-current-gain frequency (f_T); however, to achieve this goal a necessary condition is access to high-frequency noise measurements up to this frequency regime. (paper)
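
    The quantities modelled belong to the standard two-port noise description, in which the noise factor varies with source admittance as F = F_min + (R_n/G_s)|Y_s - Y_opt|^2; a small sketch evaluating this relation with invented element values:

      import numpy as np

      def noise_factor(Ys, Fmin, Rn, Yopt):
          """Two-port noise factor for a given source admittance Ys (siemens)."""
          return Fmin + (Rn / Ys.real) * abs(Ys - Yopt) ** 2

      Fmin = 1.2                # minimum noise factor, linear (invented)
      Rn = 15.0                 # equivalent noise resistance, ohms (invented)
      Yopt = 0.02 - 0.005j      # optimum source admittance, S (invented)

      for Ys in (0.02 - 0.005j, 0.02 + 0.0j, 0.05 - 0.01j):
          F = noise_factor(Ys, Fmin, Rn, Yopt)
          print(f"Ys = {Ys}: NF = {10.0 * np.log10(F):.2f} dB")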

  9. Development of a Model for Dynamic Recrystallization Consistent with the Second Derivative Criterion

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2017-11-01

    Dynamic recrystallization (DRX) processes are widely used in industrial hot working operations, not only to keep the forming forces low but also to control the microstructure and final properties of the workpiece. According to the second derivative criterion (SDC) by Poliak and Jonas, the onset of DRX can be detected from an inflection point in the strain-hardening rate as a function of flow stress. Various models are available that can predict the evolution of flow stress from incipient plastic flow up to steady-state deformation in the presence of DRX. Some of these models have been implemented into finite element codes and are widely used for the design of metal forming processes, but their consistency with the SDC has not been investigated. This work identifies three sources of inconsistencies that models for DRX may exhibit. For a consistent modeling of the DRX kinetics, a new strain-hardening model for the hardening stages III to IV is proposed and combined with consistent recrystallization kinetics. The model is devised in the Kocks-Mecking space based on the characteristic transitions in the strain-hardening rate. A linear variation of the transition and inflection points is observed for alloy 800H at all tested temperatures and strain rates. The comparison of experimental and model results shows that the model is able to follow the course of the strain-hardening rate very precisely, such that highly accurate flow stress predictions are obtained.
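
    The second derivative criterion can be applied numerically to flow-curve data: compute the strain-hardening rate theta = d(sigma)/d(epsilon) and locate the inflection of theta(sigma) where its second derivative with respect to stress changes sign. A sketch on a synthetic pre-peak flow curve (the constitutive form and constants are invented for illustration):

      import numpy as np

      # Synthetic pre-peak flow curve (Cingara-McQueen form; constants invented)
      C, eps_p, sigma_p = 0.3, 0.3, 150.0
      eps = np.linspace(0.01, 0.29, 300)
      u = eps / eps_p
      sigma = sigma_p * (u * np.exp(1.0 - u)) ** C

      theta = np.gradient(sigma, eps)              # strain-hardening rate
      d2 = np.gradient(np.gradient(theta, sigma), sigma)

      # Second derivative criterion: inflection of theta(sigma) marks DRX onset
      k = np.where(np.diff(np.sign(d2)))[0][0]
      print(f"DRX onset (inflection) near sigma = {sigma[k]:.1f}, eps = {eps[k]:.3f}")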

  10. On the sources of technological change: What do the models assume?

    International Nuclear Information System (INIS)

    Clarke, Leon; Weyant, John; Edmonds, Jae

    2008-01-01

    It is widely acknowledged that technological change can substantially reduce the costs of stabilizing atmospheric concentrations of greenhouse gases. This paper discusses the sources of technological change and the representations of these sources in formal models of energy and the environment. The paper distinguishes between three major sources of technological change (R&D, learning-by-doing, and spillovers) and introduces a conceptual framework for linking modeling approaches to assumptions about these real-world sources. A selective review of modeling approaches, including those employing exogenous technological change, suggests that most formal models have meaningful real-world interpretations that focus on a subset of possible sources of technological change while downplaying the roles of others.

  11. The Unfolding of Value Sources During Online Business Model Transformation

    Directory of Open Access Journals (Sweden)

    Nadja Hoßbach

    2016-12-01

    Purpose: In the magazine publishing industry, viable online business models are still rare to absent. To prepare for the 'digital future' and safeguard their long-term survival, many publishers are currently in the process of transforming their online business model. Against this backdrop, this study aims to develop a deeper understanding of (1) how the different building blocks of an online business model are transformed over time and (2) how sources of value creation unfold during this transformation process. Methodology: To answer our research question, we conducted a longitudinal case study with a leading German business magazine publisher (called BIZ). Data was triangulated from multiple sources including interviews, internal documents, and direct observations. Findings: Based on our case study, we find that BIZ used the transformation process to differentiate its online business model from its traditional print business model along several dimensions, and that BIZ's online business model changed from an efficiency- to a complementarity- to a novelty-based model during this process. Research implications: Our findings suggest that different business model transformation phases relate to different value sources, questioning the appropriateness of value source-based approaches for classifying business models. Practical implications: The results of our case study highlight the need for online-offline business model differentiation and point to the important distinction between service and product differentiation. Originality: Our study contributes to the business model literature by applying a dynamic and holistic perspective on the link between online business model changes and unfolding value sources.

  12. Modeling Anti-HIV Activity of HEPT Derivatives Revisited. Multiregression Models Are Not Inferior Ones

    International Nuclear Information System (INIS)

    Basic, Ivan; Nadramija, Damir; Flajslik, Mario; Amic, Dragan; Lucic, Bono

    2007-01-01

    Several quantitative structure-activity studies for this data set containing 107 HEPT derivatives have been performed since 1997, using the same set of molecules with (more or less) different classes of molecular descriptors. Multivariate Regression (MR) and Artificial Neural Network (ANN) models were developed, and in each study the authors concluded that ANN models are superior to MR ones. We re-calculated multivariate regression models for this set of molecules using the same set of descriptors and compared our results with the previous ones. The two main reasons for overestimation of the quality of the ANN models in previous studies, compared with MR models, are: (1) incorrect calculation of the leave-one-out (LOO) cross-validated (CV) correlation coefficient for MR models in Luco et al., J. Chem. Inf. Comput. Sci. 37 392-401 (1997), and (2) incorrect estimation/interpretation of the leave-one-out (LOO) cross-validated and predictive performance and power of ANN models. A more precise and fairer comparison of fit and LOO CV statistical parameters shows that MR models are more stable. In addition, MR models are much simpler than ANN ones. For a real test of the predictive performance of both classes of models, more HEPT derivatives are needed, because all ANN models that presented results for an external set of molecules used experimental values in optimizing the modeling procedure and model parameters.
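
    The LOO cross-validated correlation coefficient at issue (often written q^2) must be computed from predictions made with each molecule left out of the fit in turn, i.e. q^2 = 1 - PRESS/SS; a minimal scikit-learn sketch on random placeholder data of the same size (107 molecules):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      rng = np.random.default_rng(2)
      X = rng.normal(size=(107, 5))            # 107 molecules, 5 descriptors (dummy)
      y = X @ rng.normal(size=5) + 0.3 * rng.normal(size=107)

      # Predictions where each molecule is left out of the fit in turn
      y_cv = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())

      press = np.sum((y - y_cv) ** 2)          # predictive residual sum of squares
      ss = np.sum((y - y.mean()) ** 2)
      print(f"q^2 = {1.0 - press / ss:.3f}")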

  13. Investigation of hydraulic transmission noise sources

    Science.gov (United States)

    Klop, Richard J.

    Advanced hydrostatic transmissions and hydraulic hybrids show potential in new market segments such as commercial vehicles and passenger cars. Such new applications regard low noise generation as a high priority, thus demanding new quiet hydrostatic transmission designs. In this thesis, the aim is to investigate the noise sources of hydrostatic transmissions to discover strategies for designing compact and quiet solutions. A model has been developed to capture the interaction of a pump and motor working in a hydrostatic transmission and to predict the overall noise sources. This model allows a designer to compare noise sources for various configurations and to design compact and inherently quiet solutions. The model describes the dynamics of the system by coupling lumped-parameter pump and motor models with a one-dimensional unsteady compressible transmission line model. The model has been verified with dynamic pressure measurements in the line over a wide operating range for several system structures. Simulation studies were performed illustrating the sensitivities of several design variables and the potential of the model to design transmissions with minimal noise sources. A semi-anechoic chamber suitable for sound intensity measurements, from which sound power can be derived, has been designed and constructed. Measurements demonstrated the potential to reduce audible noise by predicting and reducing both noise sources. Sound power measurements were conducted on a series hybrid transmission test bench to validate the model and compare predicted noise sources with sound power.

  14. Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City

    Directory of Open Access Journals (Sweden)

    M. Zavala

    2009-01-01

    The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet average emission factors have significantly decreased for CO and VOCs during this period whereas NOx emission factors do not show any strong trend, effectively reducing the ambient VOC/NOx ratio.

    This study presents the results of model analyses on the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base case simulation of a high pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM) and the standard Brute Force Method (BFM) in the 3-D CAMx chemical transport model. The model reproduces adequately the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with

  15. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    Science.gov (United States)

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
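
    In the determined case (n isotope ratios, n + 1 sources) the mixing model reduces to a small linear system: the mixture signature is the proportion-weighted mean of the source signatures, with proportions summing to one. A sketch for two isotopes and three sources, with invented delta values:

      import numpy as np

      # Rows: delta13C, delta15N, and the sum-to-one constraint
      sources = np.array([
          [-28.0, -20.0, -12.0],   # delta13C of sources A, B, C (invented)
          [  4.0,  10.0,   6.0],   # delta15N of sources A, B, C (invented)
          [  1.0,   1.0,   1.0],   # proportions sum to 1
      ])
      mixture = np.array([-20.8, 7.4, 1.0])

      fractions = np.linalg.solve(sources, mixture)
      print(dict(zip("ABC", fractions.round(3))))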

  16. Application of source-receptor models to determine source areas of biological components (pollen and butterflies)

    OpenAIRE

    M. Alarcón; M. Àvila; J. Belmonte; C. Stefanescu; R. Izquierdo

    2010-01-01

    The source-receptor models allow the establishment of relationships between a receptor point (sampling point) and the probable source areas (regions of emission) through the association of concentration values at the receptor point with the corresponding atmospheric back-trajectories, and, together with other techniques, to interpret transport phenomena on a synoptic scale. These models are generally used in air pollution studies to determine the areas of origin of chemical compounds measured...

  17. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle

    OpenAIRE

    Laaksonen, Pekka

    2011-01-01

    Laaksonen, Pekka. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle. Jyväskylä: University of Jyväskylä, 2011, 42 p. Information Systems Science, bachelor's thesis. Supervisor(s): Käkölä, Timo. This bachelor's thesis examined how the practices of the eSourcing Capability Model for Service Providers relate to the four processes of knowledge management: knowledge creation, storage/retrieval, sharing...

  18. Modeling the Interest Rate Term Structure: Derivatives Contracts Dynamics and Evaluation

    Directory of Open Access Journals (Sweden)

    Pedro L. Valls Pereira

    2005-06-01

    This article deals with a model for the term structure of interest rates and the valuation of derivative contracts directly dependent on it. The work is theoretical in nature and deals exclusively with continuous-time models, making ample use of stochastic calculus results, and presents original contributions that we consider relevant to the development of fixed-income market modeling. We develop a new multifactorial model of the term structure of interest rates. The model is based on the decomposition of the yield curve into level, slope and curvature factors, and on the treatment of their collective dynamics. We show that this model may be applied to serve various objectives: the analysis of bond price dynamics, the valuation of derivative contracts, and also market risk management and the formulation of operational strategies, which is presented in another article.
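
    A common concrete realization of a level/slope/curvature decomposition of the yield curve is the Nelson-Siegel form, shown here as a generic illustration rather than the model proposed in the article (all factor values are invented):

      import numpy as np

      def nelson_siegel(tau, beta0, beta1, beta2, lam):
          """Yield at maturity tau from level (beta0), slope (beta1)
          and curvature (beta2) factors; lam sets the decay scale."""
          x = tau / lam
          slope = (1.0 - np.exp(-x)) / x
          return beta0 + beta1 * slope + beta2 * (slope - np.exp(-x))

      maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 30.0])
      print(nelson_siegel(maturities, beta0=0.045, beta1=-0.02,
                          beta2=0.01, lam=2.0))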

  19. PDX-MI: Minimal Information for Patient-Derived Tumor Xenograft Models

    NARCIS (Netherlands)

    Meehan, Terrence F.; Conte, Nathalie; Goldstein, Theodore; Inghirami, Giorgio; Murakami, Mark A.; Brabetz, Sebastian; Gu, Zhiping; Wiser, Jeffrey A.; Dunn, Patrick; Begley, Dale A.; Krupke, Debra M.; Bertotti, Andrea; Bruna, Alejandra; Brush, Matthew H.; Byrne, Annette T.; Caldas, Carlos; Christie, Amanda L.; Clark, Dominic A.; Dowst, Heidi; Dry, Jonathan R.; Doroshow, James H.; Duchamp, Olivier; Evrard, Yvonne A.; Ferretti, Stephane; Frese, Kristopher K.; Goodwin, Neal C.; Greenawalt, Danielle; Haendel, Melissa A.; Hermans, Els; Houghton, Peter J.; Jonkers, Jos; Kemper, Kristel; Khor, Tin O.; Lewis, Michael T.; Lloyd, K. C. Kent; Mason, Jeremy; Medico, Enzo; Neuhauser, Steven B.; Olson, James M.; Peeper, Daniel S.; Rueda, Oscar M.; Seong, Je Kyung; Trusolino, Livio; Vinolo, Emilie; Wechsler-Reya, Robert J.; Weinstock, David M.; Welm, Alana; Weroha, S. John; Amant, Frédéric; Pfister, Stefan M.; Kool, Marcel; Parkinson, Helen; Butte, Atul J.; Bult, Carol J.

    2017-01-01

    Patient-derived tumor xenograft (PDX) mouse models have emerged as an important oncology research platform to study tumor evolution, mechanisms of drug response and resistance, and tailoring chemotherapeutic approaches for individual patients. The lack of robust standards for reporting on PDX models

  20. The Growth of open source: A look at how companies are utilizing open source software in their business models

    OpenAIRE

    Feare, David

    2009-01-01

    This paper examines how open source software is being incorporated into the business models of companies in the software industry. The goal is to answer the question of whether the open source model can help sustain economic growth. While some companies are able to maintain a "pure" open source approach with their business model, the reality is that most companies are relying on proprietary add-on value in order to generate revenue because open source itself is simply not big business. Ultima...

  1. Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy

    Science.gov (United States)

    Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.

    1998-01-01

    We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.
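
    The inversion amounts to scanning candidate source positions and scoring each by the Gaussian misfit between the slowness vector it predicts at the array and the observed one. A toy sketch of that search, using a homogeneous-velocity forward model and invented numbers (unlike the layered models of the study):

      import numpy as np

      v = 2000.0                              # homogeneous velocity, m/s (assumed)
      s_obs = np.array([-2.0e-4, 3.5e-4])     # observed horizontal slowness, s/m
      sigma = 5.0e-5                          # Gaussian slowness error (assumed)

      xs = ys = np.linspace(-1500.0, 1500.0, 121)
      X, Y = np.meshgrid(xs, ys)
      dist = np.hypot(X, Y) + 1e-9            # source-to-array distance (array at origin)

      # Predicted slowness points along the propagation direction, source -> array
      sx_pred = -X / dist / v
      sy_pred = -Y / dist / v

      misfit = ((sx_pred - s_obs[0])**2 + (sy_pred - s_obs[1])**2) / (2.0 * sigma**2)
      i, j = np.unravel_index(np.argmin(misfit), misfit.shape)
      print(f"maximum-likelihood source near x = {X[i, j]:.0f} m, y = {Y[i, j]:.0f} m")

    Consistent with the abstract, the misfit surface in such a geometry is nearly flat along the back-azimuth ray, so range is the poorly constrained coordinate.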

  2. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    International Nuclear Information System (INIS)

    Sig Drellack, Lance Prothro

    2007-01-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  3. Algebraic Bethe ansatz for a quantum integrable derivative nonlinear Schroedinger model

    International Nuclear Information System (INIS)

    Basu-Mallick, B.; Bhattacharyya, Tanaya

    2002-01-01

    We find that the quantum monodromy matrix associated with a derivative nonlinear Schroedinger (DNLS) model exhibits U(2) or U(1,1) symmetry depending on the sign of the related coupling constant. By using a variant of the quantum inverse scattering method which is directly applicable to field-theoretical models, we derive all possible commutation relations among the operator-valued elements of such a monodromy matrix. Thus, we obtain the commutation relation between the creation and annihilation operators of quasi-particles associated with the DNLS model and find the S-matrix for two-body scattering. We also observe that, for some special values of the coupling constant, there exists an upper bound on the number of quasi-particles which can form a soliton state for the quantum DNLS model.

  4. HYDROLOGY AND SEDIMENT MODELING USING THE BASINS NON-POINT SOURCE MODEL

    Science.gov (United States)

    The Non-Point Source Model (Hydrologic Simulation Program-Fortran, or HSPF) within the EPA Office of Water's BASINS watershed modeling system was used to simulate streamflow and total suspended solids within Contentnea Creek, North Carolina, which is a tributary of the Neuse Rive...

  5. Bioactive cembrane derivatives from the Indian Ocean soft coral, Sinularia kavarattiensis.

    Digital Repository Service at National Institute of Oceanography (India)

    Lillsunde, K.-E.; Festa, C.; Adel, H.; DeMarino, S.; Lombardi, V.; Tilvi, S.; Nawrot, D.A.; Zampella, A.; DeSouza, L.; DeAuria, M.V.; Tammela, P.

    Marine organisms and their metabolites represent a unique source of potential pharmaceutical substances. In this study, we examined marine-derived substances for their bioactive properties in a cell-based Chikungunya virus (CHIKV) replicon model...

  6. Variance analysis of the Monte Carlo perturbation source method in inhomogeneous linear particle transport problems. Derivation of formulae

    International Nuclear Information System (INIS)

    Noack, K.

    1981-01-01

    The perturbation source method is used within the Monte Carlo method to calculate small effects in a particle field. It offers promising possibilities for introducing positive correlation between subtracted estimates even in cases where other methods fail, e.g., for geometrical variations of a given arrangement. The perturbation source method is formulated on the basis of integral equations for the particle fields. The formulae for the second moment of the difference of events are derived. Explicitly, a certain class of transport games and different procedures for generating the so-called perturbation particles are considered [ru]
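
    The benefit of positive correlation between the two estimates can be seen in a toy correlated-sampling experiment: estimating the small difference between two similar integrals with common random numbers versus independent samples (the integrands are invented stand-ins for the unperturbed and perturbed problems):

      import numpy as np

      rng = np.random.default_rng(3)
      N = 100_000

      def f(x):            # "unperturbed" problem (invented stand-in)
          return np.exp(-x)

      def g(x):            # slightly "perturbed" problem (invented stand-in)
          return np.exp(-1.02 * x)

      # Independent samples: the variances of the two estimators add
      diff_indep = f(rng.random(N)) - g(rng.random(N))

      # Common random numbers: positively correlated estimates, variance cancels
      xc = rng.random(N)
      diff_corr = f(xc) - g(xc)

      print("independent:", diff_indep.mean(), diff_indep.std(ddof=1) / np.sqrt(N))
      print("correlated: ", diff_corr.mean(), diff_corr.std(ddof=1) / np.sqrt(N))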

  7. Faster universal modeling for two source classes

    NARCIS (Netherlands)

    Nowbakht, A.; Willems, F.M.J.; Macq, B.; Quisquater, J.-J.

    2002-01-01

    The Universal Modeling algorithms proposed in [2] for two general classes of finite-context sources are reviewed. The above methods were constructed by viewing a model structure as a partition of the context space and realizing that a partition can be reached through successive splits. Here we start

  8. A Comparison between Predicted and Observed Atmospheric States and their Effects on Infrasonic Source Time Function Inversion at Source Physics Experiment 6

    Science.gov (United States)

    Aur, K. A.; Poppeliers, C.; Preston, L. A.

    2017-12-01

    The Source Physics Experiment (SPE) consists of a series of underground chemical explosions at the Nevada National Security Site (NNSS) designed to gain an improved understanding of the generation and propagation of physical signals in the near and far field. Characterizing the acoustic and infrasound source mechanism from underground explosions is of great importance to underground explosion monitoring. To this end we perform full waveform source inversion of infrasound data collected from the SPE-6 experiment at distances from 300 m to 6 km and frequencies up to 20 Hz. Our method requires estimating the state of the atmosphere at the time of each experiment, computing Green's functions through these atmospheric models, and subsequently inverting the observed data in the frequency domain to obtain a source time function. To estimate the state of the atmosphere at the time of the experiment, we utilize the Weather Research and Forecasting - Data Assimilation (WRF-DA) modeling system to derive a unified atmospheric state model by combining Global Energy and Water Cycle Experiment (GEWEX) Continental-scale International Project (GCIP) data and locally obtained sonde and surface weather observations collected at the time of the experiment. We synthesize Green's functions through these atmospheric models using Sandia's moving media acoustic propagation simulation suite (TDAAPS). These models include 3-D variations in topography, temperature, pressure, and wind. We compare inversion results using the atmospheric models derived from the unified weather models versus previous modeling results and discuss how these differences affect computed source waveforms with respect to observed waveforms at various distances. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear

  9. A critical view on temperature modelling for application in weather derivatives markets

    International Nuclear Information System (INIS)

    Šaltytė Benth, Jūratė; Benth, Fred Espen

    2012-01-01

    In this paper we present a stochastic model for daily average temperature. The model contains seasonality, a low-order autoregressive component and a variance describing the heteroskedastic residuals. The model is estimated on daily average temperature records from Stockholm (Sweden). By comparing the proposed model with the popular model of Campbell and Diebold (2005), we point out some important issues to be addressed when modelling the temperature for application in the weather derivatives market. - Highlights: ► We present a stochastic model for daily average temperature, containing seasonality, a low-order autoregressive component and a variance describing the heteroskedastic residuals. ► We compare the proposed model with the popular model of Campbell and Diebold (2005). ► Some important issues to be addressed when modelling the temperature for application in the weather derivatives market are pointed out.
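
    The model class described (a seasonal mean plus a low-order autoregression with seasonally varying residual variance) can be sketched as follows; the coefficients are invented for illustration and are not the Stockholm estimates:

      import numpy as np

      rng = np.random.default_rng(4)
      days = np.arange(3 * 365)

      # Seasonal mean and seasonal residual volatility (coefficients invented)
      season = 6.0 + 9.0 * np.sin(2.0 * np.pi * (days - 100) / 365.25)
      vol = 2.5 + 1.0 * np.cos(2.0 * np.pi * days / 365.25)

      # AR(1) deviations around the seasonal mean, heteroskedastic noise
      phi = 0.8
      T = np.empty_like(season)
      dev = 0.0
      for t in days:
          dev = phi * dev + vol[t] * rng.standard_normal()
          T[t] = season[t] + dev

      # An HDD-style payoff index over one winter month (illustrative)
      hdd = np.maximum(18.0 - T[:31], 0.0).sum()
      print(f"simulated heating degree days, first month: {hdd:.1f}")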

  10. A 1D ion species model for an RF driven negative ion source

    Science.gov (United States)

    Turner, I.; Holmes, A. J. T.

    2017-08-01

    A one-dimensional model for an RF driven negative ion source has been developed based on an inductive discharge. The RF source differs from traditional filament and arc ion sources because no primary electrons are present; it is simply composed of an antenna region (driver) and a main plasma discharge region. However, the model does still make use of the classical plasma transport equations for particle energy and flow, which have previously worked well for modelling DC driven sources. The model has been developed primarily to model the Small Negative Ion Facility (SNIF) ion source at CCFE, but may be easily adapted to model other RF sources. Currently the model considers the hydrogen ion species and provides a detailed description of the plasma parameters along the source axis, i.e. plasma temperature, density and potential, as well as current densities and species fluxes. The inputs to the model are currently the RF power, the magnetic filter field and the source gas pressure. Results from the model are presented and, where possible, compared to existing experimental data from SNIF with varying RF power and source pressure.

  11. Neonatal Transplantation Confers Maturation of PSC-Derived Cardiomyocytes Conducive to Modeling Cardiomyopathy

    Directory of Open Access Journals (Sweden)

    Gun-Sik Cho

    2017-01-01

    Summary: Pluripotent stem cells (PSCs) offer unprecedented opportunities for disease modeling and personalized medicine. However, PSC-derived cells exhibit fetal-like characteristics and remain immature in a dish. This has emerged as a major obstacle for their application to late-onset diseases. We previously showed that there is a neonatal arrest of long-term cultured PSC-derived cardiomyocytes (PSC-CMs). Here, we demonstrate that PSC-CMs mature into adult CMs when transplanted into neonatal hearts. PSC-CMs became similar to adult CMs in morphology, structure, and function within a month of transplantation into rats. The similarity was further supported by single-cell RNA-sequencing analysis. Moreover, this in vivo maturation allowed patient-derived PSC-CMs to reveal the disease phenotype of arrhythmogenic right ventricular cardiomyopathy, which manifests predominantly in adults. This study lays a foundation for understanding human CM maturation and pathogenesis and can be instrumental in PSC-based modeling of adult heart diseases. Pluripotent stem cell (PSC)-derived cells remain fetal-like, and this has become a major impediment to modeling adult diseases. Cho et al. find that PSC-derived cardiomyocytes mature into adult cardiomyocytes when transplanted into neonatal rat hearts. This method can serve as a tool to understand maturation and pathogenesis in human cardiomyocytes. Keywords: cardiomyocyte, maturation, iPS, cardiac progenitor, neonatal, disease modeling, cardiomyopathy, ARVC, T-tubule, calcium transient, sarcomere shortening

  12. Open Source Software Success Model for Iran: End-User Satisfaction Viewpoint

    Directory of Open Access Journals (Sweden)

    Ali Niknafs

    2012-03-01

    The development of open source software is a notable option for software companies. In recent years, the many advantages of this type of software have driven a move toward it in Iran. Concerns over national security and international restrictions, as well as software and service costs, among other problems, have intensified the importance of using such software. Users and their viewpoints are the critical success factor in software plans, but there is no appropriate model for the open source software case in Iran. This research therefore sought to develop a model for measuring open source software success in Iran. The model was tested using data gathered from open source users through an online survey. The results showed that the components with a positive effect on open source success were user satisfaction, open source community service quality, open source quality, copyright and security.

  13. Drainage Structure Datasets and Effects on LiDAR-Derived Surface Flow Modeling

    Directory of Open Access Journals (Sweden)

    Ruopu Li

    2013-12-01

    With extraordinary resolution and accuracy, Light Detection and Ranging (LiDAR)-derived digital elevation models (DEMs) have been increasingly used for watershed analyses and modeling by hydrologists, planners and engineers. Such high-accuracy DEMs have demonstrated their effectiveness in delineating watershed and drainage patterns at fine scales in low-relief terrains. However, these high-resolution datasets are usually only available as topographic DEMs rather than hydrologic DEMs, presenting greater land roughness that can affect natural flow accumulation. Specifically, the locations of drainage structures such as road culverts and bridges were simulated as barriers to the passage of drainage. This paper proposes a geospatial method for producing LiDAR-derived hydrologic DEMs, which incorporates data collection on drainage structures (i.e., culverts and bridges), data preprocessing, and burning of the drainage structures into DEMs. A case study of GIS-based watershed modeling in South Central Nebraska showed improved simulated surface water derivatives after the drainage structures were burned into the LiDAR-derived topographic DEMs. The paper culminates in a proposal and discussion of establishing a national or statewide drainage structure dataset.
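
    The burning step amounts to lowering DEM cells along mapped culvert and bridge lines so that routed flow can pass through road embankments; a minimal numpy sketch with a toy grid and invented culvert coordinates:

      import numpy as np

      dem = np.array([
          [5.0, 5.0, 5.0, 5.0, 5.0],
          [4.0, 4.0, 9.0, 4.0, 4.0],   # a road embankment blocks the channel
          [3.0, 3.0, 9.0, 3.0, 3.0],
          [2.0, 2.0, 2.0, 2.0, 2.0],
      ])

      # Culvert location(s) digitized from a structure inventory (toy coordinates)
      culvert_cells = [(1, 2), (2, 2)]
      burn_depth = 6.0                  # enough to reconnect upstream/downstream

      hydro_dem = dem.copy()
      for row, col in culvert_cells:
          hydro_dem[row, col] -= burn_depth

      print(hydro_dem)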

  14. A constitutive rheological model for agglomerating blood derived from nonequilibrium thermodynamics

    Science.gov (United States)

    Tsimouri, Ioanna Ch.; Stephanou, Pavlos S.; Mavrantzas, Vlasis G.

    2018-03-01

    Red blood cells tend to aggregate in the presence of plasma proteins, forming structures known as rouleaux. Here, we derive a constitutive rheological model for human blood which accounts for the formation and dissociation of rouleaux using the generalized bracket formulation of nonequilibrium thermodynamics. Similar to the model derived by Owens and co-workers ["A non-homogeneous constitutive model for human blood. Part 1. Model derivation and steady flow," J. Fluid Mech. 617, 327-354 (2008)] through polymer network theory, each rouleau in our model is represented as a dumbbell; the corresponding structural variable is the conformation tensor of the dumbbell. The kinetics of rouleau formation and dissociation is treated as in the work of Germann et al. ["Nonequilibrium thermodynamic modeling of the structure and rheology of concentrated wormlike micellar solutions," J. Non-Newton. Fluid Mech. 196, 51-57 (2013)] by assuming a set of reversible reactions, each characterized by a forward and a reverse rate constant. The final set of evolution equations for the microstructure of each rouleau and the expression for the stress tensor turn out to be very similar to those of Owens and co-workers. However, by explicitly considering a mechanism for the formation and breakage of rouleaux, our model further provides expressions for the aggregation and disaggregation rates appearing in the final transport equations, which in the kinetic theory-based network model of Owens were absent and had to be specified separately. Despite this, the two models are found to provide similar descriptions of experimental data on the size distribution of rouleaux.
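
Schematically (with notation chosen here for illustration rather than taken from the paper), the kinetics couples a set of reversible aggregation reactions to a dumbbell conformation description:

```latex
% Reversible rouleau kinetics: a k-cell rouleau gains or loses single cells,
%   R_k + R_1 <--> R_{k+1}, forward rate k_f^{(k)}, reverse rate k_r^{(k)}
\frac{\partial n_k}{\partial t}
  = k_f^{(k-1)} n_{k-1} n_1 - k_r^{(k-1)} n_k
  - k_f^{(k)} n_k n_1 + k_r^{(k)} n_{k+1}

% Each rouleau species is a dumbbell; the total elastic stress is summed
% over species (H_k = spring constant, Q = end-to-end connector vector)
\boldsymbol{\sigma}_p
  = \sum_k n_k \left( H_k \,\langle \mathbf{Q}\mathbf{Q} \rangle_k
  - k_B T \,\mathbf{I} \right)
```

In the nonequilibrium-thermodynamics derivation, the forward and reverse rate constants enter the final transport equations explicitly, which is the feature distinguishing this model from the earlier network model of Owens and co-workers.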

  15. Hydrologic Derivatives for Modeling and Analysis—A new global high-resolution database

    Science.gov (United States)

    Verdin, Kristine L.

    2017-07-17

    The U.S. Geological Survey has developed a new global high-resolution hydrologic derivative database. Loosely modeled on the HYDRO1k database, this new database, entitled Hydrologic Derivatives for Modeling and Analysis, provides comprehensive and consistent global coverage of topographically derived raster layers (digital elevation model data, flow direction, flow accumulation, slope, and compound topographic index) and vector layers (streams and catchment boundaries). The coverage of the data is global, and the underlying digital elevation model is a hybrid of three datasets: HydroSHEDS (Hydrological data and maps based on SHuttle Elevation Derivatives at multiple Scales), GMTED2010 (Global Multi-resolution Terrain Elevation Data 2010), and the SRTM (Shuttle Radar Topography Mission). For most of the globe south of 60°N., the raster resolution of the data is 3 arc-seconds, corresponding to the resolution of the SRTM. For the areas north of 60°N., the resolution is 7.5 arc-seconds (the highest resolution of the GMTED2010 dataset) except for Greenland, where the resolution is 30 arc-seconds. The streams and catchments are attributed with Pfafstetter codes, based on a hierarchical numbering system, that carry important topological information. This database is appropriate for use in continental-scale modeling efforts. The work described in this report was conducted by the U.S. Geological Survey in cooperation with the National Aeronautics and Space Administration Goddard Space Flight Center.
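
The topological information carried by Pfafstetter codes can be queried without any geometry. Below is a sketch of the standard upstream test; the digit conventions follow the usual description of the scheme (even digits for tributaries, odd digits for main-stem interbasins), which is an assumption about, not a statement of, this database's implementation:

```python
def is_upstream(a: str, b: str) -> bool:
    """True if basin `a` drains through basin `b` (equal-length codes).

    Test: after the longest common prefix, a's next digit exceeds b's, and
    every remaining digit of b is odd (odd digits mark main-stem interbasins).
    """
    if a == b:
        return False
    k = 0
    while k < min(len(a), len(b)) and a[k] == b[k]:
        k += 1
    if k == min(len(a), len(b)):
        return False  # one code nests inside the other; no routing implied
    return a[k] > b[k] and all(int(digit) % 2 == 1 for digit in b[k:])

# e.g. is_upstream("971", "753") -> True; is_upstream("971", "752") -> False
```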

  16. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.; Dalguer, L. A.; Mai, Paul Martin

    2013-01-01

    statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point

  17. Bone marrow-derived mesenchymal stem cells influence early tendon-healing in a rabbit achilles tendon model.

    Science.gov (United States)

    Chong, Alphonsus K S; Ang, Abel D; Goh, James C H; Hui, James H P; Lim, Aymeric Y T; Lee, Eng Hin; Lim, Beng Hai

    2007-01-01

A repaired tendon needs to be protected for weeks until it has accrued enough strength to handle physiological loads. Tissue-engineering techniques have shown promise in the treatment of tendon and ligament defects. The present study tested the hypothesis that bone marrow-derived mesenchymal stem cells can accelerate tendon-healing after primary repair of a tendon injury in a rabbit model. Fifty-seven New Zealand White rabbits were used as the experimental animals, and seven others were used as the source of bone marrow-derived mesenchymal stem cells. The injury model was a sharp complete transection through the midsubstance of the Achilles tendon. The transected tendon was immediately repaired with use of a modified Kessler suture and a running epitendinous suture. Both limbs were used, and each side was randomized to receive either bone marrow-derived mesenchymal stem cells in a fibrin carrier or fibrin carrier alone (control). Postoperatively, the rabbits were not immobilized. Specimens were harvested at one, three, six, and twelve weeks for analysis, which included evaluation of gross morphology (sixty-two specimens), cell tracing (twelve specimens), histological assessment (forty specimens), immunohistochemistry studies (thirty specimens), morphometric analysis (forty specimens), and mechanical testing (sixty-two specimens). There were no differences between the two groups with regard to the gross morphology of the tendons. The fibrin had degraded by three weeks. Cell tracing showed that labeled bone marrow-derived mesenchymal stem cells remained viable and present in the intratendinous region for at least six weeks, becoming more diffuse at later time-periods. At three weeks, collagen fibers appeared more organized and there were better morphometric nuclear parameters in the treatment group (p < 0.05), suggesting that the addition of bone marrow-derived mesenchymal stem cells to a primary tendon repair can improve histological and biomechanical parameters in the early stages of tendon-healing.

  18. Model Predictive Control of Z-source Neutral Point Clamped Inverter

    DEFF Research Database (Denmark)

    Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

This paper presents Model Predictive Control (MPC) of a Z-source Neutral Point Clamped (NPC) inverter. For illustration, current control of a Z-source NPC grid-connected inverter is analyzed and simulated. With MPC's advantage of easily including system constraints, the load current and impedance network... response are obtained at the same time from a formulated Z-source NPC inverter network model. Steady-state and transient-state simulation results of MPC are presented, which show the good reference-tracking ability of this method. It provides a new control method for the Z-source NPC inverter...
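
A minimal finite-control-set MPC loop of the kind used for such converters. This is illustrative only: the one-step load-current prediction, the candidate voltage set, and all parameter values below stand in for the full Z-source NPC network model formulated in the paper:

```python
# Candidate output voltage levels of the converter (placeholder finite set)
V_CANDIDATES = (-400.0, -200.0, 0.0, 200.0, 400.0)

R, L, TS = 0.5, 10e-3, 50e-6  # load resistance (ohm), inductance (H), step (s)

def predict_current(i_now: float, v: float, e_grid: float) -> float:
    """One-step Euler prediction of the load current for candidate voltage v."""
    return i_now + (TS / L) * (v - R * i_now - e_grid)

def mpc_step(i_now: float, i_ref: float, e_grid: float) -> float:
    """Choose the candidate that minimizes the squared current-tracking error."""
    return min(V_CANDIDATES,
               key=lambda v: (i_ref - predict_current(i_now, v, e_grid)) ** 2)
```

At each sampling instant the chosen voltage is applied and the loop repeats; system constraints enter simply as extra terms or exclusions in the cost evaluation, which is the advantage the abstract refers to.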

  19. Modeling neurodegenerative diseases with patient-derived induced pluripotent cells: Possibilities and challenges.

    Science.gov (United States)

    Poon, Anna; Zhang, Yu; Chandrasekaran, Abinaya; Phanthong, Phetcharat; Schmid, Benjamin; Nielsen, Troels T; Freude, Kristine K

    2017-10-25

The rising prevalence of progressive neurodegenerative diseases coupled with increasing longevity poses an economic burden at individual and societal levels. There is currently no effective cure for the majority of neurodegenerative diseases, and disease-affected tissues from patients have been difficult to obtain for research and drug discovery in pre-clinical settings. While the use of animal models has contributed invaluable mechanistic insights and potential therapeutic targets, the translational value of animal models could be further enhanced when combined with in vitro models derived from patient-specific induced pluripotent stem cells (iPSCs) and isogenic controls generated using CRISPR-Cas9-mediated genome editing. The iPSCs are self-renewable and capable of being differentiated into the cell types affected by the diseases. These in vitro models based on patient-derived iPSCs provide the opportunity to model disease development, uncover novel mechanisms and test potential therapeutics. Here we review findings from iPSC-based modeling of selected neurodegenerative diseases, including Alzheimer's disease, frontotemporal dementia and spinocerebellar ataxia. Furthermore, we discuss the possibilities of generating three-dimensional (3D) models using iPSC-derived cells and compare their advantages and disadvantages to conventional two-dimensional (2D) models. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Spatially Resolved Isotopic Source Signatures of Wetland Methane Emissions

    Science.gov (United States)

    Ganesan, A. L.; Stell, A. C.; Gedney, N.; Comyn-Platt, E.; Hayman, G.; Rigby, M.; Poulter, B.; Hornibrook, E. R. C.

    2018-04-01

    We present the first spatially resolved wetland δ13C(CH4) source signature map based on data characterizing wetland ecosystems and demonstrate good agreement with wetland signatures derived from atmospheric observations. The source signature map resolves a latitudinal difference of 10‰ between northern high-latitude (mean -67.8‰) and tropical (mean -56.7‰) wetlands and shows significant regional variations on top of the latitudinal gradient. We assess the errors in inverse modeling studies aiming to separate CH4 sources and sinks by comparing atmospheric δ13C(CH4) derived using our spatially resolved map against the common assumption of globally uniform wetland δ13C(CH4) signature. We find a larger interhemispheric gradient, a larger high-latitude seasonal cycle, and smaller trend over the period 2000-2012. The implication is that erroneous CH4 fluxes would be derived to compensate for the biases imposed by not utilizing spatially resolved signatures for the largest source of CH4 emissions. These biases are significant when compared to the size of observed signals.
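
The bias the authors quantify follows from a flux-weighted isotope mass balance. A sketch, with placeholder fluxes and the mean signatures quoted above (all numbers illustrative, not from the study):

```python
def mixed_signature(fluxes, deltas):
    """Flux-weighted mean delta13C(CH4) source signature (permil)."""
    return sum(f * d for f, d in zip(fluxes, deltas)) / sum(fluxes)

# Latitudinally resolved wetland signatures vs. a single uniform value
# (fluxes in Tg/yr, purely illustrative):
print(mixed_signature([80.0, 120.0], [-67.8, -56.7]))  # about -61.1 permil
print(mixed_signature([80.0, 120.0], [-59.0, -59.0]))  # uniform: -59.0 permil
```

An inversion constrained to the uniform value must adjust the CH4 fluxes themselves to reproduce observed atmospheric delta13C, which is the compensating-flux error described above.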

  1. 26 CFR 1.863-9 - Source of income derived from communications activity under section 863(a), (d), and (e).

    Science.gov (United States)

    2010-04-01

    ... SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Regulations Applicable... business within the United States is income from sources within the United States to the extent the income... taxpayer is paid to transmit the communication. Income derived by a United States or foreign person from...

  2. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

Full Text Available With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT’s IGRB, as long as their number density is not strongly peaked at recent times.

  3. Algorithms for biomagnetic source imaging with prior anatomical and physiological information

    Energy Technology Data Exchange (ETDEWEB)

    Hughett, Paul William [Univ. of California, Berkeley, CA (United States). Dept. of Electrical Engineering and Computer Sciences

    1995-12-01

    This dissertation derives a new method for estimating current source amplitudes in the brain and heart from external magnetic field measurements and prior knowledge about the probable source positions and amplitudes. The minimum mean square error estimator for the linear inverse problem with statistical prior information was derived and is called the optimal constrained linear inverse method (OCLIM). OCLIM includes as special cases the Shim-Cho weighted pseudoinverse and Wiener estimators but allows more general priors and thus reduces the reconstruction error. Efficient algorithms were developed to compute the OCLIM estimate for instantaneous or time series data. The method was tested in a simulated neuromagnetic imaging problem with five simultaneously active sources on a grid of 387 possible source locations; all five sources were resolved, even though the true sources were not exactly at the modeled source positions and the true source statistics differed from the assumed statistics.
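
For a linear forward model y = A x + n with a Gaussian prior on the source amplitudes (mean x_prior, covariance C_x) and noise covariance C_n, the minimum mean square error estimator takes the standard closed form sketched below. This shows the family of estimators OCLIM generalizes, not the dissertation's exact algorithm:

```python
import numpy as np

def mmse_estimate(A, y, x_prior, C_x, C_n):
    """MMSE source-amplitude estimate for y = A x + n with Gaussian prior.

    x_hat = x_prior + C_x A^T (A C_x A^T + C_n)^{-1} (y - A x_prior)
    """
    S = A @ C_x @ A.T + C_n              # covariance of the measured fields
    gain = C_x @ A.T @ np.linalg.inv(S)  # prior-weighted gain matrix
    return x_prior + gain @ (y - A @ x_prior)
```

Setting the prior covariance to special forms recovers the weighted pseudoinverse and Wiener estimators mentioned in the abstract as special cases.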

  4. A new DG nanoscale TFET based on MOSFETs by using source gate electrode: 2D simulation and an analytical potential model

    Science.gov (United States)

    Ramezani, Zeinab; Orouji, Ali A.

    2017-08-01

This paper suggests and investigates a double-gate (DG) MOSFET which emulates tunnel field effect transistors (M-TFET). We have combined this novel concept into a double-gate MOSFET, which behaves as a tunneling field effect transistor through work function engineering. In the proposed structure, in addition to the main gate, we utilize another gate over the source region with zero applied voltage and a proper work function to convert the source region from N+ to P+. We examine the impact of varying the source-gate work function and source doping on the device parameters. The simulation results of the M-TFET indicate that it is a suitable candidate for switching performance. We also present a two-dimensional analytical potential model of the proposed structure, obtained by solving Poisson's equation in the x and y directions; the electric field is then obtained by differentiating the potential profile. To validate the model, we compare the analytical results against the SILVACO ATLAS device simulator.
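
A common route to such analytic DG potential models, shown here schematically (the paper's own boundary conditions and notation may differ), is to solve the 2-D Poisson equation with a parabolic potential profile across the silicon film:

```latex
% 2-D Poisson equation in the channel (N_A = doping, eps_Si = permittivity)
\frac{\partial^2 \psi(x,y)}{\partial x^2}
  + \frac{\partial^2 \psi(x,y)}{\partial y^2}
  = \frac{q\,N_A}{\varepsilon_{\mathrm{Si}}}

% Parabolic approximation across the film thickness y, then differentiate
\psi(x,y) = c_0(x) + c_1(x)\,y + c_2(x)\,y^2,
\qquad
E_x = -\frac{\partial \psi}{\partial x}, \quad
E_y = -\frac{\partial \psi}{\partial y}
```

The coefficients c_0, c_1, c_2 are fixed by the gate boundary conditions, after which the lateral electric field follows directly by differentiation, as the abstract describes.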

  5. System level modelling with open source tools

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Koefoed; Madsen, Jan; Niaki, Seyed Hosein Attarzadeh

    , called ForSyDe. ForSyDe is available under the open Source approach, which allows small and medium enterprises (SME) to get easy access to advanced modeling capabilities and tools. We give an introduction to the design methodology through the system level modeling of a simple industrial use case, and we...

  6. Staphylococcus aureus utilizes host-derived lipoprotein particles as sources of exogenous fatty acids.

    Science.gov (United States)

    Delekta, Phillip C; Shook, John C; Lydic, Todd A; Mulks, Martha H; Hammer, Neal D

    2018-03-26

    Methicillin-resistant Staphylococcus aureus (MRSA) is a threat to global health. Consequently, much effort has focused on the development of new antimicrobials that target novel aspects of S. aureus physiology. Fatty acids are required to maintain cell viability, and bacteria synthesize fatty acids using the type II fatty acid synthesis pathway (FASII). FASII is significantly different from human fatty acid synthesis, underscoring the therapeutic potential of inhibiting this pathway. However, many Gram-positive pathogens incorporate exogenous fatty acids, bypassing FASII inhibition and leaving the clinical potential of FASII inhibitors uncertain. Importantly, the source(s) of fatty acids available to pathogens within the host environment remains unclear. Fatty acids are transported throughout the body by lipoprotein particles in the form of triglycerides and esterified cholesterol. Thus, lipoproteins, such as low-density lipoprotein (LDL) represent a potentially rich source of exogenous fatty acids for S. aureus during infection. We sought to test the ability of LDLs to serve as a fatty acid source for S. aureus and show that cells cultured in the presence of human LDLs demonstrate increased tolerance to the FASII inhibitor, triclosan. Using mass spectrometry, we observed that host-derived fatty acids present in the LDLs are incorporated into the staphylococcal membrane and that tolerance to triclosan is facilitated by the fatty acid kinase A, FakA, and Geh, a triacylglycerol lipase. Finally, we demonstrate that human LDLs support the growth of S. aureus fatty acid auxotrophs. Together, these results suggest that human lipoprotein particles are a viable source of exogenous fatty acids for S. aureus during infection. IMPORTANCE Inhibition of bacterial fatty acid synthesis is a promising approach to combating infections caused by S. aureus and other human pathogens. However, S. aureus incorporates exogenous fatty acids into its phospholipid bilayer. Therefore, the

  7. Use of the HadGEM2 climate-chemistry model to investigate interannual variability in methane sources

    Science.gov (United States)

    Hayman, Garry; O'Connor, Fiona; Clark, Douglas; Huntingford, Chris; Gedney, Nicola

    2013-04-01

The global mean atmospheric concentration of methane (CH4) has more than doubled during the industrial era [1] and now constitutes ~20% of the anthropogenic climate forcing by greenhouse gases [2]. The globally-averaged CH4 growth rate, derived from surface measurements, has fallen significantly from a high of 16 ppb yr-1 in the late 1970s/early 1980s and was close to zero between 1999 and 2006 [1]. This overall period of declining or low growth was however interspersed with years of positive growth-rate anomalies (e.g., in 1991-1992, 1998-1999 and 2002-2003). Since 2007, renewed growth has been evident [1, 3], with the largest increases observed over polar northern latitudes and the Southern Hemisphere in 2007 and in the tropics in 2008. The observed inter-annual variability in atmospheric methane concentrations and the associated changes in growth rates have variously been attributed to changes in different methane sources and sinks [1, 4]. In this paper, we report results from runs of the HadGEM2 climate-chemistry model [5] using year- and month-specific emission datasets. The HadGEM2 model includes the comprehensive atmospheric chemistry and aerosol package, the UK Chemistry Aerosol community model (UKCA, http://www.ukca.ac.uk/wiki/index.php). The Standard Tropospheric Chemistry scheme was selected for this work. This chemistry scheme simulates the Ox, HOx and NOx chemical cycles and the oxidation of CO, methane, ethane and propane. Year- and month-specific emission datasets were generated for the period from 1997 to 2009 for the emitted species in the chemistry scheme (CH4, CO, NOx, HCHO, C2H6, C3H8, CH3CHO, CH3CHOCH3). The approach adopted varied depending on the source sector: Anthropogenic: The emissions from anthropogenic sources were based on decadal-averaged emission inventories compiled by [6] for the Coupled Carbon Cycle Climate Model Intercomparison Project (C4MIP). These were then used to derive year-specific emission datasets by scaling the

  8. On a business cycle model with fractional derivative under narrow-band random excitation

    International Nuclear Information System (INIS)

    Lin, Zifei; Li, Jiaorui; Li, Shuang

    2016-01-01

This paper analyzes the dynamics of a business cycle model with a fractional derivative of order α (0 < α < 1) subject to narrow-band random excitation, in which the fractional derivative describes the memory property of the economic variables. Stochastic dynamical system concepts are integrated into the business cycle model for understanding economic fluctuation. First, the method of multiple scales is applied to the model to obtain an approximate analytical solution. Second, the effect of economic policy with a fractional derivative on the amplitude of the economic fluctuation and on the stationary probability density is studied. The results show that macroeconomic regulation and control can lower the stable amplitude of economic fluctuation, although during the approach to the equilibrium state the amplitude is magnified; macroeconomic regulation and control also improves the stability of the equilibrium state. Third, how external stochastic perturbation affects the dynamics of the economic system is investigated.
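
Schematically, such models take the form of a second-order economic oscillator with a fractional damping term under narrow-band (bounded-noise) forcing; the symbols and coefficients below are illustrative, not the paper's:

```latex
% Business-cycle oscillator with memory (0 < alpha < 1) under a
% narrow-band random excitation, W(t) = standard Brownian motion
\ddot{x}(t) + \omega_0^2\, x(t) + \mu\, D^{\alpha} x(t)
  = h \cos\!\big(\Omega t + \gamma W(t)\big)

% Caputo fractional derivative encoding the memory of past states
D^{\alpha} x(t) = \frac{1}{\Gamma(1-\alpha)}
  \int_0^t \frac{\dot{x}(s)}{(t-s)^{\alpha}}\, ds
```

The multiple-scales expansion then yields slowly varying amplitude and phase equations, from which the stationary probability density of the amplitude can be studied.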

  9. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

In this paper, a method for testing rate-control algorithms by means of a video source model is suggested. The proposed method makes it possible to significantly improve algorithm testing over a large test set.

  10. Data Sources Available for Modeling Environmental Exposures in Older Adults

    Science.gov (United States)

    This report, “Data Sources Available for Modeling Environmental Exposures in Older Adults,” focuses on information sources and data available for modeling environmental exposures in the older U.S. population, defined here to be people 60 years and older, with an emphasis on those...

  11. Effects of Source RDP Models and Near-source Propagation: Implication for Seismic Yield Estimation

    Science.gov (United States)

    Saikia, C. K.; Helmberger, D. V.; Stead, R. J.; Woods, B. B.

It has proven difficult to uniquely untangle the source and propagation effects on the observed seismic data from underground nuclear explosions, even when large quantities of near-source, broadband data are available for analysis. This leads to uncertainties in our ability to quantify the nuclear seismic source function and, consequently, the accuracy of seismic yield estimates for underground explosions. Extensive deterministic modeling analyses of the seismic data recorded from underground explosions at a variety of test sites have been conducted over the years and the results of these studies suggest that variations in the seismic source characteristics between test sites may be contributing to the observed differences in the magnitude/yield relations applicable at those sites. This contributes to our uncertainty in the determination of seismic yield estimates for explosions at previously uncalibrated test sites. In this paper we review issues involving the relationship of Nevada Test Site (NTS) source scaling laws to those at other sites. The Joint Verification Experiment (JVE) indicates that a magnitude (mb) bias (δmb) exists between the Semipalatinsk test site (STS) in the former Soviet Union (FSU) and the Nevada test site (NTS) in the United States. Generally this δmb is attributed to differential attenuation in the upper-mantle beneath the two test sites. This assumption results in rather large estimates of yield for large mb tunnel shots at Novaya Zemlya. A re-examination of the US testing experiments suggests that this δmb bias can partly be explained by anomalous NTS (Pahute) source characteristics. This interpretation is based on the modeling of US events at a number of test sites. Using a modified Haskell source description, we investigated the influence of the source Reduced Displacement Potential (RDP) parameters ψ∞, K and B by fitting short- and long-period data simultaneously, including the near-field body and surface waves. In general
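
For reference, a widely used modified-Haskell parameterization of the RDP with exactly these three parameters is the von Seggern-Blandford form; whether this is the precise variant fitted in the paper is an assumption on our part:

```latex
% Reduced displacement potential: psi_inf = steady-state (long-period) level,
% K = rate constant controlling the corner frequency, B = overshoot parameter
\psi(t) = \psi_{\infty}\left[\,1 - e^{-Kt}\left(1 + Kt - B\,(Kt)^2\right)\right]
```

Here ψ∞ scales the long-period level (and hence the yield estimate), while K and B shape the corner frequency and spectral overshoot that trade off against propagation effects in the fits.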

  12. Relativistic nuclear matter with alternative derivative coupling models

    International Nuclear Information System (INIS)

    Delfino, A.; Coelho, C.T.; Malheiro, M.

    1994-01-01

    Effective Lagrangians involving nucleons coupled to scalar and vector fields are investigated within the framework of relativistic mean-field theory. The study presents the traditional Walecka model and different kinds of scalar derivative coupling suggested by Zimanyi and Moszkowski. The incompressibility (presented in an analytical form), scalar potential, and vector potential at the saturation point of nuclear matter are compared for these models. The real optical potential for the models are calculated and one of the models fits well the experimental curve from-50 to 400 MeV while also gives a soft equation of state. By varying the coupling constants and keeping the saturation point of nuclear matter approximately fixed, only the Walecka model presents a first order phase transition of finite temperature at zero density. (author)
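
The key difference between the two classes of models shows up in the in-medium nucleon mass at the mean-field level; schematically (standard forms, quoted here for orientation rather than from the paper):

```latex
% Walecka model (linear scalar coupling):
M^{*} = M - g_{\sigma}\,\sigma

% Zimanyi-Moszkowski (scalar derivative coupling):
M^{*} = \frac{M}{1 + g_{\sigma}\,\sigma / M}
```

The derivative coupling saturates the scalar attraction at large σ, which is why these models tend to give a softer equation of state, consistent with the comparison reported above.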

  13. The geometrical precision of virtual bone models derived from clinical computed tomography data for forensic anthropology.

    Science.gov (United States)

    Colman, Kerri L; Dobbe, Johannes G G; Stull, Kyra E; Ruijter, Jan M; Oostra, Roelof-Jan; van Rijn, Rick R; van der Merwe, Alie E; de Boer, Hans H; Streekstra, Geert J

    2017-07-01

Almost all European countries lack contemporary skeletal collections for the development and validation of forensic anthropological methods. Furthermore, legal, ethical and practical considerations hinder the development of skeletal collections. A virtual skeletal database derived from clinical computed tomography (CT) scans provides a potential solution. However, clinical CT scans are typically generated with varying settings. This study investigates the effects of image segmentation and varying imaging conditions on the precision of virtually modelled pelves. An adult human cadaver was scanned using varying imaging conditions, such as scanner type and standard patient scanning protocol, slice thickness and exposure level. The pelvis was segmented from the various CT images resulting in virtually modelled pelves. The precision of the virtual modelling was determined per polygon mesh point. The fraction of mesh points resulting in point-to-point distance variations of 2 mm or less (95% confidence interval (CI)) was reported. Colour mapping was used to visualise modelling variability. At almost all (>97%) locations across the pelvis, the point-to-point distance variation is less than 2 mm (CI = 95%). In >91% of the locations, the point-to-point distance variation was less than 1 mm (CI = 95%). This indicates that the geometric variability of the virtual pelvis as a result of segmentation and imaging conditions rarely exceeds the generally accepted linear error of 2 mm. Colour mapping shows that areas with large variability are predominantly joint surfaces. Therefore, results indicate that segmented bone elements from patient-derived CT scans are a sufficiently precise source for creating a virtual skeletal database.

  14. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    Science.gov (United States)

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

    To control disinfection byproduct (DBP) formation in drinking water, an understanding of the source water total organic carbon (TOC) concentration variability can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we proposed a modeling approach based on local polynomial regression that uses climate, e.g. temperature, and land surface, e.g., soil moisture, variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data in three case study locations with surface source waters including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skills at locations with the most anthropogenic influences in their streams. Source water TOC predictive models can provide water treatment utilities important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
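
A minimal locally weighted linear regression in the spirit of this approach, with two predictors standing in for temperature and soil moisture (a hand-rolled sketch; the study's estimator, bandwidth selection and predictor set are more elaborate):

```python
import numpy as np

def local_linear_predict(X, y, x0, bandwidth=1.0):
    """Locally weighted linear fit: predict TOC at query point x0.

    X  -- (n, p) predictor matrix (e.g. temperature, soil moisture)
    y  -- (n,) observed TOC concentrations
    x0 -- (p,) query point; bandwidth sets the kernel width
    """
    X, y, x0 = np.asarray(X, float), np.asarray(y, float), np.asarray(x0, float)
    d = np.linalg.norm(X - x0, axis=1)
    sw = np.exp(-0.25 * (d / bandwidth) ** 2)       # sqrt of Gaussian weights
    Xd = np.hstack([np.ones((len(X), 1)), X - x0])  # local (centered) design
    beta, *_ = np.linalg.lstsq(Xd * sw[:, None], y * sw, rcond=None)
    return beta[0]                                  # intercept = fit at x0
```

Because the fit is refreshed at every query point, the estimator adapts to the nonlinear, non-Gaussian features the abstract highlights, without assuming a global functional form.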

  15. Function of a deltaic silt deposit as a repository and long-term source of sulfate and related weathering products in a glaciofluvial aquifer derived from organic-rich shale (North Dakota, USA)

    Science.gov (United States)

    Schuh, W. M.; Bottrell, S. H.

    2014-05-01

A shallow unconfined glaciofluvial aquifer in North Dakota (USA) has largest groundwater sulfate concentrations near the bottom boundary. A deltaic silt layer underlying the aquifer, at >16 m, is the modern proximate sulfate source for the aquifer. The original sulfate source was pyrite in the organic-rich shale component of the aquifer and silt grain matrix. An oxidizing event occurred during which grain-matrix pyrite sulfur was oxidized to sulfate. Thereafter the silt served as a "conserving" layer, slowly feeding sulfate into the lower part of the aquifer and the underlying till. A method was developed for estimating the approximate initial sulfate concentration in the source layer and the redistribution time since the oxidizing event, using a semi-generic convection-dispersion model. The convection-dispersion model and a model for the evolution of modern sulfate δ34S in silt-layer pore water from the initial grain-matrix pyrite δ34S both estimated that the oxidizing event occurred several thousand years ago, and was likely related to the dry conditions of the Hypsithermal Interval. The silt layer also serves as an arsenic source. Results indicate that deltaic silts derived from organic-rich shale parent materials in a glacial environment can provide long-term sources for sulfate and arsenic and possibly other related oxidative weathering products.
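
The "semi-generic convection-dispersion model" refers to the standard one-dimensional transport equation, written here schematically with generic symbols:

```latex
% 1-D convection-dispersion of sulfate concentration C(z, t):
% D = dispersion coefficient, v = pore-water (seepage) velocity
\frac{\partial C}{\partial t}
  = D\,\frac{\partial^2 C}{\partial z^2}
  - v\,\frac{\partial C}{\partial z}
```

Fitting the observed vertical concentration profile to solutions of this equation is what allows both the initial source-layer concentration and the elapsed redistribution time to be back-estimated.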

  16. Ab initio derivation of model energy density functionals

    International Nuclear Information System (INIS)

    Dobaczewski, Jacek

    2016-01-01

    I propose a simple and manageable method that allows for deriving coupling constants of model energy density functionals (EDFs) directly from ab initio calculations performed for finite fermion systems. A proof-of-principle application allows for linking properties of finite nuclei, determined by using the nuclear nonlocal Gogny functional, to the coupling constants of the quasilocal Skyrme functional. The method does not rely on properties of infinite fermion systems but on the ab initio calculations in finite systems. It also allows for quantifying merits of different model EDFs in describing the ab initio results. (letter)

  17. Ultrasound-assisted liposuction provides a source for functional adipose-derived stromal cells.

    Science.gov (United States)

    Duscher, Dominik; Maan, Zeshaan N; Luan, Anna; Aitzetmüller, Matthias M; Brett, Elizabeth A; Atashroo, David; Whittam, Alexander J; Hu, Michael S; Walmsley, Graham G; Houschyar, Khosrow S; Schilling, Arndt F; Machens, Hans-Guenther; Gurtner, Geoffrey C; Longaker, Michael T; Wan, Derrick C

    2017-12-01

Regenerative medicine employs human mesenchymal stromal cells (MSCs) for their multi-lineage plasticity and their pro-regenerative cytokine secretome. Adipose-derived mesenchymal stromal cells (ASCs) are concentrated in fat tissue, and the ease of harvest via liposuction makes them a particularly interesting cell source. However, there are various liposuction methods, and few have been assessed regarding their impact on ASC functionality. Here we study the impact of the two most popular ultrasound-assisted liposuction (UAL) devices currently in clinical use, VASER (Solta Medical) and Lysonix 3000 (Mentor), on ASCs. After lipoaspirate harvest and processing, we sorted for ASCs using fluorescent-assisted cell sorting based on an established surface marker profile (CD34+CD31−CD45−). ASC yield, viability, osteogenic and adipogenic differentiation capacity and in vivo regenerative performance were assessed. Both UAL samples demonstrated equivalent ASC yield and viability. VASER UAL ASCs showed higher osteogenic and adipogenic marker expression, but a comparable differentiation capacity was observed. Soft tissue healing and neovascularization were significantly enhanced via both UAL-derived ASCs in vivo, and there was no significant difference between the cell therapy groups. Taken together, our data suggest that UAL allows safe and efficient harvesting of the mesenchymal stromal cellular fraction of adipose tissue and that cells harvested via this approach are suitable for cell therapy and tissue engineering applications. Copyright © 2017 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  18. Deriving the Dividend Discount Model in the Intermediate Microeconomics Class

    Science.gov (United States)

    Norman, Stephen; Schlaudraff, Jonathan; White, Karianne; Wills, Douglas

    2013-01-01

    In this article, the authors show that the dividend discount model can be derived using the basic intertemporal consumption model that is introduced in a typical intermediate microeconomics course. This result will be of use to instructors who teach microeconomics to finance students in that it demonstrates the value of utility maximization in…
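
The end point of that derivation is the familiar Gordon-growth form: with next-period dividend D_1 growing at constant rate g and discount rate r > g,

```latex
P_0 = \sum_{t=1}^{\infty} \frac{D_1 (1+g)^{t-1}}{(1+r)^{t}}
    = \frac{D_1}{1+r}\sum_{t=0}^{\infty}\left(\frac{1+g}{1+r}\right)^{t}
    = \frac{D_1}{r-g}
```

The geometric series converges because (1+g)/(1+r) < 1 when r > g, which is the same convergence condition the intertemporal consumption framing imposes.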

  19. Spectral-element Method for 3D Marine Controlled-source EM Modeling

    Science.gov (United States)

    Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.

    2017-12-01

As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. We introduce in this paper the spectral element method (SEM) for 3D MCSEM modeling. It combines the flexibility of the finite-element method with the high accuracy of the spectral method. We use the Galerkin weighted residual method to discretize the vector Helmholtz equation, where curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials are chosen as vector basis functions. As a kind of high-order complete orthogonal polynomial, the GLC have the characteristic of exponential convergence. This helps derive the matrix elements analytically and improves the modeling accuracy. Numerical 1D models using SEM with different orders show that the SEM delivers accurate results. With increasing SEM order, the modeling accuracy improves markedly. Further, we compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that the SEM is more effective than the FD method: only when the mesh is fine enough can FD achieve the same accuracy as SEM. Therefore, to obtain the same precision, SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling that has significant advantages over traditional numerical methods. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).
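
The governing equation being discretized is the frequency-domain vector Helmholtz (curl-curl) equation for the electric field, shown here schematically (sign and gauge conventions vary between implementations):

```latex
% E = electric field, J_s = impressed source current, omega = angular
% frequency, mu = permeability, sigma-hat = complex conductivity
\nabla \times \left( \mu^{-1}\, \nabla \times \mathbf{E} \right)
  + i\omega\,\hat{\sigma}\,\mathbf{E} = -\,i\omega\,\mathbf{J}_s
```

The Galerkin weighted-residual step multiplies this equation by the curl-conforming GLC basis functions and integrates over each element, producing the system matrix whose entries can be derived analytically as the abstract notes.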

  20. Operational derivation of Boltzmann distribution with Maxwell's demon model.

    Science.gov (United States)

    Hosoya, Akio; Maruyama, Koji; Shikano, Yutaka

    2015-11-24

    The resolution of the Maxwell's demon paradox linked thermodynamics with information theory through information erasure principle. By considering a demon endowed with a Turing-machine consisting of a memory tape and a processor, we attempt to explore the link towards the foundations of statistical mechanics and to derive results therein in an operational manner. Here, we present a derivation of the Boltzmann distribution in equilibrium as an example, without hypothesizing the principle of maximum entropy. Further, since the model can be applied to non-equilibrium processes, in principle, we demonstrate the dissipation-fluctuation relation to show the possibility in this direction.
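
The target of the derivation is the familiar equilibrium distribution, stated here for reference rather than as the paper's operational argument:

```latex
p_i = \frac{e^{-E_i / k_B T}}{Z},
\qquad
Z = \sum_j e^{-E_j / k_B T}
```

The novelty claimed in the abstract is that this form is reached from the information-erasure cost of the demon's memory operations, without invoking the maximum-entropy principle.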

  1. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab

  2. Modeling Noise Sources and Propagation in External Gear Pumps

    Directory of Open Access Journals (Sweden)

    Sangbeom Woo

    2017-07-01

Full Text Available As a key component in power transfer, positive displacement machines often represent the major source of noise in hydraulic systems. Thus, investigating the sources of noise and discovering strategies to reduce it are key parts of improving the performance of current hydraulic systems and of applying fluid power systems to a wider range of applications. The present work aims at developing modeling techniques on the topic of noise generation caused by external gear pumps for high pressure applications, which can be useful and effective in investigating the interaction between noise sources and radiated noise and in establishing design guidelines for a quiet pump. In particular, this study classifies the internal noise sources into four types of effective load functions and, in the proposed model, these load functions are applied to the corresponding areas of the pump case in a realistic way. Vibration and sound radiation can then be predicted using a combined finite element and boundary element vibro-acoustic model. The radiated sound power and sound pressure for the different operating conditions are presented as the main outcomes of the acoustic model. The noise prediction was validated through comparison with the experimentally measured sound power levels.

  3. A Two-Temperature Open-Source CFD Model for Hypersonic Reacting Flows, Part One: Zero-Dimensional Analysis

    Directory of Open Access Journals (Sweden)

    Vincent Casseau

    2016-10-01

    Full Text Available A two-temperature CFD (computational fluid dynamics solver is a prerequisite to any spacecraft re-entry numerical study that aims at producing results with a satisfactory level of accuracy within realistic timescales. In this respect, a new two-temperature CFD solver, hy2Foam, has been developed within the framework of the open-source CFD platform OpenFOAM for the prediction of hypersonic reacting flows. This solver makes the distinct juncture between the trans-rotational and multiple vibrational-electronic temperatures. hy2Foam has the capability to model vibrational-translational and vibrational-vibrational energy exchanges in an eleven-species air mixture. It makes use of either the Park TTv model or the coupled vibration-dissociation-vibration (CVDV model to handle chemistry-vibration coupling and it can simulate flows with or without electronic energy. Verification of the code for various zero-dimensional adiabatic heat baths of progressive complexity has been carried out. hy2Foam has been shown to produce results in good agreement with those given by the CFD code LeMANS (The Michigan Aerothermodynamic Navier-Stokes solver and previously published data. A comparison is also performed with the open-source DSMC (direct simulation Monte Carlo code dsmcFoam. It has been demonstrated that the use of the CVDV model and rates derived from Quantum-Kinetic theory promote a satisfactory consistency between the CFD and DSMC chemistry modules.
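
Vibrational-translational energy exchange in such two-temperature solvers is typically closed with a Landau-Teller relaxation term, shown schematically below; hy2Foam's exact source terms may differ:

```latex
% E_v relaxes toward its equilibrium value evaluated at the trans-rotational
% temperature T_tr over a relaxation time tau_VT (e.g. from a
% Millikan-White-type correlation)
\frac{\partial E_v}{\partial t} = \frac{E_v^{*}(T_{tr}) - E_v}{\tau_{VT}}
```

The zero-dimensional heat-bath verification cases mentioned above amount to integrating exactly this kind of relaxation system, together with the chemistry-vibration coupling source terms, and comparing against LeMANS and dsmcFoam.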

  4. Source-receiver two-way wave extrapolation for prestack exploding-reflector modelling and migration

    KAUST Repository

    Alkhalifah, Tariq Ali; Fomel, Sergey; Wu, Zedong

    2014-01-01

or backward in time. This approach has the potential for generating accurate images free of artifacts associated with conventional approaches. We derive novel high-order partial differential equations in the source-receiver time domain. The fourth

  5. Source modelling of train noise - Literature review and some initial measurements

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Xuetao; Jonasson, Hans; Holmberg, Kjell

    2000-07-01

    A literature review of source modelling of railway noise is reported. Measurements on a special test rig at Surahammar and on the new railway line between Arlanda and Stockholm City are reported and analyzed. In the analysis the train is modelled as a number of point sources with or without directivity and each source is combined with analytical sound propagation theory to predict the sound propagation pattern best fitting the measured data. Wheel/rail rolling noise is considered to be the most important noise source. The rolling noise can be modelled as an array of moving point sources, which have a dipole-like horizontal directivity and some kind of vertical directivity. In general it is necessary to distribute the point sources on several heights. Based on our model analysis the source heights for the rolling noise should be below the wheel axles and the most important height is about a quarter of wheel diameter above the railheads. When train speeds are greater than 250 km/h aerodynamic noise will become important and even dominant. It may be important for low frequency components only if the train speed is less than 220 km/h. Little data are available for these cases. It is believed that aerodynamic noise has dipole-like directivity. Its spectrum depends on many factors: speed, railway system, type of train, bogies, wheels, pantograph, presence of barriers and even weather conditions. Other sources such as fans, engine, transmission and carriage bodies are at most second order noise sources, but for trains with a diesel locomotive engine the engine noise will be dominant if train speeds are less than about 100 km/h. The Nord 2000 comprehensive model for sound propagation outdoors, together with the source model that is based on the understandings above, can suitably handle the problems of railway noise propagation in one-third octave bands although there are still problems left to be solved.
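
A sketch of the point-source representation described above: incoherent summation over an array of point sources with a dipole-like horizontal directivity. The geometry, source level and directivity weighting are illustrative assumptions, not values from the report:

```python
import numpy as np

def train_level(receiver, sources, lw_db=100.0):
    """Relative sound level (dB) at `receiver` from incoherent point sources.

    receiver -- (x, y, z) in metres; the track is assumed along the x axis
    sources  -- iterable of (x, y, z) source positions, e.g. placed about a
                quarter wheel diameter above the railheads
    lw_db    -- per-source level (dB), illustrative
    """
    rx = np.asarray(receiver, float)
    p2 = 0.0
    for s in np.atleast_2d(np.asarray(sources, float)):
        d = rx - s
        r = np.linalg.norm(d)
        # Dipole-like horizontal directivity, maximal broadside to the track:
        # weight by sin^2 of the angle between d and the track (x) axis.
        sin2 = (d[1] ** 2 + d[2] ** 2) / r ** 2
        p2 += sin2 * 10.0 ** (lw_db / 10.0) / (4.0 * np.pi * r ** 2)
    return 10.0 * np.log10(p2)
```

Distributing such sources at several heights below the wheel axles, as the measurements suggest, and summing their contributions reproduces the propagation pattern that was fitted to the Surahammar and Arlanda data.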

  6. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    Science.gov (United States)

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions

  7. Investigations of incorporating source directivity into room acoustics computer models to improve auralizations

    Science.gov (United States)

    Vigeant, Michelle C.

    Room acoustics computer modeling and auralizations are useful tools when designing or modifying acoustically sensitive spaces. In this dissertation, the input parameter of source directivity has been studied in great detail to determine first its effect in room acoustics computer models and secondly how to better incorporate the directional source characteristics into these models to improve auralizations. To increase the accuracy of room acoustics computer models, the source directivity of real sources, such as musical instruments, must be included in the models. The traditional method for incorporating source directivity into room acoustics computer models involves inputting the measured static directivity data taken every 10° in a sphere-shaped pattern around the source. This data can be entered into the room acoustics software to create a directivity balloon, which is used in the ray tracing algorithm to simulate the room impulse response. The first study in this dissertation shows that using directional sources over an omni-directional source in room acoustics computer models produces significant differences both in terms of calculated room acoustics parameters and auralizations. The room acoustics computer model was also validated in terms of accurately incorporating the input source directivity. A recently proposed technique for creating auralizations using a multi-channel source representation has been investigated with numerous subjective studies, applied to both solo instruments and an orchestra. The method of multi-channel auralizations involves obtaining multi-channel anechoic recordings of short melodies from various instruments and creating individual channel auralizations. These auralizations are then combined to create a total multi-channel auralization. Through many subjective studies, this process was shown to be effective in terms of improving the realism and source width of the auralizations in a number of cases, and also modeling different

  8. Stem cell-derived vasculature: A potent and multidimensional technology for basic research, disease modeling, and tissue engineering.

    Science.gov (United States)

    Lowenthal, Justin; Gerecht, Sharon

    2016-05-06

    Proper blood vessel networks are necessary for constructing and re-constructing tissues, promoting wound healing, and delivering metabolic necessities throughout the body. Conversely, an understanding of vascular dysfunction has provided insight into the pathogenesis and progression of diseases both common and rare. Recent advances in stem cell-based regenerative medicine - including advances in stem cell technologies and related progress in bioscaffold design and complex tissue engineering - have allowed rapid advances in the field of vascular biology, leading in turn to more advanced modeling of vascular pathophysiology and improved engineering of vascularized tissue constructs. In this review we examine recent advances in the field of stem cell-derived vasculature, providing an overview of stem cell technologies as a source for vascular cell types and then focusing on their use in three primary areas: studies of vascular development and angiogenesis, improved disease modeling, and the engineering of vascularized constructs for tissue-level modeling and cell-based therapies. Copyright © 2015 Elsevier Inc. All rights reserved.

9. Derivation of inner magnetospheric electric field (UNH-IMEF) model using Cluster data set

    Directory of Open Access Journals (Sweden)

    H. Matsui

    2008-09-01

Full Text Available We derive an inner magnetospheric electric field (UNH-IMEF) model at L=2–10 using primarily Cluster electric field data for more than 5 years between February 2001 and October 2006. This electric field data set is divided into several ranges of the interplanetary electric field (IEF) values measured by ACE. As ring current simulations which require electric field as an input parameter are often performed at L=2–6.6, we have included statistical results from ground radars and low altitude satellites inside the perigee of Cluster in our data set (L~4). Electric potential patterns are derived from the average electric fields by solving an inverse problem. The electric potential pattern for small IEF values is probably affected by the ionospheric dynamo. The magnitudes of the electric field increase around the evening local time as IEF increases, presumably due to the sub-auroral polarization stream (SAPS). Another region with enhanced electric fields during large IEF periods is located around 9 MLT at L>8, which is possibly related to solar wind-magnetosphere coupling. Our potential patterns are consistent with those derived from self-consistent simulations. As the potential patterns can be interpolated/extrapolated to any discrete IEF value within measured ranges, we thus derive an empirical electric potential model. The performance of the model is evaluated by comparing the electric field derived from the model with the original one measured by Cluster and mapped to the equator. The model is open to the public through our website.

  11. MCNP model for the many KE-Basin radiation sources

    International Nuclear Information System (INIS)

    Rittmann, P.D.

    1997-01-01

    This document presents a model for the location and strength of radiation sources in the accessible areas of KE-Basin which agrees well with data taken on a regular grid in September of 1996. This modelling work was requested to support dose rate reduction efforts in KE-Basin. Anticipated fuel removal activities require lower dose rates to minimize annual dose to workers. With this model, the effects of component cleanup or removal can be estimated in advance to evaluate their effectiveness. In addition, the sources contributing most to the radiation fields in a given location can be identified and dealt with

  12. Patient-Derived Xenograft Models : An Emerging Platform for Translational Cancer Research

    NARCIS (Netherlands)

    Hidalgo, Manuel; Amant, Frederic; Biankin, Andrew V.; Budinska, Eva; Byrne, Annette T.; Caldas, Carlos; Clarke, Robert B.; de Jong, Steven; Jonkers, Jos; Maelandsmo, Gunhild Mari; Roman-Roman, Sergio; Seoane, Joan; Trusolino, Livio; Villanueva, Alberto

    Recently, there has been an increasing interest in the development and characterization of patient-derived tumor xenograft (PDX) models for cancer research. PDX models mostly retain the principal histologic and genetic characteristics of their donor tumor and remain stable across passages. These

  13. An extended car-following model considering the acceleration derivative in some typical traffic environments

    Science.gov (United States)

    Zhou, Tong; Chen, Dong; Liu, Weining

    2018-03-01

Based on the full velocity difference and acceleration car-following model, an extended car-following model is proposed by considering the derivative of the vehicle's acceleration. The stability condition is given by applying control theory. Considering some typical traffic environments, the results of theoretical analysis and numerical simulation show that the extended model reproduces the acceleration of a string of vehicles more realistically than previous models in starting processes, stopping processes and sudden braking. Meanwhile, traffic jams occur more easily when the coefficient of the acceleration-derivative term increases, as shown by the space-time evolution of the flow. The results confirm that the vehicle's acceleration derivative plays an important role in the traffic jamming transition and the evolution of traffic congestion.
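
Schematically, the extension adds leader-acceleration and jerk (acceleration-derivative) terms to the full velocity difference backbone; the coefficient names and term placement below are illustrative, and the paper's exact formulation may differ:

```latex
% Delta x_n, Delta v_n = headway and velocity difference to leader n+1;
% V(.) = optimal velocity function; kappa, lambda, k, c = sensitivities
\frac{dv_n(t)}{dt}
  = \kappa\big[V(\Delta x_n(t)) - v_n(t)\big]
  + \lambda\,\Delta v_n(t)
  + k\,a_{n+1}(t)
  + c\,\frac{da_{n+1}(t)}{dt}
```

Linearizing around the uniform-flow steady state and applying control theory to this equation yields the stability condition referred to above, with c shifting the stability boundary.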

  14. Columnar metaplasia in a surgical mouse model of gastro-esophageal reflux disease is not derived from bone marrow-derived cell.

    Science.gov (United States)

    Aikou, Susumu; Aida, Junko; Takubo, Kaiyo; Yamagata, Yukinori; Seto, Yasuyuki; Kaminishi, Michio; Nomura, Sachiyo

    2013-09-01

    The incidence of esophageal adenocarcinoma has increased in the last 25 years. Columnar metaplasia in Barrett's mucosa is assumed to be a precancerous lesion for esophageal adenocarcinoma. However, the induction process of Barrett's mucosa is still unknown. To analyze the induction of esophageal columnar metaplasia, we established a mouse gastro-esophageal reflux disease (GERD) model with associated development of columnar metaplasia in the esophagus. C57BL/6 mice received side-to-side anastomosis of the esophagogastric junction with the jejunum, and mice were killed 10, 20, and 40 weeks after operation. To analyze the contribution of bone marrow-derived cells to columnar metaplasia in this surgical GERD model, some mice were transplanted with GFP-marked bone marrow after the operation. Seventy-three percent of the mice (16/22) showed thickened mucosa in esophagus and 41% of mice (9/22) developed columnar metaplasia 40 weeks after the operation with a mortality rate of 4%. Bone marrow-derived cells were not detected in columnar metaplastic epithelia. However, scattered epithelial cells in the thickened squamous epithelia in regions of esophagitis did show bone marrow derivation. The results demonstrate that reflux induced by esophago-jejunostomy in mice leads to the development of columnar metaplasia in the esophagus. However, bone marrow-derived cells do not contribute directly to columnar metaplasia in this mouse model. © 2013 Japanese Cancer Association.

  15. Model of electron contamination sources for photon-beam radiotherapy

    International Nuclear Information System (INIS)

    Gonzalez Infantes, W.; Lallena Rojo, A. M.; Anguiano Millan, M.

    2013-01-01

    A model of virtual electron sources is proposed that allows the sources to be reproduced from the input parameters of the patient representation. Comparing depth dose values and profiles calculated from the full simulation of the treatment heads with the values calculated using the source model shows that the model is capable of reproducing depth dose distributions and profiles. (Author)

  16. Optimization of Excitation in FDTD Method and Corresponding Source Modeling

    Directory of Open Access Journals (Sweden)

    B. Dimitrijevic

    2015-04-01

    Full Text Available Source and excitation modeling in the FDTD formulation has a significant impact on the method's performance and the required simulation time. Since an abrupt source introduction yields intensive numerical variations in the whole computational domain, a generally accepted solution is to introduce the source slowly, using appropriate shaping functions in time. The main goal of the optimization presented in this paper is to find a balance between two opposite demands: minimal required computation time and acceptable degradation of simulation performance. Reducing the time necessary for source activation and deactivation is an important issue, especially in the design of microwave structures, where the simulation is repeated intensively in the process of device parameter optimization. The optimized source models proposed here are realized and tested within an in-house FDTD simulation environment.
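
    As an illustration of such time-shaping, a minimal sketch of a soft source whose sinusoid is ramped on and off with a raised-cosine window; the window shape and all numbers are generic choices, not the specific shaping functions optimized in the paper.

```python
import numpy as np

def shaped_source(t, freq, t_ramp, t_on):
    """Sinusoidal soft-source waveform with raised-cosine activation and
    deactivation ramps of length t_ramp; active for t_on seconds total."""
    if t < 0.0 or t > t_on:
        return 0.0
    if t < t_ramp:                       # smooth turn-on
        w = 0.5 * (1.0 - np.cos(np.pi * t / t_ramp))
    elif t > t_on - t_ramp:              # smooth turn-off
        w = 0.5 * (1.0 - np.cos(np.pi * (t_on - t) / t_ramp))
    else:
        w = 1.0                          # steady state
    return w * np.sin(2.0 * np.pi * freq * t)

# Example: waveform added to one E-field grid cell per FDTD time step.
dt, freq = 1.0e-12, 10.0e9               # 1 ps step, 10 GHz excitation
wave = [shaped_source(n * dt, freq, 20.0 / freq, 200.0 / freq)
        for n in range(25000)]
```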

  17. Chitosan derivatives targeting lipid bilayers: Synthesis, biological activity and interaction with model membranes.

    Science.gov (United States)

    Martins, Danubia Batista; Nasário, Fábio Domingues; Silva-Gonçalves, Laiz Costa; de Oliveira Tiera, Vera Aparecida; Arcisio-Miranda, Manoel; Tiera, Marcio José; Dos Santos Cabrera, Marcia Perez

    2018-02-01

    The antimicrobial activity of chitosan and its derivatives against human and plant pathogens represents a high-valued prospective market. Presently, two low molecular weight derivatives, endowed with hydrophobic and cationic character at different ratios, were synthesized and characterized. They exhibit antimicrobial activity and increased performance in relation to the intermediate and starting compounds. However, only the derivative with the higher cationic character showed cytotoxicity towards human cervical carcinoma cells. Considering cell membranes as targets, the mode of action was investigated through the interaction with model lipid vesicles mimicking bacterial, tumoral and erythrocyte membranes. Intense lytic activity and binding are demonstrated for both derivatives in anionic bilayers. The less charged compound exhibits slightly improved selectivity towards bacterial model membranes, suggesting that balancing its hydrophobic/hydrophilic character may improve efficiency. Observing the aggregation of vesicles, we hypothesize that the "charge cluster mechanism", ascribed to some antimicrobial peptides, could be applied to these chitosan derivatives. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. A Systems Thinking Model for Open Source Software Development in Social Media

    OpenAIRE

    Mustaquim, Moyen

    2010-01-01

    In this paper a social media model based on systems thinking methodology is proposed to understand the behavior of the open source software development community working in social media. The proposed model is focused on the relational influences of two different systems: social media and the open source community. This model can be useful for taking decisions which are complicated and where solutions are not apparent. Based on the proposed model, an efficient way of working in open source developm...

  19. Google Earth's derived digital elevation model: A comparative assessment with Aster and SRTM data

    International Nuclear Information System (INIS)

    Rusli, N; Majid, M R; Din, A H M

    2014-01-01

    This paper presents a statistical analysis showing additional evidence that the Digital Elevation Model (DEM) derived from Google Earth is commendable and has a good correlation with ASTER (Advanced Space-borne Thermal Emission and Reflection Radiometer) and SRTM (Shuttle Radar Topography Mission) elevation data. The accuracy of DEM elevation points from Google Earth was compared against that of DEMs from ASTER and SRTM for flat, hilly and mountainous sections of a pre-selected rural watershed. For each section, a total of 5,000 DEM elevation points were extracted as samples from each type of DEM data. The DEM data from Google Earth and SRTM for the flat and hilly sections are strongly correlated, with R² values of 0.791 and 0.891 respectively. Even stronger correlation is shown for the mountainous section, where the R² values between Google Earth's DEM and ASTER's and between Google Earth's DEM and SRTM's DEMs are respectively 0.917 and 0.865. Further accuracy testing was carried out by utilising the DEM dataset to delineate Muar River's watershed boundary using ArcSWAT2009, a hydrological modelling software. The result shows that the percentage differences of the watershed size delineated from Google Earth's DEM compared to those derived from the Department of Irrigation and Drainage's data (using a 20 m-contour topographic map), ASTER and SRTM data are 9.6%, 10.6%, and 7.6% respectively. It is therefore justified to conclude that the DEM derived from Google Earth is relatively as acceptable as DEMs from other sources.

  20. Source-receiver two-way wave extrapolation for prestack exploding-reflector modelling and migration

    KAUST Repository

    Alkhalifah, Tariq Ali

    2014-10-08

    Most modern seismic imaging methods separate input data into parts (shot gathers). We develop a formulation that is able to incorporate all available data at once while numerically propagating the recorded multidimensional wavefield forward or backward in time. This approach has the potential for generating accurate images free of artefacts associated with conventional approaches. We derive novel high-order partial differential equations in the source-receiver time domain. The fourth-order nature of the extrapolation in time leads to four solutions, two of which correspond to the incoming and outgoing P-waves and reduce to the zero-offset exploding-reflector solutions when the source coincides with the receiver. A challenge for implementing two-way time extrapolation is an essential singularity for horizontally travelling waves. This singularity can be avoided by limiting the range of wavenumbers treated in a spectral-based extrapolation. Using spectral methods based on the low-rank approximation of the propagation symbol, we extrapolate only the desired solutions in an accurate and efficient manner with reduced dispersion artefacts. Applications to synthetic data demonstrate the accuracy of the new prestack modelling and migration approach.
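
    As a simplified illustration of spectral wavefield extrapolation (a generic constant-velocity, two-way time step, not the paper's low-rank source-receiver scheme), a minimal sketch with the Laplacian evaluated in the wavenumber domain via FFT; grid sizes and the velocity are arbitrary assumptions.

```python
import numpy as np

def spectral_step(u_curr, u_prev, v, dt, dx):
    """Advance a 2-D scalar wavefield one step of the two-way wave equation
    u_tt = v^2 * Laplacian(u), evaluating the Laplacian spectrally."""
    ny, nx = u_curr.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    lap = np.real(np.fft.ifft2(-k2 * np.fft.fft2(u_curr)))
    return 2 * u_curr - u_prev + (v * dt) ** 2 * lap

# Propagate a point disturbance for a few steps.
n, dx, v, dt = 128, 10.0, 2000.0, 0.001
u0 = np.zeros((n, n)); u0[n // 2, n // 2] = 1.0
u1 = u0.copy()
for _ in range(100):
    u0, u1 = u1, spectral_step(u1, u0, v, dt, dx)
```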

  1. Energy models for commercial energy prediction and substitution of renewable energy sources

    International Nuclear Information System (INIS)

    Iniyan, S.; Suganthi, L.; Samuel, Anand A.

    2006-01-01

    In this paper, three models are presented, namely the Modified Econometric Mathematical (MEM) model, the Mathematical Programming Energy-Economy-Environment (MPEEE) model, and the Optimal Renewable Energy Mathematical (OREM) model. The actual demand for coal, oil and electricity is predicted using the MEM model based on economic, technological and environmental factors. The results were used in the MPEEE model, which determines the optimum allocation of commercial energy sources based on environmental limitations. The gap between the actual energy demand from the MEM model and the optimal energy use from the MPEEE model has to be met by renewable energy sources. The study develops an OREM model that would facilitate effective utilization of renewable energy sources in India, based on cost, efficiency, social acceptance, reliability, potential and demand. The economic variations in solar energy systems and the inclusion of an environmental constraint are also analyzed with the OREM model. The OREM model will help policy makers in the formulation and implementation of strategies concerning renewable energy sources in India for the next two decades.
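
    A toy sketch of the optimal-allocation step such models perform, cast as a linear program that meets a fixed demand gap from several renewable sources at minimum cost; all source names, costs, and potential limits are invented for illustration.

```python
from scipy.optimize import linprog

# Hypothetical unit costs and potential limits (PJ) for three renewables.
costs = [4.0, 6.5, 3.2]            # solar, wind, biomass (illustrative)
potential = [120.0, 80.0, 150.0]   # upper bound on each source
demand_gap = 200.0                 # energy the renewables must supply

# Minimize total cost subject to sum(x) >= demand_gap, 0 <= x_i <= potential_i.
res = linprog(c=costs,
              A_ub=[[-1.0, -1.0, -1.0]], b_ub=[-demand_gap],
              bounds=list(zip([0.0] * 3, potential)),
              method="highs")
print(res.x, res.fun)
```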

  2. In vitro generation of three-dimensional substrate-adherent embryonic stem cell-derived neural aggregates for application in animal models of neurological disorders.

    Science.gov (United States)

    Hargus, Gunnar; Cui, Yi-Fang; Dihné, Marcel; Bernreuther, Christian; Schachner, Melitta

    2012-05-01

    In vitro-differentiated embryonic stem (ES) cells comprise a useful source for cell replacement therapy, but the efficiency and safety of a translational approach are highly dependent on optimized protocols for directed differentiation of ES cells into the desired cell types in vitro. Furthermore, the transplantation of three-dimensional ES cell-derived structures instead of a single-cell suspension may improve graft survival and function by providing a beneficial microenvironment for implanted cells. To this end, we have developed a new method to efficiently differentiate mouse ES cells into neural aggregates that consist predominantly (>90%) of postmitotic neurons, neural progenitor cells, and radial glia-like cells. When transplanted into the excitotoxically lesioned striatum of adult mice, these substrate-adherent embryonic stem cell-derived neural aggregates (SENAs) showed significant advantages over transplanted single-cell suspensions of ES cell-derived neural cells, including improved survival of GABAergic neurons, increased cell migration, and significantly decreased risk of teratoma formation. Furthermore, SENAs mediated functional improvement after transplantation into animal models of Parkinson's disease and spinal cord injury. This unit describes in detail how SENAs are efficiently derived from mouse ES cells in vitro and how SENAs are isolated for transplantation. Furthermore, methods are presented for successful implantation of SENAs into animal models of Huntington's disease, Parkinson's disease, and spinal cord injury to study the effects of stem cell-derived neural aggregates in a disease context in vivo.

  3. Improving seismic crustal models in the Corinth Gulf, Greece and estimating source depth using PL-waves

    Science.gov (United States)

    Vackář, Jiří; Zahradník, Jiří

    2013-04-01

    A recent shallow earthquake in the Corinth Gulf, Greece (Mw 5.3, January 18, 2010; Sokos et al., Tectonophysics 2012) generated unusual long-period waves (periods > 5 seconds), well recorded at several near-regional stations between the P- and S-wave arrivals. The 5-second period, being significantly longer than the source duration, indicates a structural effect. The wave is similar to the PL-wave or Pnl-wave, but with shorter periods, and is observed at much closer distances (ranging from 30 to 200 km). A structural model is required for a theoretical description of the observed wave. No existing regional crustal model generates that wave, so we search for another model that is better in terms of the PL-wave's existence and strength. We find such models by full waveform inversion using the subset of stations with a strong PL-wave. The Discrete Wavenumber method (Bouchon, 1981; Coutant, 1989) is used for the forward problem and the Neighborhood Algorithm (Sambridge, 1999) for the stochastic search (more details in the poster by V. Plicka and J. Zahradník). We obtain a suite of models that fit the waveforms well and use some of these models to evaluate the dependence of the studied waves on receiver distance and azimuth as well as on source depth. We compare real and synthetic dispersion curves (the latter derived from synthetic seismograms) as an independent validation of the found models and discuss the limitations of using dispersion curves for these cases. We also relocated the event in the new model. We then calculate the wavefield by two other methods, modal summation and ray theory, to better understand the nature of the PL-wave. Finally, we discuss the agreement of the found models with published crustal models in the region. Full waveform inversion for structural parameters appears to be a powerful tool for improving seismic source modeling in cases where we do not have an accurate structural model of the studied area. We also show that the PL-wave strength has the potential to constrain the earthquake depth more precisely.

  4. Amniotic Fluid Stem Cells: A Novel Source for Modeling of Human Genetic Diseases

    Directory of Open Access Journals (Sweden)

    Ivana Antonucci

    2016-04-01

    Full Text Available In recent years, great interest has been devoted to the use of Induced Pluripotent Stem cells (iPS) for modeling of human genetic diseases, due to the possibility of reprogramming somatic cells of affected patients into pluripotent cells, enabling differentiation into several cell types, and allowing investigations into the molecular mechanisms of the disease. However, the protocol of iPS generation still suffers from technical limitations, showing low efficiency, being expensive and time consuming. Amniotic Fluid Stem cells (AFS) represent a potential alternative novel source of stem cells for modeling of human genetic diseases. In fact, by means of prenatal diagnosis, a number of fetuses affected by chromosomal or Mendelian diseases can be identified, and the amniotic fluid collected for genetic testing can be used, after diagnosis, for the isolation, culture and differentiation of AFS cells. This can provide a useful stem cell model for the investigation of the molecular basis of the diagnosed disease without the necessity of producing iPS, since AFS cells show some features of pluripotency and are able to differentiate into cells derived from all three germ layers “in vitro”. In this article, we describe the potential benefits provided by using AFS cells in the modeling of human genetic diseases.

  5. Biomass burning source characterization requirements in air quality models with and without data assimilation: challenges and opportunities

    Science.gov (United States)

    Hyer, E. J.; Zhang, J. L.; Reid, J. S.; Curtis, C. A.; Westphal, D. L.

    2007-12-01

    Quantitative models of the transport and evolution of atmospheric pollution have graduated from the laboratory to become a part of the operational activity of forecast centers. Scientists studying the composition and variability of the atmosphere put great efforts into developing methods for accurately specifying sources of pollution, including natural and anthropogenic biomass burning. These methods must be adapted for use in operational contexts, which impose additional strictures on input data and methods. First, only input data sources available in near real-time are suitable for use in operational applications. Second, operational applications must make use of redundant data sources whenever possible. This is a shift in philosophy: in a research context, the most accurate and complete data set will be used, whereas in an operational context, the system must be designed with maximum redundancy. The goal in an operational context is to produce, to the extent possible, consistent and timely output, given sometimes inconsistent inputs. The Naval Aerosol Analysis and Prediction System (NAAPS), a global operational aerosol analysis and forecast system, recently began incorporating assimilation of satellite-derived aerosol optical depth. Assimilation of satellite AOD retrievals has dramatically improved aerosol analyses and forecasts from this system. The use of aerosol data assimilation also changes the strategy for improving the smoke source function. The absolute magnitude of emissions events can be refined through feedback from the data assimilation system, both in real-time operations and in post-processing analysis of data assimilation results. In terms of the aerosol source functions, the largest gains in model performance are now to be gained by reducing data latency and minimizing missed detections. In this presentation, recent model development work on the Fire Locating and Monitoring of Burning Emissions (FLAMBE) system that provides smoke aerosol

  6. A fractal derivative model for the characterization of anomalous diffusion in magnetic resonance imaging

    Science.gov (United States)

    Liang, Yingjie; Ye, Allen Q.; Chen, Wen; Gatto, Rodolfo G.; Colon-Perez, Luis; Mareci, Thomas H.; Magin, Richard L.

    2016-10-01

    Non-Gaussian (anomalous) diffusion is widespread in biological tissues where its effects modulate chemical reactions and membrane transport. When viewed using magnetic resonance imaging (MRI), anomalous diffusion is characterized by a persistent or 'long tail' behavior in the decay of the diffusion signal. Recent MRI studies have used the fractional derivative to describe diffusion dynamics in normal and post-mortem tissue by connecting the order of the derivative with changes in tissue composition, structure and complexity. In this study we consider an alternative approach by introducing fractal time and space derivatives into Fick's second law of diffusion. This provides a more natural way to link sub-voxel tissue composition with the observed MRI diffusion signal decay following the application of a diffusion-sensitive pulse sequence. Unlike previous studies using fractional order derivatives, here the fractal derivative order is directly connected to the Hausdorff fractal dimension of the diffusion trajectory. The result is a simpler, computationally faster, and more direct way to incorporate tissue complexity and microstructure into the diffusional dynamics. Furthermore, the results are readily expressed in terms of spectral entropy, which provides a quantitative measure of the overall complexity of the heterogeneous and multi-scale structure of biological tissues. As an example, we apply this new model for the characterization of diffusion in fixed samples of the mouse brain. These results are compared with those obtained using the mono-exponential, the stretched exponential, the fractional derivative, and the diffusion kurtosis models. Overall, we find that the order of the fractal time derivative, the diffusion coefficient, and the spectral entropy are potential biomarkers to differentiate between the microstructure of white and gray matter. In addition, we note that the fractal derivative model has practical advantages over the existing models from the
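
    For orientation, the fractal (Hausdorff) derivative referred to here is usually defined as below, and inserting it into Fick's second law gives the model's governing equation; the generic alpha/beta operator orders shown are an assumption and may differ in detail from the paper.

```latex
% Fractal (Hausdorff) time derivative of order alpha:
\frac{\partial P}{\partial t^{\alpha}}
  = \lim_{t_1 \to t} \frac{P(t_1) - P(t)}{t_1^{\alpha} - t^{\alpha}},
\qquad 0 < \alpha \le 1 .

% Fick's second law with fractal time and space derivatives:
\frac{\partial P}{\partial t^{\alpha}}
  = D \, \frac{\partial}{\partial x^{\beta}}
    \left( \frac{\partial P}{\partial x^{\beta}} \right) .

% Solutions decay as stretched exponentials, which is the 'long tail'
% signal behavior described in the abstract.
```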

  7. Interaction of hematoporphyrin derivative, light, and ionizing radiation in a rat glioma model

    International Nuclear Information System (INIS)

    Kostron, H.; Swartz, M.R.; Miller, D.C.; Martuza, R.L.

    1986-01-01

    The effects of hematoporphyrin derivative, light, and cobalt-60 (⁶⁰Co) irradiation were studied in a rat glioma model using an in vivo and an in vitro clonogenic assay. There was no effect on tumor growth by visible light or by a single dose of ⁶⁰Co irradiation at 4 Gy or 8 Gy, whereas 16 Gy inhibited tumor growth to 40% versus the control. Hematoporphyrin derivative alone slightly stimulated growth (P less than 0.1). Light in the presence of 10 mg hematoporphyrin derivative/kg inhibited tumor growth to 32%. ⁶⁰Co irradiation in the presence of hematoporphyrin derivative produced a significant tumor growth inhibition (P less than 0.02). This growth inhibition was directly related to the concentration of hematoporphyrin derivative. The addition of ⁶⁰Co to light in the presence of hematoporphyrin derivative produced a greater growth inhibition than light or ⁶⁰Co irradiation alone. This effect was most pronounced when light was applied 30 minutes before ⁶⁰Co irradiation. Our experiments in a subcutaneous rat glioma model suggest a radiosensitizing effect of hematoporphyrin derivative. Furthermore, the photodynamic inactivation is enhanced by the addition of ⁶⁰Co irradiation. These findings may be of importance in planning new treatment modalities in malignant brain tumors

  8. Extended nonnegative tensor factorisation models for musical sound source separation.

    Science.gov (United States)

    FitzGerald, Derry; Cranitch, Matt; Coyle, Eugene

    2008-01-01

    Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.

  9. Extended Nonnegative Tensor Factorisation Models for Musical Sound Source Separation

    Directory of Open Access Journals (Sweden)

    Derry FitzGerald

    2008-01-01

    Full Text Available Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.
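
    As background, a minimal sketch of the plain nonnegative matrix factorisation that such tensor models extend, using the standard Lee-Seung multiplicative updates on a toy spectrogram; this illustrates the basic factorisation idea only, not the shift-invariant, harmonically constrained, or source-filter extensions described above.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Factorise a nonnegative spectrogram V (freq x time) as W @ H
    using Lee-Seung multiplicative updates for Euclidean distance."""
    rng = np.random.default_rng(0)
    n_freq, n_time = V.shape
    W = rng.random((n_freq, rank)) + eps   # spectral basis functions
    H = rng.random((rank, n_time)) + eps   # time-varying activations
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy mixture of two 'sources' with fixed spectra and different rhythms.
f = np.linspace(0.0, 1.0, 64)[:, None]
spec = np.hstack([np.exp(-80 * (f - 0.2) ** 2), np.exp(-80 * (f - 0.6) ** 2)])
act = np.array([np.tile([1.0, 0.0], 20), np.tile([0.0, 1.0], 20)])
V = spec @ act
W, H = nmf(V, rank=2)
print("reconstruction error:", round(float(np.linalg.norm(V - W @ H)), 4))
```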

  10. Parallel Beam Dynamics Simulation Tools for Future Light Source Linac Modeling

    International Nuclear Information System (INIS)

    Qiang, Ji; Pogorelov, Ilya V.; Ryne, Robert D.

    2007-01-01

    Large-scale modeling on parallel computers is playing an increasingly important role in the design of future light sources. Such modeling provides a means to accurately and efficiently explore issues such as limits to beam brightness, emittance preservation, the growth of instabilities, etc. Recently the IMPACT code suite was enhanced to be applicable to future light source design. Simulations with IMPACT-Z were performed using up to one billion simulation particles for the main linac of a future light source to study the microbunching instability. Combined with the time domain code IMPACT-T, it is now possible to perform large-scale start-to-end linac simulations for future light sources, including the injector, main linac, chicanes, and transfer lines. In this paper we provide an overview of the IMPACT code suite, its key capabilities, and recent enhancements pertinent to accelerator modeling for future linac-based light sources

  11. Gauge coupling unification in superstring derived standard-like models

    International Nuclear Information System (INIS)

    Faraggi, A.E.

    1992-11-01

    I discuss gauge coupling unification in a class of superstring standard-like models, which are derived in the free fermionic formulation. Recent calculations indicate that the superstring unification scale is at O(10¹⁸ GeV) while the minimal supersymmetric standard model is consistent with LEP data if the unification scale is at O(10¹⁶ GeV). A generic feature of the superstring standard-like models is the appearance of extra color triplets (D,D), and electroweak doublets (l,l), in vector-like representations, beyond the supersymmetric standard model. I show that the gauge coupling unification at O(10¹⁸ GeV) in the superstring standard-like models can be consistent with LEP data. I present an explicit standard-like model that can realize superstring gauge coupling unification. (author)
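
    For context, analyses of this kind rest on the standard one-loop renormalization-group running of the gauge couplings (a textbook relation, not specific to this paper):

```latex
% One-loop running of the gauge couplings between the scale mu and
% the unification scale M_U, with beta-function coefficients b_i:
\alpha_i^{-1}(\mu) = \alpha_i^{-1}(M_U)
  + \frac{b_i}{2\pi} \ln\!\frac{M_U}{\mu}, \qquad i = 1, 2, 3 .

% Extra vector-like triplets and doublets shift the b_i and can move
% the scale at which the three couplings meet.
```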

  12. AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source

    Science.gov (United States)

    Nightingale, J. W.; Dye, S.; Massey, Richard J.

    2018-05-01

    This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models are automatically chosen via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies and lensing geometries. The method's performance is excellent, with accurate light, mass and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.
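
    The Sersic functions used for the lens light follow the standard profile below (a textbook formula quoted for reference; b_n is the usual normalization constant):

```latex
% Sersic surface-brightness profile with effective radius R_e,
% intensity I_e at R_e, and Sersic index n:
I(R) = I_e \exp\!\left\{ -b_n \left[ \left( \frac{R}{R_e} \right)^{1/n} - 1 \right] \right\},

% where b_n satisfies gamma(2n, b_n) = Gamma(2n)/2 so that R_e
% encloses half of the total light.
```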

  13. The impacts of source structure on geodetic parameters demonstrated by the radio source 3C371

    Science.gov (United States)

    Xu, Ming H.; Heinkelmann, Robert; Anderson, James M.; Mora-Diaz, Julian; Karbon, Maria; Schuh, Harald; Wang, Guang L.

    2017-07-01

    Closure quantities measured by very-long-baseline interferometry (VLBI) observations are independent of instrumental and propagation instabilities and antenna gain factors, but are sensitive to source structure. A new method is proposed to calculate a structure index based on the median values of closure quantities rather than the brightness distribution of a source. The results are comparable to structure indices based on imaging observations at other epochs and demonstrate the flexibility of deriving structure indices from exactly the same observations as used for geodetic analysis and without imaging analysis. A three-component model for the structure of source 3C371 is developed by model-fitting closure phases. It provides a real case of tracing how the structure effect identified by closure phases in the same observations as the delay observables affects the geodetic analysis, and investigating which geodetic parameters are corrupted to what extent by the structure effect. Using the resulting structure correction based on the three-component model of source 3C371, two solutions, with and without correcting the structure effect, are made. With corrections, the overall rms of this source is reduced by 1 ps, and the impacts of the structure effect introduced by this single source are up to 1.4 mm on station positions and up to 4.4 microarcseconds on Earth orientation parameters. This study is considered as a starting point for handling the source structure effect on geodetic VLBI from geodetic sessions themselves.
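
    For reference, the closure quantities in question are the standard interferometric combinations in which station-based errors cancel (textbook definitions, not specific to this study):

```latex
% Closure phase over a triangle of stations (i, j, k): station-based
% phase errors cancel in the sum of baseline phases,
\phi_{ijk} = \phi_{ij} + \phi_{jk} + \phi_{ki} .

% Closure amplitude over four stations: gain factors cancel in
A_{ijkl} = \frac{|V_{ij}|\,|V_{kl}|}{|V_{ik}|\,|V_{jl}|},

% where V_{mn} is the complex visibility on baseline (m, n).
```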

  14. Modeling water demand when households have multiple sources of water

    Science.gov (United States)

    Coulibaly, Lassina; Jakus, Paul M.; Keith, John E.

    2014-07-01

    A significant portion of the world's population lives in areas where public water delivery systems are unreliable and/or deliver poor quality water. In response, people have developed important alternatives to publicly supplied water. To date, most water demand research has been based on single-equation models for a single source of water, with very few studies that have examined water demand from two sources of water (where all nonpublic system water sources have been aggregated into a single demand). This modeling approach leads to two outcomes. First, the demand models do not capture the full range of alternatives, so the true economic relationship among the alternatives is obscured. Second, and more seriously, economic theory predicts that demand for a good becomes more price-elastic as the number of close substitutes increases. If researchers artificially limit the number of alternatives studied to something less than the true number, the price elasticity estimate may be biased downward. This paper examines water demand in a region with near universal access to piped water, but where system reliability and quality is such that many alternative sources of water exist. In extending the demand analysis to four sources of water, we are able to (i) demonstrate why households choose the water sources they do, (ii) provide a richer description of the demand relationships among sources, and (iii) calculate own-price elasticity estimates that are more elastic than those generally found in the literature.

  15. Solutions of Cattaneo-Hristov model of elastic heat diffusion with Caputo-Fabrizio and Atangana-Baleanu fractional derivatives

    Directory of Open Access Journals (Sweden)

    Koca Ilknur

    2017-01-01

    Full Text Available Recently, Hristov, using the concept of a relaxation kernel with no singularity, developed a new model of the elastic heat diffusion equation based on the Caputo-Fabrizio fractional derivative, as an extended version of the Cattaneo model of the heat diffusion equation. In the present article, we solve the Cattaneo-Hristov model exactly and extend it with the concept of a derivative with a non-local and non-singular kernel, using the new Atangana-Baleanu derivative. The Cattaneo-Hristov model with the extended derivative is solved analytically with the Laplace transform, and numerically using the Crank-Nicolson scheme.
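
    For reference, the two non-singular-kernel fractional derivatives named here are conventionally defined as follows, with normalization functions M(α) and B(α) and the Mittag-Leffler function E_α (standard definitions from the fractional-calculus literature):

```latex
% Caputo-Fabrizio derivative (exponential kernel):
{}^{CF}D_t^{\alpha} f(t) = \frac{M(\alpha)}{1-\alpha}
  \int_0^t f'(s)\, \exp\!\left[ -\frac{\alpha (t-s)}{1-\alpha} \right] ds .

% Atangana-Baleanu derivative in the Caputo sense (Mittag-Leffler kernel):
{}^{ABC}D_t^{\alpha} f(t) = \frac{B(\alpha)}{1-\alpha}
  \int_0^t f'(s)\, E_{\alpha}\!\left[ -\frac{\alpha (t-s)^{\alpha}}{1-\alpha} \right] ds .
```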

  16. Continuous-variable quantum key distribution with Gaussian source noise

    International Nuclear Information System (INIS)

    Shen Yujie; Peng Xiang; Yang Jian; Guo Hong

    2011-01-01

    Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.

  17. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
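
    The Slepian-Wolf bound referred to here is the classical rate region for lossless distributed coding of correlated sources X and Y (a standard result, quoted for reference):

```latex
% Slepian-Wolf rate region for lossless distributed coding of (X, Y):
R_X \ge H(X \mid Y), \qquad
R_Y \ge H(Y \mid X), \qquad
R_X + R_Y \ge H(X, Y) .
```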

  18. Long-term particulate matter modeling for health effect studies in California - Part 2: Concentrations and sources of ultrafine organic aerosols

    Science.gov (United States)

    Hu, Jianlin; Jathar, Shantanu; Zhang, Hongliang; Ying, Qi; Chen, Shu-Hua; Cappa, Christopher D.; Kleeman, Michael J.

    2017-04-01

    Organic aerosol (OA) is a major constituent of ultrafine particulate matter (PM0.1). Recent epidemiological studies have identified associations between PM0.1 OA and premature mortality and low birth weight. In this study, the source-oriented UCD/CIT model was used to simulate the concentrations and sources of primary organic aerosols (POA) and secondary organic aerosols (SOA) in PM0.1 in California for a 9-year (2000-2008) modeling period with 4 km horizontal resolution to provide more insights about PM0.1 OA for health effect studies. As a related quality control, predicted monthly average concentrations of fine particulate matter (PM2.5) total organic carbon at six major urban sites had mean fractional bias of -0.31 to 0.19 and mean fractional errors of 0.4 to 0.59. The predicted ratio of PM2.5 SOA/OA was lower than estimates derived from chemical mass balance (CMB) calculations by a factor of 2-3, which suggests the potential effects of processes such as POA volatility, additional SOA formation mechanisms, and missing sources. OA in PM0.1, the focus size fraction of this study, is dominated by POA. Wood smoke is found to be the single biggest source of PM0.1 OA in winter in California, while meat cooking, mobile emissions (gasoline and diesel engines), and other anthropogenic sources (mainly solvent usage and waste disposal) are the most important sources in summer. Biogenic emissions are predicted to be the largest PM0.1 SOA source, followed by mobile sources and other anthropogenic sources, but these rankings are sensitive to the SOA model used in the calculation. Air pollution control programs aiming to reduce the PM0.1 OA concentrations should consider controlling solvent usage, waste disposal, and mobile emissions in California, but these findings should be revisited after the latest science is incorporated into the SOA exposure calculations. The spatial distributions of SOA associated with different sources are not sensitive to the choice of

  19. CHAOS-2-a geomagnetic field model derived from one decade of continuous satellite data

    DEFF Research Database (Denmark)

    Olsen, Nils; Mandea, M.; Sabaka, T.J.

    2009-01-01

    We have derived a model of the near-Earth's magnetic field using more than 10 yr of high-precision geomagnetic measurements from the three satellites Orsted, CHAMP and SAC-C. This model is an update of the two previous models, CHAOS (Olsen et al. 2006) and xCHAOS (Olsen & Mandea 2008). Data...... by minimizing the second time derivative of the squared magnetic field intensity at the core-mantle boundary. The CHAOS-2 model describes rapid time changes, as monitored by the ground magnetic observatories, much better than its predecessors....

  20. Application of Open Source Software by the Lunar Mapping and Modeling Project

    Science.gov (United States)

    Ramirez, P.; Goodale, C. E.; Bui, B.; Chang, G.; Kim, R. M.; Law, E.; Malhotra, S.; Rodriguez, L.; Sadaqathullah, S.; Mattmann, C. A.; Crichton, D. J.

    2011-12-01

    The Lunar Mapping and Modeling Project (LMMP), led by the Marshall Space Flight Center (MSFC), is responsible for the development of an information system to support lunar exploration, decision analysis, and release of lunar data to the public. The data available through the lunar portal is predominantly derived from present lunar missions (e.g., the Lunar Reconnaissance Orbiter (LRO)) and from historical missions (e.g., Apollo). This project has created a gold source of data, models, and tools for lunar explorers to exercise and incorporate into their activities. At Jet Propulsion Laboratory (JPL), we focused on engineering and building the infrastructure to support cataloging, archiving, accessing, and delivery of lunar data. We decided to use a RESTful service-oriented architecture to enable us to abstract from the underlying technology choices and focus on interfaces to be used internally and externally. This decision allowed us to leverage several open source software components and integrate them by either writing a thin REST service layer or relying on the API they provided; the approach chosen was dependent on the targeted consumer of a given interface. We will discuss our varying experience using open source products; namely Apache OODT, Oracle Berkeley DB XML, Apache Solr, and Oracle OpenSSO (now named OpenAM). Apache OODT, developed at NASA's Jet Propulsion Laboratory and recently migrated over to Apache, provided the means for ingestion and cataloguing of products within the infrastructure. Its usage was based upon team experience with the project and past benefit received on other projects internal and external to JPL. Berkeley DB XML, distributed by Oracle for both commercial and open source use, was the storage technology chosen for our metadata. This decision was in part based on our use of Federal Geographic Data Committee (FGDC) Metadata, which is expressed in XML, and the desire to keep it in its native form and exploit other technologies built on

  1. Modeling extracellular electrical stimulation: I. Derivation and interpretation of neurite equations.

    Science.gov (United States)

    Meffin, Hamish; Tahayori, Bahman; Grayden, David B; Burkitt, Anthony N

    2012-12-01

    Neuroprosthetic devices, such as cochlear and retinal implants, work by directly stimulating neurons with extracellular electrodes. This is commonly modeled using the cable equation with an applied extracellular voltage. In this paper a framework for modeling extracellular electrical stimulation is presented. To this end, a cylindrical neurite with confined extracellular space in the subthreshold regime is modeled in three-dimensional space. Through cylindrical harmonic expansion of Laplace's equation, we derive the spatio-temporal equations governing different modes of stimulation, referred to as longitudinal and transverse modes, under two types of boundary conditions. The longitudinal mode is described by the well-known cable equation; however, the transverse modes are described by a novel ordinary differential equation. For the longitudinal mode, we find that different electrotonic length constants apply under the two different boundary conditions. Equations connecting current density to voltage boundary conditions are derived that are used to calculate the trans-impedance of the neurite-plus-thin-extracellular-sheath. A detailed explanation of depolarization mechanisms and the dominant current pathway under different modes of stimulation is provided. The analytic results derived here enable the estimation of a neurite's membrane potential under extracellular stimulation, hence bypassing the heavy computational cost of using numerical methods.
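
    For orientation, the longitudinal mode mentioned here is governed by the familiar cable equation driven by the extracellular potential; a standard way to write it (the textbook activating-function form, not necessarily the paper's exact notation) is:

```latex
% Cable equation for membrane potential V_m along a neurite (space
% constant lambda, time constant tau), driven by the extracellular
% potential V_e applied along the fiber:
\lambda^2 \frac{\partial^2 V_m}{\partial x^2}
  - \tau \frac{\partial V_m}{\partial t} - V_m
  = -\lambda^2 \frac{\partial^2 V_e}{\partial x^2},

% where the right-hand side (Rattay's activating function) shows that
% the second spatial derivative of V_e drives de/hyperpolarization.
```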

  2. Yukawa couplings in Superstring derived Standard-like models

    International Nuclear Information System (INIS)

    Faraggi, A.E.

    1991-01-01

    I discuss Yukawa couplings in standard-like models which are derived from superstring theory in the free fermionic formulation. I introduce new notation for the construction of these models. I show how the choice of boundary conditions selects a trilevel Yukawa coupling either for the +2/3 charged quark or for the -1/3 charged quark. I prove this selection rule. I make the conjecture that in this class of standard-like models a possible connection may exist between the requirements of F and D flatness at the string level and the heaviness of the top quark relative to the lighter quarks and leptons. I discuss how the choice of boundary conditions determines the non-vanishing mass terms for quartic order terms. I discuss the implications for the mass of the top quark. (author)

  3. Derivation of a well-posed and multidimensional drift-flux model for boiling flows

    International Nuclear Information System (INIS)

    Gregoire, O.; Martin, M.

    2005-01-01

    In this note, we derive a multidimensional drift-flux model for boiling flows. Within this framework, the distribution parameter is no longer a scalar but a tensor that might account for the medium anisotropy and the flow regime. A new model for the drift-velocity vector is also derived. It intrinsically takes into account the effect of the friction pressure loss on the buoyancy force. On the other hand, we show that most drift-flux models might exhibit a singularity for large void fraction. In order to avoid this singularity, a remedy based on a simplified three field approach is proposed. (authors)
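
    For reference, the classical one-dimensional drift-flux closure that this note generalizes is the Zuber-Findlay relation below; part of the note's contribution is promoting the scalar distribution parameter to a tensor:

```latex
% Zuber-Findlay drift-flux relation: area-averaged gas velocity in
% terms of the mixture volumetric flux j and the drift velocity:
\langle u_g \rangle = C_0 \, \langle j \rangle + V_{gj},

% where C_0 is the (scalar) distribution parameter; the note replaces
% C_0 by a tensor so anisotropy and flow regime can be represented.
```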

  4. Monitoring alert and drowsy states by modeling EEG source nonstationarity

    Science.gov (United States)

    Hsu, Sheng-Hsiou; Jung, Tzyy-Ping

    2017-10-01

    Objective. As a human brain performs various cognitive functions within ever-changing environments, states of the brain characterized by recorded brain activities such as electroencephalogram (EEG) are inevitably nonstationary. The challenges of analyzing the nonstationary EEG signals include finding neurocognitive sources that underlie different brain states and using EEG data to quantitatively assess the state changes. Approach. This study hypothesizes that brain activities under different states, e.g. levels of alertness, can be modeled as distinct compositions of statistically independent sources using independent component analysis (ICA). This study presents a framework to quantitatively assess the EEG source nonstationarity and estimate levels of alertness. The framework was tested against EEG data collected from 10 subjects performing a sustained-attention task in a driving simulator. Main results. Empirical results illustrate that EEG signals under alert versus drowsy states, indexed by reaction speeds to driving challenges, can be characterized by distinct ICA models. By quantifying the goodness-of-fit of each ICA model to the EEG data using the model deviation index (MDI), we found that MDIs were significantly correlated with the reaction speeds (r = -0.390 with alertness models and r = 0.449 with drowsiness models) and the opposite correlations indicated that the two models accounted for sources in the alert and drowsy states, respectively. Based on the observed source nonstationarity, this study also proposes an online framework using a subject-specific ICA model trained with an initial (alert) state to track the level of alertness. For classification of alert against drowsy states, the proposed online framework achieved an averaged area-under-curve of 0.745 and compared favorably with a classic power-based approach. Significance. This ICA-based framework provides a new way to study changes of brain states and can be applied to

  5. Time-dependent source model of the Lusi mud volcano

    Science.gov (United States)

    Shirzaei, M.; Rudolph, M. L.; Manga, M.

    2014-12-01

    The Lusi mud eruption, near Sidoarjo, East Java, Indonesia, began erupting in May 2006 and continues to erupt today. Previous analyses of surface deformation data suggested an exponential decay of the pressure in the mud source, but did not constrain the geometry and evolution of the source(s) from which the erupting mud and fluids ascend. To understand the spatiotemporal evolution of the mud and fluid sources, we apply a time-dependent inversion scheme to a densely populated InSAR time series of the surface deformation at Lusi. The SAR data set includes 50 images acquired on 3 overlapping tracks of the ALOS L-band satellite between May 2006 and April 2011. Following multitemporal analysis of this data set, the obtained surface deformation time series is inverted in a time-dependent framework to solve for the volume changes of distributed point sources in the subsurface. The volume change distribution resulting from this modeling scheme shows two zones of high volume change underneath Lusi at 0.5-1.5 km and 4-5.5km depth as well as another shallow zone, 7 km to the west of Lusi and underneath the Wunut gas field. The cumulative volume change within the shallow source beneath Lusi is ~2-4 times larger than that of the deep source, whilst the ratio of the Lusi shallow source volume change to that of Wunut gas field is ~1. This observation and model suggest that the Lusi shallow source played a key role in eruption process and mud supply, but that additional fluids do ascend from depths >4 km on eruptive timescales.

  6. Impact of Scattering Model on Disdrometer Derived Attenuation Scaling

    Science.gov (United States)

    Zemba, Michael; Luini, Lorenzo; Nessel, James; Riva, Carlo (Compiler)

    2016-01-01

    NASA Glenn Research Center (GRC), the Air Force Research Laboratory (AFRL), and the Politecnico di Milano (POLIMI) are currently entering the third year of a joint propagation study in Milan, Italy utilizing the 20 and 40 GHz beacons of the Alphasat TDP5 Aldo Paraboni scientific payload. The Ka- and Q-band beacon receivers were installed at the POLIMI campus in June of 2014 and provide direct measurements of signal attenuation at each frequency. Collocated weather instrumentation provides concurrent measurement of atmospheric conditions at the receiver; included among these weather instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which records droplet size distributions (DSD) and droplet velocity distributions (DVD) during precipitation events. This information can be used to derive the specific attenuation at frequencies of interest and thereby scale measured attenuation data from one frequency to another. Given the ability to both predict the 40 GHz attenuation from the disdrometer data and the 20 GHz time series, and to directly measure the 40 GHz attenuation with the beacon receiver, the Milan terminal is uniquely able to assess these scaling techniques and refine the methods used to infer attenuation from disdrometer data. In order to derive specific attenuation from the DSD, the forward scattering coefficient must be computed. In previous work, this has been done using the Mie scattering model; however, this assumes a spherical droplet shape. The primary goal of this analysis is to assess the impact of the scattering model and droplet shape on disdrometer-derived attenuation predictions by comparing the use of the Mie scattering model to the use of the T-matrix method, which does not assume a spherical droplet. In particular, this paper will investigate the impact of these two scattering approaches on the error of the resulting predictions as well as on the relationship between prediction error and rain rate.
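
    A minimal sketch of that calculation, assuming binned disdrometer DSD data and a per-drop extinction cross section supplied externally; the cross-section values below are placeholders standing in for a Mie or T-matrix computation, and all numbers are toy data.

```python
import numpy as np

def specific_attenuation(diams_mm, nd_per_m3_mm, ext_xs_cm2):
    """Specific attenuation (dB/km) from a drop size distribution.

    diams_mm     : drop diameter bin centers (mm)
    nd_per_m3_mm : N(D), drops per m^3 per mm of diameter
    ext_xs_cm2   : extinction cross section per drop (cm^2), e.g. from
                   a Mie (spherical) or T-matrix (spheroidal) code
    """
    bin_width = np.gradient(diams_mm)                 # mm
    sigma_m2 = ext_xs_cm2 * 1e-4                      # cm^2 -> m^2
    # Integrate sigma_ext * N(D) dD over bins (units 1/m), then to dB/km.
    k_per_m = np.sum(sigma_m2 * nd_per_m3_mm * bin_width)
    return 4.343e3 * k_per_m

# Toy exponential (Marshall-Palmer-like) DSD and a crude cross section.
D = np.linspace(0.2, 5.0, 25)                         # mm
N = 8000.0 * np.exp(-2.0 * D)                         # m^-3 mm^-1
sigma = 1e-3 * D ** 3                                 # placeholder, cm^2
print(specific_attenuation(D, N, sigma), "dB/km")
```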

  7. Air quality dispersion models from energy sources

    International Nuclear Information System (INIS)

    Lazarevska, Ana

    1996-01-01

    Along with the continuing development of new air quality models that cover more complex problems, in the Clean Air Act, legislated by the US Congress, a consistency and standardization of air quality model applications were encouraged. As a result, the Guidelines on Air Quality Models were published, which are regularly reviewed by the Office of Air Quality Planning and Standards, EPA. These guidelines provide a basis for estimating the air quality concentrations used in assessing control strategies as well as defining emission limits. This paper presents a review and analysis of the recent versions of the models: Simple Terrain Stationary Source Model; Complex Terrain Dispersion Model; Ozone, Carbon Monoxide and Nitrogen Dioxide Models; Long Range Transport Model; Other Phenomenon Models: Fugitive Dust/Fugitive Emissions, Particulate Matter, Lead, Air Pathway Analyses - Air Toxics as well as Hazardous Waste. 8 refs., 4 tabs., 2 ills.

  8. The Importance of Physical Models for Deriving Dust Masses and Grain Size Distributions in Supernova Ejecta. I. Radiatively Heated Dust in the Crab Nebula

    Science.gov (United States)

    Temim, Tea; Dwek, Eli

    2013-01-01

    Recent far-infrared (IR) observations of supernova remnants (SNRs) have revealed significantly large amounts of newly condensed dust in their ejecta, comparable to the total mass of available refractory elements. The dust masses derived from these observations assume that all the grains of a given species radiate at the same temperature, regardless of the dust heating mechanism or grain radius. In this paper, we derive the dust mass in the ejecta of the Crab Nebula, using a physical model for the heating and radiation from the dust. We adopt a power-law distribution of grain sizes and two different dust compositions (silicates and amorphous carbon), and calculate the heating rate of each dust grain by the radiation from the pulsar wind nebula. We find that the grains attain a continuous range of temperatures, depending on their size and composition. The total mass derived from the best-fit models to the observed IR spectrum is 0.019-0.13 Solar Mass, depending on the assumed grain composition. We find that the power-law size distribution of dust grains is characterized by a power-law index of 3.5-4.0 and a maximum grain size larger than 0.1 micron. The grain sizes and composition are consistent with what is expected for dust grains formed in a Type IIP supernova (SN). Our derived dust mass is at least a factor of two less than the mass reported in previous studies of the Crab Nebula that assumed more simplified two-temperature models. These models also require a larger mass of refractory elements to be locked up in dust than was likely available in the ejecta. The results of this study show that a physical model resulting in a realistic distribution of dust temperatures can constrain the dust properties and affect the derived dust masses. Our study may also have important implications for deriving grain properties and mass estimates in other SNRs and for the ultimate question of whether SNe are major sources of dust in the Galactic interstellar medium and in

  9. Foetal stem cell derivation & characterization for osteogenic lineage

    Directory of Open Access Journals (Sweden)

    A Mangala Gowri

    2013-01-01

    Full Text Available Background & objectives: Mesenchymal stem cells (MSCs) derived from foetal tissues present a multipotent progenitor cell source for application in tissue engineering and regenerative medicine. The present study was carried out to derive foetal mesenchymal stem cells from an ovine source and analyze their differentiation to the osteogenic lineage, to serve as an animal model to predict human applications. Methods: Isolation and culture of sheep foetal bone marrow cells were done and a uniform clonally derived MSC population was collected. The cells were characterized using cytochemical, immunophenotyping, biochemical and molecular analyses. The cells with defined characteristics were differentiated into osteogenic lineages and analysis for differentiated cell types was done. The cells were analyzed for cell surface marker expression, and the gene expression in undifferentiated and differentiated osteoblasts was checked by reverse transcriptase PCR (RT-PCR) analysis and confirmed by sequencing using a genetic analyzer. Results: Ovine foetal samples were processed to obtain mononuclear (MNC) cells which on culture showed spindle morphology, a characteristic oval body with flattened ends. The CD45-/CD14- MSC population was cultured by limiting dilution to arrive at uniform spindle morphology cells and colony forming units. The cells were shown to be positive for surface markers such as CD44, CD54, integrinβ1, and intracellular collagen type I/III and fibronectin. The osteogenically induced MSCs were analyzed for alkaline phosphatase (ALP) activity and mineral deposition. The undifferentiated MSCs expressed RAB3B, a candidate marker for stemness in MSCs. Both the osteogenically induced and uninduced MSCs expressed collagen type I, and the MMP13 gene was expressed in the osteogenically induced cells. Interpretation & conclusions: The protocol for isolation of ovine foetal bone marrow derived MSCs was simple to perform, and the cultural method of obtaining pure spindle morphology cells was established.

  10. Computational optogenetics: empirically-derived voltage- and light-sensitive channelrhodopsin-2 model.

    Directory of Open Access Journals (Sweden)

    John C Williams

    Full Text Available Channelrhodopsin-2 (ChR2), a light-sensitive ion channel, and its variants have emerged as new excitatory optogenetic tools not only in neuroscience, but also in other areas, including cardiac electrophysiology. An accurate quantitative model of ChR2 is necessary for in silico prediction of the response to optical stimulation in realistic tissue/organ settings. Such a model can guide the rational design of new ion channel functionality tailored to different cell types/tissues. Focusing on one of the most widely used ChR2 mutants (H134R) with enhanced current, we collected a comprehensive experimental data set of the response of this ion channel to different irradiances and voltages, and used these data to develop a model of ChR2 with empirically-derived voltage- and irradiance-dependence, where parameters were fine-tuned via simulated annealing optimization. This ChR2 model offers: (1) accurate inward rectification in the current-voltage response across irradiances; (2) empirically-derived voltage- and light-dependent kinetics (activation, deactivation and recovery from inactivation); and (3) accurate amplitude and morphology of the response across voltage and irradiance settings. Temperature-scaling factors (Q10) were derived and model kinetics were adjusted to physiological temperatures. Using optical action potential clamp, we experimentally validated model-predicted ChR2 behavior in guinea pig ventricular myocytes. The model was then incorporated in a variety of cardiac myocytes, including human ventricular, atrial and Purkinje cell models. We demonstrate the ability of ChR2 to trigger action potentials in human cardiomyocytes at relatively low light levels, as well as the differential response of these cells to light, with the Purkinje cells being most easily excitable and ventricular cells requiring the highest irradiance at all pulse durations. This new experimentally-validated ChR2 model will facilitate virtual experimentation in neural and

  11. Considering a point-source in a regional air pollution model; Prise en compte d'une source ponctuelle dans un modèle régional de pollution atmosphérique

    Energy Technology Data Exchange (ETDEWEB)

    Lipphardt, M.

    1997-06-19

    This thesis deals with the development and validation of a point-source plume model, with the aim of refining the representation of intensive point-source emissions in regional-scale air quality models. The plume is modelled at four levels of increasing complexity, from a modified Gaussian plume model to the Freiberg and Lusis ring model. Plume elevation is determined by Netterville's plume rise model, using turbulence and atmospheric stability parameters. A model for the effect of fine-scale turbulence on the mean concentrations in the plume is developed and integrated in the ring model. A comparison between results with and without considering micro-mixing shows the importance of this effect in a chemically reactive plume. The plume model is integrated into the Eulerian transport/chemistry model AIRQUAL, using an interface between AIRQUAL and the sub-model, and interactions between the two scales are described. A simulation of an air pollution episode over Paris is carried out, showing that the utilization of such a sub-scale model improves the accuracy of the air quality model.
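
    A minimal sketch of the simplest of the four plume levels mentioned, the textbook Gaussian plume with ground reflection; the power-law dispersion coefficients and all numbers are illustrative assumptions, not the thesis's parameterization.

```python
import numpy as np

def gaussian_plume(q, u, x, y, z, h_eff):
    """Steady-state Gaussian plume concentration (g/m^3).

    q     : emission rate (g/s)
    u     : wind speed at stack height (m/s)
    x,y,z : receptor position, x downwind, y crosswind, z height (m)
    h_eff : effective source height = stack height + plume rise (m)
    """
    # Illustrative power-law dispersion coefficients (not stability tables).
    sy = 0.08 * x ** 0.9
    sz = 0.06 * x ** 0.85
    crosswind = np.exp(-y ** 2 / (2 * sy ** 2))
    vertical = (np.exp(-(z - h_eff) ** 2 / (2 * sz ** 2))
                + np.exp(-(z + h_eff) ** 2 / (2 * sz ** 2)))  # ground reflection
    return q / (2 * np.pi * u * sy * sz) * crosswind * vertical

# Ground-level centerline concentration 2 km downwind of a 100 g/s source.
print(gaussian_plume(q=100.0, u=5.0, x=2000.0, y=0.0, z=0.0, h_eff=150.0))
```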

  12. Combined use of stable isotopes and hydrologic modeling to better understand nutrient sources and cycling in highly altered systems (Invited)

    Science.gov (United States)

    Young, M. B.; Kendall, C.; Guerin, M.; Stringfellow, W. T.; Silva, S. R.; Harter, T.; Parker, A.

    2013-12-01

    The Sacramento and San Joaquin Rivers provide the majority of freshwater for the San Francisco Bay Delta. Both rivers are important sources of drinking and irrigation water for California, and play critical roles in the health of California fisheries. Understanding the factors controlling water quality and primary productivity in these rivers and the Delta is essential for making sound economic and environmental water management decisions. However, these highly altered surface water systems present many challenges for water quality monitoring studies due to factors such as multiple potential nutrient and contaminant inputs, dynamic source water inputs, and changing flow regimes controlled by both natural and engineered conditions. The watersheds for both rivers contain areas of intensive agriculture along with many other land uses, and the Sacramento River receives significant amounts of treated wastewater from the large population around the City of Sacramento. We have used a multi-isotope approach combined with mass balance and hydrodynamic modeling in order to better understand the dominant nutrient sources for each of these rivers, and to track nutrient sources and cycling within the complex Delta region around the confluence of the rivers. High nitrate concentrations within the San Joaquin River fuel summer algal blooms, contributing to low dissolved oxygen conditions. High δ15N-NO3 values combined with the high nitrate concentrations suggest that animal manure is a significant source of nitrate to the San Joaquin River. In contrast, the Sacramento River has lower nitrate concentrations but elevated ammonium concentrations from wastewater discharge. Downstream nitrification of the ammonium can be clearly traced using δ15N-NH4. Flow conditions for these rivers and the Delta have strong seasonal and inter-annual variations, resulting in significant changes in nutrient delivery and cycling. Isotopic measurements and estimates of source water contributions

  13. Comparison of receptor models for source apportionment of volatile organic compounds in Beijing, China

    International Nuclear Information System (INIS)

    Song Yu; Dai Wei; Shao Min; Liu Ying; Lu Sihua; Kuster, William; Goldan, Paul

    2008-01-01

    Identifying the sources of volatile organic compounds (VOCs) is key to reducing ground-level ozone and secondary organic aerosols (SOAs). Several receptor models have been developed to apportion sources, but an intercomparison of these models had not been performed for VOCs in China. In the present study, we compared VOC sources based on chemical mass balance (CMB), UNMIX, and positive matrix factorization (PMF) models. Gasoline-related sources, petrochemical production, and liquefied petroleum gas (LPG) were identified by all three models as the major contributors, with UNMIX and PMF producing quite similar results. The contributions of gasoline-related sources and LPG estimated by the CMB model were higher, and petrochemical emissions were lower than in the UNMIX and PMF results, possibly because the VOC profiles used in the CMB model were for fresh emissions and the profiles extracted from ambient measurements by the two factor-analysis models were 'aged'. - VOC sources were similar for the three models, with CMB showing a higher estimate for vehicles

  14. Comparison of receptor models for source apportionment of volatile organic compounds in Beijing, China

    Energy Technology Data Exchange (ETDEWEB)

    Song Yu; Dai Wei [Department of Environmental Sciences, Peking University, Beijing 100871 (China); Shao Min [State Joint Key Laboratory of Environmental Simulation and Pollution Control, Peking University, Beijing 100871 (China)], E-mail: mshao@pku.edu.cn; Liu Ying; Lu Sihua [State Joint Key Laboratory of Environmental Simulation and Pollution Control, Peking University, Beijing 100871 (China); Kuster, William; Goldan, Paul [Chemical Sciences Division, NOAA Earth System Research Laboratory, Boulder, CO 80305 (United States)

    2008-11-15

    Identifying the sources of volatile organic compounds (VOCs) is key to reducing ground-level ozone and secondary organic aerosols (SOAs). Several receptor models have been developed to apportion sources, but an intercomparison of these models had not been performed for VOCs in China. In the present study, we compared VOC sources based on chemical mass balance (CMB), UNMIX, and positive matrix factorization (PMF) models. Gasoline-related sources, petrochemical production, and liquefied petroleum gas (LPG) were identified by all three models as the major contributors, with UNMIX and PMF producing quite similar results. The contributions of gasoline-related sources and LPG estimated by the CMB model were higher, and petrochemical emissions were lower than in the UNMIX and PMF results, possibly because the VOC profiles used in the CMB model were for fresh emissions and the profiles extracted from ambient measurements by the two factor-analysis models were 'aged'. - VOC sources were similar for the three models, with CMB showing a higher estimate for vehicles.
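
    For readers unfamiliar with the CMB approach compared in these two records: a chemical mass balance solves c ≈ F s for non-negative source contributions s, given measured ambient concentrations c and source profiles F. A minimal sketch using non-negative least squares follows; the profile and concentration numbers are synthetic, and operational CMB codes additionally weight by measurement uncertainties (effective-variance least squares).

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Rows = VOC species, columns = source profiles (mass fractions); values are toy
    # numbers, not the measured profiles used in the study.
    F = np.array([
        [0.30, 0.05, 0.02],   # i-pentane : gasoline | petrochemical | LPG
        [0.05, 0.25, 0.01],   # ethylene
        [0.02, 0.03, 0.45],   # propane
        [0.10, 0.15, 0.05],   # toluene
    ])
    c = np.array([18.0, 12.0, 20.0, 9.0])    # ambient concentrations (ppbC, synthetic)

    s, residual = nnls(F, c)                  # non-negative source contributions
    print(dict(zip(["gasoline", "petrochemical", "LPG"], s.round(1))),
          "residual:", round(residual, 2))
    ```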

  15. Receptor modeling for source apportionment of polycyclic aromatic hydrocarbons in urban atmosphere.

    Science.gov (United States)

    Singh, Kunwar P; Malik, Amrita; Kumar, Ranjan; Saxena, Puneet; Sinha, Sarita

    2008-01-01

    This study reports source apportionment of polycyclic aromatic hydrocarbons (PAHs) in particulate depositions on vegetation foliage near a highway in the urban environment of Lucknow city (India) using the principal components analysis/absolute principal components scores (PCA/APCS) receptor modeling approach. The multivariate method enables identification of the major PAH sources along with their quantitative contributions to individual PAHs. The PCA identified three major sources of PAHs, viz. combustion, vehicular emissions, and diesel-based activities. The PCA/APCS receptor modeling approach revealed that combustion sources (natural gas, wood, coal/coke, biomass) contributed 19-97% of the various PAHs, vehicular emissions 0-70%, diesel-based sources 0-81%, and other miscellaneous sources 0-20%. The contributions of the major pyrolytic and petrogenic sources to the total PAHs were 56% and 42%, respectively. Further, the combustion-related sources contribute the major fraction of the carcinogenic PAHs in the study area. High correlation coefficients (R2 > 0.75 for most PAHs) between the measured and predicted concentrations suggest the applicability of the PCA/APCS receptor modeling approach for estimating source contributions to PAHs in particulates.
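
    A compact sketch of the PCA/APCS workflow used in this record: compute principal component scores, subtract the score of a hypothetical zero-concentration sample to obtain absolute scores, then regress concentrations on the APCS to quantify source contributions. The data below are synthetic stand-ins for the foliage measurements, and the rotation step used in full APCS analyses is omitted for brevity.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    # X: samples x PAH-species concentration matrix (synthetic stand-in).
    rng = np.random.default_rng(0)
    X = rng.lognormal(mean=1.0, sigma=0.5, size=(60, 8))

    mu, sd = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sd                          # standardize species

    pca = PCA(n_components=3)                  # three sources, as in the study
    scores = pca.fit_transform(Z)

    # Absolute principal component scores: subtract the score of a hypothetical
    # "true zero" sample (all concentrations = 0) from every sample's score.
    z0 = (np.zeros(X.shape[1]) - mu) / sd
    apcs = scores - pca.transform(z0.reshape(1, -1))

    # Regress total PAHs on the APCS; slopes x mean APCS give mean contributions.
    reg = LinearRegression().fit(apcs, X.sum(axis=1))
    print("mean contribution per source:", (reg.coef_ * apcs.mean(axis=0)).round(2))
    ```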

  16. Statistical modeling of phenological phases in Poland based on coupling satellite derived products and gridded meteorological data

    Science.gov (United States)

    Czernecki, Bartosz; Jabłońska, Katarzyna; Nowosad, Jakub

    2016-04-01

    The aim of the study was to create and evaluate different statistical models for reconstructing and predicting selected phenological phases. This issue is of particular importance in Poland, where nation-wide phenological monitoring was abandoned in the mid-1990s and the reactivated network was established in 2006. The authors decided to evaluate the possibility of using a wide range of statistical modeling techniques to create a synthetic archive dataset. Additionally, a robust tool for predicting the most distinguishable phenophases using only free-of-charge data as predictors was created. The study period covers the years 2007-2014 and contains only a quality-controlled dataset of 10 species and 14 phenophases. The phenological data used in this study originate from the manual observation network run by the Institute of Meteorology and Water Management - National Research Institute (IMGW-PIB). Three kinds of data sources were used as predictors: (i) satellite-derived products, (ii) preprocessed gridded meteorological data, and (iii) spatial properties (longitude, latitude, altitude) of the monitoring site. Moderate-Resolution Imaging Spectroradiometer (MODIS) level-3 vegetation products were used for detecting onset dates of particular phenophases. The following indices were used: Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Leaf Area Index (LAI), and Fraction of Photosynthetically Active Radiation (fPAR). Additionally, Interactive Multisensor Snow and Ice Mapping System (IMS) products were chosen to detect the occurrence of snow cover. Due to highly noisy data, the authors decided to take into account pixel reliability information. Besides satellite-derived products (NDVI, EVI, fPAR, LAI, snow cover), a wide group of observational data and agrometeorological indices derived from the European Climate Assessment & Dataset (ECA&D) were used as potential predictors: cumulative growing degree days (GDD), cumulative growing precipitation days (GPD
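
    One of the agrometeorological predictors named above, cumulative growing degree days, is simple to compute from a daily mean-temperature series. A sketch follows; the base temperature here is an assumption for illustration, not the paper's value.

    ```python
    import numpy as np

    def growing_degree_days(tmean, base=5.0):
        """Cumulative growing degree days from a daily mean-temperature series:
        GDD_t = sum over days of max(Tmean - base, 0). A 5 degC base is a common
        choice for temperate species (an assumption, not taken from the paper)."""
        return np.cumsum(np.maximum(np.asarray(tmean, dtype=float) - base, 0.0))

    daily_tmean = [2.1, 4.5, 7.0, 9.3, 12.8, 11.0]   # synthetic early-spring series (degC)
    print(growing_degree_days(daily_tmean))
    ```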

  17. Source Term Model for Fine Particle Resuspension from Indoor Surfaces

    National Research Council Canada - National Science Library

    Kim, Yoojeong; Gidwani, Ashok; Sippola, Mark; Sohn, Chang W

    2008-01-01

    This Phase I effort developed a source term model for particle resuspension from indoor surfaces to be used as a source term boundary condition for CFD simulation of particle transport and dispersion in a building...

  18. A time-dependent neutron transport method of characteristics formulation with time derivative propagation

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, Adam J., E-mail: adamhoff@umich.edu; Lee, John C., E-mail: jcl@umich.edu

    2016-02-15

    A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.
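
    The BDF approximation at the heart of the SDP scheme replaces a time derivative with a weighted sum of current and past values. A minimal sketch with the standard BDF derivative coefficients of orders 1-3 follows; this is a generic illustration, not code from DeCART.

    ```python
    import numpy as np

    # Backward differentiation formula (BDF) coefficients for dy/dt at the newest
    # time level: dy/dt ~ (1/h) * sum_k c[k] * y[n-k]; standard values, orders 1-3.
    BDF = {
        1: [1.0, -1.0],
        2: [1.5, -2.0, 0.5],
        3: [11.0 / 6.0, -3.0, 1.5, -1.0 / 3.0],
    }

    def bdf_derivative(history, h, order=2):
        """history[0] is the newest value y_n, history[1] is y_{n-1}, ..."""
        c = BDF[order]
        return sum(ck * yk for ck, yk in zip(c, history)) / h

    # Check on y = exp(t): the derivative at t = 0 should be ~1.
    h = 1e-3
    hist = [np.exp(-k * h) for k in range(4)]   # y_n, y_{n-1}, y_{n-2}, y_{n-3}
    print(bdf_derivative(hist, h, order=3))     # ~1.0 to third-order accuracy
    ```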

  19. A time-dependent neutron transport method of characteristics formulation with time derivative propagation

    International Nuclear Information System (INIS)

    Hoffman, Adam J.; Lee, John C.

    2016-01-01

    A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.

  20. Open Source Modeling and Optimization Tools for Planning

    Energy Technology Data Exchange (ETDEWEB)

    Peles, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-10

    The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger-scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.

  1. The Chandra Source Catalog: Algorithms

    Science.gov (United States)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  2. Impacts of supersymmetric higher derivative terms on inflation models in supergravity

    International Nuclear Information System (INIS)

    Aoki, Shuntaro; Yamada, Yusuke

    2015-01-01

    We show the effects of supersymmetric higher derivative terms on inflation models in supergravity. The results show that such terms generically modify the effective kinetic coefficient of the inflaton during inflation if the cut-off scale of the higher derivative operators is sufficiently small. In such a case, the η-problem in supergravity does not occur, and we find that the effective potential of the inflaton generically becomes a power-type potential with a power smaller than two

  3. Top-down NOX Emissions of European Cities Derived from Modelled and Spaceborne Tropospheric NO2 Columns

    Science.gov (United States)

    Verstraeten, W. W.; Boersma, K. F.; Douros, J.; Williams, J. E.; Eskes, H.; Delcloo, A. W.

    2017-12-01

    High nitrogen oxide (NOX = NO + NO2) concentrations near the surface adversely affect humans and ecosystems, and play a key role in tropospheric chemistry. NO2 is an important precursor of tropospheric ozone (O3), which in turn affects the production of the hydroxyl radical controlling the chemical lifetime of key atmospheric pollutants and reactive greenhouse gases. Combustion from industrial, traffic and household activities in large and densely populated urban areas results in high NOX emissions. Accurate mapping of these emissions is essential but difficult, since reported emission factors may differ from real-time emissions by an order of magnitude. Modelled NO2 levels and lifetimes also have large associated uncertainties, and overestimation of the chemical lifetime may mask missing NOX chemistry in current chemistry transport models (CTMs). Estimating the NO2 lifetime and concentration simultaneously, by applying the Exponentially Modified Gaussian (EMG) method to tropospheric NO2 column line densities, should improve surface NOX emission estimates. Here we evaluate whether the EMG methodology applied to the tropospheric NO2 columns simulated by the LOTOS-EUROS (Long Term Ozone Simulation-European Ozone Simulation) CTM can reproduce the NOX emissions used as model input. First, we process the modelled tropospheric NO2 columns for the period April-September 2013 for 21 selected European urban areas under windy conditions (averaged vertical wind speeds between the surface and 500 m from ECMWF > 2 m s-1), as well as the accompanying OMI (Ozone Monitoring Instrument) data, which provide real-time observation-based estimates of midday NO2 columns. Then we compare the top-down derived surface NOX emissions with the 2011 MACC-III emission inventory, used in the CTM as input to simulate the NO2 columns. For cities where NOX emissions can be assumed to originate from one large source, good agreement is found between the top-down derived
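
    For reference, the EMG method fits NO2 line densities along the wind direction with an exponentially modified Gaussian: a Gaussian source shape convolved with a one-sided exponential decay whose e-folding distance x0, divided by the wind speed, gives the effective lifetime. The sketch below fits synthetic line densities; all values are illustrative, not OMI or LOTOS-EUROS output.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    def emg(x, a, lam, mu, sigma, b):
        """Exponentially modified Gaussian line-density model: Gaussian source
        smoothing convolved with exp(-x/x0) downwind decay, with lam = 1/x0."""
        return a * (lam / 2.0) * np.exp(lam * (mu + lam * sigma**2 / 2.0 - x)) \
                 * erfc((mu + lam * sigma**2 - x) / (np.sqrt(2.0) * sigma)) + b

    # Synthetic NO2 line densities along the wind direction (x in km).
    x = np.linspace(-100, 300, 80)
    truth = emg(x, a=900.0, lam=1.0 / 60.0, mu=0.0, sigma=20.0, b=1.0)
    obs = truth + np.random.default_rng(1).normal(0.0, 0.3, x.size)

    p0 = [500.0, 1.0 / 50.0, 0.0, 15.0, 0.0]          # rough initial guess
    popt, _ = curve_fit(emg, x, obs, p0=p0)
    u = 5.0 * 3.6                                      # assumed mean wind, km/h
    print(f"e-folding distance x0 = {1/popt[1]:.0f} km, "
          f"NO2 lifetime ~ {1/popt[1]/u:.1f} h")
    ```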

  4. An Equivalent Source Method for Modelling the Global Lithospheric Magnetic Field

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    2014-01-01

    We present a new technique for modelling the global lithospheric magnetic field at Earth's surface based on the estimation of equivalent potential field sources. As a demonstration we show an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010 when...... are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid. The corresponding source values are estimated using an iteratively reweighted least squares algorithm...... in the CHAOS-4 and MF7 models using more conventional spherical harmonic based approaches. Advantages of the equivalent source method include its local nature, allowing e.g. for regional grid refinement, and the ease of transforming to spherical harmonics when needed. Future applications will make use of Swarm...

  5. The Chandra Source Catalog 2.0: Spectral Properties

    Science.gov (United States)

    McCollough, Michael L.; Siemiginowska, Aneta; Burke, Douglas; Nowak, Michael A.; Primini, Francis Anthony; Laurino, Omar; Nguyen, Dan T.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Paxson, Charles; Plummer, David A.; Rots, Arnold H.; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula; Chandra Source Catalog Team

    2018-01-01

    The second release of the Chandra Source Catalog (CSC) contains all sources identified from sixteen years' worth of publicly accessible observations. The vast majority of these sources have been observed with the ACIS detector and have spectral information in the 0.5-7 keV energy range. Here we describe the methods used to automatically derive spectral properties for each source detected by the standard processing pipeline and included in the final CSC. Sources with a high signal-to-noise ratio (exceeding 150 net counts) were fit in Sherpa (the modeling and fitting application from the Chandra Interactive Analysis of Observations package) using wstat as the fit statistic and the Bayesian draws method to determine errors. Three models were fit to each source: absorbed power-law, blackbody, and Bremsstrahlung emission. The fitted parameter values for the power-law, blackbody, and Bremsstrahlung models were included in the catalog, with the calculated flux for each model. The CSC also provides source energy fluxes computed from the normalizations of predefined absorbed power-law, blackbody, Bremsstrahlung, and APEC models needed to match the observed net X-ray counts. For sources that have been observed multiple times, a Bayesian Blocks analysis was performed (see the Primini et al. poster), and a joint fit of the spectral models above was performed for the most significant block. In addition, we provide access to data products for each source: a file with the source spectrum, the background spectrum, and the spectral response of the detector. Hardness ratios were calculated for each source between pairs of energy bands (soft, medium and hard). This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
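
    As a pointer to the simplest of the quantities mentioned above, a classical hardness ratio between two energy bands can be computed directly from net counts. Note that CSC processing uses a more careful Bayesian treatment, so this is only an illustrative stand-in.

    ```python
    def hardness_ratio(h_counts, s_counts):
        """Classical hardness ratio HR = (H - S) / (H + S); a simple stand-in for
        the Bayesian estimator actually used in CSC processing."""
        return (h_counts - s_counts) / (h_counts + s_counts)

    # Net counts in hard and soft bands; values are synthetic.
    print(hardness_ratio(h_counts=140, s_counts=60))   # 0.4 -> hard-dominated source
    ```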

  6. Differentiation and Transplantation of Embryonic Stem Cell-Derived Cone Photoreceptors into a Mouse Model of End-Stage Retinal Degeneration

    Directory of Open Access Journals (Sweden)

    Kamil Kruczek

    2017-06-01

    Full Text Available The loss of cone photoreceptors that mediate daylight vision represents a leading cause of blindness, for which cell replacement by transplantation offers a promising treatment strategy. Here, we characterize cone differentiation in retinas derived from mouse embryonic stem cells (mESCs. Similar to in vivo development, a temporal pattern of progenitor marker expression is followed by the differentiation of early thyroid hormone receptor β2-positive precursors and, subsequently, photoreceptors exhibiting cone-specific phototransduction-related proteins. We establish that stage-specific inhibition of the Notch pathway increases cone cell differentiation, while retinoic acid signaling regulates cone maturation, comparable with their actions in vivo. MESC-derived cones can be isolated in large numbers and transplanted into adult mouse eyes, showing capacity to survive and mature in the subretinal space of Aipl1−/− mice, a model of end-stage retinal degeneration. Together, this work identifies a robust, renewable cell source for cone replacement by purified cell suspension transplantation.

  7. Automatic classification of time-variable X-ray sources

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M. [Sydney Institute for Astronomy, School of Physics, The University of Sydney, Sydney, NSW 2006 (Australia)

    2014-05-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy of the training data is ∼97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
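
    A minimal sketch of the classification setup described here, using scikit-learn's Random Forest with 10-fold cross-validation. The feature matrix and labels below are random stand-ins for the 873-source training set, so the printed accuracy is meaningless except as a demonstration of the workflow.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in: rows = variable sources, columns = time-series/spectral/
    # contextual features, y = one of 7 classes.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(873, 12))
    y = rng.integers(0, 7, size=873)

    clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=10)        # 10-fold CV, as in the paper
    print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

    # Class probabilities give a probabilistically classified catalogue; a low
    # maximum probability (small classification margin) flags candidate anomalies.
    clf.fit(X, y)
    proba = clf.predict_proba(X[:5])
    print(proba.max(axis=1))
    ```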

  8. Automatic classification of time-variable X-ray sources

    International Nuclear Information System (INIS)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.

    2014-01-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy of the training data is ∼97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.

  9. Developing a Successful Open Source Training Model

    Directory of Open Access Journals (Sweden)

    Belinda Lopez

    2010-01-01

    Training programs for open source software provide a tangible, and sellable, product. A successful training program not only builds revenue, it also adds to the overall body of knowledge available for the open source project. By gathering best practices and taking advantage of the collective expertise within a community, it may be possible for a business to partner with an open source project to build a curriculum that promotes the project and supports the needs of the company's training customers. This article describes the initial approach used by Canonical, the commercial sponsor of the Ubuntu Linux operating system, to engage the community in the creation of its training offerings. We then discuss alternate curriculum creation models and some of the conditions that are necessary for successful collaboration between creators of existing documentation and commercial training providers.

  10. Covariance matrices for nuclear cross sections derived from nuclear model calculations

    International Nuclear Information System (INIS)

    Smith, D. L.

    2005-01-01

    The growing need for covariance information to accompany the evaluated cross section data libraries utilized in contemporary nuclear applications is spurring the development of new methods to provide this information. Many of the current general purpose libraries of evaluated nuclear data used in applications are derived either almost entirely from nuclear model calculations or from nuclear model calculations benchmarked by available experimental data. Consequently, a consistent method for generating covariance information under these circumstances is required. This report discusses a new approach to producing covariance matrices for cross sections calculated using nuclear models. The present method involves establishing uncertainty information for the underlying parameters of nuclear models used in the calculations and then propagating these uncertainties through to the derived cross sections and related nuclear quantities by means of a Monte Carlo technique rather than the more conventional matrix error propagation approach used in some alternative methods. The formalism to be used in such analyses is discussed in this report along with various issues and caveats that need to be considered in order to proceed with a practical implementation of the methodology
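
    The Monte Carlo propagation described in this report can be sketched in a few lines: sample the model parameters from their assumed uncertainty distribution, evaluate the (here, toy) cross-section model for each sample, and take the sample covariance of the results. Everything numerical below is illustrative.

    ```python
    import numpy as np

    def model_cross_section(energies, params):
        """Toy stand-in for a nuclear model calculation: sigma(E) = p0 * E**(-p1)."""
        p0, p1 = params
        return p0 * energies ** (-p1)

    energies = np.linspace(0.5, 5.0, 10)         # MeV grid (illustrative)
    mean = np.array([2.0, 0.5])                  # nominal model parameters
    cov_p = np.diag([0.04, 0.0025])              # assumed parameter covariance

    rng = np.random.default_rng(7)
    samples = rng.multivariate_normal(mean, cov_p, size=5000)
    sigmas = np.array([model_cross_section(energies, p) for p in samples])

    cov_sigma = np.cov(sigmas, rowvar=False)     # propagated cross-section covariance
    corr = cov_sigma / np.sqrt(np.outer(np.diag(cov_sigma), np.diag(cov_sigma)))
    print(np.round(corr[:3, :3], 3))             # strongly correlated, as expected
    ```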

  11. Trimethylsilyl derivatives of organic compounds in source samples and in atmospheric fine particulate matter.

    Science.gov (United States)

    Nolte, Christopher G; Schauer, James J; Cass, Glen R; Simoneit, Bernd R T

    2002-10-15

    Source sample extracts of vegetative detritus, motor vehicle exhaust, tire dust, paved road dust, and cigarette smoke have been silylated and analyzed by GC-MS to identify polar organic compounds that may serve as tracers for those specific emission sources of atmospheric fine particulate matter. Candidate molecular tracers were also identified in atmospheric fine particle samples collected in the San Joaquin Valley of California. A series of normal primary alkanols, dominated by even carbon-numbered homologues from C26 to C32, the secondary alcohol 10-nonacosanol, and some phytosterols are prominent polar compounds in the vegetative detritus source sample. No new polar organic compounds were found in the motor vehicle exhaust samples. Several hydrogenated resin acids are present in the tire dust sample, which might serve as useful tracers for those sources in areas heavily impacted by motor vehicle traffic. Finally, the alcohol and sterol emission profiles developed for all the source samples examined in this project were scaled according to the ambient fine particle mass concentrations attributed to those sources by a chemical mass balance receptor model previously applied to the San Joaquin Valley, to compute the predicted atmospheric concentrations of individual alcohols and sterols. The resulting underprediction of alkanol concentrations at the urban sites suggests that alkanols may be more sensitive tracers for the natural background from vegetative emissions (i.e., waxes) than the high molecular weight alkanes, which have been the best previously available tracers for that source.

  12. Combining sediment fingerprinting and a conceptual model for erosion and sediment transfer to explore sediment sources in an Alpine catchment

    Science.gov (United States)

    Costa, A.; Stutenbecker, L.; Anghileri, D.; Bakker, M.; Lane, S. N.; Molnar, P.; Schlunegger, F.

    2017-12-01

    In Alpine basins, sediment production and transfer is increasingly affected by climate change and human activities, specifically hydropower exploitation. Changes in sediment sources and pathways significantly influence basin management, biodiversity and landscape evolution. We explore the dynamics of sediment sources in a partially glaciated and highly regulated Alpine basin, the Borgne basin, by combining geochemical fingerprinting with the modelling of erosion and sediment transfer. The Borgne basin in southwest Switzerland is composed of three main litho-tectonic units, which we characterised following a tributary-sampling approach from lithologically characteristic sub-basins. We analysed bulk geochemistry using lithium borate fusion coupled with ICP-ES, and we used it to discriminate the three lithologic sources using statistical methods. Finally, we applied a mixing model to estimate the relative contributions of the three sources to the sediment sampled at the outlet. We combine results of the sediment fingerprinting with simulations of a spatially distributed conceptual model for erosion and transport of fine sediment. The model expresses sediment erosion by differentiating the contributions of erosional processes driven by erosive rainfall, snowmelt, and icemelt. Soil erodibility is accounted for as function of land-use and sediment fluxes are linearly convoluted to the outlet by sediment transfer rates for hillslope and river cells, which are a function of sediment connectivity. Sediment connectivity is estimated on the basis of topographic-hydraulic connectivity, flow duration associated with hydropower flow abstraction and permanent storage in hydropower reservoirs. Sediment fingerprinting at the outlet of the Borgne shows a consistent dominance (68-89%) of material derived from the uppermost, highly glaciated reaches, while contributions of the lower part (10-25%) and middle part (1-16%), where rainfall erosion is predominant, are minor. This result is
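
    The mixing-model step in such fingerprinting studies typically solves a constrained least-squares problem: source fractions must be non-negative and sum to one. A minimal sketch with synthetic geochemical signatures follows; the real study uses measured bulk geochemistry and uncertainty-aware statistics.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Columns = geochemical signatures of the three litho-tectonic source units,
    # rows = element concentrations; all numbers are synthetic placeholders.
    A = np.array([
        [55.0, 62.0, 48.0],    # SiO2 (wt%)
        [ 8.0,  3.5,  6.0],    # Fe2O3 (wt%)
        [120., 40.,  85.],     # Sr (ppm)
    ])
    b = np.array([54.0, 7.0, 105.0])    # outlet sediment sample

    def misfit(f):
        return np.sum(((A @ f - b) / b) ** 2)   # relative least squares

    res = minimize(misfit, x0=np.full(3, 1 / 3), method="SLSQP",
                   bounds=[(0.0, 1.0)] * 3,
                   constraints={"type": "eq", "fun": lambda f: f.sum() - 1.0})
    print(dict(zip(["upper (glaciated)", "middle", "lower"], res.x.round(2))))
    ```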

  13. Model of the Sgr B2 radio source

    International Nuclear Information System (INIS)

    Gosachinskij, I.V.; Khersonskij, V.K.

    1981-01-01

    A dynamical model of the gas cloud around the radio source Sagittarius B2 is suggested. The model describes the kinematic features of the gas in this source: contraction of the core and rotation of the envelope. The stability of the cloud at the initial stage is supported by the turbulent motion of the gas; the turbulence energy dissipates due to magnetic viscosity. This process occurs more rapidly in the dense core, so the core begins to collapse while the envelope remains stable. The parameters of the primary cloud and some parameters (mass, density and size) of the collapse are calculated. The conditions in the core at the moment of its fragmentation into masses of stellar order are established [ru

  14. Open Sourcing Social Change: Inside the Constellation Model

    OpenAIRE

    Tonya Surman; Mark Surman

    2008-01-01

    The constellation model was developed by and for the Canadian Partnership for Children's Health and the Environment. The model offers an innovative approach to organizing collaborative efforts in the social mission sector and shares various elements of the open source model. It emphasizes self-organizing and concrete action within a network of partner organizations working on a common issue. Constellations are self-organizing action teams that operate within the broader strategic vision of a ...

  15. Effects of Host-rock Fracturing on Elastic-deformation Source Models of Volcano Deflation.

    Science.gov (United States)

    Holohan, Eoghan P; Sudhaus, Henriette; Walter, Thomas R; Schöpfer, Martin P J; Walsh, John J

    2017-09-08

    Volcanoes commonly inflate or deflate during episodes of unrest or eruption. Continuum mechanics models that assume linear elastic deformation of the Earth's crust are routinely used to invert the observed ground motions. The source(s) of deformation in such models are generally interpreted in terms of magma bodies or pathways, and thus form a basis for hazard assessment and mitigation. Using discontinuum mechanics models, we show how host-rock fracturing (i.e. non-elastic deformation) during drainage of a magma body can progressively change the shape and depth of an elastic-deformation source. We argue that this effect explains the marked spatio-temporal changes in source model attributes inferred for the March-April 2007 eruption of Piton de la Fournaise volcano, La Reunion. We find that pronounced deflation-related host-rock fracturing can: (1) yield inclined source model geometries for a horizontal magma body; (2) cause significant upward migration of an elastic-deformation source, leading to underestimation of the true magma body depth and potentially to a misinterpretation of ascending magma; and (3) at least partly explain underestimation by elastic-deformation sources of changes in sub-surface magma volume.

  16. Source apportionment of PM2.5 in North India using source-oriented air quality models

    International Nuclear Information System (INIS)

    Guo, Hao; Kota, Sri Harsha; Sahu, Shovan Kumar; Hu, Jianlin; Ying, Qi; Gao, Aifang; Zhang, Hongliang

    2017-01-01

    In recent years, severe pollution events have been observed frequently in India, especially in its capital, New Delhi. However, limited studies have been conducted to understand the sources of high pollutant concentrations, which is needed for designing effective control strategies. In this work, source-oriented versions of the Community Multi-scale Air Quality (CMAQ) model with the Emissions Database for Global Atmospheric Research (EDGAR) were applied to quantify the contributions of eight source types (energy, industry, residential, on-road, off-road, agriculture, open burning and dust) to fine particulate matter (PM2.5) and its components, including primary PM (PPM) and secondary inorganic aerosol (SIA), i.e. sulfate, nitrate and ammonium ions, in Delhi and three surrounding cities, Chandigarh, Lucknow and Jaipur, in 2015. PPM mass is dominated by industry and residential activities (>60%). The energy (∼39%) and industry (∼45%) sectors contribute significantly to PPM south of Delhi, where it reaches a maximum of 200 μg/m3 during winter. Unlike PPM, SIA concentrations from different sources are more heterogeneous. High SIA concentrations (∼25 μg/m3) in south Delhi and central Uttar Pradesh were mainly attributed to the energy, industry and residential sectors. Agriculture is more important for SIA than for PPM, and the contributions of on-road and open burning to SIA are also higher than to PPM. The residential sector contributes the most to total PM2.5 (∼80 μg/m3), followed by industry (∼70 μg/m3), in North India. Energy and agriculture contribute ∼25 μg/m3 and ∼16 μg/m3 to total PM2.5, while SOA contributes <5 μg/m3. In Delhi, industry and residential activities contribute 80% of total PM2.5. - Highlights: • Sources of PM2.5 in North India were quantified by source-oriented CMAQ. • Industrial/residential activities are the dominating sources (60–70%) for PPM. • Energy/agriculture are the most important sources (30–40%) for SIA. • Strong seasonal

  17. Source Signals Separation and Reconstruction Following Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    WANG Cheng

    2014-02-01

    For the problem of separating and reconstructing source signals from observed signals, the physical meaning of the blind source separation model and independent component analysis is not very clear, and its solution is not unique. To address these disadvantages, a new linear instantaneous mixing model and a novel method for separating and reconstructing source signals from observed signals based on principal component analysis (PCA) are put forward. The new model assumes that the source signals are statistically uncorrelated rather than independent, which differs from the traditional blind source separation model. A one-to-one relationship between the linear instantaneous mixing matrix of the new model and the linear compound matrix of PCA, and a one-to-one relationship between the uncorrelated source signals and the principal components, are demonstrated using the concepts of the linear separation matrix and the uncorrelatedness of the source signals. Based on this theoretical link, the separation and reconstruction of source signals is then reduced to PCA of the observed signals. The theoretical derivation and numerical simulation results show that, despite Gaussian measurement noise, both the waveform and amplitude information of an uncorrelated source signal can be separated and reconstructed by PCA when the linear mixing matrix is column-orthogonal and normalized; only the waveform information can be separated and reconstructed when the mixing matrix is column-orthogonal but not normalized; and an uncorrelated source signal cannot be separated and reconstructed by PCA when the mixing matrix is not column-orthogonal or the mixing is not linear.
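
    The paper's central claim, that PCA recovers uncorrelated sources when the mixing matrix is column-orthogonal and normalized, is easy to demonstrate numerically. In the sketch below a rotation matrix plays the role of the mixing matrix; the signals and noise level are arbitrary choices.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 2000)
    s1 = np.sin(2 * np.pi * 5 * t)                # uncorrelated source signals
    s2 = np.sign(np.sin(2 * np.pi * 13 * t))
    S = np.column_stack([3.0 * s1, 1.0 * s2])     # distinct variances fix ordering

    theta = 0.6                                    # column-orthogonal, normalized mixing
    M = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    X = S @ M.T + rng.normal(0.0, 0.05, (t.size, 2))   # observed mixtures + noise

    Y = PCA(n_components=2).fit_transform(X)       # principal components ~ sources
    corr = np.corrcoef(np.column_stack([S, Y]).T)[:2, 2:]
    print(np.round(corr, 3))                       # |corr| ~ 1, one PC per source
    ```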

  18. Random source generating far field with elliptical flat-topped beam profile

    International Nuclear Information System (INIS)

    Zhang, Yongtao; Cai, Yangjian

    2014-01-01

    Circular and rectangular multi-Gaussian Schell-model (MGSM) sources which generate far fields with circular and rectangular flat-topped beam profiles were introduced just recently (Sahin and Korotkova 2012 Opt. Lett. 37 2970; Korotkova 2014 Opt. Lett. 39 64). In this paper, a random source named an elliptical MGSM source is introduced. An analytical expression for the propagation factor of an elliptical MGSM beam is derived. Furthermore, an analytical propagation formula for an elliptical MGSM beam passing through a stigmatic ABCD optical system is derived, and its propagation properties in free space are studied. It is interesting to find that an elliptical MGSM source generates a far field with an elliptical flat-topped beam profile, being qualitatively different from that of circular and rectangular MGSM sources. The ellipticity and the flatness of the elliptical flat-topped beam profile in the far field are determined by the initial coherence widths and the beam index, respectively. (paper)

  19. Modeling of a three-phase reactor for bitumen-derived gas oil hydrotreating

    International Nuclear Information System (INIS)

    Chacon, R.; Canale, A.; Bouza, A.; Sanchez, Y.

    2012-01-01

    A three-phase reactor model for describing the hydrotreating reactions of bitumen-derived gas oil was developed. The model incorporates the mass-transfer resistance at the gas-liquid and liquid-solid interfaces and a kinetic rate expression based on a Langmuir-Hinshelwood-type model. We derived three correlations for determining the solubility of hydrogen (H 2 ), hydrogen sulfide (H 2 S) and ammonia (NH 3 ) in hydrocarbon mixtures and the calculation of the catalyst effectiveness factor was included. Experimental data taken from the literature were used to determine the kinetic parameters (stoichiometric coefficients, reaction orders, reaction rate and adsorption constants for hydrodesulfuration (HDS) and hydrodenitrogenation (HDN)) and to validate the model under various operating conditions. Finally, we studied the effect of operating conditions such as pressure, temperature, LHSV, H 2 /feed ratio and the inhibiting effect of H 2 S on HDS and NH 3 on HDN. (author)

  20. Variability of dynamic source parameters inferred from kinematic models of past earthquakes

    KAUST Repository

    Causse, M.

    2013-12-24

    We analyse the scaling and distribution of average dynamic source properties (fracture energy, static, dynamic and apparent stress drops) using 31 kinematic inversion models from 21 crustal earthquakes. Shear-stress histories are computed by solving the elastodynamic equations while imposing the slip velocity of a kinematic source model as a boundary condition on the fault plane. This is achieved using a 3-D finite difference method in which the rupture kinematics are modelled with the staggered-grid-split-node fault representation method of Dalguer & Day. Dynamic parameters are then estimated from the calculated stress-slip curves and averaged over the fault plane. Our results indicate that fracture energy, static, dynamic and apparent stress drops tend to increase with magnitude. The epistemic uncertainty due to uncertainties in kinematic inversions remains small (ϕ ∼ 0.1 in log10 units), showing that kinematic source models provide robust information to analyse the distribution of average dynamic source parameters. The proposed scaling relations may be useful to constrain friction law parameters in spontaneous dynamic rupture calculations for earthquake source studies, and physics-based near-source ground-motion prediction for seismic hazard and risk mitigation.

  1. Consistent modelling of wind turbine noise propagation from source to receiver.

    Science.gov (United States)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick

    2017-11-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  2. Data Sources for NetZero Ft Carson Model

    Data.gov (United States)

    U.S. Environmental Protection Agency — Table of values used to parameterize and evaluate the Ft Carson NetZero integrated Model with published reference sources for each value. This dataset is associated...

  3. Modeling of renewable hybrid energy sources

    Directory of Open Access Journals (Sweden)

    Dumitru Cristian Dragos

    2009-12-01

    Recent developments and trends in electric power consumption indicate an increasing use of renewable energy. Renewable energy technologies offer the promise of clean, abundant energy gathered from self-renewing resources such as the sun, wind, earth and plants. Virtually all regions of the world have renewable resources of one type or another. From this point of view, studies on renewable energy are attracting more and more attention. The present paper presents different mathematical models for different types of renewable energy sources, such as solar energy and wind energy. The validation and adaptation of such models to hybrid systems working in the geographical and meteorological conditions specific to the central part of the Transylvania region is also presented. The conclusions based on the validation of such models are also shown.
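
    As one concrete example of the kind of component model such papers present, a piecewise wind-turbine power curve is sketched below. The cubic-ramp form is a common textbook choice, and the cut-in/rated/cut-out parameters are illustrative, not the paper's values.

    ```python
    import numpy as np

    def wind_power(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=2.0):
        """Piecewise wind-turbine power curve (MW): zero below cut-in and above
        cut-out, a cubic ramp between cut-in and rated speed, flat at rated power."""
        v = np.asarray(v, dtype=float)
        p = np.where((v >= v_cut_in) & (v < v_rated),
                     p_rated * (v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3),
                     0.0)
        p = np.where((v >= v_rated) & (v <= v_cut_out), p_rated, p)
        return p

    print(wind_power([2.0, 6.0, 14.0, 26.0]))   # below cut-in, ramp, rated, cut-out
    ```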

  4. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models with too many inputs based on original monitoring parameters are not in good agreement with experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
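
    A minimal sketch of the Gaussian-MLA idea common to these two records: feed the output of a classic Gaussian plume model to a support vector regressor as an additional feature, so the learner corrects the physics-based estimate instead of learning dispersion from scratch. The plume coefficients, SVR hyperparameters and "measurements" below are all synthetic assumptions, not the paper's setup.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    def gaussian_plume(X, Q=1.0, u=2.0, H=10.0):
        """Ground-level Gaussian plume evaluated at receptors (x, y); used as an
        extra input feature for the learner (placeholder dispersion coefficients)."""
        x, y = X[:, 0], X[:, 1]
        sy, sz = 0.10 * x**0.9, 0.06 * x**0.9
        return (Q / (2 * np.pi * u * sy * sz) * np.exp(-y**2 / (2 * sy**2))
                * 2 * np.exp(-H**2 / (2 * sz**2)))   # factor 2 = ground reflection

    rng = np.random.default_rng(0)
    X = np.column_stack([rng.uniform(50, 500, 300), rng.uniform(-40, 40, 300)])
    c_obs = gaussian_plume(X) * rng.lognormal(0.0, 0.3, 300)  # synthetic "measurements"

    features = np.column_stack([X, np.log10(gaussian_plume(X))])  # coords + prior
    model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.05))
    model.fit(features, np.log10(c_obs))                          # regress in log space

    pred = 10 ** model.predict(features[:3])
    print(np.round(np.log10(pred / c_obs[:3]), 2))                # ~0 -> good agreement
    ```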

  5. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models with too many inputs based on original monitoring parameters are not in good agreement with experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.

  6. OSGM02: A new model for converting GPS-derived heights to local height datums in Great Britain and Ireland

    DEFF Research Database (Denmark)

    Iliffe, J.C.; Ziebart, M.; Cross, P.A.

    2003-01-01

    The background to the recent computation of a new vertical datum model for the British Isles (OSGM02) is described. After giving a brief description of the computational techniques and the data sets used for the derivation of the gravimetric geoid, the paper focuses on the fitting of this surface to the GPS and levelling networks in the various regions of the British Isles in such a way that it can be used in conjunction with GPS to form a replacement for the existing system of bench marks. The error sources induced in this procedure are discussed, and the theoretical basis given for the fitting...

  7. Kernel integration scatter model for parallel beam gamma camera and SPECT point source response

    International Nuclear Information System (INIS)

    Marinkovic, P.M.

    2001-01-01

    Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper, a kernel integration scatter model for parallel-beam gamma camera and SPECT point-source response, based on the Klein-Nishina formula, is proposed. This method models the primary photon distribution as well as first-order Compton scattering. It also includes a correction for multiple scattering by applying a point-isotropic, single-medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma-ray attenuation in the imaged object, based on a known μ-map distribution, is considered as well. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply, using acceptance angles derived from its physical dimensions: any gamma ray satisfying this angle is passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with a Monte Carlo MCNP-4a numerical phantom simulation, and excellent results were obtained. Physical phantom experiments to confirm the method are planned. (author)
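
    The Klein-Nishina formula underlying the scatter kernel in this record gives the differential cross-section for Compton scattering of a photon through angle θ. A direct implementation follows; the constants are standard values and the photon energy is an illustrative choice.

    ```python
    import numpy as np

    R_E = 2.8179403262e-15          # classical electron radius (m)
    MEC2 = 0.51099895               # electron rest energy (MeV)

    def klein_nishina(E_mev, theta):
        """Klein-Nishina differential cross-section dsigma/dOmega (m^2/sr) for a
        photon of energy E_mev scattering through angle theta (radians)."""
        eps = E_mev / MEC2
        P = 1.0 / (1.0 + eps * (1.0 - np.cos(theta)))   # scattered/incident energy
        return 0.5 * R_E**2 * P**2 * (P + 1.0 / P - np.sin(theta)**2)

    # Differential cross-section for a 140 keV (Tc-99m) photon at a few angles.
    for deg in (10, 45, 90):
        print(deg, klein_nishina(0.140, np.radians(deg)))
    ```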

  8. Aspects of the derivative coupling model in four dimensions

    International Nuclear Information System (INIS)

    Aste, Andreas

    2014-01-01

    A concise discussion of a 3 + 1-dimensional derivative coupling model, in which a massive Dirac field couples to the four-gradient of a massless scalar field, is given in order to elucidate the role of different concepts in quantum field theory like the regularization of quantum fields as operator-valued distributions, correlation distributions, locality, causality, and field operator gauge transformations. (orig.)

  9. Aspects of the derivative coupling model in four dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Aste, Andreas [University of Basel, Department of Physics, Basel (Switzerland); Paul Scherrer Institute, Villigen (Switzerland)

    2014-01-15

    A concise discussion of a 3 + 1-dimensional derivative coupling model, in which a massive Dirac field couples to the four-gradient of a massless scalar field, is given in order to elucidate the role of different concepts in quantum field theory like the regularization of quantum fields as operator-valued distributions, correlation distributions, locality, causality, and field operator gauge transformations. (orig.)

  10. Martian methane plume models for defining Mars rover methane source search strategies

    Science.gov (United States)

    Nicol, Christopher; Ellery, Alex; Lynch, Brian; Cloutis, Ed

    2018-07-01

    The detection of atmospheric methane on Mars implies an active methane source. This introduces the possibility of a biotic source with the implied need to determine whether the methane is indeed biotic in nature or geologically generated. There is a clear need for robotic algorithms which are capable of manoeuvring a rover through a methane plume on Mars to locate its source. We explore aspects of Mars methane plume modelling to reveal complex dynamics characterized by advection and diffusion. A statistical analysis of the plume model has been performed and compared to analyses of terrestrial plume models. Finally, we consider a robotic search strategy to find a methane plume source. We find that gradient-based techniques are ineffective, but that more sophisticated model-based search strategies are unlikely to be available in near-term rover missions.

  11. First space-based derivation of the global atmospheric methanol emission fluxes

    Directory of Open Access Journals (Sweden)

    T. Stavrakou

    2011-05-01

    This study provides improved methanol emission estimates on the global scale, in particular for the largest methanol source, the terrestrial biosphere, and for biomass burning. To this purpose, one complete year of spaceborne measurements of tropospheric methanol columns, retrieved for the first time by the thermal infrared sensor IASI aboard the MetOp satellite, is compared with distributions calculated by the IMAGESv2 global chemistry-transport model. Two model simulations are performed using a priori biogenic methanol emissions either from the new MEGANv2.1 emission model, which is fully described in this work and is based on net ecosystem flux measurements, or from a previous parameterization based on net primary production by Jacob et al. (2005). A significantly better model performance in terms of both amplitude and seasonality is achieved through the use of MEGANv2.1 in most world regions, with respect to IASI data and to surface- and air-based methanol measurements, even though important discrepancies over several regions are still present. As a second step of this study, we combine the MEGANv2.1 inventory and the IASI column abundances over continents in an inverse modelling scheme based on the adjoint of the IMAGESv2 model to generate an improved global methanol emission source. The global optimized source totals 187 Tg yr−1 with a contribution of 100 Tg yr−1 from plants, only slightly lower than the a priori MEGANv2.1 value of 105 Tg yr−1. Large decreases with respect to the MEGANv2.1 biogenic source are inferred over Amazonia (up to 55 %) and Indonesia (up to 58 %), whereas more moderate reductions are recorded in the Eastern US (20–25 %) and Central Africa (25–35 %). On the other hand, the biogenic source is found to strongly increase in the arid and semi-arid regions of Central Asia (up to a factor of 5) and Western US (factor of 2), probably due to a source of methanol specific to these ecosystems which

  12. Source term model evaluations for the low-level waste facility performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Yim, M.S.; Su, S.I. [North Carolina State Univ., Raleigh, NC (United States)

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  13. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
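
    Mixed ICA/PCA itself is not available in standard libraries, but the paper's central model-selection point, preferring cross-validated likelihood over AIC, can be illustrated with scikit-learn's probabilistic PCA, whose `score` method returns an average held-out log-likelihood. The data and the true subspace dimension below are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic data: 3 informative directions plus isotropic noise in 10-D
n, d, k_true = 500, 10, 3
W = rng.normal(size=(d, k_true))
X = rng.normal(size=(n, k_true)) @ W.T + 0.5 * rng.normal(size=(n, d))

scores = []
for k in range(1, d):
    pca = PCA(n_components=k)
    # PCA.score is the average log-likelihood under probabilistic PCA,
    # so cross_val_score yields a held-out likelihood for each candidate k
    scores.append(cross_val_score(pca, X, cv=5).mean())

best_k = 1 + int(np.argmax(scores))
print("cross-validation selects", best_k, "components (true:", k_true, ")")
```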

  14. Numerical model of electron cyclotron resonance ion source

    Directory of Open Access Journals (Sweden)

    V. Mironov

    2015-12-01

    Full Text Available Important features of the electron cyclotron resonance ion source (ECRIS operation are accurately reproduced with a numerical code. The code uses the particle-in-cell technique to model the dynamics of ions in ECRIS plasma. It is shown that a gas dynamical ion confinement mechanism is sufficient to provide the ion production rates in ECRIS close to the experimentally observed values. Extracted ion currents are calculated and compared to the experiment for a few sources. Changes in the simulated extracted ion currents are obtained with varying the gas flow into the source chamber and the microwave power. Empirical scaling laws for ECRIS design are studied and the underlying physical effects are discussed.

  15. The MACHO Project HST Follow-Up: The Large Magellanic Cloud Microlensing Source Stars

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, C.A.; /LLNL, Livermore /UC, Berkeley; Drake, A.J.; /Caltech; Cook, K.H.; /LLNL, Livermore /UC, Berkeley; Bennett, D.P.; /Caltech /Notre Dame U.; Popowski, P.; /Garching, Max Planck Inst.; Dalal, N.; /Toronto U.; Nikolaev, S.; /LLNL, Livermore; Alcock, C.; /Caltech /Harvard-Smithsonian Ctr. Astrophys.; Axelrod, T.S.; /Arizona U.; Becker, A.C. /Washington U., Seattle; Freeman, K.C.; /Res. Sch. Astron. Astrophys., Weston Creek; Geha, M.; /Yale U.; Griest, K.; /UC, San Diego; Keller, S.C.; /LLNL, Livermore; Lehner, M.J.; /Harvard-Smithsonian Ctr. Astrophys. /Taipei, Inst. Astron. Astrophys.; Marshall, S.L.; /SLAC; Minniti, D.; /Rio de Janeiro, Pont. U. Catol. /Vatican Astron. Observ.; Pratt, M.R.; /Aradigm, Hayward; Quinn, P.J.; /Western Australia U.; Stubbs, C.W.; /UC, Berkeley /Harvard U.; Sutherland, W.; /Oxford U. /Oran, Sci. Tech. U. /Garching, Max Planck Inst. /McMaster U.

    2009-06-25

    We present Hubble Space Telescope (HST) WFPC2 photometry of 13 microlensed source stars from the 5.7 year Large Magellanic Cloud (LMC) survey conducted by the MACHO Project. The microlensing source stars are identified by deriving accurate centroids in the ground-based MACHO images using difference image analysis (DIA) and then transforming the DIA coordinates to the HST frame. None of these sources is coincident with a background galaxy, which rules out the possibility that the MACHO LMC microlensing sample is contaminated with misidentified supernovae or AGN in galaxies behind the LMC. This supports the conclusion that the MACHO LMC microlensing sample has only a small amount of contamination due to non-microlensing forms of variability. We compare the WFPC2 source star magnitudes with the lensed flux predictions derived from microlensing fits to the light curve data. In most cases the source star brightness is accurately predicted. Finally, we develop a statistic which constrains the location of the Large Magellanic Cloud (LMC) microlensing source stars with respect to the distributions of stars and dust in the LMC and compare this to the predictions of various models of LMC microlensing. This test excludes at ≳90% confidence level models where more than 80% of the source stars lie behind the LMC. Exotic models that attempt to explain the excess LMC microlensing optical depth seen by MACHO with a population of background sources are disfavored or excluded by this test. Models in which most of the lenses reside in a halo or spheroid distribution associated with either the Milky Way or the LMC are consistent with these data, but LMC halo or spheroid models are favored by the combined MACHO and EROS microlensing results.

  16. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
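
    For a concrete instance of the quantities involved, the rate-distortion function of a memoryless Bernoulli(p) source under Hamming distortion has the standard closed form below; in the mechanical analogy, the free-energy difference would be proportional to R(D) and the contracting force to its derivative. The pairing is illustrative and is not the paper's specific polymer model.

```latex
R(D) =
\begin{cases}
h_2(p) - h_2(D), & 0 \le D < \min(p,\,1-p),\\
0, & \text{otherwise},
\end{cases}
\qquad
h_2(x) = -x\log_2 x - (1-x)\log_2(1-x),
```

    so that the "force" analogue on the nontrivial branch is R'(D) = \log_2\bigl(D/(1-D)\bigr).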

  17. A space-jump derivation for non-local models of cell-cell adhesion and non-local chemotaxis.

    Science.gov (United States)

    Buttenschön, Andreas; Hillen, Thomas; Gerisch, Alf; Painter, Kevin J

    2018-01-01

    Cellular adhesion provides one of the fundamental forms of biological interaction between cells and their surroundings, yet the continuum modelling of cellular adhesion has remained mathematically challenging. In 2006, Armstrong et al. proposed a mathematical model in the form of an integro-partial differential equation. Although successful in applications, a derivation from an underlying stochastic random walk has remained elusive. In this work we develop a framework by which non-local models can be derived from a space-jump process. We show how the notions of motility and a cell polarization vector can be naturally included. With this derivation we are able to include microscopic biological properties into the model. We show that particular choices yield the original Armstrong model, while others lead to more general models, including a doubly non-local adhesion model and non-local chemotaxis models. Finally, we use random walk simulations to confirm that the corresponding continuum model represents the mean field behaviour of the stochastic random walk.
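
    The mean-field check described in the last sentence can be reproduced in its simplest setting with an unbiased one-dimensional space-jump process: particles jump ±h at exponentially distributed times, and the position variance matches the diffusion limit D = rate·h²/2. The sketch below omits any adhesion kernel and exists purely to illustrate the space-jump-to-continuum correspondence; all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

# 1-D space-jump process: each particle waits an exponential time,
# then jumps a fixed distance h left or right with equal probability.
n_particles, rate, h, T = 20_000, 1.0, 0.1, 2.0
x = np.zeros(n_particles)

for i in range(n_particles):
    t = rng.exponential(1.0 / rate)
    while t <= T:
        x[i] += h if rng.random() < 0.5 else -h
        t += rng.exponential(1.0 / rate)

# Mean-field (diffusion) limit of the master equation: D = rate * h**2 / 2,
# so the position variance should approach 2 * D * T.
D = rate * h**2 / 2.0
print(f"empirical variance : {x.var():.4f}")
print(f"diffusion 2*D*T    : {2.0 * D * T:.4f}")
```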

  18. Variability in physical contamination assessment of source segregated biodegradable municipal waste derived composts.

    Science.gov (United States)

    Echavarri-Bravo, Virginia; Thygesen, Helene H; Aspray, Thomas J

    2017-01-01

    Physical contaminants (glass, metal, plastic and 'other') and stones were isolated and categorised from three finished commercial composts derived from source segregated biodegradable municipal waste (BMW). A subset of the identified physical contaminant fragments were subsequently reintroduced into the cleaned compost samples and sent to three commercial laboratories for testing in an inter-laboratory trial using the current PAS100:2011 method (AfOR MT PC&S). The trial showed that the 'other' category caused difficulty for all three laboratories, with under-reporting, particularly of the most common 'other' contaminants (paper and cardboard), and over-reporting of non-man-made fragments. One laboratory under-reported metal contaminant fragments (spiked as silver foil) in three samples. Glass, plastic and stones were variably under-reported due to misclassification, or over-reported due to contamination with compost (organic) fragments. The results are discussed in the context of global physical contaminant test methods and compost quality assurance schemes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Comparing pharmacophore models derived from crystallography and NMR ensembles

    Science.gov (United States)

    Ghanakota, Phani; Carlson, Heather A.

    2017-11-01

    NMR and X-ray crystallography are the two most widely used methods for determining protein structures. Our previous study examining NMR versus X-Ray sources of protein conformations showed improved performance with NMR structures when used in our Multiple Protein Structures (MPS) method for receptor-based pharmacophores (Damm, Carlson, J Am Chem Soc 129:8225-8235, 2007). However, that work was based on a single test case, HIV-1 protease, because of the rich data available for that system. New data for more systems are available now, which calls for further examination of the effect of different sources of protein conformations. The MPS technique was applied to Growth factor receptor bound protein 2 (Grb2), Src SH2 homology domain (Src-SH2), FK506-binding protein 1A (FKBP12), and Peroxisome proliferator-activated receptor-γ (PPAR-γ). Pharmacophore models from both crystal and NMR ensembles were able to discriminate between high-affinity, low-affinity, and decoy molecules. As we found in our original study, NMR models showed optimal performance when all elements were used. The crystal models had more pharmacophore elements compared to their NMR counterparts. The crystal-based models exhibited optimum performance only when pharmacophore elements were dropped. This supports our assertion that the higher flexibility in NMR ensembles helps focus the models on the most essential interactions with the protein. Our studies suggest that the "extra" pharmacophore elements seen at the periphery in X-ray models arise as a result of decreased protein flexibility and make very little contribution to model performance.

  20. A GIS-based time-dependent seismic source modeling of Northern Iran

    Science.gov (United States)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes while the area sources are developed based on spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using renewal approach while time-independent frequency-magnitude relationships are proposed for area sources based on Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 square kilometers around Tehran. Previous researches and reports are studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to uniform magnitude scale; duplicate events and dependent shocks are removed. Completeness and time distribution of the compiled catalog is taken into account. The proposed area and linear seismic sources in conjunction with defined recurrence relationships can be used to develop time-dependent probabilistic seismic hazard analysis of Northern Iran.
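
    For the time-independent area sources, the Poisson assumption links a Gutenberg-Richter magnitude-frequency relationship directly to exceedance probabilities over an exposure window. The sketch below uses illustrative a- and b-values, not parameters fitted for Northern Iran.

```python
import numpy as np

# Gutenberg-Richter: log10 N(>=M) = a - b*M  (annual rate, illustrative values)
a, b = 4.0, 1.0

def annual_rate(m):
    """Annual rate of events with magnitude >= m."""
    return 10 ** (a - b * m)

def poisson_exceedance(m, years):
    """P(at least one event >= m within `years`) under a Poisson process."""
    return 1.0 - np.exp(-annual_rate(m) * years)

for m in (5.0, 6.0, 7.0):
    print(f"M>={m}: rate {annual_rate(m):.3f}/yr, "
          f"50-yr exceedance {poisson_exceedance(m, 50):.2%}")
```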

  1. Modeling of a three-phase reactor for bitumen-derived gas oil hydrotreating

    Energy Technology Data Exchange (ETDEWEB)

    Chacon, R.; Canale, A.; Bouza, A. [Departamento de Termodinamica y Fenomenos de Transporte. Universidad Simon Bolivar, Caracas (Venezuela, Bolivarian Republic of); Sanchez, Y. [Departamento de Procesos y Sistemas. Universidad Simon Bolivar (Venezuela, Bolivarian Republic of)

    2012-01-15

    A three-phase reactor model for describing the hydrotreating reactions of bitumen-derived gas oil was developed. The model incorporates the mass-transfer resistance at the gas-liquid and liquid-solid interfaces and a kinetic rate expression based on a Langmuir-Hinshelwood-type model. We derived three correlations for determining the solubility of hydrogen (H₂), hydrogen sulfide (H₂S) and ammonia (NH₃) in hydrocarbon mixtures, and the calculation of the catalyst effectiveness factor was included. Experimental data taken from the literature were used to determine the kinetic parameters (stoichiometric coefficients, reaction orders, reaction rate and adsorption constants for hydrodesulfuration (HDS) and hydrodenitrogenation (HDN)) and to validate the model under various operating conditions. Finally, we studied the effect of operating conditions such as pressure, temperature, LHSV, H₂/feed ratio and the inhibiting effect of H₂S on HDS and NH₃ on HDN. (author)
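
    A generic Langmuir-Hinshelwood-type rate of the kind referred to above, with a squared adsorption denominator and H₂S inhibition as is common in HDS kinetics, can be written as in the sketch below; the rate and adsorption constants are placeholders, not the paper's fitted values.

```python
def lh_hds_rate(c_s, c_h2, c_h2s, k=1.0, K_s=2.0, K_h2s=5.0, m=1.0, n=0.5):
    """Generic Langmuir-Hinshelwood rate for HDS with H2S inhibition:

        r = k * C_S**m * C_H2**n / (1 + K_s*C_S + K_h2s*C_H2S)**2

    All constants are illustrative placeholders, not fitted values.
    """
    return k * c_s**m * c_h2**n / (1.0 + K_s * c_s + K_h2s * c_h2s) ** 2

# Inhibiting effect of H2S: the rate falls as H2S accumulates
for c_h2s in (0.0, 0.05, 0.2):
    print(f"C_H2S={c_h2s:.2f}: r = {lh_hds_rate(0.5, 1.0, c_h2s):.4f}")
```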

  2. Dynamics Under Location Uncertainty: Model Derivation, Modified Transport and Uncertainty Quantification

    Science.gov (United States)

    Resseguier, V.; Memin, E.; Chapron, B.; Fox-Kemper, B.

    2017-12-01

    In order to better observe and predict geophysical flows, ensemble-based data assimilation methods are of high importance. In such methods, an ensemble of random realizations represents the variety of the simulated flow's likely behaviors. For this purpose, randomness needs to be introduced in a suitable way and physically-based stochastic subgrid parametrizations are promising paths. This talk will propose a new kind of such a parametrization referred to as modeling under location uncertainty. The fluid velocity is decomposed into a resolved large-scale component and an aliased small-scale one. The first component is possibly random but time-correlated whereas the second is white-in-time but spatially-correlated and possibly inhomogeneous and anisotropic. With such a velocity, the material derivative of any - possibly active - tracer is modified. Three new terms appear: a correction of the large-scale advection, a multiplicative noise and a possibly heterogeneous and anisotropic diffusion. This parameterization naturally ensures attractive properties such as energy conservation for each realization. Additionally, this stochastic material derivative and the associated Reynolds' transport theorem offer a systematic method to derive stochastic models. In particular, we will discuss the consequences of the Quasi-Geostrophic assumptions in our framework. Depending on the turbulence amount, different models with different physical behaviors are obtained. Under strong turbulence assumptions, a simplified diagnosis of frontolysis and frontogenesis at the surface of the ocean is possible in this framework. A Surface Quasi-Geostrophic (SQG) model with a weaker noise influence has also been simulated. A single realization better represents small scales than a deterministic SQG model at the same resolution. Moreover, an ensemble accurately predicts extreme events, bifurcations as well as the amplitudes and the positions of the simulation errors. Figure 1 highlights this last

  3. A FRAMEWORK FOR AN OPEN SOURCE GEOSPATIAL CERTIFICATION MODEL

    Directory of Open Access Journals (Sweden)

    T. U. R. Khan

    2016-06-01

    Full Text Available The geospatial industry is forecasted to have enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in the political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have an increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission “Making geospatial education and opportunities accessible to all”. Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, i.e., GIS Certification Institute, GeoAcademy, ASPRS, and software vendors, i.e., Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, i.e., NCGIA Core Curriculum, URISA Body Of Knowledge, USGIF Essential Body Of Knowledge, the “Geographic Information: Need to Know", currently under development, and the Geospatial Technology Competency Model (GTCM. The latter provides a US American oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and influenced essentially the framework of certification. In addition to the theoretical analysis of existing resources, the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and

  4. a Framework for AN Open Source Geospatial Certification Model

    Science.gov (United States)

    Khan, T. U. R.; Davis, P.; Behr, F.-J.

    2016-06-01

    The geospatial industry is forecasted to have enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in the political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have an increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, i.e., GIS Certification Institute, GeoAcademy, ASPRS, and software vendors, i.e., Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, i.e., NCGIA Core Curriculum, URISA Body Of Knowledge, USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know", currently under development, and the Geospatial Technology Competency Model (GTCM). The latter provides a US American oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and influenced essentially the framework of certification. In addition to the theoretical analysis of existing resources, the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and evaluated with 105

  5. Deformation analysis of polymers composites: rheological model involving time-based fractional derivative

    DEFF Research Database (Denmark)

    Zhou, H. W.; Yi, H. Y.; Mishnaevsky, Leon

    2017-01-01

    A modeling approach to the time-dependent properties of Glass Fiber Reinforced Polymer (GFRP) composites is of special interest for the quantitative description of long-term behavior. An electronic creep machine is employed to investigate the time-dependent deformation of four specimens of dog-bone-shaped GFRP composites at various stress levels. A negative exponent function based on structural changes is introduced to describe the damage evolution of material properties during the creep tests. Accordingly, a new creep constitutive equation, referred to as the fractional derivative Maxwell model, is proposed. The predictions made by the fractional derivative Maxwell model proposed in the paper are in good agreement with the experimental data. It is shown that the new creep constitutive model needs few parameters to represent various time-dependent behaviors.
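
    For reference, a fractional derivative Maxwell model, a spring of modulus E in series with a springpot σ = η_α D^α ε, admits the closed-form creep response below under constant stress σ₀ (Caputo derivative assumed; the notation is generic and may differ from the paper's parameterization):

```latex
\sigma + \frac{\eta_\alpha}{E}\,D^{\alpha}\sigma = \eta_\alpha\,D^{\alpha}\varepsilon,
\qquad
\varepsilon(t) = \sigma_0\left(\frac{1}{E} + \frac{t^{\alpha}}{\eta_\alpha\,\Gamma(1+\alpha)}\right),
\quad 0 < \alpha \le 1,
```

    which reduces to the classical Maxwell creep line for α = 1 and needs only the three parameters E, η_α and α.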

  6. The Design of a Fire Source in Scale-Model Experiments with Smoke Ventilation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Brohus, Henrik; la Cour-Harbo, H.

    2004-01-01

    The paper describes the design of a fire and a smoke source for scale-model experiments with smoke ventilation. It is only possible to work with scale-model experiments where the Reynolds number is reduced compared to full scale, and it is demonstrated that special attention to the fire source (heat and smoke source) may improve the possibility of obtaining Reynolds number independent solutions with a fully developed flow. The paper shows scale-model experiments for the Ofenegg tunnel case. Design of a fire source for experiments with smoke ventilation in a large room and smoke movement

  7. A simulation-based analytic model of radio galaxies

    Science.gov (United States)

    Hardcastle, M. J.

    2018-04-01

    I derive and discuss a simple semi-analytical model of the evolution of powerful radio galaxies which is not based on assumptions of self-similar growth, but rather implements some insights about the dynamics and energetics of these systems derived from numerical simulations, and can be applied to arbitrary pressure/density profiles of the host environment. The model can qualitatively and quantitatively reproduce the source dynamics and synchrotron light curves derived from numerical modelling. Approximate corrections for radiative and adiabatic losses allow it to predict the evolution of radio spectral index and of inverse-Compton emission both for active and 'remnant' sources after the jet has turned off. Code to implement the model is publicly available. Using a standard model with a light relativistic (electron-positron) jet, subequipartition magnetic fields, and a range of realistic group/cluster environments, I simulate populations of sources and show that the model can reproduce the range of properties of powerful radio sources as well as observed trends in the relationship between jet power and radio luminosity, and predicts their dependence on redshift and environment. I show that the distribution of source lifetimes has a significant effect on both the source length distribution and the fraction of remnant sources expected in observations, and so can in principle be constrained by observations. The remnant fraction is expected to be low even at low redshift and low observing frequency due to the rapid luminosity evolution of remnants, and to tend rapidly to zero at high redshift due to inverse-Compton losses.

  8. A model for the derivation of new transport limits for non-fixed contamination

    International Nuclear Information System (INIS)

    Thierfeldt, S.; Lorenz, B.; Hesse, J.

    2004-01-01

    The IAEA Regulations for the Safe Transport of Radioactive Material contain requirements for contamination limits on packages and conveyances used for the transport of radioactive material. Current contamination limits for packages and conveyances under routine transport conditions have been derived from a model proposed by Fairbairn more than 40 years ago. This model has proven effective if used with pragmatism, but is based on very conservative as well as extremely simple assumptions which are no longer appropriate and which are not compatible with ICRP recommendations regarding radiation protection standards. Therefore, a new model has now been developed which reflects all steps of the transport process. The derivation of this model has been fostered by the IAEA through a Co-ordinated Research Project. The results of the calculations using this model could be directly applied as new nuclide-specific transport limits for non-fixed contamination

  9. A model for the derivation of new transport limits for non-fixed contamination

    Energy Technology Data Exchange (ETDEWEB)

    Thierfeldt, S. [Brenk Systemplanung GmbH, Aachen (Germany); Lorenz, B. [GNS Gesellschaft fuer Nuklearservice, Essen (Germany); Hesse, J. [RWE Power AG, Essen (Germany)

    2004-07-01

    The IAEA Regulations for the Safe Transport of Radioactive Material contain requirements for contamination limits on packages and conveyances used for the transport of radioactive material. Current contamination limits for packages and conveyances under routine transport conditions have been derived from a model proposed by Fairbairn more than 40 years ago. This model has proven effective if used with pragmatism, but is based on very conservative as well as extremely simple assumptions which are no longer appropriate and which are not compatible with ICRP recommendations regarding radiation protection standards. Therefore, a new model has now been developed which reflects all steps of the transport process. The derivation of this model has been fostered by the IAEA through a Co-ordinated Research Project. The results of the calculations using this model could be directly applied as new nuclide-specific transport limits for non-fixed contamination.

  10. Dynamic PET of human liver inflammation: impact of kinetic modeling with optimization-derived dual-blood input function.

    Science.gov (United States)

    Wang, Guobao; Corwin, Michael T; Olson, Kristin A; Badawi, Ramsey D; Sarkar, Souvik

    2018-05-30

    The hallmark of nonalcoholic steatohepatitis is hepatocellular inflammation and injury in the setting of hepatic steatosis. Recent work has indicated that dynamic 18F-FDG PET with kinetic modeling has the potential to assess hepatic inflammation noninvasively, while static FDG-PET did not show promise. Because the liver has dual blood supplies, kinetic modeling of dynamic liver PET data is challenging in human studies. The objective of this study is to evaluate and identify a dual-input kinetic modeling approach for dynamic FDG-PET of human liver inflammation. Fourteen human patients with nonalcoholic fatty liver disease were included in the study. Each patient underwent a one-hour dynamic FDG-PET/CT scan and had a liver biopsy within six weeks. Three models were tested for kinetic analysis: the traditional two-tissue compartmental model with an image-derived single-blood input function (SBIF), a model with population-based dual-blood input function (DBIF), and a modified model with optimization-derived DBIF through a joint estimation framework. The three models were compared using the Akaike information criterion (AIC), F test and histopathologic inflammation reference. The results showed that the optimization-derived DBIF model improved the fitting of liver time activity curves and achieved lower AIC values and higher F values than the SBIF and population-based DBIF models in all patients. The optimization-derived model significantly increased FDG K1 estimates by 101% and 27% as compared with traditional SBIF and population-based DBIF. K1 by the optimization-derived model was significantly associated with histopathologic grades of liver inflammation while the other two models did not provide statistical significance. In conclusion, modeling of DBIF is critical for kinetic analysis of dynamic liver FDG-PET data in human studies. The optimization-derived DBIF model is more appropriate than SBIF and population-based DBIF for dynamic FDG-PET of liver inflammation. © 2018
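
    The model-comparison statistics used here are straightforward to compute from least-squares fit residuals. The sketch below shows a generic AIC and nested-model F test; the residual sums of squares, frame count and parameter counts are placeholders, not values from the patient data.

```python
import numpy as np
from scipy.stats import f as f_dist

def aic(rss, n, k):
    """AIC for least-squares fits: n*ln(RSS/n) + 2k (constant terms dropped)."""
    return n * np.log(rss / n) + 2 * k

def f_test(rss_simple, k_simple, rss_full, k_full, n):
    """F test comparing nested models; returns (F, p-value)."""
    num = (rss_simple - rss_full) / (k_full - k_simple)
    den = rss_full / (n - k_full)
    F = num / den
    p = f_dist.sf(F, k_full - k_simple, n - k_full)
    return F, p

# Hypothetical fit of one liver time-activity curve (n frames, RSS per model)
n = 49
print("AIC, simpler input-function model:", aic(rss=120.0, n=n, k=5))
print("AIC, dual-input model:            ", aic(rss=80.0, n=n, k=7))
print("F test:", f_test(120.0, 5, 80.0, 7, n))
```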

  11. Deriving habitat models for northern long-eared bats from historical detection data: A case study using the Fernow Experimental Forest

    Science.gov (United States)

    Ford, W. Mark; Silvis, Alexander; Rodrigue, Jane L.; Kniowski, Andrew B.; Johnson, Joshua B.

    2016-01-01

    The listing of the northern long-eared bat (Myotis septentrionalis) as federally threatened under the Endangered Species Act following severe population declines from white-nose syndrome presents considerable challenges to natural resource managers. Because the northern long-eared bat is a forest habitat generalist, development of effective conservation measures will depend on appropriate understanding of its habitat relationships at individual locations. However, severely reduced population sizes make gathering data for such models difficult. As a result, historical data may be essential in development of habitat models. To date, there has been little evaluation of how effective historical bat presence data, such as data derived from mist-net captures, acoustic detection, and day-roost locations, may be in developing habitat models, nor is it clear how models created using different data sources may differ. We explored this issue by creating presence probability models for the northern long-eared bat on the Fernow Experimental Forest in the central Appalachian Mountains of West Virginia using a historical, presence-only data set. Each presence data type produced outputs that were dissimilar but that still corresponded with known traits of the northern long-eared bat or are easily explained in the context of the particular data collection protocol. However, our results also highlight potential limitations of individual data types. For example, models from mist-net capture data only showed high probability of presence along the dendritic network of riparian areas, an obvious artifact of sampling methodology. Development of ecological niche and presence models for northern long-eared bat populations could be highly valuable for resource managers going forward with this species. We caution, however, that efforts to create such models should consider the substantial limitations of models derived from historical data, and address model assumptions.

  12. Use of a probabilistic PBPK/PD model to calculate Data Derived Extrapolation Factors for chlorpyrifos.

    Science.gov (United States)

    Poet, Torka S; Timchalk, Charles; Bartels, Michael J; Smith, Jordan N; McDougal, Robin; Juberg, Daland R; Price, Paul S

    2017-06-01

    A physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) model combined with Monte Carlo analysis of inter-individual variation was used to assess the effects of the insecticide chlorpyrifos and its active metabolite, chlorpyrifos-oxon, in humans. The PBPK/PD model has previously been validated and used to describe physiological changes in typical individuals as they grow from birth to adulthood. This model was updated to include physiological and metabolic changes that occur with pregnancy. The model was then used to assess the impact of inter-individual variability in physiology and biochemistry on predictions of internal dose metrics and to quantitatively assess the impact of major sources of parameter uncertainty and biological diversity on the pharmacodynamics of red blood cell acetylcholinesterase inhibition. These metrics were determined in potentially sensitive populations of infants, adult women, pregnant women, and a combined population of adult men and women. The parameters primarily responsible for inter-individual variation in RBC acetylcholinesterase inhibition were related to the metabolic clearance of CPF and CPF-oxon. Data Derived Extrapolation Factors that address intra-species physiology and biochemistry to replace uncertainty factors with quantitative differences in metrics were developed in these same populations. The DDEFs were less than 4 for all populations. These data and modeling approach will be useful in ongoing and future human health risk assessments for CPF and could be used for other chemicals with potential human exposure. Copyright © 2017 Elsevier Inc. All rights reserved.
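
    The ratio logic behind a data-derived extrapolation factor for human variability can be sketched with a short Monte Carlo: sample inter-individual metabolic clearance, propagate it to an internal dose metric, and compare a sensitive percentile with the population median. The lognormal spread and the inverse-clearance surrogate below are assumptions for illustration, not the calibrated PBPK/PD model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo over inter-individual variation in metabolic clearance.
# The internal dose metric is taken inversely proportional to clearance
# (an illustrative surrogate for the model's AChE-inhibition metric).
n = 100_000
clearance = rng.lognormal(mean=np.log(1.0), sigma=0.4, size=n)  # relative units
dose_metric = 1.0 / clearance

# DDEF for human variability: sensitive individual vs population median
ddef = np.percentile(dose_metric, 99) / np.median(dose_metric)
print(f"data-derived extrapolation factor (99th pct / median): {ddef:.2f}")
```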

  13. Modeling chemotherapeutic neurotoxicity with human induced pluripotent stem cell-derived neuronal cells.

    Directory of Open Access Journals (Sweden)

    Heather E Wheeler

    Full Text Available There are no effective agents to prevent or treat chemotherapy-induced peripheral neuropathy (CIPN), the most common non-hematologic toxicity of chemotherapy. Therefore, we sought to evaluate the utility of human neuron-like cells derived from induced pluripotent stem cells (iPSCs) as a means to study CIPN. We used high content imaging measurements of neurite outgrowth phenotypes to compare the changes that occur to iPSC-derived neuronal cells among drugs and among individuals in response to several classes of chemotherapeutics. Upon treatment of these neuronal cells with the neurotoxic drug paclitaxel, vincristine or cisplatin, we identified significant differences in five morphological phenotypes among drugs, including total outgrowth, mean/median/maximum process length, and mean outgrowth intensity (P < 0.05). The differences in damage among drugs reflect differences in their mechanisms of action and clinical CIPN manifestations. We show the potential of the model for gene perturbation studies by demonstrating that decreased expression of TUBB2A results in significantly increased sensitivity of neurons to paclitaxel (0.23 ± 0.06 decrease in total neurite outgrowth, P = 0.011). The variance in several neurite outgrowth and apoptotic phenotypes upon treatment with one of the neurotoxic drugs is significantly greater between than within neurons derived from four different individuals (P < 0.05), demonstrating the potential of iPSC-derived neurons as a genetically diverse model for CIPN. The human neuron model will allow both for mechanistic studies of specific genes and genetic variants discovered in clinical studies and for screening of new drugs to prevent or treat CIPN.

  14. Conference Innovations in Derivatives Market : Fixed Income Modeling, Valuation Adjustments, Risk Management, and Regulation

    CERN Document Server

    Grbac, Zorana; Scherer, Matthias; Zagst, Rudi

    2016-01-01

    This book presents 20 peer-reviewed chapters on current aspects of derivatives markets and derivative pricing. The contributions, written by leading researchers in the field as well as experienced authors from the financial industry, present the state of the art in: • Modeling counterparty credit risk: credit valuation adjustment, debit valuation adjustment, funding valuation adjustment, and wrong way risk. • Pricing and hedging in fixed-income markets and multi-curve interest-rate modeling. • Recent developments concerning contingent convertible bonds, the measuring of basis spreads, and the modeling of implied correlations. The recent financial crisis has cast tremendous doubts on the classical view on derivative pricing. Now, counterparty credit risk and liquidity issues are integral aspects of a prudent valuation procedure and the reference interest rates are represented by a multitude of curves according to their different periods and maturities. A panel discussion included in the book (featuring D...

  15. Application of air pollution dispersion modeling for source-contribution assessment and model performance evaluation at integrated industrial estate-Pantnagar

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, T., E-mail: tirthankaronline@gmail.com [Department of Environmental Science, G.B. Pant University of Agriculture and Technology, Pantnagar, U.S. Nagar, Uttarakhand 263 145 (India); Barman, S.C., E-mail: scbarman@yahoo.com [Department of Environmental Monitoring, Indian Institute of Toxicology Research, Post Box No. 80, Mahatma Gandhi Marg, Lucknow-226 001, Uttar Pradesh (India); Srivastava, R.K., E-mail: rajeevsrivastava08@gmail.com [Department of Environmental Science, G.B. Pant University of Agriculture and Technology, Pantnagar, U.S. Nagar, Uttarakhand 263 145 (India)

    2011-04-15

    Source-contribution assessment of ambient NO₂ concentration was performed at Pantnagar, India through simulation of two urban mathematical dispersive models, namely the Gaussian Finite Line Source Model (GFLSM) and the Industrial Source Complex Model (ISCST-3), and model performances were evaluated. The principal approaches were development of a comprehensive emission inventory, monitoring of traffic density and regional air quality and, conclusively, simulation of the urban dispersive models. Initially, 18 industries were found responsible for emission of 39.11 kg/h of NO₂ through 43 elevated stacks. Further, the vehicular emission potential in terms of NO₂ was computed as 7.1 kg/h. Air quality monitoring delineates an annual average NO₂ concentration of 32.6 µg/m³. Finally, GFLSM and ISCST-3 were simulated in conjunction with the developed emission inventories and existing meteorological conditions. Model simulations indicated that the contribution of NO₂ from industrial and vehicular sources was in the range of 45-70% and 9-39%, respectively. Further, statistical analysis revealed satisfactory model performance with an aggregate accuracy of 61.9%. - Research highlights: > Application of dispersion modeling for source-contribution assessment of ambient NO₂. > Inventorization revealed emission from industry and vehicles was 39.11 and 7.1 kg/h. > GFLSM revealed that vehicular pollution contributes a range of 9.0-38.6%. > Source-contribution of 45-70% was found for industrial emission through ISCST-3. > Aggregate performance of both models shows good agreement with an accuracy of 61.9%. - Development of industrial and vehicular inventory in terms of ambient NO₂ for model simulation at Pantnagar, India and model validation revealed satisfactory outcome.
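
    The Gaussian framework shared by GFLSM and ISCST-3 reduces, for a single elevated point source, to the standard plume equation with ground reflection. The sketch below hard-codes the dispersion coefficients instead of deriving them from stability classes, and all numbers are illustrative.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h_stack, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume concentration (g/m^3).

    q: emission rate (g/s), u: wind speed (m/s), h_stack: effective
    stack height (m); sigma_y/sigma_z would normally come from
    stability-class curves but are fixed here for illustration.
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h_stack)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h_stack)**2 / (2 * sigma_z**2)))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# NO2 at ground level on the plume centreline, illustrative numbers
c = gaussian_plume(q=10.0, u=3.0, y=0.0, z=0.0, h_stack=30.0,
                   sigma_y=60.0, sigma_z=30.0)
print(f"{c * 1e6:.1f} ug/m3")
```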

  16. Near-Source Modeling Updates: Building Downwash & Near-Road

    Science.gov (United States)

    The presentation describes recent research efforts in near-source model development focusing on building downwash and near-road barriers. The building downwash section summarizes a recent wind tunnel study, ongoing computational fluid dynamics simulations and efforts to improve ...

  17. Intermediate modeling between kinetic equations and hydrodynamic limits: derivation, analysis and simulations

    International Nuclear Information System (INIS)

    Parisot, M.

    2011-01-01

    This work is dedicated to the study of a problem arising from plasma physics: the thermal transfer of electrons in a plasma close to Maxwellian equilibrium. Firstly, a dimensional study of the Vlasov-Fokker-Planck-Maxwell system is performed, allowing, on the one hand, identification of a physically relevant scale parameter and, on the other, a mathematical definition of the contours of the validity domain. The asymptotic regime called Spitzer-Harm is studied for a relatively general class of collision operators. The following part of this work is devoted to the derivation and study of the hydrodynamic limit of the Vlasov-Maxwell-Landau system outside the strictly asymptotic regime. A model proposed by Schurtz and Nicolai is situated in this context and analyzed. The particularity of this model lies in the application of a delocalization operation to the heat flux. The link with the non-local models of Luciani and Mora is established, as well as mathematical properties such as the maximum principle and entropy dissipation. Then a formal derivation from the Vlasov equations with a simplified collision operator is proposed. The derivation, inspired by the recent work of D. Levermore, involves decomposition methods based on spherical harmonics and closure methods known as diffusion methods. A hierarchy of intermediate models between the kinetic equations and the hydrodynamic limit is described. In particular, a new hydrodynamic system, integro-differential in nature, is proposed. The Schurtz and Nicolai model appears as a simplification of the system resulting from the derivation, assuming a steady flow of heat. The above results are then generalized to account for the internal energy dependence which appears naturally in the establishment of the equations. The existence and uniqueness of the solution of the nonstationary system are established in a simplified framework. The last part is devoted to the implementation of a specific numerical scheme to solve these models. We propose a finite volume approach which can be

  18. Micro-seismic imaging using a source function independent full waveform inversion method

    Science.gov (United States)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-03-01

    At the heart of micro-seismic event measurement is the task of estimating the locations of micro-seismic events as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional micro-seismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, like those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
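
    A common way to write the convolution-based, source-independent misfit described above is to convolve each modeled trace with a reference observed trace and vice versa, so the unknown wavelet enters both terms as a common factor and cancels; a generic form (notation mine, not necessarily the paper's) is

```latex
J(m) = \tfrac{1}{2}\sum_{i}\bigl\lVert\, d_i^{\mathrm{cal}}(m) \ast d_{\mathrm{ref}}^{\mathrm{obs}} - d_i^{\mathrm{obs}} \ast d_{\mathrm{ref}}^{\mathrm{cal}}(m) \,\bigr\rVert_2^{2},
```

    where ∗ denotes time convolution and d_ref is a chosen reference trace; the adjoint-state method then supplies gradients of J with respect to the source image and the velocity model.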

  19. Micro-seismic imaging using a source function independent full waveform inversion method

    KAUST Repository

    Wang, Hanchen

    2018-03-26

    At the heart of micro-seismic event measurement is the task of estimating the locations of micro-seismic events as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional micro-seismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, like those corresponding to the Marmousi model and the SEG/EAGE overthrust model.

  20. The Effects of Ambient Conditions on Helicopter Rotor Source Noise Modeling

    Science.gov (United States)

    Schmitz, Frederic H.; Greenwood, Eric

    2011-01-01

    A new physics-based method called Fundamental Rotorcraft Acoustic Modeling from Experiments (FRAME) is used to demonstrate the change in rotor harmonic noise of a helicopter operating at different ambient conditions. FRAME is based upon a non-dimensional representation of the governing acoustic and performance equations of a single rotor helicopter. Measured external noise is used together with parameter identification techniques to develop a model of helicopter external noise that is a hybrid between theory and experiment. The FRAME method is used to evaluate the main rotor harmonic noise of a Bell 206B3 helicopter operating at different altitudes. The variation with altitude of Blade-Vortex Interaction (BVI) noise, known to be a strong function of the helicopter's advance ratio, is dependent upon which definition of airspeed is flown by the pilot. If normal flight procedures are followed and indicated airspeed (IAS) is held constant, the true airspeed (TAS) of the helicopter increases with altitude. This causes an increase in advance ratio and a decrease in the speed of sound which results in large changes to BVI noise levels. Results also show that thickness noise on this helicopter becomes more intense at high altitudes where advancing tip Mach number increases because the speed of sound is decreasing and advance ratio increasing for the same indicated airspeed. These results suggest that existing measurement-based empirically derived helicopter rotor noise source models may give incorrect noise estimates when they are used at conditions where data were not measured and may need to be corrected for mission land-use planning purposes.
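
    The airspeed effect is straightforward to quantify: at constant indicated airspeed, true airspeed grows roughly with the inverse square root of the density ratio, raising the advance ratio, while the falling temperature lowers the speed of sound and raises the advancing-tip Mach number. The sketch below uses the ISA troposphere and an illustrative light-helicopter tip speed; the numbers are not official Bell 206B3 specifications.

```python
import numpy as np

def isa(h_m):
    """ISA troposphere: returns (density kg/m^3, speed of sound m/s)."""
    T0, p0, L, R, g, gamma = 288.15, 101325.0, 0.0065, 287.05, 9.80665, 1.4
    T = T0 - L * h_m
    p = p0 * (T / T0) ** (g / (R * L))
    return p / (R * T), np.sqrt(gamma * R * T)

# Illustrative rotor values (assumed, not official specifications)
omega_R = 213.0      # main rotor tip speed, m/s
ias = 52.0           # indicated airspeed held by the pilot, m/s (~100 kt)

rho0, _ = isa(0.0)
for h in (0.0, 1500.0, 3000.0):
    rho, a = isa(h)
    tas = ias * np.sqrt(rho0 / rho)    # TAS grows with altitude at fixed IAS
    mu = tas / omega_R                 # advance ratio
    m_at = (omega_R + tas) / a         # advancing-tip Mach number
    print(f"h={h:5.0f} m: TAS={tas:5.1f} m/s  mu={mu:.3f}  M_at={m_at:.3f}")
```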

  1. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  2. A Derivation of Source-based Kinetics Equation with Time Dependent Fission Kernel for Reactor Transient Analyses

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Woo, Myeong Hyun; Shin, Chang Ho; Pyeon, Cheol Ho

    2015-01-01

    In this study, a new balance equation to overcome the problems generated by the previous methods is proposed using a source-based balance equation. A simple problem is then analyzed with the proposed method. In this study, a source-based balance equation with a time dependent fission kernel was derived to simplify the kinetics equation. To analyze partial variations of reactor characteristics, two representative methods were introduced in previous studies: (1) the quasi-statics method and (2) the multipoint technique. The main idea of the quasi-statics method is to use a low-order approximation for large integration times. To realize the quasi-statics method, the time dependent flux is first separated into shape and amplitude functions, and the shape function is calculated. It is noted that the method has good accuracy; however, it can be computationally expensive because the shape function should be fully recalculated to obtain accurate results. To improve the calculation efficiency, the multipoint method was proposed. The multipoint method is based on the classic kinetics equation, using Green's function to analyze the flight probability from region r' to r. Those previous methods have been used for reactor kinetics analysis; however, they can have some limitations. First, three group variables (r_g, E_g, t_g) should be considered to solve the time dependent balance equation, which strongly limits the application to large-system problems with good accuracy. Second, energy-group neutrons should be used to analyze reactor kinetics problems. In time dependent problems, the neutron energy distribution can change over time; this affects the group cross sections and can therefore lead to accuracy problems. Third, the neutrons in a space-time region continually affect the other space-time regions; however, this is not properly considered in the previous methods. Using birth history of the neutron sources

  3. Derivation of the mean annual water-energy balance model based on an Ohms-type law

    Science.gov (United States)

    Li, X.; Shan, X.; Yang, H.

    2017-12-01

    The Budyko hypothesis is used to describe the partitioning of water and energy. Many empirical and analytical solutions have been proposed to evaluate the general solution, which can be described as E/P = F(E0/P, c), where c is a parameter. Previous studies have given a derivation of the Mezentsev-Choudhury-Yang (MCY) model based on dimensional analysis and mathematical reasoning, but with little hydrological process involved. Thus, further hydrological meaning is limited to the boundary conditions, which are difficult to explore. Note that the hydrologic cycle is always forced by energy conversions and atmospheric transport, and there is a parallel between electric circuits and atmospheric motions; therefore, we try to give a new derivation of the MCY model from a conceptual model that considers hydrologic fluxes and atmospheric motions. Here an analogy between Ohm's law and the atmospheric cycle is used to describe the partitioning of water on long-term timescales. The MCY model is then derived in a new form, based on more physical explanation than the mathematical reasoning proposed in previous studies. The implications of this derivation are also explored.
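
    For reference, the MCY solution recovered by such derivations can be written with a single shape parameter n, and the physical limits are easy to check (E → min(P, E₀) as n → ∞):

```latex
\frac{E}{P} = \frac{E_0/P}{\bigl[1 + (E_0/P)^{n}\bigr]^{1/n}}
\quad\Longleftrightarrow\quad
E = \frac{P\,E_0}{\bigl(P^{n} + E_0^{n}\bigr)^{1/n}} .
```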

  4. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    Science.gov (United States)

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  5. A new visco-elasto-plastic model via time-space fractional derivative

    Science.gov (United States)

    Hei, X.; Chen, W.; Pang, G.; Xiao, R.; Zhang, C.

    2018-02-01

    To characterize the visco-elasto-plastic behavior of metals and alloys, we propose a new constitutive equation based on a time-space fractional derivative. The rheological representation of the model is analogous to that of the Bingham-Maxwell model, with the dashpot element and sliding friction element replaced by the corresponding fractional elements. The model is applied to describe the constant strain rate, stress relaxation and creep tests of different metals and alloys. The results suggest that the proposed simple model can describe the main characteristics of the experimental observations. More importantly, the model can also provide more accurate predictions than the classic Bingham-Maxwell model and the Bingham-Norton model.

  6. Optical linear algebra processors - Noise and error-source modeling

    Science.gov (United States)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  7. Optical linear algebra processors: noise and error-source modeling.

    Science.gov (United States)

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  8. Environmental Indicator Principium with Case References to Agricultural Soil, Water, and Air Quality and Model-Derived Indicators.

    Science.gov (United States)

    Zhang, T Q; Zheng, Z M; Lal, R; Lin, Z Q; Sharpley, A N; Shober, A L; Smith, D; Tan, C S; Van Cappellen, P

    2018-03-01

    Environmental indicators are powerful tools for tracking environmental changes, measuring environmental performance, and informing policymakers. Many diverse environmental indicators, including agricultural environmental indicators, are currently in use or being developed. This special collection of technical papers expands on the peer-reviewed literature on environmental indicators and their application to important current issues in the following areas: (i) model-derived indicators to indicate phosphorus losses from arable land to surface runoff and subsurface drainage, (ii) glutathione-ascorbate cycle-related antioxidants as early-warning bioindicators of polybrominated diphenyl ether toxicity in mangroves, and (iii) assessing the effectiveness of using organic matrix biobeds to limit herbicide dissipation from agricultural fields, thereby controlling on-farm point-source pollution. This introductory review also provides an overview of environmental indicators, mainly for agriculture, with examples related to the quality of the agricultural soil-water-air continuum and the application of model-derived indicators. Current knowledge gaps and future lines of investigation are also discussed. It appears that environmental indicators, particularly those for agriculture, work efficiently at the field, catchment, and local scales and serve as valuable metrics of system functioning and response; however, these indicators need to be refined or further developed to comprehensively meet community expectations in terms of providing a consistent picture of relevant issues and/or allowing comparisons to be made nationally or internationally. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  9. On butterfly effect in higher derivative gravities

    Energy Technology Data Exchange (ETDEWEB)

    Alishahiha, Mohsen [School of Physics, Institute for Research in Fundamental Sciences (IPM),P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); Davody, Ali; Naseh, Ali; Taghavi, Seyed Farid [School of Particles and Accelerators, Institute for Research in Fundamental Sciences (IPM),P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)

    2016-11-07

    We study the butterfly effect in D-dimensional gravitational theories containing terms quadratic in the Ricci scalar and Ricci tensor. One observes that, due to higher-order derivatives in the corresponding equations of motion, there are two butterfly velocities. The velocities are determined by the dimension of operators whose sources are provided by the metric. The three-dimensional TMG model is also studied, where we find two butterfly velocities at a generic point of the moduli space of parameters. At the critical point the two velocities coincide.

  10. On butterfly effect in higher derivative gravities

    International Nuclear Information System (INIS)

    Alishahiha, Mohsen; Davody, Ali; Naseh, Ali; Taghavi, Seyed Farid

    2016-01-01

    We study the butterfly effect in D-dimensional gravitational theories containing terms quadratic in the Ricci scalar and Ricci tensor. One observes that, due to higher-order derivatives in the corresponding equations of motion, there are two butterfly velocities. The velocities are determined by the dimension of operators whose sources are provided by the metric. The three-dimensional TMG model is also studied, where we find two butterfly velocities at a generic point of the moduli space of parameters. At the critical point the two velocities coincide.

  11. Open Sourcing Social Change: Inside the Constellation Model

    Directory of Open Access Journals (Sweden)

    Tonya Surman

    2008-09-01

    Full Text Available The constellation model was developed by and for the Canadian Partnership for Children's Health and the Environment. The model offers an innovative approach to organizing collaborative efforts in the social mission sector and shares various elements of the open source model. It emphasizes self-organizing and concrete action within a network of partner organizations working on a common issue. Constellations are self-organizing action teams that operate within the broader strategic vision of a partnership. These constellations are outwardly focused, placing their attention on creating value for those in the external environment rather than on the partnership itself. While serious effort is invested into core partnership governance and management, most of the energy is devoted to the decision making, resources and collaborative effort required to create social value. The constellations drive and define the partnership. The constellation model emerged from a deep understanding of the power of networks and peer production. Leadership rotates fluidly amongst partners, with each partner having the freedom to head up a constellation and to participate in constellations that carry out activities that are of more peripheral interest. The Internet provided the platform, the partner network enabled the expertise to align itself, and the goal of reducing chemical exposure in children kept the energy flowing. Building on seven years of experience, this article provides an overview of the constellation model, discusses the results from the CPCHE, and identifies similarities and differences between the constellation and open source models.

  12. A spherical model for the transient x-ray source A0620-00

    International Nuclear Information System (INIS)

    Dilworth, C.; Maraschi, L.; Perola, G.C.

    1977-01-01

    The continuum spectrum of the transient X-ray source A0620-00, from infrared to X-ray frequencies, is interpreted as emission from a uniform spherical cloud of hot gas in which the free-free spectrum is modified by Thomson scattering. On this basis, the radius and density of the cloud, and the distance of the source, are derived. The change of the spectrum with time indicates a decrease of both radius and density with decreasing luminosity. Considering the production of X-rays to be due to impulsive accretion in a low-mass binary system, these results raise the question of whether the accreting object is a white dwarf rather than a neutron star. (author)

  13. Modelling and simulation of [18F]fluoromisonidazole dynamics based on histology-derived microvessel maps

    Science.gov (United States)

    Mönnich, David; Troost, Esther G. C.; Kaanders, Johannes H. A. M.; Oyen, Wim J. G.; Alber, Markus; Thorwarth, Daniela

    2011-04-01

    Hypoxia can be assessed non-invasively by positron emission tomography (PET) using radiotracers such as [18F]fluoromisonidazole (Fmiso) accumulating in poorly oxygenated cells. Typical features of dynamic Fmiso PET data are high signal variability in the first hour after tracer administration and slow formation of a consistent contrast. The purpose of this study is to investigate whether these characteristics can be explained by the current conception of the underlying microscopic processes and to identify fundamental effects. This is achieved by modelling and simulating tissue oxygenation and tracer dynamics on the microscopic scale. In simulations, vessel structures on histology-derived maps act as sources and sinks for oxygen as well as tracer molecules. Molecular distributions in the extravascular space are determined by reaction-diffusion equations, which are solved numerically using a two-dimensional finite element method. Simulated Fmiso time activity curves (TACs), though not directly comparable to PET TACs, reproduce major characteristics of clinical curves, indicating that the microscopic model and the parameter values are adequate. Evidence for dependence of the early PET signal on the vascular fraction is found. Further, possible effects leading to late contrast formation and potential implications on the quantification of Fmiso PET data are discussed.
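
    As a rough illustration of the source/sink picture described above, the following Python fragment integrates a single reaction-diffusion equation on a regular grid, with one vessel acting as a Dirichlet source and first-order uptake elsewhere. The study itself uses a finite element method on histology-derived vessel maps, so the explicit stencil, periodic boundaries and all parameter values here are simplifying assumptions.

        import numpy as np

        nx = ny = 64                  # grid points
        h = 10e-6                     # grid spacing [m]
        D = 2e-9                      # diffusion coefficient [m^2/s]
        k_uptake = 0.05               # first-order consumption rate [1/s]
        dt = 0.2 * h**2 / (4 * D)     # stable explicit time step

        c = np.zeros((ny, nx))                  # concentration field
        vessel = np.zeros((ny, nx), dtype=bool)
        vessel[20:24, 30:34] = True             # one illustrative vessel cross-section

        for _ in range(5000):
            lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
                   np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / h**2
            c += dt * (D * lap - k_uptake * c)  # diffusion + uptake
            c[vessel] = 1.0                     # vessel clamps local concentration

        print(c.max(), c.mean())   # steep gradients away from the vessel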

  14. Model-based derivation, analysis and control of unstable microaerobic steady-states--considering Rhodospirillum rubrum as an example.

    Science.gov (United States)

    Carius, Lisa; Rumschinski, Philipp; Faulwasser, Timm; Flockerzi, Dietrich; Grammel, Hartmut; Findeisen, Rolf

    2014-04-01

    Microaerobic (oxygen-limited) conditions are critical for inducing many important microbial processes in industrial or environmental applications. At very low oxygen concentrations, however, process performance often suffers from technical limitations. Available dissolved-oxygen measurement techniques are not sensitive enough, and control techniques that can reliably handle these conditions are lacking. Recently, we proposed a microaerobic process control strategy which overcomes these restrictions and makes it possible to assess different degrees of oxygen limitation in bioreactor batch cultivations. Here, we focus on the design of a control strategy for the automation of oxygen-limited continuous cultures, using the microaerobic formation of photosynthetic membranes (PM) in Rhodospirillum rubrum as a model phenomenon. We draw upon R. rubrum since the considered phenomenon depends on the optimal availability of mixed-carbon sources, hence on boundary conditions which make the process performance challenging. Empirically assessing these specific microaerobic conditions is scarcely practicable, as such a process reacts highly sensitively to changes in the substrate composition and the oxygen availability in the culture broth. Therefore, we propose a model-based process control strategy which allows steady-states of cultures grown under these conditions to be stabilized. As designing the appropriate strategy requires detailed knowledge of the system behavior, we begin by deriving and validating an unstructured process model. This model is used to optimize the experimental conditions and to identify properties of the system which are critical for process performance. The derived model enables good process performance via the proposed optimal control strategy. In summary, the presented model-based control strategy makes it possible to access and maintain microaerobic steady-states of interest and to precisely and efficiently transfer the culture from one stable microaerobic steady-state to another.

  15. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State-of-the-art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  16. Sources of motivation, interpersonal conflict management styles, and leadership effectiveness: a structural model.

    Science.gov (United States)

    Barbuto, John E; Xu, Ye

    2006-02-01

    126 leaders and 624 employees were sampled to test the relationship between sources of motivation and conflict management styles of leaders and how these variables influence leadership effectiveness. Five sources of motivation measured by the Motivation Sources Inventory were tested: intrinsic process, instrumental, self-concept external, self-concept internal, and goal internalization. These sources of work motivation were associated with Rahim's modes of interpersonal conflict management (dominating, avoiding, obliging, compromising, and integrating) and with perceived leadership effectiveness. A structural equation model tested leaders' conflict management styles and leadership effectiveness based upon different sources of work motivation. The model explained variance for obliging (65%), dominating (79%), avoiding (76%), and compromising (68%), but explained little variance for integrating (7%). The model explained only 28% of the variance in leader effectiveness.

  17. Neonatal Transplantation Confers Maturation of PSC-Derived Cardiomyocytes Conducive to Modeling Cardiomyopathy.

    Science.gov (United States)

    Cho, Gun-Sik; Lee, Dong I; Tampakakis, Emmanouil; Murphy, Sean; Andersen, Peter; Uosaki, Hideki; Chelko, Stephen; Chakir, Khalid; Hong, Ingie; Seo, Kinya; Chen, Huei-Sheng Vincent; Chen, Xiongwen; Basso, Cristina; Houser, Steven R; Tomaselli, Gordon F; O'Rourke, Brian; Judge, Daniel P; Kass, David A; Kwon, Chulan

    2017-01-10

    Pluripotent stem cells (PSCs) offer unprecedented opportunities for disease modeling and personalized medicine. However, PSC-derived cells exhibit fetal-like characteristics and remain immature in a dish. This has emerged as a major obstacle for their application for late-onset diseases. We previously showed that there is a neonatal arrest of long-term cultured PSC-derived cardiomyocytes (PSC-CMs). Here, we demonstrate that PSC-CMs mature into adult CMs when transplanted into neonatal hearts. PSC-CMs became similar to adult CMs in morphology, structure, and function within a month of transplantation into rats. The similarity was further supported by single-cell RNA-sequencing analysis. Moreover, this in vivo maturation allowed patient-derived PSC-CMs to reveal the disease phenotype of arrhythmogenic right ventricular cardiomyopathy, which manifests predominantly in adults. This study lays a foundation for understanding human CM maturation and pathogenesis and can be instrumental in PSC-based modeling of adult heart diseases. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  18. Parsing pyrogenic polycyclic aromatic hydrocarbons: forensic chemistry, receptor models, and source control policy.

    Science.gov (United States)

    O'Reilly, Kirk T; Pietari, Jaana; Boehm, Paul D

    2014-04-01

    A realistic understanding of contaminant sources is required to set appropriate control policy. Forensic chemical methods can be powerful tools in source characterization and identification, but they require a multiple-lines-of-evidence approach. Atmospheric receptor models, such as the US Environmental Protection Agency (USEPA)'s chemical mass balance (CMB), are increasingly being used to evaluate sources of pyrogenic polycyclic aromatic hydrocarbons (PAHs) in sediments. This paper describes the assumptions underlying receptor models and discusses challenges in complying with these assumptions in practice. Given the variability within, and the similarity among, pyrogenic PAH source types, model outputs are sensitive to specific inputs, and parsing among some source types may not be possible. Although still useful for identifying potential sources, the technical specialist applying these methods must describe both the results and their inherent uncertainties in a way that is understandable to nontechnical policy makers. The authors present an example case study concerning an investigation of a class of parking-lot sealers as a significant source of PAHs in urban sediment. Principal component analysis is used to evaluate published CMB model inputs and outputs. Targeted analyses of 2 areas where bans have been implemented are included. The results do not support the claim that parking-lot sealers are a significant source of PAHs in urban sediments. © 2013 SETAC.

  19. TOXICOLOGICAL EVALUATION OF REALISTIC EMISSIONS OF SOURCE AEROSOLS (TERESA): APPLICATION TO POWER PLANT-DERIVED PM2.5

    Energy Technology Data Exchange (ETDEWEB)

    Annette C. Rohr; Petros Koutrakis; John Godleski

    2011-03-31

    Determining the health impacts of different sources and components of fine particulate matter (PM2.5) is an important scientific goal, because PM is a complex mixture of both inorganic and organic constituents that likely differ in their potential to cause adverse health outcomes. The TERESA (Toxicological Evaluation of Realistic Emissions of Source Aerosols) study focused on two PM sources - coal-fired power plants and mobile sources - and sought to investigate the toxicological effects of exposure to realistic emissions from these sources. The DOE-EPRI Cooperative Agreement covered the performance and analysis of field experiments at three power plants. The mobile source component consisted of experiments conducted at a traffic tunnel in Boston; these activities were funded through the Harvard-EPA Particulate Matter Research Center and will be reported separately in the peer-reviewed literature. TERESA attempted to delineate health effects of primary particles, secondary (aged) particles, and mixtures of these with common atmospheric constituents. The study involved withdrawal of emissions directly from power plant stacks, followed by aging and atmospheric transformation of emissions in a mobile laboratory in a manner that simulated downwind power plant plume processing. Secondary organic aerosol (SOA) derived from the biogenic volatile organic compound α-pinene was added in some experiments, and in others ammonia was added to neutralize strong acidity. Specifically, four scenarios were studied at each plant: primary particles (P); secondary (oxidized) particles (PO); oxidized particles + secondary organic aerosol (SOA) (POS); and oxidized and neutralized particles + SOA (PONS). Extensive exposure characterization was carried out, including gas-phase and particulate species. Male Sprague Dawley rats were exposed for 6 hours to filtered air or different atmospheric mixtures. Toxicological endpoints included (1) breathing pattern; (2) bronchoalveolar lavage

  20. Controlled-source seismic interferometry with one way wave fields

    Science.gov (United States)

    van der Neut, J.; Wapenaar, K.; Thorbecke, J. W.

    2008-12-01

    In Seismic Interferometry we generally cross-correlate registrations at two receiver locations and sum over an array of sources to retrieve a Green's function as if one of the receiver locations hosts a (virtual) source and the other receiver location hosts an actual receiver. One application of this concept is to redatum an area of surface sources to a downhole receiver location, without requiring information about the medium between the sources and receivers, thus providing an effective tool for imaging below complex overburden, which is also known as the Virtual Source method. We demonstrate how elastic wavefield decomposition can be effectively combined with controlled-source Seismic Interferometry to generate virtual sources in a downhole receiver array that radiate only down- or upgoing P- or S-waves with receivers sensing only down- or upgoing P- or S- waves. For this purpose we derive exact Green's matrix representations from a reciprocity theorem for decomposed wavefields. Required is the deployment of multi-component sources at the surface and multi- component receivers in a horizontal borehole. The theory is supported with a synthetic elastic model, where redatumed traces are compared with those of a directly modeled reflection response, generated by placing active sources at the virtual source locations and applying elastic wavefield decomposition on both source and receiver side.
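
    The cross-correlate-and-sum step at the heart of this approach can be illustrated in a few lines of Python; the synthetic traces, the single fixed delay and the source geometry are toy assumptions, and the full method additionally requires the elastic wavefield decomposition described above.

        import numpy as np

        rng = np.random.default_rng(0)
        n_src, n_t = 50, 512
        u_a = rng.standard_normal((n_src, n_t))   # recordings at receiver A
        u_b = np.roll(u_a, 7, axis=1)             # receiver B: 7-sample travel time

        virtual = np.zeros(2 * n_t - 1)
        for s in range(n_src):                    # sum of cross-correlations
            virtual += np.correlate(u_b[s], u_a[s], mode="full")

        lag = int(np.argmax(virtual)) - (n_t - 1)
        print(lag)   # recovers the travel time from A to B (7 samples)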

  1. Neonatal Transplantation Confers Maturation of PSC-Derived Cardiomyocytes Conducive to Modeling Cardiomyopathy

    OpenAIRE

    Cho, Gun-Sik; Lee, Dong I.; Tampakakis, Emmanouil; Murphy, Sean; Andersen, Peter; Uosaki, Hideki; Chelko, Stephen; Chakir, Khalid; Hong, Ingie; Seo, Kinya; Vincent Chen, Huei-Sheng; Chen, Xiongwen; Basso, Cristina; Houser, Steven R.; Tomaselli, Gordon F.

    2017-01-01

    Summary: Pluripotent stem cells (PSCs) offer unprecedented opportunities for disease modeling and personalized medicine. However, PSC-derived cells exhibit fetal-like characteristics and remain immature in a dish. This has emerged as a major obstacle for their application for late-onset diseases. We previously showed that there is a neonatal arrest of long-term cultured PSC-derived cardiomyocytes (PSC-CMs). Here, we demonstrate that PSC-CMs mature into adult CMs when transplanted into neonata...

  2. Precise troposphere delay model for Egypt, as derived from radiosonde data

    Directory of Open Access Journals (Sweden)

    M.A. Abdelfatah

    2015-06-01

    Real GPS data from six stations over an 8-day period were used to assess the zenith part of the PTD model against the available international models. These international models include Saastamoinen, Hopfield, and the local Egyptian dry model proposed by Mousa & El-Fiky. The data were processed using Bernese software version 5.0. The closure error results indicate that the PTD model is the best model in all sessions, but when fewer radiosonde stations are available, the accuracy of the PTD model is close to that of the classic models. As radiosonde data for all ten stations are not available for every session, it is recommended to use one of the regularization techniques for the database to overcome missing data and derive consistent tropospheric delay information.

  3. Patient Derived Xenograft Models: An Emerging Platform for Translational Cancer Research

    Science.gov (United States)

    Hidalgo, Manuel; Amant, Frederic; Biankin, Andrew V.; Budinská, Eva; Byrne, Annette T.; Caldas, Carlos; Clarke, Robert B.; de Jong, Steven; Jonkers, Jos; Mælandsmo, Gunhild Mari; Roman-Roman, Sergio; Seoane, Joan; Trusolino, Livio; Villanueva, Alberto

    2014-01-01

    Recently, there has been increasing interest in the development and characterization of patient-derived tumor xenograft (PDX) models for cancer research. PDX models mostly retain the principal histological and genetic characteristics of their donor tumor and remain stable across passages. These models have been shown to be predictive of clinical outcomes and are being used for preclinical drug evaluation, biomarker identification, biological studies, and personalized medicine strategies. This paper summarizes the current state of the art in this field, including methodological issues, available collections, practical applications, challenges and shortcomings, and future directions, and introduces a European consortium of PDX models. PMID:25185190

  4. Market segment derivation and profiling via a finite mixture model framework

    NARCIS (Netherlands)

    Wedel, M; Desarbo, WS

    The marketing literature has shown how difficult it is to profile market segments derived with finite mixture models, especially using traditional descriptor variables (e.g., demographics). Such profiling is critical for the proper implementation of segmentation strategy. We propose a new finite

  5. The role of multispectral scanners as data sources for EPA hydrologic models

    Science.gov (United States)

    Slack, R.; Hill, D.

    1982-01-01

    An estimated cost savings of 30% to 50% was realized from using LANDSAT-derived data as input into a program which simulates hydrologic and water quality processes in natural and man-made water systems. Data from the satellite were used in conjunction with EPA's 11-channel multispectral scanner to obtain maps for characterizing the distribution of turbidity plumes in Flathead Lake and to predict the effect of increasing urbanization in Montana's Flathead River Basin on the lake's trophic state. Multispectral data are also being studied as a possible source of the parameters needed to model the buffering capability of lakes in an effort to evaluate the effect of acid rain in the Adirondacks. Water quality in Lake Champlain, Vermont is being classified using data from the LANDSAT and the EPA MSS. Both contact-sensed and MSS data are being used with multivariate statistical analysis to classify the trophic status of 145 lakes in Illinois and to identify water sampling sites in Appalachicola Bay where contaminants threaten Florida's shellfish.

  6. Integrating hydrodynamic models and COSMO-SkyMed derived products for flood damage assessment

    Science.gov (United States)

    Giuffra, Flavio; Boni, Giorgio; Pulvirenti, Luca; Pierdicca, Nazzareno; Rudari, Roberto; Fiorini, Mattia

    2015-04-01

    Floods are the most frequent weather disasters in the world and probably the most costly in terms of social and economic losses. They may have a strong impact on infrastructures and health because the range of possible damages includes casualties, loss of housing and destruction of crops. Presently, the most common approach for remotely sensing floods is the use of synthetic aperture radar (SAR) images. Key features of SAR data for inundation mapping are the synoptic view, the capability to operate even in cloudy conditions and during both day and night time and the sensitivity of the microwave radiation to water. The launch of a new generation of instruments, such as TerraSAR-X and COSMO-SkyMed (CSK) allows producing near real time flood maps having a spatial resolution in the order of 1-5 m. Moreover, the present (CSK) and upcoming (Sentinel-1) constellations permit the acquisition of radar data characterized by a short revisit time (in the order of some hours for CSK), so that the production of frequent inundation maps can be envisaged. Nonetheless, gaps might be present in the SAR-derived flood maps because of the limited area imaged by SAR; moreover, the detection of floodwater may be complicated by the presence of very dense vegetation or urban settlements. Hence the need to complement SAR-derived flood maps with the outputs of physical models. Physical models allow delivering to end users very useful information for a complete flood damage assessment, such as data on water depths and flow directions, which cannot be directly derived from satellite remote sensing images. In addition, the flood extent predictions of hydraulic models can be compared to SAR-derived inundation maps to calibrate the models, or to fill the aforementioned gaps that can be present in the SAR-derived maps. Finally, physical models enable the construction of risk scenarios useful for emergency managers to take their decisions and for programming additional SAR acquisitions in order to

  7. Receptor models for source apportionment of remote aerosols in Brazil

    International Nuclear Information System (INIS)

    Artaxo Netto, P.E.

    1985-11-01

    The PIXE (particle-induced X-ray emission) and PESA (proton elastic scattering analysis) methods were used in conjunction with receptor models for source apportionment of remote aerosols in Brazil. PIXE, used to determine concentrations of elements with Z ≥ 11, has a detection limit of about 1 ng/m³. The concentrations of carbon, nitrogen and oxygen in the fine fraction of Amazon Basin aerosols were measured by PESA. We sampled in Jureia (SP), Fernando de Noronha, Arembepe (BA), Firminopolis (GO), Itaberai (GO) and the Amazon Basin. For collecting the airborne particles we used cascade impactors, stacked filter units, and streaker samplers. Three receptor models were used: chemical mass balance, stepwise multiple regression analysis and principal factor analysis. The elemental and gravimetric concentrations were explained by the models within the experimental errors. Three sources of aerosol were quantitatively distinguished: marine aerosol, soil dust and aerosols related to forests. The emission of aerosols by vegetation is very clear for all the sampling sites. In the Amazon Basin and Jureia it is the major source, responsible for 60 to 80% of airborne concentrations. (Author) [pt
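
    Of the three receptor models, the chemical mass balance reduces to a least-squares problem: ambient concentrations are modeled as a linear mixture of source profiles. The Python sketch below shows the unconstrained core of that calculation; the element set, profile values and ambient vector are invented for illustration.

        import numpy as np

        profiles = np.array([          # columns: marine, soil dust, vegetation
            [0.30, 0.02, 0.01],        # Na
            [0.01, 0.25, 0.02],        # Si
            [0.02, 0.05, 0.20],        # K
            [0.10, 0.03, 0.01],        # Cl
        ])
        ambient = np.array([0.95, 0.80, 0.70, 0.35])   # measured concentrations

        contrib, *_ = np.linalg.lstsq(profiles, ambient, rcond=None)
        print(contrib)   # estimated mass contribution of each source type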

  8. Soil-landscape modelling using fuzzy c-means clustering of attribute data derived from a Digital Elevation Model (DEM).

    NARCIS (Netherlands)

    Bruin, de S.; Stein, A.

    1998-01-01

    This study explores the use of fuzzy c-means clustering of attribute data derived from a digital elevation model to represent transition zones in the soil-landscape. The conventional geographic model used for soil-landscape description is not able to properly deal with these. Fuzzy c-means
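
    A compact Python sketch of the fuzzy c-means iteration applied to attribute data of this kind follows; the attribute matrix is random placeholder data, and the number of clusters and fuzziness exponent are conventional defaults rather than values from the study.

        import numpy as np

        def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
            """Plain-numpy fuzzy c-means: returns cluster centers and memberships."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)        # membership rows sum to one
            for _ in range(n_iter):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
                U = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        # e.g. DEM-derived attributes: elevation, slope, curvature (placeholder data)
        X = np.random.default_rng(1).random((500, 3))
        centers, U = fuzzy_c_means(X)
        print(centers)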

  9. Estimation of biogenic emissions with satellite-derived land use and land cover data for air quality modeling of Houston-Galveston ozone nonattainment area.

    Science.gov (United States)

    Byun, Daewon W; Kim, Soontae; Czader, Beata; Nowak, David; Stetson, Stephen; Estes, Mark

    2005-06-01

    The Houston-Galveston Area (HGA) is one of the most severe ozone non-attainment regions in the US. To study the effectiveness of controlling anthropogenic emissions to mitigate regional ozone nonattainment problems, it is necessary to utilize adequate datasets describing the environmental conditions that influence the photochemical reactivity of the ambient atmosphere. Compared to the anthropogenic emissions from point and mobile sources, there are large uncertainties in the locations and amounts of biogenic emissions. For regional air quality modeling applications, biogenic emissions are not directly measured but are usually estimated with meteorological data such as photosynthetically active solar radiation, surface temperature, land type, and vegetation database. In this paper, we characterize these meteorological input parameters and two different land use land cover datasets available for HGA: the conventional biogenic vegetation/land use data and satellite-derived high-resolution land cover data. We describe the procedures used for the estimation of biogenic emissions with the satellite-derived land cover data and leaf mass density information. Air quality model simulations were performed using both the original and the new biogenic emissions estimates. The results showed that there were considerable uncertainties in biogenic emissions inputs. Subsequently, ozone predictions were affected by up to 10 ppb, but the magnitudes and locations of peak ozone varied each day depending on the upwind or downwind positions of the biogenic emission sources relative to the anthropogenic NOx and VOC sources. Although the assessment had limitations such as heterogeneity in the spatial resolutions, the study highlighted the significance of biogenic emissions uncertainty on air quality predictions. However, the study did not allow extrapolation of the directional changes in air quality corresponding to the changes in LULC because the two datasets were based on vastly different

  10. The Analytical Repository Source-Term (AREST) model: Description and documentation

    International Nuclear Information System (INIS)

    Liebetrau, A.M.; Apted, M.J.; Engel, D.W.; Altenhofen, M.K.; Strachan, D.M.; Reid, C.R.; Windisch, C.F.; Erikson, R.L.; Johnson, K.I.

    1987-10-01

    The geologic repository system consists of several components, one of which is the engineered barrier system. The engineered barrier system interfaces with natural barriers that constitute the setting of the repository. A model that simulates the releases from the engineered barrier system into the natural barriers of the geosphere, called a source-term model, is an important component of any model for assessing the overall performance of the geologic repository system. The Analytical Repository Source-Term (AREST) model being developed is one such model. This report describes the current state of development of the AREST model and the code in which the model is implemented. The AREST model consists of three component models and five process models that describe the post-emplacement environment of a waste package. All of these components are combined within a probabilistic framework. The component models are a waste package containment (WPC) model that simulates the corrosion and degradation processes which eventually result in waste package containment failure; a waste package release (WPR) model that calculates the rates of radionuclide release from the failed waste package; and an engineered system release (ESR) model that controls the flow of information among all AREST components and process models and combines release output from the WPR model with failure times from the WPC model to produce estimates of total release. 167 refs., 40 figs., 12 tabs

  11. Characteristics and Source Apportionment of Marine Aerosols over East China Sea Using a Source-oriented Chemical Transport Model

    Science.gov (United States)

    Kang, M.; Zhang, H.; Fu, P.

    2017-12-01

    Marine aerosols exert a strong influence on global climate change and biogeochemical cycling, as oceans cover more than 70% of the Earth's surface. However, investigations of marine aerosols are relatively limited at present, due to the difficulty and inconvenience of sampling marine aerosols as well as their diverse sources. The East China Sea (ECS), lying over the broad shelf of the western North Pacific, is adjacent to the Asian mainland, where continental-scale air pollution can impose a heavy load on the marine atmosphere through long-range atmospheric transport. Thus, the contributions of major sources to marine aerosols need to be identified for policy makers to develop cost-effective control strategies. In this work, a source-oriented version of the Community Multiscale Air Quality (CMAQ) model, which can directly track the contributions of multiple emission sources to marine aerosols, is used to investigate the contributions of power generation, industry, transportation, residential sources, biogenic emissions and biomass burning to marine aerosols over the ECS in May and June 2014. The model simulations indicate significant spatial and temporal variations in concentrations as well as in the source contributions. This study demonstrates that the Asian continent can greatly affect the marine atmosphere through long-range transport.

  12. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Kokholm, Thomas

    to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options...... on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across...

  13. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Cont, Rama; Kokholm, Thomas

    2013-01-01

    to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options...... on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across...
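
    As a simple stand-in for the Levy-based dynamics discussed above (not the authors' model), the following Python sketch prices a European call by Monte Carlo under a Merton jump-diffusion, with the drift compensated so that the discounted asset price is a martingale; all parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        S0, K, T, r = 100.0, 100.0, 1.0, 0.02
        sigma = 0.2                         # diffusive volatility
        lam, mu_j, sig_j = 0.5, -0.1, 0.15  # jump intensity, lognormal jump law
        n_paths = 200_000

        kappa = np.exp(mu_j + 0.5 * sig_j**2) - 1.0   # E[e^J] - 1 (compensator)
        n_jumps = rng.poisson(lam * T, n_paths)
        jumps = mu_j * n_jumps + sig_j * np.sqrt(n_jumps) * rng.standard_normal(n_paths)
        z = rng.standard_normal(n_paths)
        ST = S0 * np.exp((r - lam * kappa - 0.5 * sigma**2) * T
                         + sigma * np.sqrt(T) * z + jumps)
        price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()
        print(price)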

  14. UV Stellar Distribution Model for the Derivation of Payload

    Directory of Open Access Journals (Sweden)

    Young-Jun Choi

    1999-12-01

    Full Text Available We present the results of a model calculation of the stellar distribution in a UV band centered at 2175Å, corresponding to the well-known bump in the interstellar extinction curve. The stellar distribution model used here is based on the Bahcall-Soneira galaxy model (1980). The source code for the model calculation was designed by Brosch (1991) and modified to investigate various design factors for a UV satellite payload. The model predicts UV stellar densities in different sky directions, and its results are compared with the TD-1 star counts for a number of sky regions. From this study, we can determine the field of view, size of optics, angular resolution, and number of stars in one orbit. These will provide the basic constraints in designing a satellite payload for UV observations.

  15. Kinetic parameters for source driven systems

    International Nuclear Information System (INIS)

    Dulla, S.; Ravetto, P.; Carta, M.; D'Angelo, A.

    2006-01-01

    The definition of the characteristic kinetic parameters of a subcritical source-driven system constitutes an interesting problem in reactor physics, with important consequences for practical applications. Consistent and physically meaningful values of the parameters allow accurate results to be obtained from kinetic simulation tools and kinetic experiments to be interpreted correctly. For subcritical systems, a preliminary problem arises in the adoption of a suitable weighting function to be used in the projection procedure to derive a point model. The present work illustrates a consistent factorization-projection procedure which leads to the definition of the kinetic parameters in a straightforward manner. The reactivity term is introduced coherently with the generalized perturbation theory applied to the source multiplication factor ks, which is thus given a physical role in the kinetic model. The effective prompt lifetime is introduced on the assumption that a neutron generation can be initiated by both the fission process and the source emission. Results are presented for simplified configurations, to fully comprehend the physical features, and for a more complicated, highly decoupled system treated in transport theory. (authors)
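
    The flavor of the resulting point model can be seen in a minimal Python integration of source-driven point kinetics with one delayed-neutron group; the parameter values and the forward-Euler scheme are illustrative choices, not taken from the paper.

        # Source-driven point kinetics, one delayed group (illustrative values).
        rho = -0.01                   # reactivity (subcritical)
        beta, lam = 0.0065, 0.08      # delayed fraction, precursor decay [1/s]
        Lam = 1.0e-4                  # prompt generation time [s]
        S = 1.0e5                     # external source strength

        n = 1.0                       # neutron level
        c = beta * n / (lam * Lam)    # precursors in equilibrium with n
        dt = 1.0e-3
        for _ in range(100_000):      # integrate 100 s with forward Euler
            dn = (rho - beta) / Lam * n + lam * c + S
            dc = beta / Lam * n - lam * c
            n, c = n + dt * dn, c + dt * dc
        print(n)   # settles near the source-driven level -S * Lam / rho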

  16. Modelling [CAS - CERN Accelerator School, Ion Sources, Senec (Slovakia), 29 May - 8 June 2012]

    International Nuclear Information System (INIS)

    Spädtke, P

    2013-01-01

    Modeling of technical machines has become a standard technique since computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary as well as magnetic fields caused by coils or permanent magnets have to be known. Internal sources for both fields are sometimes taken into account, such as space-charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged-particle sources are presented, together with suitable models describing their physics. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H⁻ sources), together with some remarks on beam transport. (author)

  17. Diffusion theory model for optimization calculations of cold neutron sources

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1987-01-01

    Cold neutron sources are becoming increasingly important and common experimental facilities at many research reactors around the world, due to the high utility of cold neutrons in scattering experiments. The authors describe a simple two-group diffusion model of an infinite-slab LD2 cold source. The simplicity of the model permits an analytical solution, from which one can deduce the reason for the optimum thickness based solely on diffusion-type phenomena. A second, more sophisticated model is also described, and the results are compared to a deterministic transport calculation. The good (particularly qualitative) agreement between the results suggests that diffusion theory methods can be used in parametric and optimization studies to avoid the generally more expensive transport calculations.

  18. Validation and calibration of structural models that combine information from multiple sources.

    Science.gov (United States)

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  19. A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.

    Science.gov (United States)

    Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco

    2018-01-01

    Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA is coded in the Python language and is largely based on a simplified formulation of the widely used AERMOD model. The model allows users to define, in a GIS environment, thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near-ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be completely managed in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its application to very complex test cases. The tests show that processing times are satisfactory and that the definition of sources and receptors and the output retrieval are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD. Copyright © 2017 Elsevier B.V. All rights reserved.
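
    Since CAREA is itself written in Python, a minimal Gaussian point-source kernel of the kind such models integrate over complex source areas looks roughly as follows; the power-law dispersion coefficients and all numbers are crude illustrative assumptions, not CAREA's or AERMOD's actual parameterizations.

        import numpy as np

        def plume_concentration(x, y, z, Q, u, H, a=0.08, b=0.06):
            """Ground-reflected Gaussian plume at receptor (x, y, z), x > 0 downwind.
            Q: emission rate [g/s], u: wind speed [m/s], H: effective stack height [m]."""
            sy, sz = a * x**0.9, b * x**0.85                # illustrative sigma curves
            lateral = np.exp(-y**2 / (2 * sy**2))
            vertical = (np.exp(-(z - H)**2 / (2 * sz**2)) +
                        np.exp(-(z + H)**2 / (2 * sz**2)))  # image-source reflection
            return Q / (2 * np.pi * u * sy * sz) * lateral * vertical

        print(plume_concentration(x=500.0, y=0.0, z=1.5, Q=10.0, u=3.0, H=20.0))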

  20. Particle model of a cylindrical inductively coupled ion source

    Science.gov (United States)

    Ippolito, N. D.; Taccogna, F.; Minelli, P.; Cavenago, M.; Veltri, P.

    2017-08-01

    In spite of the wide use of RF sources, a complete understanding of the mechanisms regulating the RF coupling of the plasma is still lacking, so self-consistent simulations of the physics involved are highly desirable. For this reason we are developing a 2.5D fully kinetic Particle-In-Cell Monte-Carlo-Collision (PIC-MCC) model of a cylindrical ICP-RF source, keeping the time step of the simulation small enough to resolve the plasma frequency scale. The grid cell dimension is currently about seven times larger than the average Debye length, because of the large computational demand of the code; it will be scaled down in the next phase of development. The filling gas is xenon, in order to minimize the time lost in the MCC collision module in this first stage of development. The results presented here are preliminary, with the code already showing good robustness. The final goal is the modeling of the NIO1 (Negative Ion Optimization phase 1) source, operating in Padua at Consorzio RFX.
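
    The core particle mover inside a PIC code of this kind is typically a Boris-type leapfrog push; the Python fragment below shows that rotation for a single electron in uniform fields, with placeholder field values (the actual code couples this to a field solve on the grid and to Monte Carlo collisions).

        import numpy as np

        q_m = -1.76e11                  # electron charge-to-mass ratio [C/kg]
        dt = 1e-11
        E = np.array([0.0, 1e3, 0.0])   # electric field [V/m] (placeholder)
        B = np.array([0.0, 0.0, 1e-2])  # magnetic field [T] (placeholder)

        v = np.array([1e5, 0.0, 0.0])
        x = np.zeros(3)
        for _ in range(1000):
            t = q_m * B * dt / 2                     # half E kick, rotation, half kick
            s = 2 * t / (1 + t @ t)
            v_minus = v + q_m * E * dt / 2
            v_prime = v_minus + np.cross(v_minus, t)
            v = v_minus + np.cross(v_prime, s) + q_m * E * dt / 2
            x = x + v * dt
        print(x, np.linalg.norm(v))     # gyration; |v| changes only via E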

  1. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Energy Technology Data Exchange (ETDEWEB)

    Murray, S. G.; Trott, C. M.; Jordan, C. H. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO) (Australia)

    2017-08-10

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  2. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Science.gov (United States)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
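
    To illustrate the first model component, the following Python fragment draws source flux densities from a two-segment (broken) power-law differential source count by inverse-CDF sampling; the break flux and slopes are assumptions for the sketch, and the clustering component is not modeled here.

        import numpy as np

        rng = np.random.default_rng(0)
        S_min, S_break, S_max = 1e-3, 1.0, 10.0   # flux limits and break [Jy]
        g1, g2 = -1.6, -2.5                        # slopes below/above the break

        def sample_power_law(n, lo, hi, g):
            """Inverse-CDF sampling of p(S) proportional to S^g on [lo, hi]."""
            u = rng.random(n)
            a, b = lo ** (g + 1), hi ** (g + 1)
            return (a + u * (b - a)) ** (1.0 / (g + 1))

        # weight the two segments by their integrated counts, then draw;
        # with S_break = 1 Jy the segments join continuously at the break
        w1 = (S_break ** (g1 + 1) - S_min ** (g1 + 1)) / (g1 + 1)
        w2 = (S_max ** (g2 + 1) - S_break ** (g2 + 1)) / (g2 + 1)
        n_tot = 10_000
        n1 = rng.binomial(n_tot, w1 / (w1 + w2))
        fluxes = np.concatenate([
            sample_power_law(n1, S_min, S_break, g1),
            sample_power_law(n_tot - n1, S_break, S_max, g2),
        ])
        print(fluxes.mean(), (fluxes > S_break).mean())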

  3. Monte Carlo model for a thick target T(D,n)⁴He neutron source

    International Nuclear Information System (INIS)

    Webster, W.M.

    1976-01-01

    A brief description is given of a calculational model developed to simulate a T(D,n)⁴He neutron source which is anisotropic in energy and intensity. The model also provides a means for including the time dependency of the neutron source. Although the model has been applied specifically to the Lawrence Livermore Laboratory ICT accelerator, the technique is general and can be applied to any similar neutron source.

  4. Cell sources for in vitro human liver cell culture models

    Science.gov (United States)

    Freyer, Nora; Damm, Georg; Seehofer, Daniel; Knöspel, Fanny

    2016-01-01

    In vitro liver cell culture models are gaining increasing importance in pharmacological and toxicological research. The source of cells used is critical for the relevance and the predictive value of such models. Primary human hepatocytes (PHH) are currently considered to be the gold standard for hepatic in vitro culture models, since they directly reflect the specific metabolism and functionality of the human liver; however, the scarcity and difficult logistics of PHH have driven researchers to explore alternative cell sources, including liver cell lines and pluripotent stem cells. Liver cell lines generated from hepatomas or by genetic manipulation are widely used due to their good availability, but they are generally altered in certain metabolic functions. For the past few years, adult and pluripotent stem cells have been attracting increasing attention, due their ability to proliferate and to differentiate into hepatocyte-like cells in vitro. However, controlling the differentiation of these cells is still a challenge. This review gives an overview of the major human cell sources under investigation for in vitro liver cell culture models, including primary human liver cells, liver cell lines, and stem cells. The promises and challenges of different cell types are discussed with a focus on the complex 2D and 3D culture approaches under investigation for improving liver cell functionality in vitro. Finally, the specific application options of individual cell sources in pharmacological research or disease modeling are described. PMID:27385595

  5. Source rock

    Directory of Open Access Journals (Sweden)

    Abubakr F. Makky

    2014-03-01

    Full Text Available West Beni Suef Concession is located in the western part of the Beni Suef Basin, a relatively under-explored basin lying about 150 km south of Cairo. The major goal of this study is to evaluate the source rock using different techniques such as Rock-Eval pyrolysis, vitrinite reflectance (%Ro), and well log data for some Cretaceous sequences, including the Abu Roash (E, F and G) members and the Kharita and Betty formations. The BasinMod 1D program is used in this study to construct the burial history and calculate the levels of thermal maturity of the Fayoum-1X well, based on calibration of measured %Ro and Tmax against the calculated %Ro model. The Total Organic Carbon (TOC) content calculated from well log data, compared with the TOC measured by Rock-Eval pyrolysis in the Fayoum-1X well, is shown to match for the shale source rock but gives high values for the limestone source rock. For that reason, a new model is derived from well log data to accurately calculate the TOC content for the limestone source rock in the study area. The organic matter in the Abu Roash (F) member is fair to excellent and capable of generating a significant amount of hydrocarbons (oil prone), produced from mixed type I/II kerogen. The generation potential of kerogen in the Abu Roash (E and G) members and the Betty Formation ranges from poor to fair, generating oil- and gas-prone hydrocarbons (mixed type II/III kerogen). Finally, the type III kerogen of the Kharita Formation has poor to very good generation potential and mainly produces gas. Thermal maturation based on the measured %Ro, the calculated %Ro model, Tmax and the Production Index (PI) indicates that the Abu Roash (F) member lies at the onset of oil generation, whereas the Abu Roash (E and G) members and the Kharita and Betty formations have entered the peak of oil generation.

  6. White Dwarf Model Atmospheres: Synthetic Spectra for Supersoft Sources

    Science.gov (United States)

    Rauch, Thomas

    2013-01-01

    The Tübingen NLTE Model-Atmosphere Package (TMAP) calculates fully metal-line blanketed white dwarf model atmospheres and spectral energy distributions (SEDs) at a high level of sophistication. Such SEDs are easily accessible via the German Astrophysical Virtual Observatory (GAVO) service TheoSSA. We discuss applications of TMAP models to (pre) white dwarfs during the hottest stages of their stellar evolution, e.g. in the parameter range of novae and supersoft sources.

  7. Electrical description of a magnetic pole enhanced inductively coupled plasma source: Refinement of the transformer model by reverse electromagnetic modeling

    International Nuclear Information System (INIS)

    Meziani, T.; Colpo, P.; Rossi, F.

    2006-01-01

    The magnetic pole enhanced inductively coupled source (MaPE-ICP) is an innovative low-pressure plasma source that allows for high plasma density and high plasma uniformity, as well as large-area plasma generation. This article presents an electrical characterization of this source, and the experimental measurements are compared to the results obtained after modeling the source by the equivalent circuit of the transformer. In particular, the method applied consists in performing a reverse electromagnetic modeling of the source by providing the measured plasma parameters such as plasma density and electron temperature as an input, and computing the total impedance seen at the primary of the transformer. The impedance results given by the model are compared to the experimental results. This approach allows for a more comprehensive refinement of the electrical model in order to obtain a better fitting of the results. The electrical characteristics of the system, and in particular the total impedance, were measured at the inductive coil antenna (primary of the transformer). The source was modeled electrically by a finite element method, treating the plasma as a conductive load and taking into account the complex plasma conductivity, the value of which was calculated from the electron density and electron temperature measurements carried out previously. The electrical characterization of the inductive excitation source itself versus frequency showed that the source cannot be treated as purely inductive and that the effect of parasitic capacitances must be taken into account in the model. Finally, considerations on the effect of the magnetic core addition on the capacitive component of the coupling are made
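
    The transformer equivalent circuit invoked here has a compact closed form: the plasma acts as a lossy one-turn secondary whose impedance is reflected into the primary. A back-of-envelope Python evaluation follows; every component value is an illustrative assumption, not a measurement from the paper.

        import numpy as np

        f = 13.56e6                         # drive frequency [Hz] (assumed)
        w = 2 * np.pi * f
        L_coil, R_coil = 2.0e-6, 0.3        # primary coil inductance/resistance
        L_p, R_p = 1.0e-8, 1.5              # plasma "secondary" (from conductivity)
        M = 0.4e-6                          # mutual inductance, coil <-> plasma

        Z_sec = R_p + 1j * w * L_p          # secondary branch impedance
        Z_in = R_coil + 1j * w * L_coil + (w * M) ** 2 / Z_sec   # reflected term
        print(Z_in)                         # total impedance seen at the primary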

  8. PDEPTH—A computer program for the geophysical interpretation of magnetic and gravity profiles through Fourier filtering, source-depth analysis, and forward modeling

    Science.gov (United States)

    Phillips, Jeffrey D.

    2018-01-10

    PDEPTH is an interactive, graphical computer program used to construct interpreted geological source models for observed potential-field geophysical profile data. The current version of PDEPTH has been adapted to the Windows platform from an earlier DOS-based version. The input total-field magnetic anomaly and vertical gravity anomaly profiles can be filtered to produce derivative products such as reduced-to-pole magnetic profiles, pseudogravity profiles, pseudomagnetic profiles, and upward-or-downward-continued profiles. A variety of source-location methods can be applied to the original and filtered profiles to estimate (and display on a cross section) the locations and physical properties of contacts, sheet edges, horizontal line sources, point sources, and interface surfaces. Two-and-a-half-dimensional source bodies having polygonal cross sections can be constructed using a mouse and keyboard. These bodies can then be adjusted until the calculated gravity and magnetic fields of the source bodies are close to the observed profiles. Auxiliary information such as the topographic surface, bathymetric surface, seismic basement, and geologic contact locations can be displayed on the cross section using optional input files. Test data files, used to demonstrate the source location methods in the report, and several utility programs are included.
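
    One of the Fourier filters mentioned above, upward continuation, has a particularly simple form: each wavenumber component of the profile is attenuated by exp(-|k| dz). The Python sketch below applies it to a synthetic anomaly; the profile and station spacing are invented, and PDEPTH's own implementation details may differ.

        import numpy as np

        dx, dz = 100.0, 500.0                # station spacing / continuation height [m]
        x = np.arange(512) * dx
        profile = 50.0 * np.exp(-((x - 25_000.0) / 3_000.0) ** 2)   # synthetic anomaly [nT]

        k = 2 * np.pi * np.fft.rfftfreq(len(profile), d=dx)         # wavenumbers [rad/m]
        continued = np.fft.irfft(np.fft.rfft(profile) * np.exp(-k * dz), n=len(profile))
        print(profile.max(), continued.max())   # continued anomaly is smoother and weaker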

  9. Studies in higher-derivative gravitation

    International Nuclear Information System (INIS)

    Dutt, S.K.

    1987-01-01

    In this work, two formulations of gravitation in which the action includes the second derivatives of the metric in a non-trivial fashion are investigated. In the first part, the gauge theory of gravitation proposed by Yang in 1974 is investigated. The implications of coupling the pure space equations to matter sources via the action principle proposed by Yang are studied. It is shown that this action principle does not couple to matter sources in a satisfactory fashion. An earlier study by Fairchild along similar lines is critically examined. It is argued that Fairchild's action functional, and his objections to Yang's gauge approach to gravitation, arise from a not very meaningful analogy with the case of a general gauge field. Also, a conjecture originating in that work is refuted. A modification of Yang's action functional is provided which leads to both the Einstein and Yang field equations. This system is shown to have non-trivial solutions in the presence of matter. An additional advantage is that the unphysical solutions of the pure space equations can be ruled out. It is shown that the joint system of Einstein and Yang field equations leads to a physically viable cosmological model based on the Robertson-Walker metric, which satisfies both sets of field equations. In the second part of this work, the Hamiltonian for pure gravity in Einstein's theory is obtained directly from the Hilbert Lagrangian. Since the Lagrangian depends upon the second derivatives of the metric tensor, the Hamiltonian formulation is first developed for a Lagrangian which may, in general, depend upon the Nth-order time derivatives of the dynamical variables.

  10. A Semianalytical Solution of the Fractional Derivative Model and Its Application in Financial Market

    Directory of Open Access Journals (Sweden)

    Lina Song

    2018-01-01

    Fractional differential equations have been introduced into financial theory, providing new ideas and tools for theoretical research and practical application. In this work, an approximate semianalytical solution of the time-fractional European option pricing model is derived by combining an enhanced Adomian decomposition method with the finite difference method. The result is then applied to China's financial market to test the feasibility of the fractional derivative model in an actual market.
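
    For orientation, a commonly studied form of the time-fractional European option pricing model replaces the time derivative in the Black-Scholes equation with a fractional derivative of order α; the exact equation and conventions used in the paper may differ.

    ```latex
    \frac{\partial^{\alpha} V}{\partial t^{\alpha}}
      + \frac{1}{2}\,\sigma^{2} S^{2}\,\frac{\partial^{2} V}{\partial S^{2}}
      + r S\,\frac{\partial V}{\partial S} - r V = 0,
    \qquad 0 < \alpha \le 1,
    ```

    where V(S, t) is the option price; α = 1 recovers the classical Black-Scholes equation.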

  11. Anti-aging effects of vitamin C on human pluripotent stem cell-derived cardiomyocytes.

    Science.gov (United States)

    Kim, Yoon Young; Ku, Seung-Yup; Huh, Yul; Liu, Hung-Ching; Kim, Seok Hyun; Choi, Young Min; Moon, Shin Yong

    2013-10-01

    Human pluripotent stem cells (hPSCs) have emerged as a source of cells for biomedical research due to their developmental potential. Stem cells hold the promise of providing clinicians with novel treatments for disease as well as allowing researchers to generate human-specific cellular metabolism models. Aging is a natural process of living organisms, yet aging in human heart cells is difficult to study due to the ethical considerations regarding human experimentation as well as a current lack of alternative experimental models. hPSC-derived cardiomyocytes (CMs) bear a resemblance to human cardiac cells, and thus hPSC-derived CMs are considered a viable alternative model for studying human heart cell aging. In this study, we used hPSC-derived CMs as an in vitro aging model. We generated cardiomyocytes from hPSCs and demonstrated the process of aging in both human embryonic stem cell (hESC)- and induced pluripotent stem cell (hiPSC)-derived CMs. Aging in hESC-derived CMs correlated with reduced membrane potential in mitochondria, the accumulation of lipofuscin, a slower beating pattern, and the downregulation of human telomerase RNA (hTR) and cell cycle regulating genes. Interestingly, the expression of hTR in hiPSC-derived CMs was not significantly downregulated, unlike in hESC-derived CMs. In order to delay aging, vitamin C was added to the cultured CMs. When cells were treated with 100 μM of vitamin C for 48 h, anti-aging effects, specifically on the expression of telomere-related genes and their functionality in aging cells, were observed. Taken together, these results suggest that hPSC-derived CMs can be used as a unique human cardiomyocyte aging model in vitro and that vitamin C shows anti-aging effects in this model.

  12. Modeling a neutron rich nuclei source

    Energy Technology Data Exchange (ETDEWEB)

    Mirea, M.; Bajeat, O.; Clapier, F.; Ibrahim, F.; Mueller, A.C.; Pauwels, N.; Proust, J. [Institut de Physique Nucleaire, IN2P3/CNRS, 91 - Orsay (France); Mirea, M. [Institute of Physics and Nuclear Engineering, Tandem Lab., Bucharest (Romania)

    2000-07-01

    The deuteron break-up process in a suitable converter gives rise to intense neutron beams. A source of neutron rich nuclei based on the neutron induced fission can be realised using these beams. A theoretical optimization of such a facility as a function of the incident deuteron energy is reported. The model used to determine the fission products takes into account the excitation energy of the target nucleus and the evaporation of prompt neutrons. Results are presented in connection with a converter-target specific geometry. (author)

  13. Modeling a neutron rich nuclei source

    International Nuclear Information System (INIS)

    Mirea, M.; Bajeat, O.; Clapier, F.; Ibrahim, F.; Mueller, A.C.; Pauwels, N.; Proust, J.; Mirea, M.

    2000-01-01

    The deuteron break-up process in a suitable converter gives rise to intense neutron beams. A source of neutron rich nuclei based on the neutron induced fission can be realised using these beams. A theoretical optimization of such a facility as a function of the incident deuteron energy is reported. The model used to determine the fission products takes into account the excitation energy of the target nucleus and the evaporation of prompt neutrons. Results are presented in connection with a converter-target specific geometry. (authors)

  14. Challenges for Knowledge Management in the Context of IT Global Sourcing Models Implementation

    OpenAIRE

    Perechuda, Kazimierz; Sobińska, Małgorzata

    2014-01-01

    Part 2: Models and Functioning of Knowledge Management. The article gives a literature overview of the current challenges connected with the implementation of the newest IT sourcing models. In the dynamic environment, organizations are required to build their competitive advantage not only on their own resources, but also on resources commissioned from external providers, accessed through various forms of sourcing, including the sourcing of IT services. This paper pres...

  15. An incentive-based source separation model for sustainable municipal solid waste management in China.

    Science.gov (United States)

    Xu, Wanying; Zhou, Chuanbin; Lan, Yajun; Jin, Jiasheng; Cao, Aixin

    2015-05-01

    Municipal solid waste (MSW) management (MSWM) is most important and challenging in large urban communities. Sound community-based waste management systems normally include waste reduction and material recycling elements, often entailing the separation of recyclable materials by the residents. To increase the efficiency of source separation and recycling, an incentive-based source separation model was designed and this model was tested in 76 households in Guiyang, a city of almost three million people in southwest China. This model embraced the concepts of rewarding households for sorting organic waste, government funds for waste reduction, and introducing small recycling enterprises for promoting source separation. Results show that after one year of operation, the waste reduction rate was 87.3%, and the comprehensive net benefit under the incentive-based source separation model increased by 18.3 CNY tonne(-1) (2.4 Euros tonne(-1)), compared to that under the normal model. The stakeholder analysis (SA) shows that the centralized MSW disposal enterprises had minimum interest and may oppose the start-up of a new recycling system, while small recycling enterprises had a primary interest in promoting the incentive-based source separation model, but they had the least ability to make any change to the current recycling system. The strategies for promoting this incentive-based source separation model are also discussed in this study. © The Author(s) 2015.

  16. [Source apportionment of soil heavy metals in Jiapigou goldmine based on the UNMIX model].

    Science.gov (United States)

    Ai, Jian-chao; Wang, Ning; Yang, Jing

    2014-09-01

    The paper determines the concentrations of 16 metal elements in soil samples collected in the Jiapigou goldmine area in the upper reaches of the Songhua River. The UNMIX model, recommended by the US EPA for source apportionment, was applied in this study, and Cd, Hg, Pb and Ag concentration contour maps were generated using the Kriging interpolation method to verify the results. The main conclusions of this study are: (1) the concentrations of Cd, Hg, Pb and Ag exceeded the Jilin Province soil background values and were obviously enriched in the soil samples; (2) the UNMIX model resolved four pollution sources: source 1 represents human activities of transportation, ore mining and garbage, contributing 39.1%; source 2 represents the contribution of rock weathering and biological effects, contributing 13.87%; source 3 is a combined source of soil parent material and chemical fertilizer, contributing 23.93%; source 4 represents iron ore mining and transportation, contributing 22.89%; (3) the UNMIX model results are consistent with the survey of local land-use types, human activities and the content distributions of Cd, Hg and Pb.
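
    UNMIX itself is a proprietary US EPA receptor model, but its underlying idea, factoring a sample-by-species concentration matrix into non-negative source contributions and source profiles, can be sketched with a generic non-negative matrix factorization. The code below is an illustrative stand-in, not the UNMIX algorithm, and the data are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Synthetic data: rows = soil samples, columns = metal concentrations
    rng = np.random.default_rng(0)
    X = rng.random((60, 16))  # 60 hypothetical samples x 16 metal elements

    # Factor X ~ W @ H with 4 sources, mirroring the study's four-source solution
    model = NMF(n_components=4, init="nndsvda", max_iter=1000, random_state=0)
    W = model.fit_transform(X)  # per-sample source contributions
    H = model.components_       # source profiles (element signatures)

    # Percent contribution of each source to the total modeled mass
    totals = (W * H.sum(axis=1)).sum(axis=0)
    print(np.round(100.0 * totals / totals.sum(), 1))
    ```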

  17. Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE) using a Hierarchical Bayesian Approach

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2011-01-01

    We present an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model representation is motivated by the many random contributions to the path from sources to measurements, including the tissue conductivity distribution and the geometry of the cortical surface...

  18. Kalman-predictive-proportional-integral-derivative (KPPID)

    International Nuclear Information System (INIS)

    Fluerasu, A.; Sutton, M.

    2004-01-01

    With third-generation synchrotron X-ray sources, it is possible to acquire detailed structural information about the system under study with time resolution orders of magnitude faster than was possible a few years ago. These advances have generated many new challenges for changing and controlling the state of the system on very short time scales, in a uniform and controlled manner. For our particular X-ray experiments on crystallization or order-disorder phase transitions in metallic alloys, we need to change the sample temperature by hundreds of degrees as fast as possible while avoiding overshooting or undershooting. To achieve this, we designed and implemented a computer-controlled temperature tracking system which combines standard Proportional-Integral-Derivative (PID) feedback, thermal modeling and finite difference thermal calculations (feedforward), and Kalman filtering of the temperature readings in order to reduce the noise. The resulting Kalman-Predictive-Proportional-Integral-Derivative (KPPID) algorithm allows us to obtain accurate control, to minimize the response time, and to avoid overshooting or undershooting, even in systems with inherently noisy temperature readings and time delays. The KPPID temperature controller was successfully implemented at the Advanced Photon Source at Argonne National Laboratory and was used to perform coherent and time-resolved X-ray diffraction experiments.
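
    A minimal sketch of the control idea, PID feedback acting on a Kalman-filtered temperature reading plus a feedforward term, is shown below. It assumes a scalar random-walk Kalman filter and hypothetical gains; the actual KPPID controller also incorporates finite-difference thermal modeling.

    ```python
    class KPPIDSketch:
        """Toy PID controller acting on a scalar Kalman-filtered measurement."""

        def __init__(self, kp, ki, kd, dt, q=1e-4, r=0.1):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.q, self.r = q, r             # process / measurement noise variances
            self.x, self.p = 0.0, 1.0         # Kalman state estimate and variance
            self.integral, self.prev_err = 0.0, 0.0

        def filter(self, z):
            # Random-walk predict step, then measurement update
            self.p += self.q
            k = self.p / (self.p + self.r)    # Kalman gain
            self.x += k * (z - self.x)
            self.p *= (1.0 - k)
            return self.x

        def step(self, setpoint, measurement, feedforward=0.0):
            temp = self.filter(measurement)   # de-noised temperature reading
            err = setpoint - temp
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return feedforward + self.kp * err + self.ki * self.integral + self.kd * deriv
    ```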

  19. microRNA expression profile in human coronary smooth muscle cell-derived microparticles is a source of biomarkers.

    Science.gov (United States)

    de Gonzalo-Calvo, David; Cenarro, Ana; Civeira, Fernando; Llorente-Cortes, Vicenta

    2016-01-01

    The microRNA (miRNA) expression profile of extracellular vesicles is a potential tool for clinical practice. Despite the key role of vascular smooth muscle cells (VSMC) in cardiovascular pathology, there is limited information about the presence of miRNAs in microparticles secreted by this cell type, including human coronary artery smooth muscle cells (HCASMC). Here, we tested whether HCASMC-derived microparticles contain miRNAs and the value of these miRNAs as biomarkers. HCASMC and explants from atherosclerotic or non-atherosclerotic areas were obtained from coronary arteries of patients undergoing heart transplant. Plasma samples were collected from normocholesterolemic controls (N=12) and familial hypercholesterolemia (FH) patients (N=12). Both groups were strictly matched for age, sex and cardiovascular risk factors. Microparticle (0.1-1 μm) isolation and characterization were performed using standard techniques. Expression of VSMC-enriched miRNAs (miR-21-5p, -143-3p, -145-5p, -221-3p and -222-3p) was analyzed using RT-qPCR. Total RNA isolated from HCASMC-derived microparticles contained small RNAs, including VSMC-enriched miRNAs. Exposure of HCASMC to pathophysiological conditions, such as hypercholesterolemia, induced a decrease in the expression level of miR-143-3p and miR-222-3p in microparticles, not in cells. Expression levels of miR-222-3p were lower in circulating microparticles from FH patients compared to normocholesterolemic controls. Microparticles derived from atherosclerotic plaque areas showed a decreased level of miR-143-3p and miR-222-3p compared to non-atherosclerotic areas. We demonstrated for the first time that microparticles secreted by HCASMC contain microRNAs. Hypercholesterolemia alters the microRNA profile of HCASMC-derived microparticles. The miRNA signature of HCASMC-derived microparticles is a source of cardiovascular biomarkers. Copyright © 2016 Sociedad Española de Arteriosclerosis. Publicado por Elsevier España, S.L.U. All rights reserved.

  20. Variable cycle control model for intersection based on multi-source information

    Science.gov (United States)

    Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan

    2018-05-01

    In order to improve the efficiency of traffic control systems in the era of big data, a new variable cycle control model based on multi-source information is presented for intersections in this paper. Firstly, with consideration of multi-source information, a unified framework based on a cyber-physical system is proposed. Secondly, taking into account the variable length of cells, the hysteresis phenomenon of traffic flow and the characteristics of lane groups, a lane-group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model. The upper-level model is put forward for cycle length optimization considering traffic capacity and delay. The lower-level model is a dynamic signal control decision model based on fairness analysis. Then, a Hybrid Intelligent Optimization Algorithm is proposed to solve the model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.
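
    The abstract does not spell out the bi-level model, but as a classical point of reference for cycle length optimization, Webster's formula can be sketched as follows; this is the textbook baseline, not the authors' model, and the phase data are hypothetical.

    ```python
    def webster_cycle(lost_time, flow_ratios):
        """Webster's optimal cycle length C0 = (1.5 L + 5) / (1 - Y).

        lost_time   : total lost time per cycle L, seconds
        flow_ratios : critical flow ratio y_i for each signal phase
        """
        Y = sum(flow_ratios)
        if Y >= 1.0:
            raise ValueError("intersection oversaturated (Y >= 1)")
        return (1.5 * lost_time + 5.0) / (1.0 - Y)

    # Hypothetical 4-phase intersection: L = 16 s, critical flow ratios per phase
    print(round(webster_cycle(16.0, [0.25, 0.20, 0.15, 0.10]), 1), "s")
    ```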

  1. Novel Thiazole Derivatives of Medicinal Potential: Synthesis and Modeling

    Directory of Open Access Journals (Sweden)

    Nour E. A. Abdel-Sattar

    2017-01-01

    This paper reports on the synthesis of new thiazole derivatives that could be profitably exploited in the medical treatment of tumors. Molecular electronic structures have been modeled within the density functional theory (DFT) framework. Reactivity indices obtained from the frontier orbital energies, as well as electrostatic potential energy maps, are discussed and correlated with the molecular structure. X-ray crystallographic data for one of the new compounds were measured and used to support and verify the theoretical results.

  2. Evaluating measurement of dynamic constructs: defining a measurement model of derivatives.

    Science.gov (United States)

    Estabrook, Ryne

    2015-03-01

    While measurement evaluation has been embraced as an important step in psychological research, evaluating measurement structures with longitudinal data is fraught with limitations. This article defines and tests a measurement model of derivatives (MMOD), which is designed to assess the measurement structure of latent constructs both for analyses of between-person differences and for the analysis of change. Simulation results indicate that MMOD outperforms existing models for multivariate analysis and provides equivalent fit to data generation models. Additional simulations show MMOD capable of detecting differences in between-person and within-person factor structures. Model features, applications, and future directions are discussed. (c) 2015 APA, all rights reserved.

  3. Mathematical modeling of tetrahydroimidazole benzodiazepine-1-one derivatives as an anti HIV agent

    Science.gov (United States)

    Ojha, Lokendra Kumar

    2017-07-01

    The goal of the present work is the study of drug-receptor interaction via QSAR (Quantitative Structure-Activity Relationship) analysis for a set of 89 TIBO (Tetrahydroimidazole Benzodiazepine-1-one) derivatives. The MLR (Multiple Linear Regression) method is utilized to generate predictive models of quantitative structure-activity relationships between a set of molecular descriptors and biological activity (IC50). The best QSAR model was selected, having a correlation coefficient (r) of 0.9299, a Standard Error of Estimation (SEE) of 0.5022, a Fisher Ratio (F) of 159.822 and a Quality factor (Q) of 1.852. This model is statistically significant and strongly favours the substitution of a sulphur atom at the -Z position of the TIBO derivatives (indicator parameter IS). Two other parameters, logP (octanol-water partition coefficient) and SAG (Surface Area Grid), also played a vital role in the generation of the best QSAR model. All three descriptors show very good stability towards data variation in leave-one-out (LOO) cross-validation.
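
    A minimal sketch of the MLR-with-LOO workflow described above, using generic tooling and entirely synthetic descriptor data (the study's real descriptors include logP, SAG and the indicator IS):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(1)
    X = rng.random((89, 3))  # columns standing in for logP, SAG, IS (synthetic)
    y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + X[:, 2] + rng.normal(0, 0.3, 89)  # pIC50-like

    model = LinearRegression().fit(X, y)
    r = np.corrcoef(model.predict(X), y)[0, 1]  # correlation coefficient

    # Leave-one-out cross-validated predictions as a stability check
    y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"r = {r:.3f}, LOO q2 = {q2:.3f}")
    ```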

  4. Use of the isolated problem approach for multi-compartment BEM models of electro-magnetic source imaging

    International Nuclear Information System (INIS)

    Gencer, Nevzat G; Akalin-Acar, Zeynep

    2005-01-01

    The isolated problem approach (IPA) is a method used in the boundary element method (BEM) to overcome numerical inaccuracies caused by the high-conductivity difference between the skull and the brain tissues in the head. Hämäläinen and Sarvas (1989 IEEE Trans. Biomed. Eng. 36 165-71) described how the source terms can be updated to overcome these inaccuracies for a three-layer head model. Meijs et al (1989 IEEE Trans. Biomed. Eng. 36 1038-49) derived the integral equations for the general case where there are an arbitrary number of layers inside the skull. However, the IPA is used in the literature only for three-layer head models. Studies that use complex boundary element head models to investigate inhomogeneities in the brain or to model the cerebrospinal fluid (CSF) do not make use of the IPA. In this study, the generalized formulation of the IPA for multi-layer models is presented in terms of integral equations. The discretized versions of these equations are presented in two different forms. In a previous study (Akalin-Acar and Gencer 2004 Phys. Med. Biol. 49 5011-28), we derived formulations to calculate the electroencephalography and magnetoencephalography transfer matrices assuming a single layer in the skull. In this study, the transfer matrix formulations are updated to incorporate the generalized IPA. The effects of the IPA are investigated on the accuracy of spherical and realistic models when the CSF layer and a tumour tissue are included in the model. It is observed that, in the spherical model, for a radial dipole 1 mm from the brain surface, the relative difference measure (RDM*) drops from 1.88 to 0.03 when the IPA is used. For the realistic model, the inclusion of the CSF layer does not change the field pattern significantly. However, the inclusion of an inhomogeneity changes the field pattern by 25% for a dipole oriented towards the inhomogeneity. The effect of the IPA is also investigated when there is an inhomogeneity in the brain. In addition

  5. Theoretical and experimental determination of dosimetric characteristics for BrachySeed™ Pd-103, Model Pd-1, source

    International Nuclear Information System (INIS)

    Meigooni, A.S.; Zhang Hualin; Perry, Candace; Dini, S.A.; Koona, R.A.

    2003-01-01

    Dosimetric characteristics of the BrachySeed™ Pd-103, Model Pd-1 source have been determined using both theoretical and experimental methods. The dose rate constant, radial dose function, and anisotropy functions of the source have been obtained following the TG-43 recommendations. Derivation of the dose rate constant was based on a recent NIST WAFAC calibration performed in accordance with their 1999 standard. Measurements were performed in Solid Water™ using LiF TLD chips. Theoretical simulations were performed in both Solid Water™ and water phantom materials with the MCNP4C2 Monte Carlo code, using DLC-200 interaction data. The results of the Monte Carlo simulation indicated a dose rate constant of 0.65 cGy h⁻¹ U⁻¹ and 0.61 cGy h⁻¹ U⁻¹ in water and Solid Water™, respectively. The measured dose rate constant in Solid Water™ was found to be 0.63±7% cGy h⁻¹ U⁻¹, which is within the experimental uncertainty of the Monte Carlo simulated results. The anisotropy functions of the source were calculated in both water and Solid Water™ at radial distances of 1 to 7 cm. Measurements were made in Solid Water™ at distances of 2, 3, 5, and 7 cm. The Monte Carlo calculated anisotropy constant of the new source was found to be 0.98 in water. The tabulated data and 5th-order polynomial fit coefficients for the radial dose function, along with the dose rate constant and anisotropy functions, are provided to support clinical use of this source.
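
    The quantities reported above enter the standard AAPM TG-43 two-dimensional dose-rate formalism:

    ```latex
    \dot{D}(r,\theta) = S_K \,\Lambda\,
    \frac{G(r,\theta)}{G(r_0,\theta_0)}\, g(r)\, F(r,\theta),
    ```

    where S_K is the air-kerma strength (U), Λ the dose rate constant (cGy h⁻¹ U⁻¹), G the geometry function, g(r) the radial dose function and F(r, θ) the anisotropy function, with reference point (r₀, θ₀) = (1 cm, 90°).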

  6. Modeling Degradation Product Partitioning in Chlorinated-DNAPL Source Zones

    Science.gov (United States)

    Boroumand, A.; Ramsburg, A.; Christ, J.; Abriola, L.

    2009-12-01

    Metabolic reductive dechlorination reduces aqueous-phase contaminant concentrations, increasing the driving force for DNAPL dissolution. Results from laboratory and field investigations suggest that accumulation of cis-dichloroethene (cis-DCE) and vinyl chloride (VC) may occur within DNAPL source zones. The lack of (or slow) degradation of cis-DCE and VC within bioactive DNAPL source zones may result in these dechlorination products becoming distributed among the solid, aqueous, and organic phases. Partitioning of cis-DCE and VC into the organic phase may reduce aqueous-phase concentrations of these contaminants and result in the enrichment of these dechlorination products within the non-aqueous phase. Enrichment of degradation products within DNAPL may reduce some of the advantages associated with the application of bioremediation in DNAPL source zones. Thus, it is important to quantify how partitioning (between the aqueous and organic phases) influences the transport of cis-DCE and VC within bioactive DNAPL source zones. In this work, abiotic two-phase (PCE-water) one-dimensional column experiments are modeled using analytical and numerical methods to examine the rate of partitioning and the capacity of PCE-DNAPL to reversibly sequester cis-DCE. These models consider aqueous-phase, nonaqueous-phase, and combined aqueous plus nonaqueous-phase mass transfer resistance using linear driving force and spherical diffusion expressions. Model parameters are examined and compared for different experimental conditions to evaluate the mechanisms controlling partitioning. The Biot number, a dimensionless index of the ratio of the aqueous-phase mass transfer rate in the boundary layer to the mass transfer rate within the NAPL, is used to characterize conditions in which either or both processes are controlling. Results show that a single aqueous resistance is able to capture breakthrough curves when DNAPL is distributed in porous media as low

  7. Acoustic wavefield evolution as a function of source location perturbation

    KAUST Repository

    Alkhalifah, Tariq Ali

    2010-12-01

    The wavefield is typically simulated for seismic exploration applications through solving the wave equation for a specific seismic source location. The direct relation between the form (or shape) of the wavefield and the source location can provide insights useful for velocity estimation and interpolation. As a result, I derive partial differential equations that relate changes in the wavefield shape to perturbations in the source location, especially along the Earth's surface. These partial differential equations have the same structure as the wave equation with a source function that depends on the background (original source) wavefield. The similarity in form implies that we can use familiar numerical methods to solve the perturbation equations, including finite difference and downward continuation. In fact, we can use the same Green's function to solve the wave equation and its source perturbations by simply incorporating source functions derived from the background field. The solutions of the perturbation equations represent the coefficients of a Taylor-series-type expansion of the wavefield as a function of source location. As a result, we can speed up the wavefield calculation as we approximate the wavefield shape for sources in the vicinity of the original source. The new formula introduces changes to the background wavefield only in the presence of lateral velocity variation or, in general terms, velocity variations in the perturbation direction. The approach is demonstrated on the smoothed Marmousi model.
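
    Schematically, the expansion described above can be written as follows, for a wavefield u(x, t; s) and a source-location perturbation Δs (the notation here is illustrative):

    ```latex
    u(\mathbf{x},t;\,s_0+\Delta s) \approx
    u(\mathbf{x},t;\,s_0)
    + \Delta s\,\frac{\partial u}{\partial s}\Big|_{s_0}
    + \frac{(\Delta s)^{2}}{2}\,\frac{\partial^{2} u}{\partial s^{2}}\Big|_{s_0}
    + \cdots,
    ```

    where each coefficient satisfies a wave equation whose source term is built from the background wavefield, as described in the abstract.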

  8. Derivation of the Verlinde formula from Chern-Simons theory and the G/G model

    International Nuclear Information System (INIS)

    Blau, M.; Thompson, G.

    1993-01-01

    We give a derivation of the Verlinde formula for the G_k WZW model from Chern-Simons theory, without taking recourse to CFT, by calculating explicitly the partition function Z_{Σ×S^1} of Σ × S^1 with an arbitrary number of labelled punctures. By what is essentially a suitable gauge choice, Z_{Σ×S^1} is reduced to the partition function of an abelian topological field theory on Σ (a deformation of non-abelian BF and Yang-Mills theory) whose evaluation is straightforward. This relates the Verlinde formula to the Ray-Singer torsion of Σ × S^1. We derive the G_k/G_k model from Chern-Simons theory, proving their equivalence, and give an alternative derivation of the Verlinde formula by calculating the G_k/G_k path integral via a functional version of the Weyl integral formula. From this point of view the Verlinde formula arises from the corresponding Jacobian, the Weyl determinant. Also, a novel derivation of the shift k → k + h is given, based on the index of the twisted Dolbeault complex. (orig.)
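
    For reference, the Verlinde formula in question expresses the fusion coefficients of the WZW model in terms of the modular S-matrix:

    ```latex
    N_{ij}^{\;k} = \sum_{m} \frac{S_{im}\, S_{jm}\, S^{*}_{km}}{S_{0m}},
    ```

    where the sum runs over the primary fields at level k and 0 labels the identity.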

  9. A Modified Groundwater Flow Model Using the Space Time Riemann-Liouville Fractional Derivatives Approximation

    Directory of Open Access Journals (Sweden)

    Abdon Atangana

    2014-01-01

    The notion of uncertainty in groundwater hydrology is of great importance, as it is known to produce misleading output when neglected or improperly accounted for. In this paper we examine this effect in groundwater flow models. To achieve this, we first introduce an uncertainty function u of time and space, which accounts for the lack of knowledge or variability of the geological formations (aquifers) in which flow occurs. We next make use of the Riemann-Liouville fractional derivatives that were introduced by Kobelev and Romano in 2000, and their approximation, to modify the standard version of the groundwater flow equation. Some properties of the modified Riemann-Liouville fractional derivative approximation are presented. The classical model for groundwater flow, in the case of density-independent flow in a uniform homogeneous aquifer, is reformulated by replacing the classical derivative with the Riemann-Liouville fractional derivative approximation. The modified equation is solved via the Green's function technique and the variational iteration method.
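
    For completeness, the Riemann-Liouville fractional derivative of order α entering such modified flow equations is conventionally defined as

    ```latex
    {}^{RL}\!D_{t}^{\alpha} f(t) =
    \frac{1}{\Gamma(n-\alpha)}\,\frac{d^{n}}{dt^{n}}
    \int_{0}^{t} \frac{f(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau,
    \qquad n-1 < \alpha < n .
    ```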

  10. Using Reactive Transport Modeling to Evaluate the Source Term at Yucca Mountain

    Energy Technology Data Exchange (ETDEWEB)

    Y. Chen

    2001-12-19

    The conventional approach of source-term evaluation for performance assessment of nuclear waste repositories uses speciation-solubility modeling tools and assumes pure phases of radioelements control their solubility. This assumption may not reflect reality, as most radioelements (except for U) may not form their own pure phases. As a result, solubility limits predicted using the conventional approach are several orders of magnitude higher than the concentrations of radioelements measured in spent fuel dissolution experiments. This paper presents the author's attempt to use a non-conventional approach to evaluate the source term of radionuclide release for Yucca Mountain. Based on the general reactive-transport code AREST-CT, a model for spent fuel dissolution and secondary phase precipitation has been constructed. The model accounts for both equilibrium and kinetic reactions. Its predictions have been compared against laboratory experiments and natural analogues. It is found that without calibration, the simulated results match laboratory and field observations very well in many aspects. More important is the fact that no contradictions between them have been found. This provides confidence in the predictive power of the model. Based on the concept of Np incorporated into uranyl minerals, the model not only predicts a lower Np source-term than that given by conventional Np solubility models, but also produces results which are consistent with laboratory measurements and observations. Moreover, two hypotheses, whether Np enters tertiary uranyl minerals or not, have been tested by comparing model predictions against laboratory observations; the results favor the former. It is concluded that this non-conventional approach of source term evaluation not only eliminates over-conservatism in the conventional solubility approach to some extent, but also gives a realistic representation of the system of interest, which is a prerequisite for truly understanding the long

  11. Using Reactive Transport Modeling to Evaluate the Source Term at Yucca Mountain

    International Nuclear Information System (INIS)

    Y. Chen

    2001-01-01

    The conventional approach of source-term evaluation for performance assessment of nuclear waste repositories uses speciation-solubility modeling tools and assumes pure phases of radioelements control their solubility. This assumption may not reflect reality, as most radioelements (except for U) may not form their own pure phases. As a result, solubility limits predicted using the conventional approach are several orders of magnitude higher than the concentrations of radioelements measured in spent fuel dissolution experiments. This paper presents the author's attempt to use a non-conventional approach to evaluate the source term of radionuclide release for Yucca Mountain. Based on the general reactive-transport code AREST-CT, a model for spent fuel dissolution and secondary phase precipitation has been constructed. The model accounts for both equilibrium and kinetic reactions. Its predictions have been compared against laboratory experiments and natural analogues. It is found that without calibration, the simulated results match laboratory and field observations very well in many aspects. More important is the fact that no contradictions between them have been found. This provides confidence in the predictive power of the model. Based on the concept of Np incorporated into uranyl minerals, the model not only predicts a lower Np source-term than that given by conventional Np solubility models, but also produces results which are consistent with laboratory measurements and observations. Moreover, two hypotheses, whether Np enters tertiary uranyl minerals or not, have been tested by comparing model predictions against laboratory observations; the results favor the former. It is concluded that this non-conventional approach of source term evaluation not only eliminates over-conservatism in the conventional solubility approach to some extent, but also gives a realistic representation of the system of interest, which is a prerequisite for truly understanding the long

  12. Modeling marbled murrelet (Brachyramphus marmoratus) habitat using LiDAR-derived canopy data

    Science.gov (United States)

    Hagar, Joan C.; Eskelson, Bianca N.I.; Haggerty, Patricia K.; Nelson, S. Kim; Vesely, David G.

    2014-01-01

    LiDAR (Light Detection And Ranging) is an emerging remote-sensing tool that can provide fine-scale data describing vertical complexity of vegetation relevant to species that are responsive to forest structure. We used LiDAR data to estimate occupancy probability for the federally threatened marbled murrelet (Brachyramphus marmoratus) in the Oregon Coast Range of the United States. Our goal was to address the need identified in the Recovery Plan for a more accurate estimate of the availability of nesting habitat by developing occupancy maps based on refined measures of nest-strand structure. We used murrelet occupancy data collected by the Bureau of Land Management Coos Bay District, and canopy metrics calculated from discrete return airborne LiDAR data, to fit a logistic regression model predicting the probability of occupancy. Our final model for stand-level occupancy included distance to coast, and 5 LiDAR-derived variables describing canopy structure. With an area under the curve value (AUC) of 0.74, this model had acceptable discrimination and fair agreement (Cohen's κ = 0.24), especially considering that all sites in our sample were regarded by managers as potential habitat. The LiDAR model provided better discrimination between occupied and unoccupied sites than did a model using variables derived from Gradient Nearest Neighbor maps that were previously reported as important predictors of murrelet occupancy (AUC = 0.64, κ = 0.12). We also evaluated LiDAR metrics at 11 known murrelet nest sites. Two LiDAR-derived variables accurately discriminated nest sites from random sites (average AUC = 0.91). LiDAR provided a means of quantifying 3-dimensional canopy structure with variables that are ecologically relevant to murrelet nesting habitat, and have not been as accurately quantified by other mensuration methods.
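
    A minimal sketch of the kind of model fit reported above, logistic regression of occupancy on canopy metrics evaluated by AUC, is given below with entirely synthetic predictors standing in for the LiDAR variables.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    n = 200
    # Synthetic predictors: distance to coast plus five LiDAR canopy metrics
    X = rng.random((n, 6))
    occupied = (rng.random(n) < 1 / (1 + np.exp(-(3 * X[:, 0] - 1.5)))).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, occupied)
    p_occ = model.predict_proba(X)[:, 1]  # predicted occupancy probability
    print(f"AUC = {roc_auc_score(occupied, p_occ):.2f}")
    ```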

  13. Modeling the diurnal tide with dissipation derived from UARS/HRDI measurements

    Directory of Open Access Journals (Sweden)

    M. A. Geller

    1997-09-01

    This paper uses dissipation values derived from UARS/HRDI observations in a recently published diurnal-tide model. The resulting modeled tidal structures compare quite well with the UARS/HRDI observations with respect to the annual variation of the diurnal tidal amplitudes and the size of the amplitudes themselves. It is suggested that the annual variation of atmospheric dissipation in the mesosphere-lower thermosphere is a major controlling factor in determining the annual variation of the diurnal tide.

  14. In silico modeling predicts drug sensitivity of patient-derived cancer cells.

    Science.gov (United States)

    Pingle, Sandeep C; Sultana, Zeba; Pastorino, Sandra; Jiang, Pengfei; Mukthavaram, Rajesh; Chao, Ying; Bharati, Ila Sri; Nomura, Natsuko; Makale, Milan; Abbasi, Taher; Kapoor, Shweta; Kumar, Ansu; Usmani, Shahabuddin; Agrawal, Ashish; Vali, Shireen; Kesari, Santosh

    2014-05-21

    Glioblastoma (GBM) is an aggressive disease associated with poor survival. It is essential to account for the complexity of GBM biology to improve diagnostic and therapeutic strategies. This complexity is best represented by the increasing amounts of profiling ("omics") data available due to advances in biotechnology. The challenge of integrating these vast genomic and proteomic data can be addressed by a comprehensive systems modeling approach. Here, we present an in silico model, where we simulate GBM tumor cells using genomic profiling data. We use this in silico tumor model to predict responses of cancer cells to targeted drugs. Initially, we probed the results from a recent hypothesis-independent, empirical study by Garnett and co-workers that analyzed the sensitivity of hundreds of profiled cancer cell lines to 130 different anticancer agents. We then used the tumor model to predict sensitivity of patient-derived GBM cell lines to different targeted therapeutic agents. Among the drug-mutation associations reported in the Garnett study, our in silico model accurately predicted ~85% of the associations. While testing the model in a prospective manner using simulations of patient-derived GBM cell lines, we compared our simulation predictions with experimental data using the same cells in vitro. This analysis yielded a ~75% agreement of in silico drug sensitivity with in vitro experimental findings. These results demonstrate a strong predictability of our simulation approach using the in silico tumor model presented here. Our ultimate goal is to use this model to stratify patients for clinical trials. By accurately predicting responses of cancer cells to targeted agents a priori, this in silico tumor model provides an innovative approach to personalizing therapy and promises to improve clinical management of cancer.

  15. A Remote Sensing-Derived Corn Yield Assessment Model

    Science.gov (United States)

    Shrestha, Ranjay Man

    be further associated with the actual yield. Utilizing satellite remote sensing products, such as daily NDVI derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 250 m pixel size, crop yield estimation can be performed at a very fine spatial resolution. Therefore, this study examined the potential of these daily NDVI products within agricultural studies and crop yield assessments. In this study, a regression-based approach was proposed to estimate the annual corn yield through changes in MODIS daily NDVI time series. The relationship between daily NDVI and corn yield was well defined and established, and as changes in corn phenology and yield were directly reflected by the changes in NDVI within the growing season, these two entities were combined to develop a relational model. The model was trained using 15 years (2000-2014) of historical NDVI and county-level corn yield data for four major corn producing states: Kansas, Nebraska, Iowa, and Indiana, representing four climatic regions as South, West North Central, East North Central, and Central, respectively, within the U.S. Corn Belt area. The model's goodness of fit was well defined, with a high coefficient of determination (R² > 0.81). Similarly, using 2015 yield data for validation, an average accuracy of 92% demonstrated the performance of the model in estimating corn yield at the county level. Besides providing county-level corn yield estimations, the derived model was also accurate enough to estimate the yield at finer spatial resolution (field level). The model's assessment accuracy was evaluated using randomly selected field-level corn yields within the study area for 2014, 2015, and 2016. A total of over 120 plot-level corn yields were used for validation, and the overall average accuracy was 87%, which statistically justified the model's capability to estimate plot-level corn yield. Additionally, the proposed model was applied to impact estimation by examining the changes in corn yield

  16. Mathematical modelling of electricity market with renewable energy sources

    International Nuclear Information System (INIS)

    Marchenko, O.V.

    2007-01-01

    The paper addresses the electricity market with conventional energy sources based on fossil fuel and non-conventional renewable energy sources (RESs) with stochastic operating conditions. A mathematical model of long-run (accounting for the development of generation capacities) equilibrium in the market is constructed. The problem of determining optimal parameters providing the maximum social criterion of efficiency is also formulated. The calculations performed have shown that an adequate choice of price cap, environmental tax, subsidies to RESs and consumption tax makes it possible to take into account external effects (environmental damage) and to create incentives for investors to construct conventional and renewable energy sources in an optimal (from society's point of view) mix. (author)

  17. Rate equation modelling of the optically pumped spin-exchange source

    International Nuclear Information System (INIS)

    Stenger, J.; Rith, K.

    1995-01-01

    Sources for spin polarized hydrogen or deuterium, polarized via spin-exchange of a laser optically pumped alkali metal, can be modelled by rate equations. The rate equations for this type of source, operated either with hydrogen or deuterium, are given explicitly with the intention of providing a useful tool for further source optimization and understanding. Laser optical pumping of alkali metal, spin-exchange collisions of hydrogen or deuterium atoms with each other and with alkali metal atoms are included, as well as depolarization due to flow and wall collisions. (orig.)
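
    Schematically, the hydrogen (or deuterium) polarization in such a source obeys a balance of the form below; this is a generic single-equation caricature, not the paper's full coupled system.

    ```latex
    \frac{dP_{\mathrm{H}}}{dt} =
    \gamma_{\mathrm{se}}\left(P_{\mathrm{A}}-P_{\mathrm{H}}\right)
    - \left(\Gamma_{\mathrm{wall}}+\Gamma_{\mathrm{flow}}\right)P_{\mathrm{H}},
    ```

    where P_A is the alkali polarization sustained by optical pumping, γ_se the spin-exchange rate, and the Γ terms the wall and flow depolarization rates.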

  18. Planck intermediate results. VII. Statistical properties of infrared and radio extragalactic sources from the Planck Early Release Compact Source Catalogue at frequencies between 100 and 857 GHz

    Science.gov (United States)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Argüeso, F.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Balbi, A.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Bernard, J.-P.; Bersanelli, M.; Bethermin, M.; Bhatia, R.; Bonaldi, A.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Cabella, P.; Cardoso, J.-F.; Catalano, A.; Cayón, L.; Chamballu, A.; Chary, R.-R.; Chen, X.; Chiang, L.-Y.; Christensen, P. R.; Clements, D. L.; Colafrancesco, S.; Colombi, S.; Colombo, L. P. L.; Coulais, A.; Crill, B. P.; Cuttaia, F.; Danese, L.; Davis, R. J.; de Bernardis, P.; de Gasperis, G.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Dörl, U.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Fosalba, P.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Jaffe, T. R.; Jaffe, A. H.; Jagemann, T.; Jones, W. C.; Juvela, M.; Keihänen, E.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurinsky, N.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Lilje, P. B.; López-Caniego, M.; Macías-Pérez, J. F.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Mitra, S.; Miville-Deschènes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sajina, A.; Sandri, M.; Savini, G.; Scott, D.; Smoot, G. F.; Starck, J.-L.; Sudiwala, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Türler, M.; Valenziano, L.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; White, M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2013-02-01

    We make use of the Planck all-sky survey to derive number counts and spectral indices of extragalactic sources - infrared and radio sources - from the Planck Early Release Compact Source Catalogue (ERCSC) at 100 to 857 GHz (3 mm to 350 μm). Three zones (deep, medium and shallow) of approximately homogeneous coverage are used to permit a clean and controlled correction for incompleteness, which was explicitly not done for the ERCSC, as it was aimed at providing lists of sources to be followed up. Our sample, prior to the 80% completeness cut, contains between 217 sources at 100 GHz and 1058 sources at 857 GHz over about 12 800 to 16 550 deg² (31 to 40% of the sky). After the 80% completeness cut, between 122 and 452 sources remain, with flux densities above 0.3 and 1.9 Jy at 100 and 857 GHz, respectively. The sample so defined can be used for statistical analysis. Using the multi-frequency coverage of the Planck High Frequency Instrument, all the sources have been classified as either dust-dominated (infrared galaxies) or synchrotron-dominated (radio galaxies) on the basis of their spectral energy distributions (SED). Our sample is thus complete, flux-limited and color-selected to differentiate between the two populations. We find an approximately equal number of synchrotron and dusty sources between 217 and 353 GHz; at 353 GHz or higher (or 217 GHz and lower) frequencies, the number is dominated by dusty (synchrotron) sources, as expected. For most of the sources, the spectral indices are also derived. We provide for the first time counts of bright sources from 353 to 857 GHz and the contributions from dusty and synchrotron sources at all HFI frequencies in the key spectral range where these spectra are crossing. The observed counts are in the Euclidean regime. The number counts are compared to previously published data (from earlier Planck results, Herschel, BLAST, SCUBA, LABOCA, SPT, and ACT) and models taking into account both radio or infrared galaxies, and covering a

  19. Estimation efficiency of usage satellite derived and modelled biophysical products for yield forecasting

    Science.gov (United States)

    Kolotii, Andrii; Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii; Ostapenko, Vadim; Oliinyk, Tamara

    2015-04-01

    Efficient and timely crop monitoring and yield forecasting are important tasks for ensuring stability and sustainable economic development [1]. As winter crops play a prominent role in the agriculture of Ukraine, the main focus of this study is on winter wheat. In our previous research [2, 3] it was shown that the use of biophysical parameters of crops such as FAPAR (derived from the Geoland-2 portal for SPOT Vegetation data) is far more efficient for crop yield forecasting than NDVI derived from MODIS data, for the available data. In our current work, the efficiency of using such biophysical parameters as LAI, FAPAR and FCOVER (derived from SPOT Vegetation and PROBA-V data at a resolution of 1 km and simulated within the WOFOST model) and the NDVI product (derived from MODIS) for winter wheat monitoring and yield forecasting is estimated. As part of the crop monitoring workflow (vegetation anomaly detection, vegetation index and product analysis) and yield forecasting, the SPIRITS tool developed by JRC is used. Statistics extraction is done for land cover maps created at SRI within the FP-7 SIGMA project. The efficiency of using satellite-based and WOFOST-modelled biophysical products is estimated. [1] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "Sensor Web approach to Flood Monitoring and Risk Assessment", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 815-818. [2] F. Kogan, N. Kussul, T. Adamenko, S. Skakun, O. Kravchenko, O. Kryvobok, A. Shelestov, A. Kolotii, O. Kussul, and A. Lavrenyuk, "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 192-203, 2013. [3] Kussul O., Kussul N., Skakun S., Kravchenko O., Shelestov A., Kolotii A., "Assessment of relative efficiency of using MODIS data to winter wheat yield forecasting in Ukraine", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 3235-3238.

  20. Discovery of Antibiotics-derived Polymers for Gene Delivery using Combinatorial Synthesis and Cheminformatics Modeling

    Science.gov (United States)

    Potta, Thrimoorthy; Zhen, Zhuo; Grandhi, Taraka Sai Pavan; Christensen, Matthew D.; Ramos, James; Breneman, Curt M.; Rege, Kaushal

    2014-01-01

    We describe the combinatorial synthesis and cheminformatics modeling of aminoglycoside antibiotics-derived polymers for transgene delivery and expression. Fifty-six polymers were synthesized by polymerizing aminoglycosides with diglycidyl ether cross-linkers. Parallel screening resulted in identification of several lead polymers that resulted in high transgene expression levels in cells. The role of polymer physicochemical properties in determining efficacy of transgene expression was investigated using Quantitative Structure-Activity Relationship (QSAR) cheminformatics models based on Support Vector Regression (SVR) and ‘building block’ polymer structures. The QSAR model exhibited high predictive ability, and investigation of descriptors in the model, using molecular visualization and correlation plots, indicated that physicochemical attributes related to both aminoglycosides and diglycidyl ethers facilitated transgene expression. This work synergistically combines combinatorial synthesis and parallel screening with cheminformatics-based QSAR models for discovery and physicochemical elucidation of effective antibiotics-derived polymers for transgene delivery in medicine and biotechnology. PMID:24331709

  1. Open source integrated modeling environment Delta Shell

    Science.gov (United States)

    Donchyts, G.; Baart, F.; Jagers, B.; van Putten, H.

    2012-04-01

    In the last decade, integrated modelling has become a very popular topic in environmental modelling, since it helps solve problems that are difficult to address with a single model. However, managing the complexity of integrated models and minimizing the time required for their setup remain challenging tasks. The integrated modelling environment Delta Shell simplifies this task. The software components of Delta Shell are easy to reuse separately from each other as well as part of an integrated environment that can run in a command-line or a graphical user interface mode. Most components of Delta Shell are developed using the C# programming language and include libraries used to define, save and visualize various scientific data structures as well as coupled model configurations. Here we present two examples showing how Delta Shell simplifies the process of setting up integrated models, from both the end-user and developer perspectives. The first example shows the coupling of a rainfall-runoff model, a river flow model and a run-time control model. The second example shows how a coastal morphological database integrates with the coastal morphological model (XBeach) and a custom nourishment designer. Delta Shell is also available as open-source software released under the LGPL license and accessible via http://oss.deltares.nl.

  2. Generation of functional islets from human umbilical cord and placenta derived mesenchymal stem cells.

    Science.gov (United States)

    Kadam, Sachin; Govindasamy, Vijayendran; Bhonde, Ramesh

    2012-01-01

    Bone marrow-derived mesenchymal stem cells (BM-MSCs) have been used for allogeneic application in tissue engineering but have certain drawbacks. Therefore, mesenchymal stem cells (MSCs) derived from other adult tissue sources have been considered as an alternative. The human umbilical cord and placenta are easily available, noncontroversial sources of human tissue, which are often discarded as biological waste, and their collection is noninvasive. These sources of MSCs are not subject to the ethical constraints that apply to embryonic stem cells. MSCs derived from umbilical cord and placenta are multipotent and have the ability to differentiate into various cell types, crossing the lineage boundary towards the endodermal lineage. The aim of this chapter is to provide a detailed, reproducible cookbook protocol for the isolation, propagation, characterization, and differentiation of MSCs derived from human umbilical cord and placenta, with special reference to harnessing their potential towards the pancreatic/islet lineage for utilization as a cell therapy product. We show here that mesenchymal stromal cells can be extensively expanded from umbilical cord and placenta of human origin, retaining their multilineage differentiation potential in vitro. Our report indicates that postnatal tissues obtained as delivery waste represent a rich source of mesenchymal stromal cells, which can be differentiated into functional islets employing a three-stage protocol developed by our group. These islets could be used as a novel in vitro model for screening hypoglycemics/insulin secretagogues, thus reducing animal experimentation for this purpose, and for future human islet transplantation programs to treat diabetes.

  3. Race of source effects in the elaboration likelihood model.

    Science.gov (United States)

    White, P H; Harkins, S G

    1994-11-01

    In a series of experiments, we investigated the effect of race of source on persuasive communications in the Elaboration Likelihood Model (R.E. Petty & J.T. Cacioppo, 1981, 1986). In Experiment 1, we found no evidence that White participants responded to a Black source as a simple negative cue. Experiment 2 suggested the possibility that exposure to a Black source led to low-involvement message processing. In Experiments 3 and 4, a distraction paradigm was used to test this possibility, and it was found that participants under low involvement were highly motivated to process a message presented by a Black source. In Experiment 5, we found that attitudes toward the source's ethnic group, rather than violations of expectancies, accounted for this processing effect. Taken together, the results of these experiments are consistent with S.L. Gaertner and J.F. Dovidio's (1986) theory of aversive racism, which suggests that Whites, because of a combination of egalitarian values and underlying negative racial attitudes, are very concerned about not appearing unfavorable toward Blacks, leading them to be highly motivated to process messages presented by a source from this group.

  4. A Derivation of Source-based Kinetics Equation with Time Dependent Fission Kernel for Reactor Transient Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Woo, Myeong Hyun; Shin, Chang Ho [Hanyang University, Seoul (Korea, Republic of); Pyeon, Cheol Ho [Kyoto University, Osaka (Japan)

    2015-10-15

    In this study, a new source-based balance equation is proposed to overcome the problems generated by the previous methods, and a simple problem is then analyzed with the proposed method. A source-based balance equation with a time-dependent fission kernel was derived to simplify the kinetics equation. To analyze partial variations of reactor characteristics, two representative methods were introduced in previous studies: (1) the quasi-statics method and (2) the multipoint technique. The main idea of the quasi-statics method is to use a low-order approximation for large integration times. To realize the quasi-statics method, the time-dependent flux is first separated into shape and amplitude functions, and the shape function is calculated. It is noted that the method has good accuracy; however, it can be computationally expensive because the shape function must be fully recalculated to obtain accurate results. To improve the calculation efficiency, the multipoint method was proposed. The multipoint method is based on the classic kinetics equation, using Green's function to analyze the flight probability from region r' to r. These previous methods have been used for reactor kinetics analysis; however, they can have some limitations. First, three group variables (r_g, E_g, t_g) must be considered to solve the time-dependent balance equation, which strongly limits the application to large-system problems with good accuracy. Second, energy-group neutrons must be used to analyze reactor kinetics problems. In a time-dependent problem, the neutron energy distribution can change at different times; this can change the group cross sections and therefore lead to accuracy problems. Third, the neutrons in a space-time region continually affect other space-time regions; however, this is not properly considered in the previous methods. Using birth history of the

  5. Influence of satellite-derived photolysis rates and NOx emissions on Texas ozone modeling

    Science.gov (United States)

    Tang, W.; Cohan, D. S.; Pour-Biazar, A.; Lamsal, L. N.; White, A. T.; Xiao, X.; Zhou, W.; Henderson, B. H.; Lash, B. F.

    2015-02-01

    Uncertain photolysis rates and emission inventories impair the accuracy of state-level ozone (O3) regulatory modeling. Past studies have separately used satellite-observed clouds to correct the model-predicted photolysis rates, or satellite-constrained top-down NOx emissions to identify and reduce uncertainties in bottom-up NOx emissions. However, the joint application of multiple satellite-derived model inputs to improve O3 state implementation plan (SIP) modeling has rarely been explored. In this study, Geostationary Operational Environmental Satellite (GOES) observations of clouds are applied to derive the photolysis rates, replacing those used in Texas SIP modeling. This changes modeled O3 concentrations by up to 80 ppb and improves O3 simulations by reducing modeled normalized mean bias (NMB) and normalized mean error (NME) by up to 0.1. A sector-based discrete Kalman filter (DKF) inversion approach is incorporated with the Comprehensive Air Quality Model with extensions (CAMx)-decoupled direct method (DDM) model to adjust Texas NOx emissions using a high-resolution Ozone Monitoring Instrument (OMI) NO2 product. The discrepancy between OMI and CAMx NO2 vertical column densities (VCDs) is further reduced by increasing modeled NOx lifetime and adding an artificial amount of NO2 in the upper troposphere. The region-based DKF inversion suggests increasing NOx emissions by 10-50% in most regions, deteriorating the model performance in predicting ground NO2 and O3, while the sector-based DKF inversion tends to scale down area and nonroad NOx emissions by 50%, leading to a 2-5 ppb decrease in ground 8 h O3 predictions. Model performance in simulating ground NO2 and O3 is improved using sector-based inversion-constrained NOx emissions, with 0.25 and 0.04 reductions in NMBs and 0.13 and 0.04 reductions in NMEs, respectively. Using both GOES-derived photolysis rates and OMI-constrained NOx emissions together reduces modeled NMB and NME by 0.05, increases the model

  6. Spatial and frequency domain ring source models for the single muscle fiber action potential

    DEFF Research Database (Denmark)

    Henneberg, Kaj-åge; R., Plonsey

    1994-01-01

    In the paper, single-fibre models for the extracellular action potential are developed that allow the potential to be evaluated at an arbitrary field point in the extracellular space. Fourier-domain models are restricted in that they evaluate potentials at equidistant points along a line...... parallel to the fibre axis. Consequently, they cannot easily evaluate the potential at the boundary nodes of a boundary-element electrode model. The Fourier-domain models employ axial-symmetric ring source models, and thereby provide higher accuracy than the line source model, where the source is lumped...... including anisotropy show that the spatial models require extreme care in the integration procedure owing to the singularity in the weighting functions. With adequate sampling, the spatial models can evaluate extracellular potentials with high accuracy....

  7. Source apportionment and heavy metal health risk (HMHR) quantification from sources in a southern city in China, using an ME2-HMHR model.

    Science.gov (United States)

    Peng, Xing; Shi, GuoLiang; Liu, GuiRong; Xu, Jiao; Tian, YingZe; Zhang, YuFen; Feng, YinChang; Russell, Armistead G

    2017-02-01

    Heavy metals (Cr, Co, Ni, As, Cd, and Pb) can be bound to PM, adversely affecting human health. Quantifying source impacts on heavy metals can provide source-specific estimates of the heavy metal health risk (HMHR) to guide the effective development of strategies to reduce such risks from exposure to heavy metals in PM 2.5 (particulate matter (PM) with aerodynamic diameter less than or equal to 2.5 μm). In this study, a method combining Multilinear Engine 2 (ME2) and a risk assessment model is developed to more effectively quantify source contributions to HMHR, including heavy metal non-cancer risk (non-HMCR) and cancer risk (HMCR). The combined model (called ME2-HMHR) has two steps: in step 1, source contributions to heavy metals are estimated with the ME2 model; in step 2, the source contributions from step 1 are introduced into the risk assessment model to calculate the source contributions to HMHR. The approach was applied to Huzou, China, and five significant sources were identified. Soil dust is the largest source of non-HMCR. For HMCR, the source contributions of soil dust, coal combustion, cement dust, vehicles, and secondary sources are 1.0 × 10 -4 , 3.7 × 10 -5 , 2.7 × 10 -6 , 1.6 × 10 -6 and 1.9 × 10 -9 , respectively. Soil dust is thus the largest contributor to HMCR, driven by its high impact on PM 2.5 and the abundance of heavy metals in soil dust. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. TOXICOLOGICAL EVALUATION OF REALISTIC EMISSIONS OF SOURCE AEROSOLS (TERESA): APPLICATION TO POWER PLANT-DERIVED PM2.5

    Energy Technology Data Exchange (ETDEWEB)

    Annette Rohr

    2006-03-01

    TERESA (Toxicological Evaluation of Realistic Emissions of Source Aerosols) involves exposing laboratory rats to realistic coal-fired power plant and mobile source emissions to help determine the relative toxicity of these PM sources. There are three coal-fired power plants in the TERESA program; this report describes the results of fieldwork conducted at the first plant, located in the Upper Midwest. The project was technically challenging by virtue of its novel design and requirement for the development of new techniques. By examining aged, atmospherically transformed aerosol derived from power plant stack emissions, we were able to evaluate the toxicity of PM derived from coal combustion in a manner that more accurately reflects the exposure of concern than existing methodologies. TERESA also involves assessment of actual plant emissions in a field setting--an important strength since it reduces the question of representativeness of emissions. A sampling system was developed and assembled to draw emissions from the stack; stack sampling conducted according to standard EPA protocol suggested that the sampled emissions are representative of those exiting the stack into the atmosphere. Two mobile laboratories were then outfitted for the study: (1) a chemical laboratory in which the atmospheric aging was conducted and which housed the bulk of the analytical equipment; and (2) a toxicological laboratory, which contained animal caging and the exposure apparatus. Animal exposures were carried out from May-November 2004 to a number of simulated atmospheric scenarios. Toxicological endpoints included (1) pulmonary function and breathing pattern; (2) bronchoalveolar lavage fluid cytological and biochemical analyses; (3) blood cytological analyses; (4) in vivo oxidative stress in heart and lung tissue; and (5) heart and lung histopathology. Results indicated no differences between exposed and control animals in any of the endpoints examined. Exposure concentrations for the

  9. Neural assembly models derived through nano-scale measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Hongyou; Branda, Catherine; Schiek, Richard Louis; Warrender, Christina E.; Forsythe, James Chris

    2009-09-01

    This report summarizes accomplishments of a three-year project focused on developing technical capabilities for measuring and modeling neuronal processes at the nanoscale. It was successfully demonstrated that biocompatible nanoprobes could be engineered and biofunctionalized to respond within the range of voltages typically associated with a neuronal action potential. Furthermore, the Xyce parallel circuit simulator was employed, and models were incorporated for simulating the ion channel and cable properties of neuronal membranes. The ultimate objective of the project had been to employ nanoprobes in vivo, with the nematode C. elegans, and derive a simulation based on the resulting data. Techniques were developed allowing the nanoprobes to be injected into the nematode and the neuronal response recorded. To the authors' knowledge, this is the first occasion in which nanoparticles have been successfully employed as probes for recording neuronal response in an in vivo animal experimental protocol.

  10. Source terms derived from analyses of hypothetical accidents, 1950-1986

    International Nuclear Information System (INIS)

    Stratton, W.R.

    1987-01-01

    This paper reviews the history of reactor accident source term assumptions. After the Three Mile Island accident, a number of theoretical and experimental studies re-examined possible accident sequences and source terms. Some of these results are summarized in this paper.

  11. Applying inversion techniques to derive source currents and geoelectric fields for geomagnetically induced current calculations

    Directory of Open Access Journals (Sweden)

    J. S. de Villiers

    2014-10-01

    Full Text Available This research focuses on the inversion of geomagnetic variation field measurements to obtain source currents in the ionosphere. During a geomagnetic disturbance, ionospheric currents create magnetic field variations that induce geoelectric fields, which drive geomagnetically induced currents (GIC) in power systems. These GIC may disturb the operation of power systems and cause damage to grounded power transformers. The geoelectric fields at any location of interest can be determined from the source currents in the ionosphere through a solution of the forward problem. Line currents running east–west along a given surface position are postulated to exist at a certain height above the Earth's surface. This physical arrangement results in the fields on the ground having magnetic north and down components, and an electric east component. Ionospheric currents are modelled by inverting Fourier integrals (over the wavenumber) of elementary geomagnetic fields using the Levenberg–Marquardt technique. The output parameters of the inversion model are the current strength, height and surface position of the ionospheric current system. A ground conductivity structure with five layers from Quebec, Canada, based on the Layered-Earth model, is used to obtain the complex skin depth at a given angular frequency. This paper presents preliminary inversion results based on these structures and simulated geomagnetic fields. The results show some interesting features in the frequency domain. Model parameters obtained through inversion are within 2% of simulated values. This technique has applications for modelling the currents of electrojets at the equator and auroral regions, as well as currents in the magnetosphere.
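    As a hedged illustration of the inversion idea (the geometry, noise level, and parameter values below are assumed, not the paper's): the field of an infinite east-west line current has closed-form north and down components on the ground, so current strength, height, and surface position can be recovered with a Levenberg-Marquardt fit:

```python
# Sketch only: fit strength I, height h and surface position x0 of an
# east-west ionospheric line current to synthetic ground magnetic data,
# using the Biot-Savart field of an infinite straight line current.
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi

def ground_field(x, I, h, x0):
    """North (Bx) and down (Bz) components at ground positions x (m)."""
    dx = x - x0
    r2 = dx**2 + h**2
    bx = MU0 * I / (2 * np.pi) * h / r2
    bz = MU0 * I / (2 * np.pi) * dx / r2
    return np.concatenate([bx, bz])

x = np.linspace(-500e3, 500e3, 101)          # assumed ground stations
true = (1e6, 110e3, 30e3)                    # 1 MA at 110 km height (assumed)
data = ground_field(x, *true) * (1 + 0.01 * np.random.randn(2 * x.size))

def residual(p):
    return ground_field(x, *p) - data

fit = least_squares(residual, x0=(5e5, 100e3, 0.0), method='lm')
print(fit.x)   # recovered (I, h, x0), typically within a few percent
```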

  12. Markov source model for printed music decoding

    Science.gov (United States)

    Kopec, Gary E.; Chou, Philip A.; Maltz, David A.

    1995-03-01

    This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.

  13. Multi-factor energy price models and exotic derivatives pricing

    Science.gov (United States)

    Hikspoors, Samuel

    The high pace at which many of the world's energy markets have gradually been opened to competition has generated a significant amount of new financial activity. Academics and practitioners alike have recently started to develop the tools of energy derivatives pricing/hedging as a quantitative topic of its own. The energy contract structures as well as their underlying asset properties set the energy risk management industry apart from its more standard equity and fixed income counterparts. This thesis contributes to these broad market developments by participating in the advances of the mathematical tools aiming at a better theory of energy contingent claim pricing/hedging. We propose many realistic two-factor and three-factor models for spot and forward price processes that generalize some well known and standard modeling assumptions. We develop the associated pricing methodologies and propose stable calibration algorithms that motivate the application of the relevant modeling schemes.

  14. Full–waveform inversion using the excitation representation of the source wavefield

    KAUST Repository

    Kalita, Mahesh

    2016-09-06

    Full waveform inversion (FWI) is an iterative method of data-fitting, aiming at high resolution recovery of the unknown model parameters. However, it is a cumbersome process, requiring a long computational time and large memory space/disc storage. One of the reasons for this computational limitation is the gradient calculation step. Based on the adjoint state method, it involves the temporal cross-correlation of the forward propagated source wavefield with the backward propagated residuals, in which we usually need to store the source wavefield, or include an extra extrapolation step to propagate the source wavefield from its storage at the boundary. We propose, alternatively, an amplitude excitation gradient calculation based on the excitation imaging condition concept that represents the source wavefield history by a single, specifically the most energetic arrival. An excitation based Born modeling allows us to derive the adjoint operation. In this case, the source wavelet is injected by a cross-correlation step applied to the data residual directly. Representing the source wavefield through the excitation amplitude and time, we reduce the large requirements for both storage and the computational time. We demonstrate the application of this approach on a 2-layer model with an anomaly and the Marmousi II model.
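    A conceptual sketch of the storage saving described above (illustrative arrays, not the authors' code): the forward source wavefield is reduced to one excitation amplitude and time per grid point, and the gradient contribution is then formed from a single sample instead of a full zero-lag cross-correlation over all time steps:

```python
# Conceptual numpy sketch of the excitation representation. The arrays are
# random stand-ins; a real FWI code would fill them from wave propagation.
import numpy as np

nx, nt = 200, 1000
src = np.random.randn(nx, nt)   # stand-in for the forward source wavefield
adj = np.random.randn(nx, nt)   # stand-in for the back-propagated residuals

t_exc = np.abs(src).argmax(axis=1)        # excitation time per grid point
a_exc = src[np.arange(nx), t_exc]         # excitation amplitude per grid point

# Conventional gradient: zero-lag cross-correlation over all time steps,
# which requires storing (or re-propagating) the full source wavefield.
grad_full = (src * adj).sum(axis=1)

# Excitation gradient: one sample per grid point -- O(nx) storage, not O(nx*nt).
grad_exc = a_exc * adj[np.arange(nx), t_exc]
```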

  15. Full–waveform inversion using the excitation representation of the source wavefield

    KAUST Repository

    Kalita, Mahesh; Alkhalifah, Tariq Ali

    2016-01-01

    Full waveform inversion (FWI) is an iterative method of data-fitting, aiming at high resolution recovery of the unknown model parameters. However, it is a cumbersome process, requiring a long computational time and large memory space/disc storage. One of the reasons for this computational limitation is the gradient calculation step. Based on the adjoint state method, it involves the temporal cross-correlation of the forward propagated source wavefield with the backward propagated residuals, in which we usually need to store the source wavefield, or include an extra extrapolation step to propagate the source wavefield from its storage at the boundary. We propose, alternatively, an amplitude excitation gradient calculation based on the excitation imaging condition concept that represents the source wavefield history by a single, specifically the most energetic arrival. An excitation based Born modeling allows us to derive the adjoint operation. In this case, the source wavelet is injected by a cross-correlation step applied to the data residual directly. Representing the source wavefield through the excitation amplitude and time, we reduce the large requirements for both storage and the computational time. We demonstrate the application of this approach on a 2-layer model with an anomaly and the Marmousi II model.

  16. Energy consumption modeling of air source electric heat pump water heaters

    International Nuclear Information System (INIS)

    Bourke, Grant; Bansal, Pradeep

    2010-01-01

    Electric heat pump air source water heaters may provide an opportunity for significant improvements in residential water heater energy efficiency in countries with temperate climates. As the performance of these appliances can vary widely, it is important for consumers to be able to accurately assess product performance in their application to maximise energy savings and ensure uptake of this technology. For a given ambient temperature and humidity, the performance of an air source heat pump water heater is strongly correlated to the water temperature in or surrounding the condenser. It is therefore important that energy consumption models for these products duplicate the real-world water temperatures applied to the heat pump condenser. This paper examines a recently published joint Australian and New Zealand Standard, AS/NZS 4234: 2008, Heated water systems - Calculation of energy consumption. Using this standard, a series of TRNSYS models was run for several split-type air source electric heat pump water heaters. An equivalent set of models was then run utilizing an alternative water use pattern. Unfavorable errors of up to 12% were shown to occur in modeling of heat pump water heater performance using the current standard compared to the alternative regime. The difference in performance of a model under varying water use regimes can be greater than the performance difference between different products.

  17. Monte Carlo investigation of the dosimetric properties of the new 103Pd BrachySeedTMPd-103 Model Pd-1 source

    International Nuclear Information System (INIS)

    Chan, Gordon H.; Prestwich, William V.

    2002-01-01

    Recently, 103 Pd brachytherapy sources have been increasingly used for interstitial implants as an alternative to 125 I sources. The BrachySeed TM Pd-103 Model Pd-1 seed is one of the latest in a series of new brachytherapy sources that have become available commercially. The dosimetric properties of the seed were investigated by Monte Carlo simulation, which was performed using the Integrated Tiger Series CYLTRAN code. Following the AAPM Task Group 43 formalism, the dose rate constant, radial dose function, and anisotropy parameters were determined. The dose rate constant, Λ, was calculated to be 0.613±3% cGy h -1 U -1 ; the air-kerma strength entering this calculation was derived from Monte Carlo simulation using the point extrapolation method. The radial dose function, g(r), was computed at distances from 0.15 to 10 cm. The anisotropy function, F(r,θ), and anisotropy factor, φ an (r), were calculated at distances from 0.5 to 7 cm. The anisotropy constant, φ̄ an , was determined to be 0.978, which is closer to unity than for most other 103 Pd seeds, indicating a high degree of uniformity in the dose distribution. The dose rate constant and the radial dose function were also investigated by analytical modeling, which served as an independent evaluation of the Monte Carlo data, and found to be in good agreement with the Monte Carlo results.
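    For orientation, the TG-43 quantities reported above combine into a dose rate via the point-source (1D) formalism. A minimal sketch using the record's Λ and anisotropy constant, but a placeholder air-kerma strength and radial dose function (the paper's tabulated g(r) values are not reproduced here):

```python
# Illustrative sketch of the 1D (point-source) TG-43 dose rate:
#   D(r) = S_K * Lambda * (r0/r)**2 * g(r) * phi_an
import numpy as np

S_K = 3.0                 # air-kerma strength, U (assumed)
LAMBDA = 0.613            # dose rate constant from the record, cGy h^-1 U^-1
PHI_AN = 0.978            # anisotropy constant from the record
R0 = 1.0                  # TG-43 reference distance, cm

r_tab = np.array([0.5, 1.0, 2.0, 3.0, 5.0])       # cm (placeholder grid)
g_tab = np.array([1.30, 1.00, 0.55, 0.30, 0.09])  # placeholder g(r) values

def dose_rate(r):
    """Dose rate in cGy/h at radial distance r (cm), point-source geometry."""
    g = np.interp(r, r_tab, g_tab)
    return S_K * LAMBDA * (R0 / r)**2 * g * PHI_AN

print(f"{dose_rate(2.0):.3f} cGy/h at 2 cm")
```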

  18. Dynamic modeling of the advanced neutron source reactor

    International Nuclear Information System (INIS)

    March-Leuba, J.; Ibn-Khayat, M.

    1990-01-01

    The purpose of this paper is to provide a summary description and some applications of a computer model that has been developed to simulate the dynamic behavior of the advanced neutron source (ANS) reactor. The ANS dynamic model is coded in the advanced continuous simulation language (ACSL), and it represents the reactor core, vessel, primary cooling system, and secondary cooling systems. The use of a simple dynamic model in the early stages of the reactor design has proven very valuable, not only in the development of the control and plant protection system but also in the design of components such as pumps and heat exchangers that are usually sized based on steady-state calculations.

  19. How to Model Super-Soft X-ray Sources?

    Science.gov (United States)

    Rauch, Thomas

    2012-07-01

    During outbursts, the surface temperatures of white dwarfs in cataclysmic variables far exceed half a million Kelvin. In this phase, they may become the brightest super-soft sources (SSS) in the sky. Time series of high-resolution, high-S/N X-ray spectra taken during the rise, maximum, and decline of their X-ray luminosity provide insights into the processes following such outbursts as well as into the surface composition of the white dwarf. Their analysis requires adequate NLTE model atmospheres. The Tuebingen Non-LTE Model-Atmosphere Package (TMAP) is a powerful tool for their calculation. We present the application of TMAP models to SSS spectra and discuss their validity.

  20. Modeling the ascent of sounding balloons: derivation of the vertical air motion

    Directory of Open Access Journals (Sweden)

    A. Gallice

    2011-10-01

    Full Text Available A new model to describe the ascent of sounding balloons in the troposphere and lower stratosphere (up to ∼30–35 km altitude) is presented. Contrary to previous models, detailed account is taken of both the variation of the drag coefficient with altitude and the heat imbalance between the balloon and the atmosphere. To compensate for the lack of data on the drag coefficient of sounding balloons, a reference curve for the relationship between drag coefficient and Reynolds number is derived from a dataset of flights launched during the Lindenberg Upper Air Methods Intercomparisons (LUAMI) campaign. The transfer of heat from the surrounding air into the balloon is accounted for by solving the radial heat diffusion equation inside the balloon. In its present state, the model does not account for solar radiation, i.e. it is only able to describe the ascent of balloons during the night. It could, however, be adapted to also represent daytime soundings, with solar radiation modeled as a diffusive process. The potential applications of the model include the forecast of the trajectory of sounding balloons, which can be used to increase the accuracy of the match technique, and the derivation of the air vertical velocity. The latter is obtained by subtracting the ascent rate of the balloon in still air calculated by the model from the actual ascent rate. This technique is shown to provide an approximation for the vertical air motion with an uncertainty of 0.5 m s−1 in the troposphere and 0.2 m s−1 in the stratosphere. An example of extraction of the air vertical velocity is provided in this paper. We show that the air vertical velocities derived from the balloon soundings in this paper are in general agreement with small-scale atmospheric velocity fluctuations related to gravity waves, mechanical turbulence, or other small-scale air motions measured during the SUCCESS campaign (Subsonic Aircraft: Contrail and Cloud Effects
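    The extraction of the vertical air motion rests on a buoyancy-weight-drag balance for the still-air ascent rate. A minimal sketch under assumed balloon parameters (the actual model additionally varies the drag coefficient with altitude and solves the radial heat diffusion equation):

```python
# Minimal sketch: terminal ascent rate in still air from equating buoyancy
# minus weight to aerodynamic drag; the vertical air motion is then the
# measured ascent rate minus this model value. All values are assumed.
import numpy as np

g = 9.81  # m/s^2

def still_air_ascent_rate(rho_air, volume, mass_total, c_d):
    """Terminal ascent rate (m/s) for a spherical balloon."""
    radius = (3 * volume / (4 * np.pi))**(1 / 3)
    area = np.pi * radius**2                     # frontal area
    net_lift = (rho_air * volume - mass_total) * g
    return np.sqrt(2 * net_lift / (rho_air * c_d * area))

v_model = still_air_ascent_rate(rho_air=1.0, volume=10.0,
                                mass_total=8.5, c_d=0.45)
v_measured = 5.6            # e.g. from GPS altitude differences (assumed)
print(f"vertical air motion ~ {v_measured - v_model:+.2f} m/s")
```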

  1. Determination of noise sources and space-dependent reactor transfer functions from measured output signals only

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J.E.; van Dam, H.; Kleiss, E.B.J.; van Uitert, G.C.; Veldhuis, D.

    1982-01-01

    The measured cross power spectral densities of the signals from three neutron detectors and the displacement of the control rod of the 2 MW research reactor HOR at Delft have been used to determine the space-dependent reactor transfer function, the transfer function of the automatic reactor control system and the noise sources influencing the measured signals. From a block diagram of the reactor with its control system and noise sources, expressions were derived for the measured cross power spectral densities; these were adjusted to satisfy the requirements following from the adopted model. The required transfer functions and noise sources could then be derived for each frequency point. The results are in agreement with those of autoregressive modelling of the reactor control feedback loop. A method has been developed to determine the non-linear characteristics of the automatic reactor control system by analysing the non-gaussian probability density function of the power fluctuations.
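    As a small illustration of the starting quantity in such an analysis (synthetic signals, not the HOR data), cross power spectral densities between detector signals can be estimated with Welch's method:

```python
# Sketch only: CPSD between two synthetic "detector" signals that share a
# common low-frequency fluctuation plus independent noise.
import numpy as np
from scipy.signal import csd

fs = 100.0                                    # sampling rate, Hz (assumed)
t = np.arange(0, 600, 1 / fs)
common = np.sin(2 * np.pi * 0.5 * t)          # shared "reactor" fluctuation
det1 = common + 0.5 * np.random.randn(t.size)
det2 = 0.8 * common + 0.5 * np.random.randn(t.size)

f, Pxy = csd(det1, det2, fs=fs, nperseg=4096)  # Welch cross-spectral estimate
peak = f[np.argmax(np.abs(Pxy))]
print(f"dominant shared frequency ~ {peak:.2f} Hz")
```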

  2. Determination of noise sources and space-dependent reactor transfer functions from measured output signals only

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    1982-01-01

    The measured cross power spectral densities of the signals from three neutron detectors and the displacement of the control rod of the 2 MW research reactor HOR at Delft have been used to determine the space-dependent reactor transfer function, the transfer function of the automatic reactor control system and the noise sources influencing the measured signals. From a block diagram of the reactor with its control system and noise sources, expressions were derived for the measured cross power spectral densities; these were adjusted to satisfy the requirements following from the adopted model. The required transfer functions and noise sources could then be derived for each frequency point. The results are in agreement with those of autoregressive modelling of the reactor control feedback loop. A method has been developed to determine the non-linear characteristics of the automatic reactor control system by analysing the non-gaussian probability density function of the power fluctuations. (author)

  3. Integrating Global Satellite-Derived Data Products as a Pre-Analysis for Hydrological Modelling Studies: A Case Study for the Red River Basin

    Directory of Open Access Journals (Sweden)

    Gijs Simons

    2016-03-01

    Full Text Available With changes in weather patterns and intensifying anthropogenic water use, there is an increasing need for spatio-temporal information on water fluxes and stocks in river basins. The assortment of satellite-derived open-access information sources on rainfall (P) and land use/land cover (LULC) is currently being expanded with the application of actual evapotranspiration (ETact) algorithms on the global scale. We demonstrate how global remotely sensed P and ETact datasets can be merged to examine hydrological processes such as storage changes and streamflow prior to applying a numerical simulation model. The study area is the Red River Basin in China and Vietnam, a generally challenging basin for remotely sensed information due to frequent cloud cover. Over this region, several satellite-based P and ETact products are compared, and performance is evaluated using rain gauge records and longer-term averaged streamflow. A method is presented for fusing multiple satellite-derived ETact estimates to generate an ensemble product that may be less susceptible, on a global basis, to errors in individual modeling approaches. Subsequently, monthly satellite-derived rainfall and ETact are combined to assess the water balance for individual subcatchments and types of land use, defined using a global land use classification improved based on auxiliary satellite data. It was found that a combination of TRMM rainfall and the ensemble ETact product is consistent with streamflow records in both space and time. It is concluded that monthly storage changes, multi-annual streamflow and water yield per LULC type in the Red River Basin can be successfully assessed based on currently available global satellite-derived products.
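    The subcatchment pre-analysis amounts to a simple monthly water balance. A sketch with synthetic numbers (the study instead uses gridded satellite P, an ensemble ETact product, and gauged streamflow):

```python
# Minimal sketch: monthly catchment storage change from dS = P - ET - Q,
# all series expressed as catchment-average depths in mm/month (synthetic).
import numpy as np

p_mm  = np.array([180, 220, 150, 90, 40, 20])   # TRMM-like rainfall
et_mm = np.array([90, 100, 110, 95, 70, 50])    # ensemble ETact
q_mm  = np.array([60, 95, 70, 30, 15, 8])       # gauged streamflow

ds_mm = p_mm - et_mm - q_mm                     # storage change per month
print("monthly dS (mm):", ds_mm)
print("cumulative dS (mm):", np.cumsum(ds_mm))
```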

  4. A theoretical model of a liquid metal ion source

    International Nuclear Information System (INIS)

    Kingham, D.R.; Swanson, L.W.

    1984-01-01

    A model of liquid metal ion source (LMIS) operation has been developed which gives a consistent picture of three different aspects of LMI sources: (i) the shape and size of the ion emitting region; (ii) the mechanism of ion formation; (iii) properties of the ion beam such as angular intensity and energy spread. It was found that the emitting region takes the shape of a jet-like protrusion on the end of a Taylor cone, with ion emission from an area only a few tens of Å across, in agreement with recent TEM pictures by Sudraud. This is consistent with ion formation predominantly by field evaporation. Calculated angular intensities and current-voltage characteristics based on our fluid dynamic jet-like protrusion model agree well with experiment. The formation of doubly charged ions is attributed to post-ionization of field-evaporated singly charged ions, and an apex field strength of about 2.0 V Å -1 was calculated for a Ga source. The ion energy spread is mainly due to space charge effects; it is known to be reduced for doubly charged ions, in agreement with this post-ionization mechanism. (author)

  5. A review of catalytic hydrodeoxygenation of lignin-derived phenols from biomass pyrolysis.

    Science.gov (United States)

    Bu, Quan; Lei, Hanwu; Zacher, Alan H; Wang, Lu; Ren, Shoujie; Liang, Jing; Wei, Yi; Liu, Yupeng; Tang, Juming; Zhang, Qin; Ruan, Roger

    2012-11-01

    Catalytic hydrodeoxygenation (HDO) of lignin-derived phenols, which are the least reactive chemical compounds in biomass pyrolysis oils, is reviewed. HDO catalysts are discussed, including traditional catalysts such as CoMo/Al(2)O(3) and NiMo/Al(2)O(3) and transition metal (noble metal) catalysts. The mechanism of HDO of lignin-derived phenols is analyzed on the basis of different model compounds. The kinetics of HDO of different lignin-derived model compounds has been investigated; the diversity of bio-oils leads to complex HDO kinetics. Techno-economic analysis indicates that a series of major technical and economic issues still have to be investigated in detail before scaling up the HDO of lignin-derived phenols in existing refinery infrastructure. Examples of future investigation of HDO include the significant challenges of improving catalysts and optimizing operating conditions, further understanding the kinetics of complex bio-oils, and securing a sustainable and cost-effective hydrogen source. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Source-term model for the SYVAC3-NSURE performance assessment code

    International Nuclear Information System (INIS)

    Rowat, J.H.; Rattan, D.S.; Dolinar, G.M.

    1996-11-01

    Radionuclide contaminants in wastes emplaced in disposal facilities will not remain in those facilities indefinitely. Engineered barriers will eventually degrade, allowing radioactivity to escape from the vault. The radionuclide release rate from a low-level radioactive waste (LLRW) disposal facility, the source term, is a key component in the performance assessment of the disposal system. This report describes the source-term model that has been implemented in Ver. 1.03 of the SYVAC3-NSURE (Systems Variability Analysis Code generation 3-Near Surface Repository) code. NSURE is a performance assessment code that evaluates the impact of near-surface disposal of LLRW through the groundwater pathway. The source-term model described here was developed for the Intrusion Resistant Underground Structure (IRUS) disposal facility, which is a vault that is to be located in the unsaturated overburden at AECL's Chalk River Laboratories. The processes included in the vault model are roof and waste package performance, and diffusion, advection and sorption of radionuclides in the vault backfill. The model presented here was developed for the IRUS vault; however, it is applicable to other near-surface disposal facilities. (author). 40 refs., 6 figs

  7. Statistical modeling of road contribution as emission sources to total suspended particles (TSP) under MCF model downtown Medellin - Antioquia - Colombia, 2004

    International Nuclear Information System (INIS)

    Gomez, Miryam; Saldarriaga, Julio; Correa, Mauricio; Posada, Enrique; Castrillon M, Francisco Javier

    2007-01-01

    Sand fields, construction sites, coal boilers, roads, and biological sources, among others, are contributing factors to air contamination in downtown Valle de Aburra. The distribution of road-contribution data to total suspended particles, according to the source-receptor model MCF (source correlation modeling), closely follows a gamma distribution. A chi-square goodness-of-fit test is used for the statistical modeling; this test also allows the parameters of the distribution to be estimated by the maximum-likelihood method, with the expectation-maximization algorithm used as the convergence criterion. The mean of the road-contribution data to total suspended particles according to the source-receptor model MCF is straightforward to obtain and validates the road-contribution factor in the atmospheric pollution of the zone under study
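    A sketch of the statistical procedure described above, on synthetic data (the study applies it to the measured road-contribution data): a maximum-likelihood gamma fit followed by a chi-square goodness-of-fit test on binned counts:

```python
# Sketch only: fit a gamma distribution by MLE, then test the fit with a
# chi-square goodness-of-fit test on histogram bins.
import numpy as np
from scipy import stats

data = stats.gamma.rvs(a=2.5, scale=4.0, size=500)     # stand-in for TSP data

shape, loc, scale = stats.gamma.fit(data, floc=0)      # MLE fit, loc fixed at 0
counts, edges = np.histogram(data, bins=10)
cdf = stats.gamma.cdf(edges, shape, loc=loc, scale=scale)
expected = np.diff(cdf) * data.size
expected *= counts.sum() / expected.sum()              # match totals exactly

chi2, p = stats.chisquare(counts, expected, ddof=2)    # 2 fitted parameters
print(f"shape={shape:.2f} scale={scale:.2f} chi2={chi2:.1f} p={p:.3f}")
```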

  8. Absolute measurement of LDR brachytherapy source emitted power: Instrument design and initial measurements.

    Science.gov (United States)

    Malin, Martha J; Palmer, Benjamin R; DeWerd, Larry A

    2016-02-01

    Energy-based source strength metrics may find use with model-based dose calculation algorithms, but no instruments exist that can measure the energy emitted from low-dose rate (LDR) sources. This work developed a calorimetric technique for measuring the power emitted from encapsulated low-dose rate, photon-emitting brachytherapy sources. This quantity is called emitted power (EP). The measurement methodology, instrument design and performance, and EP measurements made with the calorimeter are presented in this work. A calorimeter operating with a liquid helium thermal sink was developed to measure EP from LDR brachytherapy sources. The calorimeter employed an electrical substitution technique to determine the power emitted from the source. The calorimeter's performance and thermal system were characterized. EP measurements were made using four (125)I sources with air-kerma strengths ranging from 2.3 to 5.6 U and corresponding EPs of 0.39-0.79 μW, respectively. Three Best Medical 2301 sources and one Oncura 6711 source were measured. EP was also computed by converting measured air-kerma strengths to EPs through Monte Carlo-derived conversion factors. The measured EP and derived EPs were compared to determine the accuracy of the calorimeter measurement technique. The calorimeter had a noise floor of 1-3 nW and a repeatability of 30-60 nW. The calorimeter was stable to within 5 nW over a 12 h measurement window. All measured values agreed with derived EPs to within 10%, with three of the four sources agreeing to within 4%. Calorimeter measurements had uncertainties ranging from 2.6% to 4.5% at the k = 1 level. The values of the derived EPs had uncertainties ranging from 2.9% to 3.6% at the k = 1 level. A calorimeter capable of measuring the EP from LDR sources has been developed and validated for (125)I sources with EPs between 0.43 and 0.79 μW.

  9. Greenhouse Gas Source Attribution: Measurements Modeling and Uncertainty Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhen [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States); LaFranchi, Brian W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ivey, Mark D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Schrader, Paul E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Michelsen, Hope A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bambha, Ray P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2014-09-01

    In this project we have developed atmospheric measurement capabilities and a suite of atmospheric modeling and analysis tools that are well suited for verifying emissions of greenhouse gases (GHGs) on an urban-through-regional scale. We have for the first time applied the Community Multiscale Air Quality (CMAQ) model to simulate atmospheric CO2. This will allow for the examination of regional-scale transport and distribution of CO2 along with air pollutants traditionally studied using CMAQ at relatively high spatial and temporal resolution with the goal of leveraging emissions verification efforts for both air quality and climate. We have developed a bias-enhanced Bayesian inference approach that can remedy the well-known problem of transport model errors in atmospheric CO2 inversions. We have tested the approach using data and model outputs from the TransCom3 global CO2 inversion comparison project. We have also performed two prototyping studies on inversion approaches in the generalized convection-diffusion context. One of these studies employed Polynomial Chaos Expansion to accelerate the evaluation of a regional transport model and enable efficient Markov Chain Monte Carlo sampling of the posterior for Bayesian inference. The other approach uses deterministic inversion of a convection-diffusion-reaction system in the presence of uncertainty. These approaches should, in principle, be applicable to realistic atmospheric problems with moderate adaptation. We outline a regional greenhouse gas source inference system that integrates (1) two approaches of atmospheric dispersion simulation and (2) a class of Bayesian inference and uncertainty quantification algorithms. We use two different and complementary approaches to simulate atmospheric dispersion. Specifically, we use a Eulerian chemical transport model CMAQ and a Lagrangian Particle Dispersion Model - FLEXPART-WRF. These two models share the same WRF

  10. A source-controlled data center network model.

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices lead to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. There are four advantages in the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the restriction imposed by the processing ability of a single controller and reduces the computational complexity. 2) Vector switches (VS) deployed in the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of switches; meanwhile, the scalability problem can be solved effectively. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We design the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS.
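    A conceptual sketch of source routing with a vector address (an illustration of the idea, not the paper's wire format): the packet carries the ordered list of output ports, and each switch forwards on the head element without any flow-table lookup:

```python
# Conceptual sketch: a vector address as an ordered list of output ports.
# Each vector switch pops the head element and forwards; no per-flow state.
from dataclasses import dataclass, field

@dataclass
class Packet:
    vector_address: list = field(default_factory=list)  # remaining ports
    payload: str = ""

def vector_switch(name, packet):
    """Forward solely on the vector address; no TCAM/flow-table lookup."""
    port = packet.vector_address.pop(0)
    print(f"{name}: forwarding '{packet.payload}' out of port {port}")

pkt = Packet(vector_address=[3, 1, 7], payload="hello")
for switch in ("edge-A", "core-B", "edge-C"):   # hypothetical path
    vector_switch(switch, pkt)
```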

  11. A source-controlled data center network model

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices lead to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. There are four advantages in the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the restriction imposed by the processing ability of a single controller and reduces the computational complexity. 2) Vector switches (VS) deployed in the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of switches; meanwhile, the scalability problem can be solved effectively. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We design the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS. PMID:28328925

  12. Low-level radioactive waste performance assessments: Source term modeling

    International Nuclear Information System (INIS)

    Icenhour, A.S.; Godbee, H.W.; Miller, L.F.

    1995-01-01

    Low-level radioactive wastes (LLW) generated by government and commercial operations need to be isolated from the environment for at least 300 to 500 yr. Most existing sites for the storage or disposal of LLW employ the shallow-land burial approach. However, the U.S. Department of Energy currently emphasizes the use of engineered systems (e.g., packaging, concrete and metal barriers, and water collection systems). Future commercial LLW disposal sites may include such systems to mitigate radionuclide transport through the biosphere. Performance assessments must be conducted for LLW disposal facilities. These studies include comprehensive evaluations of radionuclide migration from the waste package, through the vadose zone, and within the water table. Atmospheric transport mechanisms are also studied. Figure 1 illustrates the performance assessment process. Estimates of the release of radionuclides from the waste packages (i.e., source terms) are used for subsequent hydrogeologic calculations required by a performance assessment. Computer models are typically used to describe the complex interactions of water with LLW and to determine the transport of radionuclides. Several commonly used computer programs for evaluating source terms include GWSCREEN, BLT (Breach-Leach-Transport), DUST (Disposal Unit Source Term), BARRIER (Ref. 5), as well as SOURCE1 and SOURCE2 (which are used in this study). The SOURCE1 and SOURCE2 codes were prepared by Rogers and Associates Engineering Corporation for the Oak Ridge National Laboratory (ORNL). SOURCE1 is designed for tumulus-type facilities, and SOURCE2 is tailored for silo, well-in-silo, and trench-type disposal facilities. This paper focuses on the source term for ORNL disposal facilities, and it describes improved computational methods for determining radionuclide transport from waste packages.

  13. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

    1997-01-01

    This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modeling and simulation of this behaviour...... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (U.S.A.). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results....

  14. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

    This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modelling and simulation of this behaviour...... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (USA). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results....

  15. Human adipose stem cell and ASC-derived cardiac progenitor cellular therapy improves outcomes in a murine model of myocardial infarction

    Directory of Open Access Journals (Sweden)

    Davy PMC

    2015-10-01

    Full Text Available Philip MC Davy,1 Kevin D Lye,2,3 Juanita Mathews,1 Jesse B Owens,1 Alice Y Chow,1 Livingston Wong,2 Stefan Moisyadi,1 Richard C Allsopp1 1Institute for Biogenesis Research, 2John A. Burns School of Medicine, University of Hawaii at Mānoa, 3Tissue Genesis, Inc., Honolulu, HI, USA Background: Adipose tissue is an abundant and potent source of adult stem cells for transplant therapy. In this study, we present our findings on the potential application of adipose-derived stem cells (ASCs) as well as induced cardiac-like progenitors (iCPs) derived from ASCs for the treatment of myocardial infarction. Methods and results: Human bone marrow (BM)-derived stem cells, ASCs, and iCPs generated from ASCs using three defined cardiac lineage transcription factors were assessed in an immune-compromised mouse myocardial infarction model. Analysis of iCPs prior to transplant confirmed changes in gene and protein expression consistent with a cardiac phenotype. Endpoint analysis was performed 1 month posttransplant. Significantly increased endpoint fractional shortening, as well as reduction in the infarct area at risk, was observed in recipients of iCPs as compared to the other recipient cohorts. Both recipients of iCPs and ASCs presented higher myocardial capillary densities than either recipients of BM-derived stem cells or the control cohort. Furthermore, mice receiving iCPs had a significantly higher cardiac retention of transplanted cells than all other groups. Conclusion: Overall, iCPs generated from ASCs outperform BM-derived stem cells and ASCs in facilitating recovery from induced myocardial infarction in mice. Keywords: adipose stem cells, myocardial infarction, cellular reprogramming, cellular therapy, piggyBac, induced cardiac-like progenitors

  16. Plant model of KIPT neutron source facility simulator

    International Nuclear Information System (INIS)

    Cao, Yan; Wei, Thomas Y.; Grelle, Austin L.; Gohar, Yousry

    2016-01-01

    Argonne National Laboratory (ANL) of the United States and Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons, and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat outside the facility buildings to the atmosphere. In addition, the electron accelerator has a low efficiency for generating the electron beam and uses another secondary cooling loop to remove the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this function. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during steady-state and transient states using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model is written in the Python scripting language, consistent with the computer language of the plant control system; it is easy to integrate with the simulator without an additional interface, and it is able to simulate the transients of the cooling systems with system control variables changing in real time.
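    As a hedged illustration of the kind of lumped plant model such a simulator can contain (all parameters below are assumed, not KIPT design data), the target-zone coolant temperature can be modeled as a single thermal mass heated by the beam power and cooled through a heat exchanger:

```python
# Sketch only: lumped target-zone coolant temperature,
#   m*cp * dT/dt = P - UA * (T - T_sec)
from scipy.integrate import solve_ivp

P = 100e3        # power deposited in the target zone, W (from the record)
UA = 8e3         # heat-exchanger conductance to secondary loop, W/K (assumed)
m_cp = 2.0e5     # coolant thermal mass, J/K (assumed)
T_sec = 30.0     # secondary-side temperature, deg C (assumed)

def coolant(t, y):
    T = y[0]
    return [(P - UA * (T - T_sec)) / m_cp]

sol = solve_ivp(coolant, (0, 2000), [30.0], max_step=5.0)
print(f"steady temperature ~ {sol.y[0, -1]:.1f} C "
      f"(analytic: {T_sec + P / UA:.1f} C)")
```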

  17. Plant model of KIPT neutron source facility simulator

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Yan [Argonne National Lab. (ANL), Argonne, IL (United States); Wei, Thomas Y. [Argonne National Lab. (ANL), Argonne, IL (United States); Grelle, Austin L. [Argonne National Lab. (ANL), Argonne, IL (United States); Gohar, Yousry [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-02-01

    Argonne National Laboratory (ANL) of the United States and Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons, and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat outside the facility buildings to the atmosphere. In addition, the electron accelerator has a low efficiency for generating the electron beam and uses another secondary cooling loop to remove the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this function. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during steady-state and transient states using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model is written in the Python scripting language, consistent with the computer language of the plant control system; it is easy to integrate with the simulator without an additional interface, and it is able to simulate the transients of the cooling systems with system control variables changing in real time.

  18. Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm

    Directory of Open Access Journals (Sweden)

    Maren Stropahl

    2018-05-01

    Full Text Available Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is furthermore a powerful tool to attenuate stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming toward an objective approach to component identification, the semi-automatic CORRMAP algorithm is applied for the identification of components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing at the single-subject level. The presented approach assumes that no individual anatomy is available and therefore the default anatomy ICBM152, as implemented in Brainstorm, is used for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model based on the symmetric Boundary Element Method (BEM). We then apply the method of dynamical statistical parametric mapping (dSPM) to obtain
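    An analogous pipeline can be sketched in MNE-Python rather than EEGLAB/Brainstorm (a swapped-in open-source toolchain, not the authors' code; the file names and parameters below are placeholders): ICA-based artifact attenuation followed by dSPM source estimation on a template anatomy:

```python
# Hedged sketch of an equivalent workflow in MNE-Python. File names are
# hypothetical; component selection would normally follow visual inspection
# or an automated classifier rather than the hard-coded index used here.
import mne
from mne.preprocessing import ICA
from mne.minimum_norm import make_inverse_operator, apply_inverse

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # placeholder
raw.filter(1.0, 40.0)                     # band-pass to help ICA quality

ica = ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = [0]                         # e.g. a component marked as blinks
ica.apply(raw)                            # attenuate the artifact

events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, baseline=(None, 0))
evoked = epochs.average()

noise_cov = mne.compute_covariance(epochs, tmax=0.0)  # pre-stimulus baseline
fwd = mne.read_forward_solution("template-fwd.fif")   # e.g. template BEM
inv = make_inverse_operator(evoked.info, fwd, noise_cov)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="dSPM")
```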

  19. From Particles and Point Clouds to Voxel Models: High Resolution Modeling of Dynamic Landscapes in Open Source GIS

    Science.gov (United States)

    Mitasova, H.; Hardin, E. J.; Kratochvilova, A.; Landa, M.

    2012-12-01

    Multitemporal data acquired by modern mapping technologies provide unique insights into processes driving land surface dynamics. These high resolution data also offer an opportunity to improve the theoretical foundations and accuracy of process-based simulations of evolving landforms. We discuss development of a new generation of visualization and analytics tools for GRASS GIS designed for 3D multitemporal data from repeated lidar surveys and from landscape process simulations. We focus on data and simulation methods that are based on point sampling of continuous fields and lead to representation of evolving surfaces as series of raster map layers or voxel models. For multitemporal lidar data we present workflows that combine open source point cloud processing tools with GRASS GIS and custom Python scripts to model and analyze dynamics of coastal topography (Figure 1), and we outline development of a coastal analysis toolbox. The simulations focus on the particle sampling method for solving continuity equations and its application to geospatial modeling of landscape processes. In addition to water and sediment transport models, already implemented in GIS, the new capabilities under development combine OpenFOAM for wind shear stress simulation with a new module for aeolian sand transport and dune evolution simulations. Comparison of observed dynamics with the results of simulations is supported by a new, integrated 2D and 3D visualization interface that provides highly interactive and intuitive access to the redesigned and enhanced visualization tools. Several case studies will be used to illustrate the presented methods and tools, demonstrate the power of workflows built with FOSS, and highlight their interoperability.
    Figure 1. Isosurfaces representing evolution of the shoreline and a z = 4.5 m contour between the years 1997–2011 at Cape Hatteras, NC, extracted from a voxel model derived from a series of lidar-based DEMs.

  20. Modelling and simulation of a dynamical system with the Atangana-Baleanu fractional derivative

    Science.gov (United States)

    Owolabi, Kolade M.

    2018-01-01

    In this paper, we model an ecological system consisting of a predator and two prey species with the newly derived two-step fractional Adams-Bashforth method via the Atangana-Baleanu derivative in the Caputo sense. We analyze the dynamical system to determine parameter values that are biologically meaningful. The local analysis of the main model is based on the application of qualitative theory for ordinary differential equations. Using the fixed point theorem, we establish the existence and uniqueness of the solutions. Convergence results of the new scheme are verified in both space and time. Dynamical wave phenomena of the solutions are verified via numerical results obtained for different values of the fractional index, which have some interesting ecological implications.
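    For reference (a standard definition, not specific to this record), the Atangana-Baleanu derivative in the Caputo sense, on which the scheme is built, is commonly written with a normalization function B(α) satisfying B(0) = B(1) = 1 and the Mittag-Leffler function E_α:

```latex
{}^{ABC}_{a}D^{\alpha}_{t} f(t)
  = \frac{B(\alpha)}{1-\alpha}
    \int_{a}^{t} f'(\tau)\,
    E_{\alpha}\!\left(-\frac{\alpha}{1-\alpha}\,(t-\tau)^{\alpha}\right)
    \mathrm{d}\tau, \qquad 0 < \alpha < 1 .
```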