WorldWideScience

Sample records for exposure matrix approach

  1. Betatron coupling: Merging Hamiltonian and matrix approaches

    Directory of Open Access Journals (Sweden)

    R. Calaga

    2005-03-01

    Betatron coupling is usually analyzed using either matrix formalism or Hamiltonian perturbation theory. The latter is less exact but provides better physical insight. In this paper direct relations are derived between the two formalisms. This makes it possible to interpret the matrix approach in terms of resonances, as well as to use results of both formalisms interchangeably. An approach to measure the complete coupling matrix and its determinant from turn-by-turn data is presented. Simulations using MAD-X, an accelerator design and tracking program, were performed to validate the relations and understand the scope of their application to real accelerators such as the Relativistic Heavy Ion Collider.

  2. Baryoniums - the S-matrix approach

    International Nuclear Information System (INIS)

    Roy, D.P.

    1979-08-01

    In this series of lectures the question of how baryoniums are related to charmoniums and strangoniums is discussed, and it is pointed out that in the S-matrix framework they all follow from the same pair of hypotheses: duality and no exotics. Invoking no underlying quark structure, except that inherent in the assumption of no exotics, it is shown that there are no mesons outside the singlet and octet representations of SU(3) and no baryons outside the singlet, octet and decuplet. In other words, all mesons occur within the quantum numbers of a quark-antiquark system and all baryons within those of qqq. This seems to be an experimental fact which has no natural explanation within the S-matrix framework except that it is the minimal non-zero solution to the duality constraints. The approach in the past has been to take it as an experimental input and build up a phenomenological S-matrix framework. Lately it has been realised that the answer may come from the colour dynamics of quarks. If true, this would provide an important link between the fundamental but invisible field theory of quarks and gluons and the phenomenological but visible S-matrix theory overlying it. The subject is discussed under the headings: strangonium and charmonium, baryonium, spectroscopy, baryonium resonances, FESR constraint, baryonium exchange, phenomenological estimate of ω - baryonium mixing at t = 0, and models of ω - baryonium mixing. (UK)

  3. Developing a General Population Job-Exposure Matrix in the Absence of Sufficient Exposure Monitoring Data

    OpenAIRE

    Tmannetje, AM; McLean, DJ; Eng, AJ; Kromhout, H; Kauppinen, T; Fevotte, J; Pearce, NE

    2011-01-01

    In New Zealand, there is a need for a comprehensive and accessible database with national occupational exposure information, such as a general population job-exposure matrix (GPJEM). However, few New Zealand-specific exposure data exist that could be used to construct such a GPJEM. Here, we present the methods used to develop a GPJEM for New Zealand (NZJEM), by combining GPJEMs from other countries with New Zealand-specific exposure information, using wood dust as an example to illustrate thi...

  4. Domestic tourism in Uruguay: a matrix approach

    Directory of Open Access Journals (Sweden)

    Magdalena Domínguez Pérez

    2016-01-01

    In this paper domestic tourism in Uruguay is analyzed by introducing an Origin-Destination matrix approach, and an attraction coefficient is calculated. We show that Montevideo is an attractive destination for every department except itself (even if it emits more trips than it receives), and that the Southeast region is the main destination. Another important outcome is the importance of intra-regional patterns, associated with trips to bordering departments. The findings provide destination managers with practical knowledge, useful for reducing seasonality and attracting more domestic tourists throughout the year, as well as for delivering a better service offering that attracts both regular visitors and new ones from competing destinations.

  5. A random matrix approach to language acquisition

    Science.gov (United States)

    Nicolaidis, A.; Kosmidis, Kosmas; Argyrakis, Panos

    2009-12-01

    Since language is tied to cognition, we expect the linguistic structures to reflect patterns that we encounter in nature and are analyzed by physics. Within this realm we investigate the process of lexicon acquisition, using analytical and tractable methods developed within physics. A lexicon is a mapping between sounds and referents of the perceived world. This mapping is represented by a matrix and the linguistic interaction among individuals is described by a random matrix model. There are two essential parameters in our approach. The strength of the linguistic interaction β, which is considered as a genetically determined ability, and the number N of sounds employed (the lexicon size). Our model of linguistic interaction is analytically studied using methods of statistical physics and simulated by Monte Carlo techniques. The analysis reveals an intricate relationship between the innate propensity for language acquisition β and the lexicon size N, N~exp(β). Thus a small increase of the genetically determined β may lead to an incredible lexical explosion. Our approximate scheme offers an explanation for the biological affinity of different species and their simultaneous linguistic disparity.
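    The scaling relation quoted above can be illustrated with a few lines of Python. This is only a sketch of the N ~ exp(β) law from the abstract, with arbitrary example values of β; it is not the authors' simulation code.

        import numpy as np

        # Illustrative only: the reported scaling N ~ exp(beta) implies that a modest
        # increase in the interaction strength beta yields a disproportionate growth
        # of the sustainable lexicon size N (the "lexical explosion").
        for beta in (5.0, 6.0, 7.0):
            n_words = np.exp(beta)  # lexicon size implied by the scaling law
            print(f"beta = {beta:.1f}  ->  N ~ {n_words:,.0f} sound-referent pairs")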

  6. A random matrix approach to language acquisition

    International Nuclear Information System (INIS)

    Nicolaidis, A; Kosmidis, Kosmas; Argyrakis, Panos

    2009-01-01

    Since language is tied to cognition, we expect the linguistic structures to reflect patterns that we encounter in nature and are analyzed by physics. Within this realm we investigate the process of lexicon acquisition, using analytical and tractable methods developed within physics. A lexicon is a mapping between sounds and referents of the perceived world. This mapping is represented by a matrix and the linguistic interaction among individuals is described by a random matrix model. There are two essential parameters in our approach. The strength of the linguistic interaction β, which is considered as a genetically determined ability, and the number N of sounds employed (the lexicon size). Our model of linguistic interaction is analytically studied using methods of statistical physics and simulated by Monte Carlo techniques. The analysis reveals an intricate relationship between the innate propensity for language acquisition β and the lexicon size N, N∼exp(β). Thus a small increase of the genetically determined β may lead to an incredible lexical explosion. Our approximate scheme offers an explanation for the biological affinity of different species and their simultaneous linguistic disparity

  7. Job Exposure Matrix for Electric Shock Risks with Their Uncertainties

    Science.gov (United States)

    Vergara, Ximena P.; Fischer, Heidi J.; Yost, Michael; Silva, Michael; Lombardi, David A.; Kheifets, Leeka

    2015-01-01

    We present an update to an electric shock job exposure matrix (JEM) that assigned ordinal electric shock exposure to 501 occupational titles based on electric shocks and electrocutions from two available data sources and expert judgment. Using formal expert elicitation and starting with data on electric injury, we arrive at a consensus-based JEM. In our new JEM, we quantify exposures by adding three new dimensions: (1) the elicited median proportion; (2) the elicited 25th percentile; and (3) the elicited 75th percentile of those experiencing occupational electric shocks in a working lifetime. We construct the relative interquartile range (rIQR) from the uncertainty interval and the median. Finally, we describe overall results, highlight examples demonstrating the impact of cut point selection on exposure assignment, and evaluate potential impacts of such selection on epidemiologic studies of the electric work environment. In conclusion, novel methods allowed for consistent exposure estimates that move from qualitative to quantitative measures in this population-based JEM. Overlapping ranges of median exposure in various categories reflect our limited knowledge about this exposure. PMID:25856552
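    The relative interquartile range described above can be sketched in a few lines of Python. The formula rIQR = (75th percentile - 25th percentile) / median and the values below are assumptions for illustration, not data from the study.

        # Hypothetical sketch: relative interquartile range (rIQR) from the elicited
        # 25th percentile, median and 75th percentile of the proportion of workers
        # experiencing occupational electric shocks in a working lifetime.
        def relative_iqr(p25: float, median: float, p75: float) -> float:
            """Assumed definition: rIQR = (p75 - p25) / median."""
            return (p75 - p25) / median

        # Invented elicited values for one occupational title
        print(relative_iqr(p25=0.10, median=0.25, p75=0.50))  # -> 1.6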

  8. Job Exposure Matrix for Electric Shock Risks with Their Uncertainties

    Directory of Open Access Journals (Sweden)

    Ximena P. Vergara

    2015-04-01

    We present an update to an electric shock job exposure matrix (JEM) that assigned ordinal electric shock exposure to 501 occupational titles based on electric shocks and electrocutions from two available data sources and expert judgment. Using formal expert elicitation and starting with data on electric injury, we arrive at a consensus-based JEM. In our new JEM, we quantify exposures by adding three new dimensions: (1) the elicited median proportion; (2) the elicited 25th percentile; and (3) the elicited 75th percentile of those experiencing occupational electric shocks in a working lifetime. We construct the relative interquartile range (rIQR) from the uncertainty interval and the median. Finally, we describe overall results, highlight examples demonstrating the impact of cut point selection on exposure assignment, and evaluate potential impacts of such selection on epidemiologic studies of the electric work environment. In conclusion, novel methods allowed for consistent exposure estimates that move from qualitative to quantitative measures in this population-based JEM. Overlapping ranges of median exposure in various categories reflect our limited knowledge about this exposure.

  9. Occupational exposures and chronic obstructive pulmonary disease (COPD): comparison of a COPD-specific job exposure matrix and expert-evaluated occupational exposures.

    Science.gov (United States)

    Kurth, Laura; Doney, Brent; Weinmann, Sheila

    2017-03-01

    To compare the occupational exposure levels assigned by our National Institute for Occupational Safety and Health chronic obstructive pulmonary disease-specific job exposure matrix (NIOSH COPD JEM) and by expert evaluation of detailed occupational information for various jobs held by members of an integrated health plan in the Northwest USA. We analysed data from a prior study examining COPD and occupational exposures. Jobs were assigned exposure levels using two methods: (1) the COPD JEM and (2) expert evaluation. Agreement (Cohen's κ coefficients), sensitivity and specificity were calculated to compare exposure levels assigned by the two methods for eight exposure categories. κ indicated slight to moderate agreement (0.19-0.51) between the two methods and was highest for organic dust and overall exposure. Sensitivity of the matrix ranged from 33.9% to 68.5% and was highest for sensitisers, diesel exhaust and overall exposure. Specificity ranged from 74.7% to 97.1% and was highest for fumes, organic dust and mineral dust. This COPD JEM was compared with exposures assigned by experts and offers a generalisable approach to assigning occupational exposure.
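    The agreement statistics quoted above (Cohen's κ, sensitivity, specificity) can be computed from a 2x2 table of JEM versus expert assignments. The sketch below uses invented counts, not data from the study.

        import numpy as np

        # Rows: JEM exposed / unexposed; columns: expert exposed / unexposed (invented counts).
        table = np.array([[40.0, 30.0],
                          [25.0, 220.0]])

        n = table.sum()
        p_observed = np.trace(table) / n                          # observed agreement
        p_expected = (table.sum(1) * table.sum(0)).sum() / n**2   # agreement expected by chance
        kappa = (p_observed - p_expected) / (1 - p_expected)      # Cohen's kappa

        sensitivity = table[0, 0] / table[:, 0].sum()  # JEM-positive among expert-positive
        specificity = table[1, 1] / table[:, 1].sum()  # JEM-negative among expert-negative
        print(round(kappa, 2), round(sensitivity, 2), round(specificity, 2))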

  10. Tooth Matrix Analysis for Biomonitoring of Organic Chemical Exposure: Current Status, Challenges, and Opportunities

    Science.gov (United States)

    Andra, Syam S.; Austin, Christine; Arora, Manish

    2015-01-01

    Epidemiological evidence supports associations between prenatal exposure to environmental organic chemicals and childhood health impairments. Unlike common biological matrices such as urine and blood, which can be limited by the short half-lives of some chemicals, teeth provide a stable repository for chemicals with half-lives on the order of decades. Given the potential of the tooth bio-matrix to study long-term exposures to environmental organic chemicals in human biomonitoring programs, it is important to be aware of possible pitfalls and potential opportunities to improve on the current analytical methods for tooth organics analysis. We critically review the results of previous studies on this topic. The major drawbacks and challenges in currently practiced concepts and analytical methods utilizing the tooth bio-matrix are (i) no consideration of external contamination (from the outer surface) or internal contamination (from micro odontoblast processes), (ii) the misleading assumption that whole ground teeth represent prenatal exposures (the latest formed dentine is lipid rich and would therefore absorb and accumulate more organic chemicals), (iii) reverse causality in exposure assessment due to the use of whole ground teeth, and (iv) the fact that teeth are a precious bio-matrix, so grinding them raises ethical concerns about appropriate use of a very limited resource in exposure biology and epidemiology studies. These can be overcome by addressing the limitations of, and possible improvements to, the analytical approach at each of the following steps: (i) tooth sample preparation to retain exposure timing, (ii) organics extraction and pre-concentration to detect ultra-trace levels of analytes, (iii) chromatographic separation, (iv) mass spectrometric detection to detect multi-class organics simultaneously, and (v) method validation, especially to exclude chance findings. To highlight the proposed improvements we present findings from a pilot study that utilizes tooth matrix biomarkers to

  11. Evaluation of the validity of job exposure matrix for psychosocial factors at work.

    Directory of Open Access Journals (Sweden)

    Svetlana Solovieva

    To study the performance of a developed job exposure matrix (JEM) for the assessment of psychosocial factors at work in terms of accuracy, possible misclassification bias and predictive ability to detect known associations with depression and low back pain (LBP). We utilized two large population surveys (the Health 2000 Study and the Finnish Work and Health Surveys), one to construct the JEM and another to test matrix performance. In the first study, information on job demands, job control, monotonous work and social support at work was collected via face-to-face interviews. Job strain was operationalized based on job demands and job control using the quadrant approach. In the second study, the sensitivity and specificity were estimated applying a Bayesian approach. The magnitude of misclassification error was examined by calculating the biased odds ratios as a function of the sensitivity and specificity of the JEM and fixed true prevalence and odds ratios. Finally, we adjusted the observed associations between JEM measures and selected health outcomes for misclassification error. The matrix showed good accuracy for job control and job strain, while its performance for other exposures was relatively low. Without correction for exposure misclassification, the JEM was able to detect the association between job strain and depression in men and between monotonous work and LBP in both genders. Our results suggest that the JEM more accurately identifies occupations with low control and high strain than those with high demands or low social support. Overall, the present JEM is a useful source of job-level psychosocial exposures in epidemiological studies lacking individual-level exposure information. Furthermore, we showed the applicability of a Bayesian approach in the evaluation of the performance of the JEM in a situation where, in practice, no gold standard of exposure assessment exists.

  12. Evaluation of the validity of job exposure matrix for psychosocial factors at work.

    Science.gov (United States)

    Solovieva, Svetlana; Pensola, Tiina; Kausto, Johanna; Shiri, Rahman; Heliövaara, Markku; Burdorf, Alex; Husgafvel-Pursiainen, Kirsti; Viikari-Juntura, Eira

    2014-01-01

    To study the performance of a developed job exposure matrix (JEM) for the assessment of psychosocial factors at work in terms of accuracy, possible misclassification bias and predictive ability to detect known associations with depression and low back pain (LBP). We utilized two large population surveys (the Health 2000 Study and the Finnish Work and Health Surveys), one to construct the JEM and another to test matrix performance. In the first study, information on job demands, job control, monotonous work and social support at work was collected via face-to-face interviews. Job strain was operationalized based on job demands and job control using the quadrant approach. In the second study, the sensitivity and specificity were estimated applying a Bayesian approach. The magnitude of misclassification error was examined by calculating the biased odds ratios as a function of the sensitivity and specificity of the JEM and fixed true prevalence and odds ratios. Finally, we adjusted the observed associations between JEM measures and selected health outcomes for misclassification error. The matrix showed good accuracy for job control and job strain, while its performance for other exposures was relatively low. Without correction for exposure misclassification, the JEM was able to detect the association between job strain and depression in men and between monotonous work and LBP in both genders. Our results suggest that the JEM more accurately identifies occupations with low control and high strain than those with high demands or low social support. Overall, the present JEM is a useful source of job-level psychosocial exposures in epidemiological studies lacking individual-level exposure information. Furthermore, we showed the applicability of a Bayesian approach in the evaluation of the performance of the JEM in a situation where, in practice, no gold standard of exposure assessment exists.
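    The biased odds ratio calculation mentioned in the abstract can be sketched as follows. The attenuation formula assumes non-differential misclassification, and all input values are hypothetical; this is not the authors' code.

        def observed_odds_ratio(true_or: float, prev_controls: float,
                                se: float, sp: float) -> float:
            """Odds ratio observed after non-differential exposure misclassification."""
            odds_controls = prev_controls / (1 - prev_controls)
            odds_cases = odds_controls * true_or
            prev_cases = odds_cases / (1 + odds_cases)

            def apparent(p):  # probability that the JEM classifies a subject as exposed
                return se * p + (1 - sp) * (1 - p)

            pa_cases, pa_controls = apparent(prev_cases), apparent(prev_controls)
            return (pa_cases / (1 - pa_cases)) / (pa_controls / (1 - pa_controls))

        # Hypothetical inputs: true OR = 2.0, 20% exposed controls, sensitivity 0.6, specificity 0.9
        print(observed_odds_ratio(2.0, 0.20, se=0.6, sp=0.9))  # attenuated toward 1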

  13. Tensor operators in R-matrix approach

    International Nuclear Information System (INIS)

    Bytsko, A.G.; Rossijskaya Akademiya Nauk, St. Petersburg

    1995-12-01

    The definitions and some properties (e.g. the Wigner-Eckart theorem, the fusion procedure) of covariant and contravariant q-tensor operators for quasitriangular quantum Lie algebras are formulated in the R-matrix language. The case of U_q(sl(n)) (in particular, for n=2) is discussed in more detail. (orig.)

  14. A random matrix approach to credit risk.

    Science.gov (United States)

    Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.

  15. A random matrix approach to credit risk.

    Directory of Open Access Journals (Sweden)

    Michael C Münnix

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.

  16. P-matrix approach and three-nucleon problem

    International Nuclear Information System (INIS)

    Babenko, V.A.; Petrov, N.M.; Teneva, G.N.

    1993-01-01

    The paper deals with the application of the P-matrix approach to the description of systems of three strongly interacting particles. On the basis of the separable expansion of the off-energy-shell scattering amplitude obtained in the P-matrix approach, the low-energy three-particle quantities were calculated for a square-well potential. The results of the calculations show good convergence of the calculated three-particle quantities. (author). 12 refs., 1 tab

  17. Development and validation of a job exposure matrix for physical risk factors in low back pain.

    Directory of Open Access Journals (Sweden)

    Svetlana Solovieva

    OBJECTIVES: The aim was to construct and validate a gender-specific job exposure matrix (JEM) for physical exposures to be used in epidemiological studies of low back pain (LBP). MATERIALS AND METHODS: We utilized two large Finnish population surveys, one to construct the JEM and another to test matrix validity. The exposure axis of the matrix included exposures relevant to LBP (heavy physical work, heavy lifting, awkward trunk posture and whole body vibration) and exposures that increase the biomechanical load on the low back (arm elevation) or those that in combination with other known risk factors could be related to LBP (kneeling or squatting). Job titles with similar work tasks and exposures were grouped. Exposure information was based on face-to-face interviews. Validity of the matrix was explored by comparing the JEM (group-based) binary measures with individual-based measures. The predictive validity of the matrix against LBP was evaluated by comparing the associations of the group-based (JEM) exposures with those of individual-based exposures. RESULTS: The matrix includes 348 job titles, representing 81% of all Finnish job titles in the early 2000s. The specificity of the constructed matrix was good, especially in women. The validity measured with the kappa-statistic ranged from good to poor, being fair for most exposures. In men, all group-based (JEM) exposures were statistically significantly associated with one-month prevalence of LBP. In women, four out of six group-based exposures showed an association with LBP. CONCLUSIONS: The gender-specific JEM for physical exposures showed relatively high specificity without compromising sensitivity. The matrix can therefore be considered a valid instrument for exposure assessment in large-scale epidemiological studies, when more precise but more labour-intensive methods are not feasible. Although the matrix was based on Finnish data we foresee that it could be applicable, with some modifications, in

  18. Development and validation of a job exposure matrix for physical risk factors in low back pain.

    Science.gov (United States)

    Solovieva, Svetlana; Pehkonen, Irmeli; Kausto, Johanna; Miranda, Helena; Shiri, Rahman; Kauppinen, Timo; Heliövaara, Markku; Burdorf, Alex; Husgafvel-Pursiainen, Kirsti; Viikari-Juntura, Eira

    2012-01-01

    The aim was to construct and validate a gender-specific job exposure matrix (JEM) for physical exposures to be used in epidemiological studies of low back pain (LBP). We utilized two large Finnish population surveys, one to construct the JEM and another to test matrix validity. The exposure axis of the matrix included exposures relevant to LBP (heavy physical work, heavy lifting, awkward trunk posture and whole body vibration) and exposures that increase the biomechanical load on the low back (arm elevation) or those that in combination with other known risk factors could be related to LBP (kneeling or squatting). Job titles with similar work tasks and exposures were grouped. Exposure information was based on face-to-face interviews. Validity of the matrix was explored by comparing the JEM (group-based) binary measures with individual-based measures. The predictive validity of the matrix against LBP was evaluated by comparing the associations of the group-based (JEM) exposures with those of individual-based exposures. The matrix includes 348 job titles, representing 81% of all Finnish job titles in the early 2000s. The specificity of the constructed matrix was good, especially in women. The validity measured with kappa-statistic ranged from good to poor, being fair for most exposures. In men, all group-based (JEM) exposures were statistically significantly associated with one-month prevalence of LBP. In women, four out of six group-based exposures showed an association with LBP. The gender-specific JEM for physical exposures showed relatively high specificity without compromising sensitivity. The matrix can therefore be considered as a valid instrument for exposure assessment in large-scale epidemiological studies, when more precise but more labour-intensive methods are not feasible. Although the matrix was based on Finnish data we foresee that it could be applicable, with some modifications, in other countries with a similar level of technology.

  19. A random matrix approach to VARMA processes

    International Nuclear Information System (INIS)

    Burda, Zdzislaw; Jarosz, Andrzej; Nowak, Maciej A; Snarska, Malgorzata

    2010-01-01

    We apply random matrix theory to derive the spectral density of large sample covariance matrices generated by multivariate VMA(q), VAR(q) and VARMA(q1, q2) processes. In particular, we consider a limit where the number of random variables N and the number of consecutive time measurements T are large but the ratio N/T is fixed. In this regime, the underlying random matrices are asymptotically equivalent to free random variables (FRV). We apply the FRV calculus to calculate the eigenvalue density of the sample covariance for several VARMA-type processes. We explicitly solve the VARMA(1, 1) case and demonstrate perfect agreement between the analytical result and the spectra obtained by Monte Carlo simulations. The proposed method is purely algebraic and can be easily generalized to q1 > 1 and q2 > 1.

  20. A Problem-Centered Approach to Canonical Matrix Forms

    Science.gov (United States)

    Sylvestre, Jeremy

    2014-01-01

    This article outlines a problem-centered approach to the topic of canonical matrix forms in a second linear algebra course. In this approach, abstract theory, including such topics as eigenvalues, generalized eigenspaces, invariant subspaces, independent subspaces, nilpotency, and cyclic spaces, is developed in response to the patterns discovered…

  1. Time delay correlations in chaotic scattering and random matrix approach

    International Nuclear Information System (INIS)

    Lehmann, N.; Savin, D.V.; Sokolov, V.V.; Sommers, H.J.

    1994-01-01

    We study the correlations in the time delay in a model of chaotic resonance scattering based on the random matrix approach. Analytical formulae which are valid for an arbitrary number of open channels and arbitrary coupling strength between resonances and channels are obtained by the supersymmetry method. The time delay correlation function, though not being a Lorentzian, is characterized, similarly to that of the scattering matrix, by the gap between the cloud of complex poles of the S-matrix and the real energy axis. 28 refs.; 4 figs

  2. A Novel Measurement Matrix Optimization Approach for Hyperspectral Unmixing

    Directory of Open Access Journals (Sweden)

    Su Xu

    2017-01-01

    Each pixel in the hyperspectral unmixing process is modeled as a linear combination of endmembers, i.e. of a number of pure spectral signatures that are known in advance. However, the computational complexity and limited sparsity of Gaussian random measurement matrices affect efficiency and accuracy. This paper proposes a novel approach for the optimization of the measurement matrix in compressive sensing (CS) theory for hyperspectral unmixing. Firstly, a new Toeplitz-structured chaotic measurement matrix (TSCMM) is formed by pseudo-random chaotic elements, which can be implemented by simple hardware; secondly, rank-revealing QR factorization with eigenvalue decomposition is presented to speed up the measurement time; finally, an orthogonal gradient descent method for measurement matrix optimization is used to achieve optimal incoherence. Experimental results demonstrate that the proposed approach can lead to better CS reconstruction performance with low extra computational cost in hyperspectral unmixing.
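    A minimal sketch of a Toeplitz-structured measurement matrix driven by a chaotic sequence is given below. The logistic map, the rescaling to [-1, 1] and the column normalization are assumptions for illustration; the paper's exact TSCMM construction may differ.

        import numpy as np
        from scipy.linalg import toeplitz

        def chaotic_sequence(length: int, x0: float = 0.3, mu: float = 4.0) -> np.ndarray:
            """Logistic-map sequence (chaotic at mu = 4), rescaled to [-1, 1]."""
            x = np.empty(length)
            x[0] = x0
            for i in range(1, length):
                x[i] = mu * x[i - 1] * (1.0 - x[i - 1])
            return 2.0 * x - 1.0

        def toeplitz_chaotic_matrix(m: int, n: int) -> np.ndarray:
            seq = chaotic_sequence(m + n - 1)
            phi = toeplitz(seq[:m], seq[m - 1:])       # first column / first row (r[0] overridden by c[0] in SciPy)
            return phi / np.linalg.norm(phi, axis=0)   # column-normalize for CS use

        phi = toeplitz_chaotic_matrix(m=32, n=128)     # 32 measurements of a length-128 signal
        print(phi.shape)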

  3. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    Science.gov (United States)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.

  4. Pesticide-exposure Matrix helps identify active ingredients in pesticides used in past years

    Science.gov (United States)

    The Pesticide-exposure Matrix was developed to help epidemiologists and other researchers identify the active ingredients to which people were likely exposed when their homes and gardens were treated for pests in past years.

  5. Cea-Expo: A facility exposure matrix to assess past exposure to chemical carcinogens and radionuclides of nuclear workers

    International Nuclear Information System (INIS)

    Telle-Lamberton, M.; Bouville, P.; Bergot, D.; Gagneau, M.; Marot, S.; Telle-Lamberton, M.; Giraud, J.M.; Gelas, J.M.

    2005-01-01

    A 'Facility-Exposure Matrix' (FEM) is proposed to assess exposure to chemical carcinogens and radionuclides in a cohort of nuclear workers. Exposures are attributed in the following way: a worker reports to an administrative unit and/or is monitored for exposure to ionising radiation in a specific workplace. These units are linked to a list of facilities for which exposure is assessed by a group of experts. The entire process of the FEM, applied in one of the nuclear centres included in the study, shows that the FEM is feasible: exposure durations as well as groups of correlated exposures are presented, but have to be considered as possible rather than definite exposures. Considering the number of facilities to assess (330), ways to simplify the method are proposed: (i) the list of exposures will be restricted to 18 chemical products retained from an extensive bibliographic study; (ii) for each of the following classes of facilities: nuclear reactors, fuel fabrication, high-activity laboratories and radiation chemistry, accelerators and irradiators, waste treatment, biology, reprocessing, fusion, occupational exposure will be deduced from the information already gathered by the initial method. Besides taking confounding factors into account in the low-dose epidemiological study of nuclear workers, the matrix should help in the assessment of internal contamination and chemical exposures in the nuclear industry. (author)

  6. Nanoparticle exposure biomonitoring: exposure/effect indicator development approaches

    Science.gov (United States)

    Marie-Desvergne, C.; Dubosson, M.; Lacombe, M.; Brun, V.; Mossuz, V.

    2015-05-01

    The use of engineered nanoparticles (NP) is increasingly widespread in various industrial sectors. The inhalation route of exposure is a matter of concern (cf. the adverse effects of air pollution by ultrafine particles and asbestos). No NP biomonitoring recommendations or standards are available so far. The LBM laboratory is currently studying several approaches to develop bioindicators for occupational health applications. As regards exposure indicators, new tools are being implemented to assess potentially inhaled NP in non-invasive respiratory samples (nasal sampling and exhaled breath condensates (EBC)). Diverse NP analytical characterization methods are used (ICP-MS, dynamic light scattering and electron microscopy coupled to energy-dispersive X-ray analysis). As regards effect indicators, a methodology has been developed to assess a range of 29 cytokines in EBCs (potential respiratory inflammation due to NP exposure). Secondly, collaboration between the LBM laboratory and the EDyp team has allowed the EBC proteome to be characterized by means of an LC-MS/MS process. These projects are expected to facilitate the development of individual NP exposure biomonitoring tools and the analysis of early potential impacts on health. Innovative techniques such as field-flow fractionation combined with ICP-MS and single-particle ICP-MS are currently being explored. These tools are directly intended to assist occupational physicians in the identification of exposure situations.

  7. Nanoparticle exposure biomonitoring: exposure/effect indicator development approaches

    International Nuclear Information System (INIS)

    Marie-Desvergne, C; Dubosson, M; Mossuz, V; Lacombe, M; Brun, V

    2015-01-01

    The use of engineered nanoparticles (NP) is increasingly widespread in various industrial sectors. The inhalation route of exposure is a matter of concern (cf. the adverse effects of air pollution by ultrafine particles and asbestos). No NP biomonitoring recommendations or standards are available so far. The LBM laboratory is currently studying several approaches to develop bioindicators for occupational health applications. As regards exposure indicators, new tools are being implemented to assess potentially inhaled NP in non-invasive respiratory samples (nasal sampling and exhaled breath condensates (EBC)). Diverse NP analytical characterization methods are used (ICP-MS, dynamic light scattering and electron microscopy coupled to energy-dispersive X-ray analysis). As regards effect indicators, a methodology has been developed to assess a range of 29 cytokines in EBCs (potential respiratory inflammation due to NP exposure). Secondly, collaboration between the LBM laboratory and the EDyp team has allowed the EBC proteome to be characterized by means of an LC-MS/MS process. These projects are expected to facilitate the development of individual NP exposure biomonitoring tools and the analysis of early potential impacts on health. Innovative techniques such as field-flow fractionation combined with ICP-MS and single-particle ICP-MS are currently being explored. These tools are directly intended to assist occupational physicians in the identification of exposure situations. (paper)

  8. Making LULUCF matrix of Korea by Approach 2&3

    Science.gov (United States)

    Hwang, J.; Jang, R.; Seong, M.; Yim, J.; Jeon, S. W.

    2017-12-01

    To establish and implement policies in response to climate change, it is very important to identify domestic greenhouse gas emission sources and sinks and to accurately calculate emissions and removals from each source and sink. The IPCC Guidelines require the establishment of six sectors - energy, industrial processes, solvents and other product use, agriculture, Land Use, Land-Use Change and Forestry (LULUCF) and waste - in estimating GHG inventories. LULUCF is divided into six categories according to land use, purpose, and type; greenhouse gas emissions/removals from anthropogenic activities are then calculated for each land-use category, together with the emissions/removals associated with land-use change. The IPCC Guidelines provide three approaches to creating a LULUCF land-use matrix. According to the Guidelines, the principle is to distinguish land remaining in the same use from land converted to other uses. However, Korea currently uses Approach 1, which is based on statistical data, making it difficult to detect changed areas. Therefore, in this study we carry out preliminary work for constructing the LULUCF matrix at the Approach 2 & 3 level. NFI, GIS, and RS data were used to build the Approach 2 matrix by the sampling method. For Approach 3, we analyzed four thematic maps - Cadastral Map, Land Cover Map, Forest Type Map, and Biotope Map - representing land cover and utilization in legal, property, quantitative and qualitative terms. These maps differ because their purpose, resolution, timing and spatial range are different. Comparing them is important because it helps decide which map is suitable for constructing the LULUCF matrix. Keywords: LULUCF, GIS/RS, IPCC Guideline, Approach 2&3, Thematic Maps

  9. Progressive delamination in polymer matrix composite laminates: A new approach

    Science.gov (United States)

    Chamis, C. C.; Murthy, P. L. N.; Minnetyan, L.

    1992-01-01

    A new approach independent of stress intensity factors and fracture toughness parameters has been developed and is described for the computational simulation of progressive delamination in polymer matrix composite laminates. The damage stages are quantified based on physics via composite mechanics while the degradation of the laminate behavior is quantified via the finite element method. The approach accounts for all types of composite behavior, laminate configuration, load conditions, and delamination processes starting from damage initiation, to unstable propagation, and to laminate fracture. Results of laminate fracture in composite beams, panels, plates, and shells are presented to demonstrate the effectiveness and versatility of this new approach.

  10. Matrix Approach of Seismic Wave Imaging: Application to Erebus Volcano

    Science.gov (United States)

    Blondel, T.; Chaput, J.; Derode, A.; Campillo, M.; Aubry, A.

    2017-12-01

    This work aims at extending to seismic imaging a matrix approach of wave propagation in heterogeneous media, previously developed in acoustics and optics. More specifically, we will apply this approach to the imaging of the Erebus volcano in Antarctica. Volcanoes are actually among the most challenging media to explore seismically in light of highly localized and abrupt variations in density and wave velocity, extreme topography, extensive fractures, and the presence of magma. In this strongly scattering regime, conventional imaging methods suffer from the multiple scattering of waves. Our approach experimentally relies on the measurement of a reflection matrix associated with an array of geophones located at the surface of the volcano. Although these sensors are purely passive, a set of Green's functions can be measured between all pairs of geophones from ice-quake coda cross-correlations (1-10 Hz) and forms the reflection matrix. A set of matrix operations can then be applied for imaging purposes. First, the reflection matrix is projected, at each time of flight, in the ballistic focal plane by applying adaptive focusing at emission and reception. It yields a response matrix associated with an array of virtual geophones located at the ballistic depth. This basis allows us to get rid of most of the multiple scattering contribution by applying a confocal filter to seismic data. Iterative time reversal is then applied to detect and image the strongest scatterers. Mathematically, it consists in performing a singular value decomposition of the reflection matrix. The presence of a potential target is assessed from a statistical analysis of the singular values, while the corresponding eigenvectors yield the corresponding target images. When stacked, the results obtained at each depth give a three-dimensional image of the volcano. While conventional imaging methods lead to a speckle image with no connection to the actual medium's reflectivity, our method enables to
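    The iterative time reversal step described above amounts to a singular value decomposition of the reflection matrix. The sketch below uses a synthetic matrix (random speckle plus one strong rank-1 scatterer), not Erebus data, to show how an outlying singular value flags a target.

        import numpy as np

        rng = np.random.default_rng(0)
        n_geo = 64
        # Multiple-scattering "speckle" background between 64 virtual geophones
        R = rng.normal(size=(n_geo, n_geo)) + 1j * rng.normal(size=(n_geo, n_geo))
        # Rank-1 contribution of one strong scatterer (synthetic phase signature)
        target = np.exp(2j * np.pi * rng.random(n_geo))
        R += 30.0 * np.outer(target, target)

        U, s, Vh = np.linalg.svd(R)
        print("largest / median singular value:", s[0] / np.median(s))  # clear outlier -> detection
        target_image = np.abs(U[:, 0])  # the associated singular vector images the scatterer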

  11. a Unified Matrix Polynomial Approach to Modal Identification

    Science.gov (United States)

    Allemang, R. J.; Brown, D. L.

    1998-04-01

    One important current focus of modal identification is a reformulation of modal parameter estimation algorithms into a single, consistent mathematical formulation with a corresponding set of definitions and unifying concepts. Particularly, a matrix polynomial approach is used to unify the presentation with respect to current algorithms such as the least-squares complex exponential (LSCE), the polyreference time domain (PTD), Ibrahim time domain (ITD), eigensystem realization algorithm (ERA), rational fraction polynomial (RFP), polyreference frequency domain (PFD) and the complex mode indication function (CMIF) methods. Using this unified matrix polynomial approach (UMPA) allows a discussion of the similarities and differences of the commonly used methods. The use of least squares (LS), total least squares (TLS), double least squares (DLS) and singular value decomposition (SVD) methods is discussed in order to take advantage of redundant measurement data. Eigenvalue and SVD transformation methods are utilized to reduce the effective size of the resulting eigenvalue-eigenvector problem as well.

  12. Compressor Surge Control Design Using Linear Matrix Inequality Approach

    OpenAIRE

    Uddin, Nur; Gravdahl, Jan Tommy

    2017-01-01

    A novel design for an active compressor surge control system (ASCS) using the linear matrix inequality (LMI) approach is presented, including a case study on a piston-actuated active compressor surge control system (PAASCS). The non-linear system dynamics of the PAASCS are transformed into linear parameter varying (LPV) system dynamics. The system parameters vary as a function of the compressor performance curve slope. A compressor surge stabilization problem is then formulated as a LMI probl...

  13. Regularization of quantum gravity in the matrix model approach

    International Nuclear Information System (INIS)

    Ueda, Haruhiko

    1991-02-01

    We study the divergence problem of the partition function in the matrix model approach for two-dimensional quantum gravity. We propose a new model V(φ) = 1/2 Tr φ² + (g₄/N) Tr φ⁴ + (g'/N⁴) Tr(φ⁴)² and show that in the sphere case it has no divergence problem and the critical exponent is that of pure gravity. (author)

  14. T-matrix approach to quark-gluon plasma

    Science.gov (United States)

    Liu, Shuai Y. F.; Rapp, Ralf

    2018-03-01

    A self-consistent thermodynamic T-matrix approach is deployed to study the microscopic properties of the quark-gluon plasma (QGP), encompassing both light- and heavy-parton degrees of freedom in a unified framework. The starting point is a relativistic effective Hamiltonian with a universal color force. The input in-medium potential is quantitatively constrained by computing the heavy-quark (HQ) free energy from the static T-matrix and fitting it to pertinent lattice-QCD (lQCD) data. The corresponding T-matrix is then applied to compute the equation of state (EoS) of the QGP in a two-particle irreducible formalism, including the full off-shell properties of the self-consistent single-parton spectral functions and their two-body interaction. In particular, the skeleton diagram functional is fully resummed to account for emerging bound and scattering states as the critical temperature is approached from above. We find that the solution satisfying three sets of lQCD data (EoS, HQ free energy, and quarkonium correlator ratios) is not unique. As limiting cases we discuss a weakly coupled solution, which features color potentials close to the free energy, relatively sharp quasiparticle spectral functions and weak hadronic resonances near Tc, and a strongly coupled solution with a strong color potential (much larger than the free energy), resulting in broad non-quasiparticle parton spectral functions and strong hadronic resonance states which dominate the EoS when approaching Tc.

  15. Linking Expert Judgement and Trends in Occupational Exposure into a Job-Exposure Matrix for Historical Exposure to Asbestos in The Netherlands

    NARCIS (Netherlands)

    Swuste, P.; Dahhan, M.; Burdorf, A.

    2008-01-01

    The aim of this article was to describe the structure and content of a job-exposure matrix (JEM) for historical asbestos exposure in The Netherlands. The JEM contained 309 occupational job title groups in 70 branches of industry during 10 periods of 5 years during 1945–1994, resulting in 3090

  16. Linking expert judgement and trends in occupational exposure into a job-exposure matrix for historical exposure to asbestos in The Netherlands

    NARCIS (Netherlands)

    P. Swuste (Paul); M. Dahhan; A. Burdorf (Alex)

    2008-01-01

    textabstractThe aim of this article was to describe the structure and content of a job-exposure matrix (JEM) for historical asbestos exposure in The Netherlands. The JEM contained 309 occupational job title groups in 70 branches of industry during 10 periods of 5 years during 1945-1994, resulting in

  17. Availability of a New Job-Exposure Matrix (CANJEM) for Epidemiologic and Occupational Medicine Purposes.

    Science.gov (United States)

    Siemiatycki, Jack; Lavoué, Jérôme

    2018-04-10

    To introduce the Canadian job-exposure matrix (CANJEM). Four large case-control studies of cancer were conducted in Montreal, focused on assessing occupational exposures by means of detailed interviews followed by expert assessment of possible occupational exposures. 31,673 jobs were assessed using a checklist of 258 agents (listed with prevalences at http://expostats.ca/chems). This large exposure database was configured as a JEM. CANJEM is available in four occupational classification systems. It provides estimates of probability of exposure among workers with a given occupation, and for those exposed, various metrics of exposure. CANJEM can be accessed online (www.canjem.ca) or in a batch version. CANJEM is a large source of retrospective exposure information, covering most occupations and many agents. CANJEM can be used to support exposure assessment efforts in epidemiology and occupational health.

  18. Development of a Job-Exposure Matrix (AsbJEM) to Estimate Occupational Exposure to Asbestos in Australia.

    Science.gov (United States)

    van Oyen, Svein C; Peters, Susan; Alfonso, Helman; Fritschi, Lin; de Klerk, Nicholas H; Reid, Alison; Franklin, Peter; Gordon, Len; Benke, Geza; Musk, Arthur W

    2015-07-01

    Occupational exposure data on asbestos are limited and poorly integrated in Australia, so that estimates of disease risk and attribution of disease causation are usually calculated from data that are not specific for local conditions. To develop a job-exposure matrix (AsbJEM) to estimate occupational asbestos exposure levels in Australia, making optimal use of the available exposure data. A dossier of all available exposure data in Australia and information on industry practices and controls was provided to an expert panel consisting of three local industrial hygienists with thorough knowledge of local and international work practices. The expert panel estimated asbestos exposures for combinations of occupation, industry, and time period. Intensity and frequency grades were estimated to enable the calculation of annual exposure levels for each occupation-industry combination in each time period. Two indicators of asbestos exposure intensity (mode and peak) were used to account for different patterns of exposure between occupations. Additionally, the probable type of asbestos fibre was determined for each situation. Asbestos exposures were estimated for 537 combinations of 224 occupations and 60 industries for four time periods (1943-1966; 1967-1986; 1987-2003; ≥2004). Workers in the asbestos manufacturing, shipyard, and insulation industries were estimated to have had the highest average exposures. Up until 1986, 46 occupation-industry combinations were estimated to have had exposures exceeding the current Australian exposure standard of 0.1 f ml⁻¹. Over 90% of exposed occupations were considered to have had exposure to a mixture of asbestos varieties including crocidolite. The AsbJEM provides empirically based quantified estimates of asbestos exposure levels for Australian jobs since 1943. This exposure assessment application will contribute to improved understanding and prediction of asbestos-related diseases and attribution of disease causation. © The
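    The annual exposure calculation from intensity and frequency grades can be sketched as below. The time-weighted-average formula and the numbers are assumptions for illustration only; they are not the AsbJEM algorithm.

        WORK_HOURS_PER_YEAR = 1920.0  # assumed full-time working year

        def annual_exposure(intensity_f_ml: float, exposed_hours_per_year: float) -> float:
            """Assumed time-weighted average exposure (fibres/ml) over a working year."""
            return intensity_f_ml * exposed_hours_per_year / WORK_HOURS_PER_YEAR

        # Hypothetical occupation-industry cell: 0.5 f/ml during tasks, 200 h/year exposed
        print(annual_exposure(0.5, 200.0))  # ~0.05 f/ml, below the 0.1 f/ml standard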

  19. Progressive fracture of polymer matrix composite structures: A new approach

    Science.gov (United States)

    Chamis, C. C.; Murthy, P. L. N.; Minnetyan, L.

    1992-01-01

    A new approach independent of stress intensity factors and fracture toughness parameters has been developed and is described for the computational simulation of progressive fracture of polymer matrix composite structures. The damage stages are quantified based on physics via composite mechanics while the degradation of the structural behavior is quantified via the finite element method. The approach accounts for all types of composite behavior, structures, load conditions, and fracture processes starting from damage initiation, to unstable propagation and to global structural collapse. Results of structural fracture in composite beams, panels, plates, and shells are presented to demonstrate the effectiveness and versatility of this new approach. Parameters and guidelines are identified which can be used as criteria for structural fracture, inspection intervals, and retirement for cause. Generalization to structures made of monolithic metallic materials is outlined, and lessons learned in undertaking the development of new approaches, in general, are summarized.

  20. Assessment of exposure to shiftwork mechanisms in the general population: the development of a new job-exposure matrix.

    Science.gov (United States)

    Fernandez, Renae C; Peters, Susan; Carey, Renee N; Davies, Michael J; Fritschi, Lin

    2014-10-01

    To develop a job-exposure matrix (JEM) that estimates exposure to eight variables representing different aspects of shiftwork among female workers. Occupational history and shiftwork exposure data were obtained from a population-based breast cancer case-control study. Exposure to light at night, phase shift, sleep disturbances, poor diet, lack of physical activity, lack of vitamin D, and graveyard and early morning shifts was calculated by occupational code. Three threshold values based on the frequency of exposure were considered (10%, 30% and 50%) for use as cut-offs in determining exposure for each occupational code. JEM-based exposure classification was compared with that from the OccIDEAS application (job-specific questionnaires and assessment by rules) by assessing the effect on the OR for phase shift and breast cancer. Using data from the Australian Workplace Exposure Study, the specificity and sensitivity of the threshold values were calculated for each exposure variable. 127 of 413 occupational codes involved exposure to one or more shiftwork variables. Occupations with the highest probability of exposure to shiftwork included nurses and midwives. Using the 30% threshold, the OR for the association between phase shift exposure and breast cancer was decreased and no longer statistically significant (OR=1.14, 95% CI 0.92 to 1.42). The 30% cut-off point demonstrated the best specificity and sensitivity, although results varied between exposure variables. This JEM provides a set of indicators reflecting biologically plausible mechanisms for the potential impact of shiftwork on health and may provide an alternative method of exposure assessment in the absence of detailed job history and exposure data.
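    A minimal sketch of the threshold-based exposure assignment is shown below: an occupational code is marked exposed when the proportion of respondents reporting a shiftwork variable reaches the chosen cut-off. Column names and counts are hypothetical, not study data.

        import pandas as pd

        reports = pd.DataFrame({
            "occupation_code": ["2544", "2544", "2544", "8111", "8111"],
            "light_at_night":  [1, 1, 0, 0, 0],   # 1 = respondent reported the exposure
        })

        THRESHOLD = 0.30  # 10%, 30% and 50% cut-offs were considered; 30% performed best
        prevalence = reports.groupby("occupation_code")["light_at_night"].mean()
        jem_exposed = (prevalence >= THRESHOLD).astype(int)
        print(jem_exposed)  # 2544 -> exposed (2/3 of respondents), 8111 -> unexposed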

  1. DEVELOPMENT OF THE STRUCTURAL MATRIX APPROACH IN ORGANIZATIONAL DIAGNOSTICS

    Directory of Open Access Journals (Sweden)

    Mishlanova Marina Yur'evna

    2012-07-01

    The proposed approach discloses the individual constituents of elements, communications, organizational layers, generalized characteristics of layers, and partial effects. This approach may be used to simulate a system of forces, points of pressure, and organizational problems. The most advanced state of stability and sustainable development is provided by a structure within which the elements remain in a certain natural interdependence (symmetry, or balance). Formation of this model is based on thorough diagnostics of an organization through the employment of the structural matrix approach and an audit of the following characteristics: labour efficiency, reliability and flexibility of communications, uniformity of distribution of communications and their coordination, connectivity of elements and layers with account for their impact, degree of freedom of elements, layers and the system as a whole, reliability, rigidity, adaptability, and stability of the organizational structure.

  2. Volatility of an Indian stock market: A random matrix approach

    International Nuclear Information System (INIS)

    Kulkarni, V.; Deo, N.

    2006-07-01

    We examine the volatility of an Indian stock market in terms of aspects like participation, synchronization of stocks and quantification of volatility using the random matrix approach. The volatility pattern of the market is found using the BSE index for the three-year period 2000-2002. Random matrix analysis is carried out using daily returns of 70 stocks for several time windows of 85 days in 2001 to (i) carry out a brief comparative analysis of the statistics of eigenvalues and eigenvectors of the matrix C of correlations between price fluctuations in time regimes of different volatilities. While a bulk of eigenvalues falls within RMT bounds in all the time periods, we see that the largest (deviating) eigenvalue correlates well with the volatility of the index, and the corresponding eigenvector clearly shows a shift in the distribution of its components from volatile to less volatile periods, verifying the qualitative association between participation and volatility; (ii) observe that the inverse participation ratio for the last eigenvector is sensitive to market fluctuations (the two quantities are observed to anticorrelate significantly); (iii) set up a variability index V whose temporal evolution is found to be significantly correlated with the volatility of the overall market index. (author)
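    The eigenvalue and inverse participation ratio analysis can be sketched as follows on synthetic returns (an 85-day window of 70 stocks plus an artificial common market mode); this is not the BSE data used in the study.

        import numpy as np

        rng = np.random.default_rng(1)
        returns = rng.normal(size=(85, 70))         # 85 daily returns for 70 stocks
        returns += 0.5 * rng.normal(size=(85, 1))   # add a common "market mode"

        C = np.corrcoef(returns, rowvar=False)      # 70 x 70 correlation matrix
        eigvals, eigvecs = np.linalg.eigh(C)        # eigenvalues in ascending order

        market_vector = eigvecs[:, -1]              # eigenvector of the largest (deviating) eigenvalue
        ipr = np.sum(market_vector**4)              # inverse participation ratio, sum_i u_i^4
        print(eigvals[-1], 1.0 / ipr)               # 1/IPR ~ number of stocks participating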

  3. On the ``Matrix Approach'' to Interacting Particle Systems

    Science.gov (United States)

    de Sanctis, L.; Isopi, M.

    2004-04-01

    Derrida et al. and Schütz and Stinchcombe gave algebraic formulas for the correlation functions of the partially asymmetric simple exclusion process. Here we give a fairly general recipe of how to get these formulas and extend them to the whole time evolution (starting from the generator of the process), for a certain class of interacting systems. We then analyze the algebraic relations obtained to show that the matrix approach does not work with some models such as the voter and the contact processes.

  4. Distance matrix-based approach to protein structure prediction.

    Science.gov (United States)

    Kloczkowski, Andrzej; Jernigan, Robert L; Wu, Zhijun; Song, Guang; Yang, Lei; Kolinski, Andrzej; Pokarowski, Piotr

    2009-03-01

    Much structural information is encoded in the internal distances; a distance matrix-based approach can be used to predict protein structure and dynamics, and for structural refinement. Our approach is based on the square distance matrix D = [r_ij²] containing all square distances between residues in proteins. This distance matrix contains more information than the contact matrix C, which has elements of either 0 or 1 depending on whether the distance r_ij is greater or less than a cutoff value r_cutoff. We have performed spectral decomposition of the distance matrices, D = Σ_k λ_k v_k v_kᵀ, in terms of eigenvalues λ_k and the corresponding eigenvectors v_k, and found that it contains at most five nonzero terms. A dominant eigenvector is proportional to r², the square distance of points from the center of mass, with the next three being the principal components of the system of points. By predicting r² from the sequence we can approximate the distance matrix of a protein with an expected RMSD value of about 7.3 Å, and by combining it with the prediction of the first principal component we can improve this approximation to 4.0 Å. We can also explain the role of hydrophobic interactions for the protein structure, because r is highly correlated with the hydrophobic profile of the sequence. Moreover, r is highly correlated with several sequence profiles which are useful in protein structure prediction, such as contact number, the residue-wise contact order (RWCO) or mean square fluctuations (i.e. crystallographic temperature factors). We have also shown that the next three components are related to spatial directionality of the secondary structure elements, and they may also be predicted from the sequence, improving overall structure prediction. We have also shown that the large number of available HIV-1 protease structures provides a remarkable sampling of conformations, which can be viewed as direct structural information about the
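    The rank-five structure of the squared distance matrix is easy to verify numerically. The sketch below uses random coordinates rather than a real protein and only illustrates the spectral properties quoted above.

        import numpy as np

        rng = np.random.default_rng(2)
        coords = rng.normal(size=(100, 3))              # 100 "residues" in 3-D
        coords -= coords.mean(axis=0)                   # put the center of mass at the origin

        diff = coords[:, None, :] - coords[None, :, :]
        D = np.sum(diff**2, axis=-1)                    # squared distance matrix D = [r_ij^2]

        eigvals, eigvecs = np.linalg.eigh(D)            # ascending eigenvalues
        tol = 1e-10 * np.max(np.abs(eigvals))
        print(np.sum(np.abs(eigvals) > tol))            # -> 5: D has at most five nonzero eigenvalues

        r2 = np.sum(coords**2, axis=1)                  # squared distance from the center of mass
        lead_vec = eigvecs[:, -1]                       # eigenvector of the dominant (positive) eigenvalue
        print(abs(np.corrcoef(r2, lead_vec)[0, 1]))     # strongly correlated with r^2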

  5. Scattering matrix approach to non-stationary quantum transport

    CERN Document Server

    Moskalets, Michael V

    2012-01-01

    The aim of this book is to introduce the basic elements of the scattering matrix approach to transport phenomena in dynamical quantum systems of non-interacting electrons. This approach admits a physically clear and transparent description of transport processes in dynamical mesoscopic systems promising basic elements of solid-state devices for quantum information processing. One of the key effects, the quantum pump effect, is considered in detail. In addition, the theory for a recently implemented new dynamical source - injecting electrons with time delay much larger than the electron coherence time - is offered. This theory provides a simple description of quantum circuits with such a single-particle source and shows in an unambiguous way that the tunability inherent to the dynamical systems leads to a number of unexpected but fundamental effects.

  6. SYN-JEM : A Quantitative Job-Exposure Matrix for Five Lung Carcinogens

    NARCIS (Netherlands)

    Peters, Susan; Vermeulen, Roel; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Kromhout, Hans

    2016-01-01

    OBJECTIVE The use of measurement data in occupational exposure assessment allows more quantitative analyses of possible exposure-response relations. We describe a quantitative exposure assessment approach for five lung carcinogens (i.e. asbestos, chromium-VI, nickel, polycyclic aromatic hydrocarbons

  7. SYN-JEM : A Quantitative Job-Exposure Matrix for Five Lung Carcinogens

    NARCIS (Netherlands)

    Peters, Susan; Vermeulen, Roel; Portengen, Lützen; Olsson, Ann C.; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Kromhout, Hans

    2016-01-01

    Objective: The use of measurement data in occupational exposure assessment allows more quantitative analyses of possible exposure-response relations. We describe a quantitative exposure assessment approach for five lung carcinogens (i.e. asbestos, chromium-VI, nickel, polycyclic aromatic

  8. Effects of LDEF flight exposure on selected polymer matrix resin composite materials

    Science.gov (United States)

    Slemp, Wayne S.; Young, Philip R.; Witte, William G., Jr.; Shen, James Y.

    1992-01-01

    The characterization of selected graphite fiber reinforced epoxy (934 and 5208) and polysulfone (P1700) matrix resin composite materials which received over five years and nine months of exposure to the low earth orbit (LEO) environment in experiment AO134 on the Long Duration Exposure Facility is reported. The changes in mechanical properties of ultimate tensile strength and tensile modulus for exposed flight specimens are compared to the three sets of control specimens. Marked changes in surface appearance are discussed, and resin loss is reported. The chemical characterization including infrared, thermal, and selected solution property measurements showed that the molecular structure of the polymeric matrix had not changed significantly in response to this exposure.

  9. Numerical Optimization Design of Dynamic Quantizer via Matrix Uncertainty Approach

    Directory of Open Access Journals (Sweden)

    Kenji Sawada

    2013-01-01

    Full Text Available In networked control systems, continuous-valued signals are compressed to discrete-valued signals via quantizers and then transmitted/received through communication channels. Such quantization often degrades the control performance, so a quantizer must be designed that minimizes the difference between the outputs of the system before and after the quantizer is inserted. In terms of the broadbandization and the robustness of the networked control systems, we consider the continuous-time quantizer design problem. In particular, this paper describes a numerical optimization method for a continuous-time dynamic quantizer considering the switching speed. Using a matrix uncertainty approach of sampled-data control, we clarify that both the temporal and spatial resolution constraints can be considered in analysis and synthesis simultaneously. Finally, for slow switching, we compare the proposed and the existing methods through numerical examples. From the examples, a new insight is presented for the two-step design of the existing continuous-time optimal quantizer.

  10. Case studies on the use of the 'risk matrix' approach for accident prevention in radiotherapy

    International Nuclear Information System (INIS)

    Dumenigo, Cruz; Vilaragut, Juan J.; Soler, Karen; Cruz, Yoanis; Batista, Fidel; Morales, Jorge L.; Perez, Adrian; McFarlane, Teresa; Guerrero, Mayrka

    2010-01-01

    External beam radiotherapy is the only practice in which humans are directly exposed to a radiation beam in order to receive high doses. Accidental exposures have occurred throughout the world, showing the need for systematic safety assessments capable of identifying preventive measures and minimizing the consequences of accidental exposure. The 'risk matrix' approach is a semi-quantitative method to evaluate the likelihood and severity of events by means of a scale, and defines acceptability criteria on the basis of the risk. For each accident sequence identified, the following questions arise: how often does it occur? how severe are the consequences? and what safety measures should be taken to prevent it? From these answers the resulting risk is obtained using the 'risk matrix' table. In this study we applied the method to 3 cases (real radiotherapy departments). The case studies identified the major weaknesses in each radiotherapy service and proposed measures to reduce the risk of accidents. The method is practical and could be applied in hospitals. This approach allows regulators to improve the quality of their inspections and the rigor of the assessments made to grant operating licenses to entities working with radiotherapy. (author)
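
    The frequency-severity lookup at the core of the approach can be illustrated with a toy table; the scale labels and acceptability thresholds below are hypothetical, not those used in the cited case studies.

    ```python
    # Minimal illustration of a 'risk matrix' lookup: two ordinal scales combined into a risk level.
    FREQUENCY = ["very low", "low", "medium", "high"]
    SEVERITY = ["minor", "moderate", "severe", "very severe"]

    def risk_level(frequency: str, severity: str) -> str:
        score = FREQUENCY.index(frequency) + SEVERITY.index(severity)   # combine the ordinal scales
        if score <= 1:
            return "low (acceptable)"
        if score <= 3:
            return "medium (reduce where practicable)"
        if score <= 5:
            return "high (safety measures required)"
        return "very high (unacceptable)"

    print(risk_level("medium", "very severe"))   # -> "high (safety measures required)"
    ```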

  11. Mass balance modelling of contaminants in river basins: a flexible matrix approach.

    Science.gov (United States)

    Warren, Christopher; Mackay, Don; Whelan, Mick; Fox, Kay

    2005-12-01

    A novel and flexible approach is described for simulating the behaviour of chemicals in river basins. A number (n) of river reaches are defined and their connectivity is described by entries in an n x n matrix. Changes in segmentation can be readily accommodated by altering the matrix entries, without the need for model revision. Two models are described. The simpler QMX-R model only considers advection and an overall loss due to the combined processes of volatilization, net transfer to sediment and degradation. The rate constant for the overall loss is derived from fugacity calculations for a single segment system. The more rigorous QMX-F model performs fugacity calculations for each segment and explicitly includes the processes of advection, evaporation, water-sediment exchange and degradation in both water and sediment. In this way chemical exposure in all compartments (including equilibrium concentrations in biota) can be estimated. Both models are designed to serve as intermediate-complexity exposure assessment tools for river basins with relatively low data requirements. By considering the spatially explicit nature of emission sources and the changes in concentration which occur with transport in the channel system, the approach offers significant advantages over simple one-segment simulations while being more readily applicable than more sophisticated, highly segmented, GIS-based models.
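
    A minimal sketch of the matrix idea (an assumed form, not the published QMX-R code): reach connectivity is encoded in an n x n matrix, and the steady-state mass balance of each reach reduces to one linear solve.

    ```python
    import numpy as np

    n = 4
    A = np.zeros((n, n))
    A[1, 0] = A[2, 1] = A[3, 2] = 1.0      # reach j discharges into reach i when A[i, j] = 1

    E = np.array([10.0, 0.0, 5.0, 0.0])    # direct emissions to each reach (kg/day), made up
    Q = np.full(n, 1.0e5)                  # advective water flow through each reach (m3/day)
    k = np.full(n, 0.2)                    # overall first-order loss rate constant (1/day)
    V = np.full(n, 2.0e5)                  # reach water volumes (m3)

    # Steady state: emission + inflow from upstream = advective outflow + overall loss
    M = np.diag(Q + k * V) - A @ np.diag(Q)
    c = np.linalg.solve(M, E)              # steady-state concentrations (kg/m3)
    print(np.round(c * 1e6, 3))            # mg/m3
    ```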

  12. Application of Matrix Projection Exposure Using a Liquid Crystal Display Panel to Fabricate Thick Resist Molds

    Science.gov (United States)

    Fukasawa, Hirotoshi; Horiuchi, Toshiyuki

    2009-08-01

    The patterning characteristics of matrix projection exposure using an analog liquid crystal display (LCD) panel in place of a reticle were investigated, in particular for oblique patterns. In addition, a new method for fabricating practical thick resist molds was developed. First, an exposure system fabricated in previous research was reconstructed. Changes in the illumination optics and the projection lens were the main improvements. Using fly's eye lenses, the illumination light intensity distribution was homogenized. The projection lens was changed from a common camera lens to a higher-grade telecentric lens. In addition, although the same metal halide lamp was used as an exposure light source, the central exposure wavelength was slightly shortened from 480 to 450 nm to obtain higher resist sensitivity while maintaining almost equivalent contrast between black and white. Circular and radial patterns with linewidths of approximately 6 µm were uniformly printed in all directions throughout the exposure field owing to these improvements. The patterns were smoothly printed without accompanying stepwise roughness caused by the cell matrix array. On the basis of these results, a new method of fabricating thick resist molds for electroplating was investigated. It is known that thick resist molds fabricated using the negative resist SU-8 (Micro Chem) are useful because very high aspect patterns are printable and the side walls are perpendicular to the substrate surfaces. However, the most suitable exposure wavelength of SU-8 is 365 nm, and SU-8 is insensitive to light of 450 nm wavelength, which is most appropriate for LCD matrix exposure. For this reason, a novel multilayer resist process was proposed, and micromolds of SU-8 of 50 µm thickness were successfully obtained. As a result, the feasibility of fabricating complex resist molds including oblique patterns was demonstrated.

  13. Development of an agricultural job-exposure matrix for British Columbia, Canada.

    Science.gov (United States)

    Wood, David; Astrakianakis, George; Lang, Barbara; Le, Nhu; Bert, Joel

    2002-09-01

    Farmers in British Columbia (BC), Canada have been shown to have unexplained elevated proportional mortality rates for several cancers. Because agricultural exposures have never been documented systematically in BC, a quantitative agricultural job-exposure matrix (JEM) was developed containing exposure assessments from 1950 to 1998. This JEM was developed to document historical exposures and to facilitate future epidemiological studies. Available information regarding BC farming practices was compiled and checklists of potential exposures were produced for each crop. Exposures identified included chemical, biological, and physical agents. Interviews with farmers and agricultural experts were conducted using the checklists as a starting point. This allowed the creation of an initial or 'potential' JEM based on three axes: exposure agent, 'type of work' and time. The 'type of work' axis was determined by combining several variables: region, crop, job title and task. This allowed for a complete description of exposures. Exposure assessments were made quantitatively, where data allowed, or by a dichotomous variable (exposed/unexposed). Quantitative calculations were divided into re-entry and application scenarios. 'Re-entry' exposures were quantified using a standard exposure model with some modification, while application exposure estimates were derived using data from the North American Pesticide Handlers Exposure Database (PHED). As expected, exposures differed between crops and job titles both quantitatively and qualitatively. Of the 290 agents included in the exposure axis, 180 were pesticides. Over 3000 estimates of exposure were conducted; 50% of these were quantitative. Each quantitative estimate was at the daily absorbed dose level. Exposure estimates were then rated as high, medium, or low based on comparing them with their respective oral chemical reference dose (RfD) or Acceptable Daily Intake (ADI). These data were mainly obtained from the US Environmental

  14. Evaluation of cumulative PCB exposure estimated by a job exposure matrix versus PCB serum concentrations

    Science.gov (United States)

    Ruder, Avima M.; Succop, Paul; Waters, Martha A.

    2015-01-01

    Although polychlorinated biphenyls (PCBs) have been banned in many countries for more than three decades, exposures to PCBs continue to be of concern due to their long half-lives and carcinogenic effects. In National Institute for Occupational Safety and Health studies, we are using semiquantitative plant-specific job exposure matrices (JEMs) to estimate historical PCB exposures for workers (n=24,865) exposed to PCBs from 1938 to 1978 at three capacitor manufacturing plants. A subcohort of these workers (n=410) employed in two of these plants had serum PCB concentrations measured at up to four times between 1976 and 1989. Our objectives were to evaluate the strength of association between an individual worker’s measured serum PCB levels and the same worker’s cumulative exposure estimated through 1977 with the (1) JEM and (2) duration of employment, and to calculate the explained variance the JEM provides for serum PCB levels using (3) simple linear regression. Consistent strong and statistically significant associations were observed between the cumulative exposures estimated with the JEM and serum PCB concentrations for all years. The strength of association between duration of employment and serum PCBs was good for highly chlorinated (Aroclor 1254/HPCB) but not less chlorinated (Aroclor 1242/LPCB) PCBs. In the simple regression models, cumulative occupational exposure estimated using the JEMs explained 14–24 % of the variance of the Aroclor 1242/LPCB and 22–39 % for Aroclor 1254/HPCB serum concentrations. We regard the cumulative exposure estimated with the JEM as a better estimate of PCB body burdens than serum concentrations quantified as Aroclor 1242/LPCB and Aroclor 1254/HPCB. PMID:23475397
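
    The third objective (explained variance from simple linear regression) amounts to an R^2 between JEM-based cumulative exposure and serum PCB levels; the sketch below uses made-up numbers purely to show the calculation.

    ```python
    import numpy as np

    # Hypothetical data: JEM cumulative exposure (unit-years) and serum PCB (ug/L)
    rng = np.random.default_rng(0)
    cum_exposure = rng.gamma(shape=2.0, scale=50.0, size=410)
    serum_pcb = 5.0 + 0.1 * cum_exposure + rng.normal(0, 10, 410)

    slope, intercept = np.polyfit(cum_exposure, serum_pcb, 1)   # simple linear regression
    pred = slope * cum_exposure + intercept
    r2 = 1 - np.sum((serum_pcb - pred)**2) / np.sum((serum_pcb - serum_pcb.mean())**2)
    print(f"explained variance R^2 = {r2:.2f}")
    ```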

  15. Matrix approach for the design of service delivery systems

    Directory of Open Access Journals (Sweden)

    Dina Davis-Castro

    2017-04-01

    Full Text Available The gradual change that is occurring from a purely functional approach to one of process, is conditioned from outside and from within organizations. The world changes faster, from one to the next decade. The politic-economic, social and technological conditions of the first half of the 20th century in the world in general, and in Latin America in particular, are far from the last 50 years of the 20th century. The number of changes that have occurred in the first 15 years of the 21st century is also immeasurable. There is a close relationship between the speed of technological change and changes in a good measure socioeconomic.  It leads to uncertain ways of planning the organizations.  Insofar as the cycles innovation - development are shortened, there is a gap between the appearance of products with superior performance in the market and the duration of similar goods previously acquired by users. This phenomenon shows a trend in the behavior of customers. Clients tends to request services solutions, rather than on specific products.  This essay examines some of the phenomena that affect the design of processes of services with a matrix approach, Result of research carried out in this field.

  16. Control-matrix approach to stellarator design and control

    International Nuclear Information System (INIS)

    Mynick, H. E.; Pomphrey, N.

    2000-01-01

    The full space Z ≡ {Z_j, j = 1,...,N_z} of independent variables defining a stellarator configuration is large. To find attractive design points in this space, or to understand operational flexibility about a given design point, one needs insight into the topography in Z-space of the physics figures of merit P_i which characterize the machine performance, and means of determining those directions in Z-space which give one independent control over the P_i, as well as those which affect none of them, and so are available for design flexibility. The control matrix (CM) approach described here provides a mathematical means of obtaining these. In this work, the CM approach is described and used in studying some candidate Quasi-Axisymmetric (QA) stellarator configurations the National Compact Stellarator Experiment design group has been considering. In the process of the analysis, a first exploration of the topography of the configuration space in the vicinity of these candidate systems has been performed, whose character is discussed
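
    The separation of Z-space into directions that control the P_i and directions available for design flexibility can be illustrated with an SVD of the sensitivity matrix; the sketch below uses a random matrix in place of a real stellarator response and is not the NCSX analysis itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    M = rng.normal(size=(3, 8))   # hypothetical dP/dZ: 3 figures of merit, 8 configuration variables

    U, s, Vt = np.linalg.svd(M)
    control_dirs = Vt[:3]         # moving along these changes the P_i independently (to first order)
    null_dirs = Vt[3:]            # these leave all P_i unchanged and remain free for design flexibility
    print("norm of M @ (a null direction):", float(np.linalg.norm(M @ null_dirs[0])))
    ```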

  17. Control-matrix approach to stellarator design and control

    International Nuclear Information System (INIS)

    Mynick, H.E.; Pomphrey, N.

    2000-01-01

    The full space Z ≡ {Z_j, j = 1,...,N_z} of independent variables defining a stellarator configuration is large. To find attractive design points in this space, or to understand operational flexibility about a given design point, one needs insight into the topography in Z-space of the physics figures of merit P_i which characterize the machine performance, and means of determining those directions in Z-space which give one independent control over the P_i, as well as those which affect none of them, and so are available for design flexibility. The control matrix (CM) approach described here provides a mathematical means of obtaining these. In this work, the authors describe the CM approach and use it in studying some candidate Quasi-Axisymmetric (QA) stellarator configurations the NCSX design group has been considering. In the process of the analysis, a first exploration of the topography of the configuration space in the vicinity of these candidate systems has been performed, whose character is discussed

  18. Patch Grafting Using an Ologen Collagen Matrix to Manage Tubal Exposure in Glaucoma Tube Shunt Surgery

    Directory of Open Access Journals (Sweden)

    Masaki Tanito

    2018-01-01

    Full Text Available Purpose: To report the results using an ologen Collagen Matrix as a patch graft in eyes with tubal exposure after tube shunt surgery. Case Reports: Case 1 was an 82-year-old man with tubal exposure in his right eye 26 months after receiving a Baerveldt glaucoma implant. The tube was covered by surrounding conjunctival tissue combined with subconjunctival placement of an ologen Collagen Matrix as a patch graft. Two years after implantation, the tube was not exposed. Anterior-segment optical coherence tomography (AS-OCT showed dense conjunctival tissue over the tube. Case 2 was an 82-year-old man with peripheral keratitis, anterior scleritis, and secondary glaucoma in the right eye who underwent tube shunt surgery using an Ahmed glaucoma valve and cataract surgery. Intraoperatively, scleritis-related scleral thinning prevented the tube from being covered fully by an autologous scleral flap. An ologen Collagen Matrix was placed over the scleral flap as a patch graft. Seventeen months after implantation, the tube was not exposed. Case 3 was a 52-year-old man with diabetic maculopathy and steroid-induced glaucoma in the right eye who underwent tube shunt surgery using an Ahmed glaucoma valve. Intraoperatively, a flap defect prevented the tube from being covered fully by an autologous scleral flap. An ologen Collagen Matrix was placed over the scleral flap as a patch graft. Three weeks postoperatively, AS-OCT showed thick subconjunctival tissue over the tube. Three months after implantation, the tube was not exposed. Conclusions: The ologen Collagen Matrix can be used successfully as a patch graft to prevent and treat tubal exposure after tube shunt surgery.

  19. Setting research priorities by applying the combined approach matrix.

    Science.gov (United States)

    Ghaffar, Abdul

    2009-04-01

    Priority setting in health research is a dynamic process. Different organizations and institutes have been working in the field of research priority setting for many years. In 1999 the Global Forum for Health Research presented a research priority setting tool called the Combined Approach Matrix or CAM. Since its development, the CAM has been successfully applied to set research priorities for diseases, conditions and programmes at global, regional and national levels. This paper briefly explains the CAM methodology and how it can be applied in different settings, giving examples, describing challenges encountered in the process of setting research priorities and providing recommendations for further work in this field. The construct and design of the CAM are explained along with the different steps needed, including the planning and organization of a priority-setting exercise and how it can be applied in different settings. The application of the CAM is described using three examples. The first concerns setting research priorities for a global programme, the second describes application at the country level and the third setting research priorities for diseases. Effective application of the CAM in different and diverse environments proves its utility as a tool for setting research priorities. Potential challenges encountered in the process of research priority setting are discussed and some recommendations for further work in this field are provided.

  20. Supplementary Appendix for: Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Alnaffouri, Tareq Y.

    2016-01-01

    In this supplementary appendix we provide proofs and additional simulation results that complement the paper (constrained perturbation regularization approach for signal estimation using random matrix theory).

  1. An alternative approach to KP hierarchy in matrix models

    International Nuclear Information System (INIS)

    Bonora, L.; Xiong, C.S.

    1992-01-01

    We show that there exists an alternative procedure in order to extract differential hierarchies, such as the KdV hierarchy, from one-matrix models, without taking a continuum limit. To prove this we introduce the Toda lattice and reformulate it in operator form. We then consider the reduction to the systems appropriate for a one-matrix model. (orig.)

  2. Stimulus threat and exposure context modulate the effect of mere exposure on approach behaviors

    OpenAIRE

    Steven Young; Heather Claypool; Isaiah Jones

    2016-01-01

    Mere-exposure research has found that initially neutral objects made familiar are preferred relative to novel objects. Recent work extends these preference judgments into the behavioral domain by illustrating that mere exposure prompts approach-oriented behavior toward familiar stimuli. However, no investigations have examined the effect of mere exposure on approach-oriented behavior toward threatening stimuli. The current work examines this issue and also explores how exposure context intera...

  3. The transfer matrix approach to circular graphene quantum dots

    International Nuclear Information System (INIS)

    Nguyen, H Chau; Nguyen, Nhung T T; Nguyen, V Lien

    2016-01-01

    We adapt the transfer matrix (T-matrix) method originally designed for one-dimensional quantum mechanical problems to solve the circularly symmetric two-dimensional problem of graphene quantum dots. Similar to one-dimensional problems, we show that the generalized T-matrix contains rich information about the physical properties of these quantum dots. In particular, it is shown that the spectral equations for bound states as well as quasi-bound states of a circular graphene quantum dot and related quantities such as the local density of states and the scattering coefficients are all expressed exactly in terms of the T-matrix for the radial confinement potential. As an example, we use the developed formalism to analyse physical aspects of a graphene quantum dot induced by a trapezoidal radial potential. Among the obtained results, it is in particular suggested that thermal fluctuations and electrostatic disorder may appear as an obstacle to controlling the valley polarization of Dirac electrons. (paper)

  4. Neutrino Mass Matrix Textures: A Data-driven Approach

    CERN Document Server

    Bertuzzo, E; Machado, P A N

    2013-01-01

    We analyze the neutrino mass matrix entries and their correlations in a probabilistic fashion, constructing probability distribution functions using the latest results from neutrino oscillation fits. Two cases are considered: the standard three neutrino scenario as well as the inclusion of a new sterile neutrino that potentially explains the reactor and gallium anomalies. We discuss the current limits and future perspectives on the mass matrix elements that can be useful for model building.

  5. Development of a source-exposure matrix for occupational exposure assessment of electromagnetic fields in the INTEROCC study.

    Science.gov (United States)

    Vila, Javier; Bowman, Joseph D; Figuerola, Jordi; Moriña, David; Kincl, Laurel; Richardson, Lesley; Cardis, Elisabeth

    2017-07-01

    To estimate occupational exposures to electromagnetic fields (EMF) for the INTEROCC study, a database of source-based measurements extracted from published and unpublished literature resources had been previously constructed. The aim of the current work was to summarize these measurements into a source-exposure matrix (SEM), accounting for their quality and relevance. A novel methodology for combining available measurements was developed, based on order statistics and log-normal distribution characteristics. Arithmetic and geometric means, and estimates of variability and maximum exposure were calculated by EMF source, frequency band and dosimetry type. The mean estimates were weighted by our confidence in the pooled measurements. The SEM contains confidence-weighted mean and maximum estimates for 312 EMF exposure sources (from 0 Hz to 300 GHz). Operator position geometric mean electric field levels for radiofrequency (RF) sources ranged between 0.8 V/m (plasma etcher) and 320 V/m (RF sealer), while magnetic fields ranged from 0.02 A/m (speed radar) to 0.6 A/m (microwave heating). For extremely low frequency sources, electric fields ranged between 0.2 V/m (electric forklift) and 11,700 V/m (high-voltage transmission line-hotsticks), whereas magnetic fields ranged between 0.14 μT (visual display terminals) and 17 μT (tungsten inert gas welding). The methodology developed allowed the construction of the first EMF-SEM and may be used to summarize similar exposure data for other physical or chemical agents.
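
    The summarisation step (pooled source measurements reduced to GM, GSD and AM entries of the SEM, assuming log-normally distributed exposures) can be sketched as follows; the confidence weighting shown is illustrative and not the INTEROCC weighting scheme itself.

    ```python
    import numpy as np

    def summarise(measurements, confidence_weight=1.0):
        """Summarise pooled measurements for one EMF source into SEM-style statistics."""
        x = np.asarray(measurements, dtype=float)
        gm = np.exp(np.mean(np.log(x)))                  # geometric mean
        gsd = np.exp(np.std(np.log(x), ddof=1))          # geometric standard deviation
        am = np.exp(np.log(gm) + 0.5 * np.log(gsd)**2)   # arithmetic mean of a log-normal
        return {"GM": gm, "GSD": gsd, "AM": am, "weighted_AM": confidence_weight * am}

    # Hypothetical operator-position measurements (V/m) for one source
    print(summarise([0.5, 0.8, 1.2, 3.0], confidence_weight=0.8))
    ```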

  6. A population-based job exposure matrix for power-frequency magnetic fields.

    Science.gov (United States)

    Bowman, Joseph D; Touchstone, Jennifer A; Yost, Michael G

    2007-09-01

    A population-based job exposure matrix (JEM) was developed to assess personal exposures to power-frequency magnetic fields (MF) for epidemiologic studies. The JEM compiled 2,317 MF measurements taken on or near workers by 10 studies in the United States, Sweden, New Zealand, Finland, and Italy. A database was assembled from the original data for six studies plus summary statistics grouped by occupation from four other published studies. The job descriptions were coded into the 1980 Standard Occupational Classification system (SOC) and then translated to the 1980 job categories of the U.S. Bureau of the Census (BOC). For each job category, the JEM database calculated the arithmetic mean, standard deviation, geometric mean, and geometric standard deviation of the workday-average MF magnitude from the combined data. Analysis of variance demonstrated that the combining of MF data from the different sources was justified, and that the homogeneity of MF exposures in the SOC occupations was comparable to JEMs for solvents and particulates. BOC occupation accounted for 30% of the MF variance, and the ratio of between-job variance to the total of within- and between-job variances was 88%. Jobs lacking data had their exposures inferred from measurements on similar occupations. The JEM provided MF exposures for 97% of the person-months in a population-based case-control study and 95% of the jobs on death certificates in a registry study covering 22 states. Therefore, we expect this JEM to be useful in other population-based epidemiologic studies.

  7. A flexible matrix-based human exposure assessment framework suitable for LCA and CAA

    DEFF Research Database (Denmark)

    Jolliet, Olivier; Ernstoff, Alexi; Huang, Lei

    2016-01-01

    Humans can be exposed to chemicals via near-field exposure pathways (e.g. through consumer product use) and far-field exposure pathways (e.g. through environmental emissions along product life cycles). Pathways are often complex where chemicals can transfer directly from products to humans during use or exchange between near- and far-field compartments until sub-fractions reach humans via inhalation, ingestion or dermal uptake. Currently, however, multimedia exposure models mainly focus on far-field exposure pathways. Metrics and modeling approaches used in far-field, emission-based models are not applicable to all types of near-field chemical releases from consumer products, e.g. direct dermal application. A consistent near- and far-field framework is needed for life cycle assessment (LCA) and chemical alternative assessment (CAA) to inform mitigation of human exposure to harmful chemicals. To close ...

  8. An Innovative Approach to Balancing Chemical-Reaction Equations: A Simplified Matrix-Inversion Technique for Determining The Matrix Null Space

    OpenAIRE

    Thorne, Lawrence R.

    2011-01-01

    I propose a novel approach to balancing equations that is applicable to all chemical-reaction equations; it is readily accessible to students via scientific calculators and basic computer spreadsheets that have a matrix-inversion application. The new approach utilizes the familiar matrix-inversion operation in an unfamiliar and innovative way; its purpose is not to identify undetermined coefficients as usual, but, instead, to compute a matrix null space (or matrix kernel). The null space then...
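
    The null-space idea can be sketched for a textbook reaction (C3H8 + O2 -> CO2 + H2O, not an example taken from the paper), here using SVD rather than the matrix-inversion route the author describes:

    ```python
    import numpy as np
    from fractions import Fraction

    # Rows are elements (C, H, O); columns are species, with products entered negated.
    A = np.array([
        [3, 0, -1,  0],   # C
        [8, 0,  0, -2],   # H
        [0, 2, -2, -1],   # O
    ], dtype=float)

    # The vector of balancing coefficients spans the null space of A.
    _, _, vt = np.linalg.svd(A)
    v = vt[-1]
    if v[np.abs(v).argmax()] < 0:              # fix the overall sign
        v = -v
    fracs = [Fraction(x).limit_denominator(1000) for x in v / np.abs(v).min()]
    scale = int(np.lcm.reduce([f.denominator for f in fracs]))
    print([int(f * scale) for f in fracs])     # -> [1, 5, 3, 4]
    ```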

  9. Matrix product approach for the asymmetric random average process

    International Nuclear Information System (INIS)

    Zielen, F; Schadschneider, A

    2003-01-01

    We consider the asymmetric random average process which is a one-dimensional stochastic lattice model with nearest-neighbour interaction but continuous and unbounded state variables. First, the explicit functional representations, so-called beta densities, of all local interactions leading to steady states of product measure form are rigorously derived. This also completes an outstanding proof given in a previous publication. Then we present an alternative solution for the processes with factorized stationary states by using a matrix product ansatz. Due to continuous state variables we obtain a matrix algebra in the form of a functional equation which can be solved exactly

  10. Exploring multicollinearity using a random matrix theory approach.

    Science.gov (United States)

    Feher, Kristen; Whelan, James; Müller, Samuel

    2012-01-01

    Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but rather characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a pair of genes with `low' correlation may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
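
    A toy version of the multicollinear model (a one-dimensional signal projected into p dimensions plus noise) makes the spiked eigenspectrum visible; this is an illustrative sketch, not the authors' simulation code.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p, sigma = 200, 50, 0.5
    signal = rng.normal(size=(n, 1))                 # hidden one-dimensional response
    loadings = rng.normal(size=(1, p))
    X = signal @ loadings + sigma * rng.normal(size=(n, p))   # signal embedded in p dimensions

    C = np.corrcoef(X, rowvar=False)                 # correlation matrix of the "gene cluster"
    eigvals = np.linalg.eigvalsh(C)[::-1]
    print("top (spiked) eigenvalue:", round(float(eigvals[0]), 2), "out of a trace of", p)
    print("next eigenvalues (bulk):", np.round(eigvals[1:4], 2))
    ```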

  11. Matrix converter controlled with the direct transfer function approach

    DEFF Research Database (Denmark)

    Rodriguez, J.; Silva, E.; Blaabjerg, Frede

    2005-01-01

    Power electronics is an emerging technology. New power circuits are invented and have to be introduced into the power electronics curriculum. One of the interesting new circuits is the matrix converter (MC), and this paper analyses its working principles. A simple model is proposed to represent...

  12. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-10-06

    In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.

  13. Matrix of occupational exposures to carcinogenic agents and pesticides in Costa Rica

    International Nuclear Information System (INIS)

    Chaves, Jorge; Partanen, Timo; Wesseling, Catharina; Chaverri, Fabio; Monge, Patricia; Ruepert, Clemens; Guardado, Jorge; Aragon, Aurora; Kauppinen, Timo

    2004-01-01

    The European data system CAREX converts national numbers of workers in 55 sectors and estimated proportions of workers exposed to carcinogenic agents into numbers of workers exposed to each agent. CAREX is applied and modified in Costa Rica (TICAREX) for the first time outside Europe. 27 carcinogenic agents and 7 groups of pesticides were included. Numbers of exposed were estimated separately for men and women. The most frequent agents in the 1.3 million labor force of Costa Rica were solar radiation (333,000 workers); diesel engine emissions (278,000); paraquat and diquat (175,000); environmental tobacco smoke (71,000); chromium (VI) compounds (55,000); benzene (52,000); mancozeb, maneb and zineb (49,000); chlorothalonil (38,000); wood dust (32,000); silica dust (27,000); benomyl (19,000); lead and its inorganic compounds (19,000); tetrachloroethylene (18,000); and polycyclic aromatic hydrocarbons (17,000). Owing to the different occupational distribution between the genders, formaldehyde, radon and methylene chloride were more frequent than pesticides, chromium (VI), wood dust, and silica dust in women. Agriculture, construction, personal and domestic services, land and water transport and allied services, pottery and similar industries, manufacture of wood products, mining, forestry, fishing, manufacture of electric products, and bars and restaurants were sectors with frequent exposures. Substantial reduction of occupational and environmental exposures to these agents would improve considerably public and occupational health. Reduction of occupational exposures is usually also followed by improvement of environmental quality. Monitoring of exposures and health of workers and the general public is essential in the control of environmental contamination and human exposures. This report presents details of the exposures matrix, which is the basis of TICAREX. (author) [es

  14. Electric shocks at work in Europe: development of a job exposure matrix.

    Science.gov (United States)

    Huss, Anke; Vermeulen, Roel; Bowman, Joseph D; Kheifets, Leeka; Kromhout, Hans

    2013-04-01

    Electric shocks have been suggested as a potential risk factor for neurological disease, in particular for amyotrophic lateral sclerosis. While actual exposure to shocks is difficult to measure, occurrence and variation of electric injuries could serve as an exposure proxy. We assessed risk of electric injury, using occupational accident registries across Europe to develop an electric shock job-exposure matrix (JEM). Injury data were obtained from five European countries, and the number of workers per occupation and country from EUROSTAT was compiled at a 3-digit International Standard Classification of Occupations 1988 level. We pooled accident rates across countries with a random effects model and categorised jobs into low, medium and high risk based on the 75th and 90th percentiles. We next compared our JEM to a JEM that classified extremely low frequency magnetic field exposure of jobs into low, medium and high. Of 116 job codes, occupations with high potential for electric injury exposure were electrical and electronic equipment mechanics and fitters, building frame workers and finishers, machinery mechanics and fitters, metal moulders and welders, assemblers, mining and construction labourers, metal-products machine operators, ships' deck crews and power production and related plant operators. Agreement between the electrical injury and magnetic field JEMs was 67.2%. Our JEM classifies occupational titles according to risk of electric injury as a proxy for occurrence of electric shocks. In addition to assessing risk potentially arising from electric shocks, this JEM might contribute to disentangling risks from electric injury from those of extremely low frequency magnetic field exposure.
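
    The categorisation step (jobs cut into low/medium/high risk at the 75th and 90th percentiles of the pooled rates) reduces to a few lines; the occupation codes and rates below are made up.

    ```python
    import numpy as np

    # Hypothetical pooled accident rates per 3-digit ISCO-88 occupation code
    rates = {"721": 12.3, "713": 8.1, "723": 7.5, "828": 5.0, "712": 4.2, "411": 0.3}
    values = np.array(list(rates.values()))
    p75, p90 = np.percentile(values, [75, 90])

    jem = {occ: ("high" if r >= p90 else "medium" if r >= p75 else "low")
           for occ, r in rates.items()}
    print(jem)
    ```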

  15. Development of a total hydrocarbon ordinal job-exposure matrix for workers responding to the Deepwater Horizon disaster: The GuLF STUDY.

    Science.gov (United States)

    Stewart, Patricia A; Stenzel, Mark R; Ramachandran, Gurumurthy; Banerjee, Sudipto; Huynh, Tran B; Groth, Caroline P; Kwok, Richard K; Blair, Aaron; Engel, Lawrence S; Sandler, Dale P

    2018-05-01

    The GuLF STUDY is a cohort study investigating the health of workers who responded to the Deepwater Horizon oil spill in the Gulf of Mexico in 2010. The objective of this effort was to develop an ordinal job-exposure matrix (JEM) of airborne total hydrocarbons (THC), dispersants, and particulates to estimate study participants' exposures. Information was collected on participants' spill-related tasks. A JEM of exposure groups (EGs) was developed from tasks and THC air measurements taken during and after the spill using relevant exposure determinants. THC arithmetic means were developed for the EGs, assigned ordinal values, and linked to the participants using determinants from the questionnaire. Different approaches were taken for combining exposures across EGs. EGs for dispersants and particulates were based on questionnaire responses. Considerable differences in THC exposure levels were found among EGs. Based on the maximum THC level participants experienced across any job held, ∼14% of the subjects were identified in the highest exposure category. Approximately 10% of the cohort was exposed to dispersants or particulates. Considerable exposure differences were found across the various EGs, facilitating investigation of exposure-response relationships. The JEM is flexible to allow for different assumptions about several possibly relevant exposure metrics.
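
    The ordinal assignment described above (exposure-group THC means mapped to ordinal categories, with a participant taking the maximum across jobs held) can be sketched as follows; the exposure groups, mean values and cut-points are invented for illustration and are not the GuLF STUDY values.

    ```python
    import numpy as np

    eg_means_ppm = {"cleanup_beach": 0.2, "vessel_ops": 1.5, "source_control": 6.0}
    cuts = [0.5, 1.0, 3.0, 10.0]                       # hypothetical ordinal cut-points

    def ordinal(mean_ppm):
        # map an arithmetic mean to an ordinal exposure category 1..5
        return int(np.searchsorted(cuts, mean_ppm)) + 1

    participant_jobs = ["cleanup_beach", "source_control"]
    print(max(ordinal(eg_means_ppm[j]) for j in participant_jobs))   # -> 4
    ```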

  16. Stimulus threat and exposure context modulate the effect of mere exposure on approach behaviors

    Directory of Open Access Journals (Sweden)

    Steven Young

    2016-11-01

    Full Text Available Mere-exposure research has found that initially neutral objects made familiar are preferred relative to novel objects. Recent work extends these preference judgments into the behavioral domain by illustrating that mere exposure prompts approach-oriented behavior toward familiar stimuli. However, no investigations have examined the effect of mere exposure on approach-oriented behavior toward threatening stimuli. The current work examines this issue and also explores how exposure context interacts with stimulus threat to influence behavioral tendencies. In two experiments participants were presented with both mere-exposed and novel stimuli and approach speed was assessed. In the first experiment, when stimulus threat was presented in a homogeneous format (i.e., participants viewed exclusively neutral or threatening stimuli, mere-exposure potentiated approach behaviors for both neutral and threatening stimuli. However, in the second experiment, in which stimulus threat was presented in a heterogeneous fashion (i.e., participants viewed both neutral and threatening stimuli, mere exposure facilitated approach only for initially neutral stimuli. These results suggest that mere-exposure effects on approach behaviors are highly context sensitive and depend on both stimulus valence and exposure context. Further implications of these findings for the mere-exposure literature are discussed.

  17. A Hybrid ACO Approach to the Matrix Bandwidth Minimization Problem

    Science.gov (United States)

    Pintea, Camelia-M.; Crişan, Gloria-Cerasela; Chira, Camelia

    The evolution of human society raises more and more difficult problems, and for some real-life problems computing-time restrictions add to their complexity. The Matrix Bandwidth Minimization Problem (MBMP) seeks a simultaneous permutation of the rows and columns of a square matrix in order to keep its nonzero entries close to the main diagonal. The MBMP is a highly investigated NP-complete problem, as it has broad applications in industry, logistics, artificial intelligence and information recovery. This paper describes a new attempt to use the Ant Colony Optimization framework to tackle the MBMP. The introduced model is based on the hybridization of the Ant Colony System technique with new local search mechanisms. Computational experiments confirm good performance of the proposed algorithm on the considered set of MBMP instances.
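
    The objective function being minimised, the matrix bandwidth, is simple to compute (the ACO search itself is not reproduced here):

    ```python
    import numpy as np

    def bandwidth(A):
        # largest |i - j| over the nonzero entries of a square matrix
        i, j = np.nonzero(A)
        return int(np.max(np.abs(i - j))) if i.size else 0

    A = np.array([[1, 0, 0, 1],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [1, 0, 0, 1]])
    print(bandwidth(A))   # -> 3; a good simultaneous row/column permutation can reduce this
    ```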

  18. Influence of short-term aluminum exposure on demineralized bone matrix induced bone formation

    Energy Technology Data Exchange (ETDEWEB)

    Severson, A.R. (Minnesota Univ., Duluth, MN (United States). Dept. of Anatomy and Cell Biology); Haut, C.F.; Firling, C.E. (Minnesota Univ., Duluth, MN (United States). Dept. of Biology); Huntley, T.E. (Minnesota Univ., Duluth, MN (United States). Dept. of Biochemistry and Molecular Biology)

    1992-12-01

    The effects of aluminum exposure on bone formation employing the demineralized bone matrix (DBM) induced bone development model were studied using 4-week-old Sprague-Dawley rats injected with a saline (control) or an aluminum chloride (experimental) solution. After 2 weeks of aluminum treatment, 20-mg portions of rat DBM were implanted subcutaneously on each side in the thoracic region of the control and experimental rats. Animals were killed 7, 12, or 21 days after implantation of the DBM and the developing plaques removed. No morphological, histochemical, or biochemical differences were apparent between plaques from day 7 control and experimental rats. Plaques from day 12 control and experimental rats exhibited cartilage formation and alkaline phosphatase activity localized in osteochondrogenic cells, chondrocytes, osteoblasts, and extracellular matrix. Unlike the plaques from control rats that contained many osteoblastic mineralizing fronts, the plaques from the 12-day experimental group had a preponderance of cartilaginous tissue, no evidence of mineralization, increased levels of alkaline phosphatase activity, and a reduced calcium content. Plaques developing for 21 days in control animals demonstrated extensive new bone formation and bone marrow development, while those in the experimental rats demonstrated unmineralized osteoid-like matrix with poorly developed bone marrow. Alkaline phosphatase activity of the plaques continued to remain high on day 21 for the control and experimental groups. Calcium levels were significantly reduced in the experimental group. These biochemical changes correlated with histochemical reductions in bone calcification. Thus, aluminum administration to rats appears to alter the differentiation and calcification of developing cartilage and bone in the DBM-induced bone formation model and suggests that aluminum by some mechanism alters the matrix calcification in growing bones. (orig.).

  19. Mueller Matrix: the Consummate approach to imaging in turbid media

    Science.gov (United States)

    Zhai, Peng-Wang; Kattawar, George W.

    2004-10-01

    The use of polarized light has important applications in astronomy, atmospheric science, chemistry, biology, interferometry, medical science, quantum theory, and the commercial sector. The four-component Stokes vector is one of the most popular ways to describe polarized states of light and the 4×4 Mueller matrix is used to express the relations between the Stokes vectors of the incident light and the scattered light. Of the many methods to calculate the single scattering Mueller matrix, we will emphasize the Mie theory; the T-matrix method; the finite-element method (FEM); the finite-difference time-domain method (FDTD); the discrete dipole approximation (DDA). The single scattering Mueller matrices for particles can be used to solve the radiative transfer equations for multiple scattering systems, which is the sine qua non for remote sensing applications. Of the many ways to solve the radiative transfer equations we will discuss the discrete-ordinate method, the adding and doubling method, and the Monte-Carlo method, which is by far the most versatile.

  20. A novel matrix approach for controlling the invariant densities of chaotic maps

    International Nuclear Information System (INIS)

    Rogers, Alan; Shorten, Robert; Heffernan, Daniel M.

    2008-01-01

    Recent work on positive matrices has resulted in a new matrix method for generating chaotic maps with arbitrary piecewise constant invariant densities, sometimes known as the inverse Frobenius-Perron problem (IFPP). In this paper, we give an extensive introduction to the IFPP, describing existing methods for solving it, and we describe our new matrix approach for solving the IFPP

  1. Blind Measurement Selection: A Random Matrix Theory Approach

    KAUST Repository

    Elkhalil, Khalil

    2016-12-14

    This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$-dimensional parameter vector. The exhaustive search inspecting each of the $\binom{n}{k}$ possible choices would require a very high computational complexity and as such is not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed form the most important error measures that are commonly used. The asymptotic approximations are then leveraged to select properly $k$ measurements exhibiting low values for the asymptotic error measures. Two heuristic algorithms are proposed: the first one merely consists in applying the convex optimization artifice to the asymptotic error measure. The second algorithm is a low-complexity greedy algorithm that attempts to look for a sufficiently good solution for the original minimization problem. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can thus be implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented and confirm the efficiency of the proposed blind methods in reaching the performances of channel-aware algorithms.
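
    A generic greedy row-selection sketch in the spirit of the second algorithm, using a log-det proxy for estimation error rather than the paper's RMT-based asymptotic error measures (which require the derivations in the text):

    ```python
    import numpy as np

    def greedy_select(H, k, eps=1e-6):
        """Greedily pick k of the n rows of H to maximise log det(H_S^T H_S + eps*I)."""
        n, m = H.shape
        selected = []
        for _ in range(k):
            best, best_val = None, -np.inf
            for i in range(n):
                if i in selected:
                    continue
                S = H[selected + [i]]
                val = np.linalg.slogdet(S.T @ S + eps * np.eye(m))[1]
                if val > best_val:
                    best, best_val = i, val
            selected.append(best)
        return selected

    rng = np.random.default_rng(0)
    H = rng.normal(size=(20, 5))        # hypothetical measurement matrix (n=20, m=5)
    print(greedy_select(H, k=8))
    ```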

  2. Digraph Matrix Analysis: A new approach to systems interaction analysis

    International Nuclear Information System (INIS)

    Sacks, I.J.; Alesso, H.P.; Ashmore, B.C.

    1985-01-01

    The term Systems Interaction was introduced by the Nuclear Regulatory Commission to identify interdependency of safety and support systems. Digraph Matrix Analysis was developed to allow the determination of these interdependencies. The main features of DMA are: the reliability model is traced directly from system schematics, all components of front line and support systems are included in a single integrated model, and the model is processed automatically with no heuristic culling applied. The recent application of DMA to the Indian Point-3 systems interaction analysis resulted in the discovery of several significant deeply hidden systems interactions

  3. Transfer-matrix approach for modulated structures with defects

    International Nuclear Information System (INIS)

    Kostyrko, T.

    2000-01-01

    We consider scattering of electrons by defects in a periodically modulated, quasi-one-dimensional structure, within a tight-binding model. Combining a transfer matrix method and a Green function method we derive a formula for a Landauer conductance and show its equivalence to the result of Kubo linear response theory. We obtain explicitly unperturbed lattice Green functions from their equations of motion, using the transfer matrices. We apply the presented formalism in computations of the conductance of several multiband modulated structures with defects: (a) carbon nanotubes (b) two-dimensional (2D) superlattice (c) modulated leads with 1D wire in the tunneling regime. (c) 2000 The American Physical Society

  4. Human exposure assessment: Approaches for chemicals (REACH) and biocides (BPD)

    NARCIS (Netherlands)

    Hemmen, J.J. van; Gerritsen-Ebben, R.

    2008-01-01

    The approaches that are indicated in the various guidance documents for the assessment of human exposure for chemicals and biocides are summarised. This reflects the TNsG (Technical notes for Guidance) version 2: human exposure assessment for biocidal products (1) under the BPD (Biocidal Products

  5. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-01-01

    random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various

  6. Random matrix approach to the dynamics of stock inventory variations

    International Nuclear Information System (INIS)

    Zhou Weixing; Mu Guohua; Kertész, János

    2012-01-01

    It is well accepted that investors can be classified into groups owing to distinct trading strategies, which forms the basic assumption of many agent-based models for financial markets when agents are not zero-intelligent. However, empirical tests of these assumptions are still very rare due to the lack of order flow data. Here we adopt the order flow data of Chinese stocks to tackle this problem by investigating the dynamics of inventory variations for individual and institutional investors that contain rich information about the trading behavior of investors and have a crucial influence on price fluctuations. We find that the distributions of cross-correlation coefficient C ij have power-law forms in the bulk that are followed by exponential tails, and there are more positive coefficients than negative ones. In addition, it is more likely that two individuals or two institutions have a stronger inventory variation correlation than one individual and one institution. We find that the largest and the second largest eigenvalues (λ 1 and λ 2 ) of the correlation matrix cannot be explained by random matrix theory and the projections of investors' inventory variations on the first eigenvector u(λ 1 ) are linearly correlated with stock returns, where individual investors play a dominating role. The investors are classified into three categories based on the cross-correlation coefficients C VR between inventory variations and stock returns. A strong Granger causality is unveiled from stock returns to inventory variations, which means that a large proportion of individuals hold the reversing trading strategy and a small part of individuals hold the trending strategy. Our empirical findings have scientific significance in the understanding of investors' trading behavior and in the construction of agent-based models for emerging stock markets. (paper)

  7. Random matrix approach to cross correlations in financial data

    Science.gov (United States)

    Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene

    2002-06-01

    We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices C of returns constructed from (i) 30-min returns of 1000 US stocks for the 2-yr period 1994-1995, (ii) 30-min returns of 881 US stocks for the 2-yr period 1996-1997, and (iii) 1-day returns of 422 US stocks for the 35-yr period 1962-1996. We test the statistics of the eigenvalues λi of C against a ``null hypothesis'' - a random correlation matrix constructed from mutually uncorrelated time series. We find that a majority of the eigenvalues of C fall within the RMT bounds [λ-,λ+] for the eigenvalues of random correlation matrices. We test the eigenvalues of C within the RMT bound for universal properties of random matrices and find good agreement with the results for the Gaussian orthogonal ensemble of random matrices-implying a large degree of randomness in the measured cross-correlation coefficients. Further, we find that the distribution of eigenvector components for the eigenvectors corresponding to the eigenvalues outside the RMT bound display systematic deviations from the RMT prediction. In addition, we find that these ``deviating eigenvectors'' are stable in time. We analyze the components of the deviating eigenvectors and find that the largest eigenvalue corresponds to an influence common to all stocks. Our analysis of the remaining deviating eigenvectors shows distinct groups, whose identities correspond to conventionally identified business sectors. Finally, we discuss applications to the construction of portfolios of stocks that have a stable ratio of risk to return.

  8. Random matrix approach to the dynamics of stock inventory variations

    Science.gov (United States)

    Zhou, Wei-Xing; Mu, Guo-Hua; Kertész, János

    2012-09-01

    It is well accepted that investors can be classified into groups owing to distinct trading strategies, which forms the basic assumption of many agent-based models for financial markets when agents are not zero-intelligent. However, empirical tests of these assumptions are still very rare due to the lack of order flow data. Here we adopt the order flow data of Chinese stocks to tackle this problem by investigating the dynamics of inventory variations for individual and institutional investors that contain rich information about the trading behavior of investors and have a crucial influence on price fluctuations. We find that the distributions of cross-correlation coefficient Cij have power-law forms in the bulk that are followed by exponential tails, and there are more positive coefficients than negative ones. In addition, it is more likely that two individuals or two institutions have a stronger inventory variation correlation than one individual and one institution. We find that the largest and the second largest eigenvalues (λ1 and λ2) of the correlation matrix cannot be explained by random matrix theory and the projections of investors' inventory variations on the first eigenvector u(λ1) are linearly correlated with stock returns, where individual investors play a dominating role. The investors are classified into three categories based on the cross-correlation coefficients CV R between inventory variations and stock returns. A strong Granger causality is unveiled from stock returns to inventory variations, which means that a large proportion of individuals hold the reversing trading strategy and a small part of individuals hold the trending strategy. Our empirical findings have scientific significance in the understanding of investors' trading behavior and in the construction of agent-based models for emerging stock markets.

  9. Risk exposure mitigation: Approaches and recognised instruments (5)

    Directory of Open Access Journals (Sweden)

    Matić Vesna

    2014-01-01

    Full Text Available The development of the risk management function in banks, along with the development of tools that banks can use throughout this process, has had strong support from international standards, not only in the recommended approaches for calculating economic capital requirements, but also in the qualitatively new treatment of risk exposure mitigation instruments (Basel Accord II). The array of instruments eligible for exposure mitigation under the recommended approaches for their treatment becomes an essential element of the economic capital requirements calculation, both in relation to certain types of risk and in relation to aggregate exposure.

  10. Risk exposure mitigation: Approaches and recognised instruments (3)

    Directory of Open Access Journals (Sweden)

    Matić Vesna

    2014-01-01

    Full Text Available The development of the risk management function in banks, along with the development of tools that banks can use throughout this process, has had strong support from international standards, not only in the recommended approaches for calculating economic capital requirements, but also in the qualitatively new treatment of risk exposure mitigation instruments (Basel Accord II). The array of instruments eligible for exposure mitigation under the recommended approaches for their treatment becomes an essential element of the economic capital requirements calculation, both in relation to certain types of risk and in relation to aggregate exposure.

  11. Risk exposure mitigation: Approaches and recognised instruments (6)

    Directory of Open Access Journals (Sweden)

    Matić Vesna

    2015-01-01

    Full Text Available The development of the risk management function in banks, along with the development of tools that banks can use throughout this process, has had strong support from international standards, not only in the recommended approaches for calculating economic capital requirements, but also in the qualitatively new treatment of risk exposure mitigation instruments (Basel Accord II). The array of instruments eligible for exposure mitigation under the recommended approaches for their treatment becomes an essential element of the economic capital requirements calculation, both in relation to certain types of risk and in relation to aggregate exposure.

  12. Randomized comparison of operator radiation exposure comparing transradial and transfemoral approach for percutaneous coronary procedures: rationale and design of the minimizing adverse haemorrhagic events by TRansradial access site and systemic implementation of angioX – RAdiation Dose study (RAD-MATRIX)

    International Nuclear Information System (INIS)

    Sciahbasi, Alessandro; Calabrò, Paolo; Sarandrea, Alessandro; Rigattieri, Stefano; Tomassini, Francesco; Sardella, Gennaro; Zavalloni, Dennis; Cortese, Bernardo; Limbruno, Ugo; Tebaldi, Matteo; Gagnor, Andrea; Rubartelli, Paolo; Zingarelli, Antonio; Valgimigli, Marco

    2014-01-01

    Background: The radiation absorbed by interventional cardiologists is an important and frequently under-evaluated issue. The aim is to compare the radiation dose absorbed by interventional cardiologists during percutaneous coronary procedures for acute coronary syndromes between transradial and transfemoral access. Methods: The randomized multicentre MATRIX (Minimizing Adverse Haemorrhagic Events by TRansradial Access Site and Systemic Implementation of angioX) trial has been designed to compare the clinical outcome of patients with acute coronary syndromes treated invasively according to the access site (transfemoral vs. transradial) and to the anticoagulant therapy (bivalirudin vs. heparin). Selected experienced interventional cardiologists involved in this study have been equipped with dedicated thermoluminescent dosimeters to evaluate the radiation dose absorbed during transfemoral, right transradial or left transradial access. For each access site, the radiation dose absorbed at wrist, thorax and eye level is evaluated. Consequently, the operator is equipped with three sets (transfemoral, right transradial and left transradial access) of three different dosimeters (wrist, thorax and eye dosimeters). The primary end-point of the study is the procedural radiation dose absorbed by operators at the thorax. An important secondary end-point is the procedural radiation dose absorbed by operators comparing the right and left radial approaches. Patient randomization is performed according to the MATRIX protocol for the femoral or radial approach. A further randomization for the radial approach is performed to compare right and left transradial access. Conclusions: The RAD-MATRIX study should help clarify the radiation exposure issue for interventional cardiologists comparing transradial and transfemoral access in the setting of acute coronary syndromes.

  13. Randomized comparison of operator radiation exposure comparing transradial and transfemoral approach for percutaneous coronary procedures: rationale and design of the minimizing adverse haemorrhagic events by TRansradial access site and systemic implementation of angioX – RAdiation Dose study (RAD-MATRIX)

    Energy Technology Data Exchange (ETDEWEB)

    Sciahbasi, Alessandro, E-mail: alessandro.sciahbasi@fastwebnet.it [Interventional Cardiology, Sandro Pertini Hospital – ASL RMB, Rome (Italy); Calabrò, Paolo [Division of Cardiology - Department of Cardio-Thoracic Sciences - Second University of Naples (Italy); Sarandrea, Alessandro [HSE Management, Rome (Italy); Rigattieri, Stefano [Interventional Cardiology, Sandro Pertini Hospital – ASL RMB, Rome (Italy); Tomassini, Francesco [Department of Cardiology, Infermi Hospital, Rivoli (Italy); Sardella, Gennaro [La Sapienza University, Rome (Italy); Zavalloni, Dennis [UO Emodinamica e Cardiologia Invasiva, IRCCS, Istituto Clinico Humanitas, Rozzano (Italy); Cortese, Bernardo [Interventional Cardiology, Fatebenefratelli Hospital, Milan (Italy); Limbruno, Ugo [Cardiology Unit, Misericordia Hospital, Grosseto (Italy); Tebaldi, Matteo [Cardiology Department, University of Ferrara, Department of Cardiology (Italy); Gagnor, Andrea [Department of Cardiology, Infermi Hospital, Rivoli (Italy); Rubartelli, Paolo [Villa Scassi Hospital, Genova (Italy); Zingarelli, Antonio [San Martino Hospital, Genova (Italy); Valgimigli, Marco [Thoraxcenter, Rotterdam (Netherlands)

    2014-06-15

    Background: The radiation absorbed by interventional cardiologists is an important and frequently under-evaluated issue. The aim is to compare the radiation dose absorbed by interventional cardiologists during percutaneous coronary procedures for acute coronary syndromes between transradial and transfemoral access. Methods: The randomized multicentre MATRIX (Minimizing Adverse Haemorrhagic Events by TRansradial Access Site and Systemic Implementation of angioX) trial has been designed to compare the clinical outcome of patients with acute coronary syndromes treated invasively according to the access site (transfemoral vs. transradial) and to the anticoagulant therapy (bivalirudin vs. heparin). Selected experienced interventional cardiologists involved in this study have been equipped with dedicated thermoluminescent dosimeters to evaluate the radiation dose absorbed during transfemoral, right transradial or left transradial access. For each access site, the radiation dose absorbed at wrist, thorax and eye level is evaluated. Consequently, the operator is equipped with three sets (transfemoral, right transradial and left transradial access) of three different dosimeters (wrist, thorax and eye dosimeters). The primary end-point of the study is the procedural radiation dose absorbed by operators at the thorax. An important secondary end-point is the procedural radiation dose absorbed by operators comparing the right and left radial approaches. Patient randomization is performed according to the MATRIX protocol for the femoral or radial approach. A further randomization for the radial approach is performed to compare right and left transradial access. Conclusions: The RAD-MATRIX study should help clarify the radiation exposure issue for interventional cardiologists comparing transradial and transfemoral access in the setting of acute coronary syndromes.

  14. Randomized comparison of operator radiation exposure comparing transradial and transfemoral approach for percutaneous coronary procedures: rationale and design of the minimizing adverse haemorrhagic events by TRansradial access site and systemic implementation of angioX - RAdiation Dose study (RAD-MATRIX).

    Science.gov (United States)

    Sciahbasi, Alessandro; Calabrò, Paolo; Sarandrea, Alessandro; Rigattieri, Stefano; Tomassini, Francesco; Sardella, Gennaro; Zavalloni, Dennis; Cortese, Bernardo; Limbruno, Ugo; Tebaldi, Matteo; Gagnor, Andrea; Rubartelli, Paolo; Zingarelli, Antonio; Valgimigli, Marco

    2014-06-01

    The radiation absorbed by interventional cardiologists is an important and frequently under-evaluated issue. The aim is to compare the radiation dose absorbed by interventional cardiologists during percutaneous coronary procedures for acute coronary syndromes between transradial and transfemoral access. The randomized multicentre MATRIX (Minimizing Adverse Haemorrhagic Events by TRansradial Access Site and Systemic Implementation of angioX) trial has been designed to compare the clinical outcome of patients with acute coronary syndromes treated invasively according to the access site (transfemoral vs. transradial) and to the anticoagulant therapy (bivalirudin vs. heparin). Selected experienced interventional cardiologists involved in this study have been equipped with dedicated thermoluminescent dosimeters to evaluate the radiation dose absorbed during transfemoral, right transradial or left transradial access. For each access site, the radiation dose absorbed at wrist, thorax and eye level is evaluated. Consequently, the operator is equipped with three sets (transfemoral, right transradial and left transradial access) of three different dosimeters (wrist, thorax and eye dosimeters). The primary end-point of the study is the procedural radiation dose absorbed by operators at the thorax. An important secondary end-point is the procedural radiation dose absorbed by operators comparing the right and left radial approaches. Patient randomization is performed according to the MATRIX protocol for the femoral or radial approach. A further randomization for the radial approach is performed to compare right and left transradial access. The RAD-MATRIX study should help clarify the radiation exposure issue for interventional cardiologists comparing transradial and transfemoral access in the setting of acute coronary syndromes.

  15. Stimulus Threat and Exposure Context Modulate the Effect of Mere Exposure on Approach Behaviors.

    Science.gov (United States)

    Young, Steven G; Jones, Isaiah F; Claypool, Heather M

    2016-01-01

    Mere-exposure (ME) research has found that initially neutral objects made familiar are preferred relative to novel objects. Recent work extends these preference judgments into the behavioral domain by illustrating that mere exposure prompts approach-oriented behavior toward familiar stimuli. However, no investigations have examined the effect of mere exposure on approach-oriented behavior toward threatening stimuli. The current work examines this issue and also explores how exposure context interacts with stimulus threat to influence behavioral tendencies. In two experiments participants were presented with both mere-exposed and novel stimuli and approach speed was assessed. In the first experiment, when stimulus threat was presented in a homogeneous format (i.e., participants viewed exclusively neutral or threatening stimuli), ME potentiated approach behaviors for both neutral and threatening stimuli. However, in the second experiment, in which stimulus threat was presented in a heterogeneous fashion (i.e., participants viewed both neutral and threatening stimuli), mere exposure facilitated approach only for initially neutral stimuli. These results suggest that ME effects on approach behaviors are highly context sensitive and depend on both stimulus valence and exposure context. Further implications of these findings for the ME literature are discussed.

  16. Comparison of modeling approaches to prioritize chemicals based on estimates of exposure and exposure potential.

    Science.gov (United States)

    Mitchell, Jade; Arnot, Jon A; Jolliet, Olivier; Georgopoulos, Panos G; Isukapalli, Sastry; Dasgupta, Surajit; Pandian, Muhilan; Wambaugh, John; Egeghy, Peter; Cohen Hubal, Elaine A; Vallero, Daniel A

    2013-08-01

    While only limited data are available to characterize the potential toxicity of over 8 million commercially available chemical substances, there is even less information available on the exposure and use scenarios that are required to link potential toxicity to human and ecological health outcomes. Recent improvements and advances, such as high-throughput data gathering, high-performance computational capabilities, and predictive chemical inherency methodology, make this an opportune time to develop an exposure-based prioritization approach that can systematically utilize and link the asymmetrical bodies of knowledge for hazard and exposure. In response to the US EPA's need to develop novel approaches and tools for rapidly prioritizing chemicals, a "Challenge" was issued to several exposure model developers to aid the understanding of current systems in a broader sense and to assist the US EPA's effort to develop an approach comparable to other international efforts. A common set of chemicals was prioritized under each current approach. The results are presented herein along with a comparative analysis of the rankings of the chemicals based on metrics of exposure potential or actual exposure estimates. The analysis illustrates the similarities and differences across the domains of information incorporated in each modeling approach. The overall findings indicate a need to reconcile exposures from diffuse, indirect sources (far-field) with exposures from directly applied chemicals in consumer products or resulting from the presence of a chemical in a microenvironment like a home or vehicle. Additionally, the exposure scenario, including the mode of entry into the environment (i.e. through air, water or sediment), appears to be an important determinant of the level of agreement between modeling approaches.

  17. Comparison of modeling approaches to prioritize chemicals based on estimates of exposure and exposure potential

    Science.gov (United States)

    Mitchell, Jade; Arnot, Jon A.; Jolliet, Olivier; Georgopoulos, Panos G.; Isukapalli, Sastry; Dasgupta, Surajit; Pandian, Muhilan; Wambaugh, John; Egeghy, Peter; Cohen Hubal, Elaine A.; Vallero, Daniel A.

    2014-01-01

    While only limited data are available to characterize the potential toxicity of over 8 million commercially available chemical substances, there is even less information available on the exposure and use scenarios that are required to link potential toxicity to human and ecological health outcomes. Recent improvements and advances, such as high-throughput data gathering, high-performance computational capabilities, and predictive chemical inherency methodology, make this an opportune time to develop an exposure-based prioritization approach that can systematically utilize and link the asymmetrical bodies of knowledge for hazard and exposure. In response to the US EPA's need to develop novel approaches and tools for rapidly prioritizing chemicals, a "Challenge" was issued to several exposure model developers to aid the understanding of current systems in a broader sense and to assist the US EPA's effort to develop an approach comparable to other international efforts. A common set of chemicals was prioritized under each current approach. The results are presented herein along with a comparative analysis of the rankings of the chemicals based on metrics of exposure potential or actual exposure estimates. The analysis illustrates the similarities and differences across the domains of information incorporated in each modeling approach. The overall findings indicate a need to reconcile exposures from diffuse, indirect sources (far-field) with exposures from directly applied chemicals in consumer products or resulting from the presence of a chemical in a microenvironment like a home or vehicle. Additionally, the exposure scenario, including the mode of entry into the environment (i.e. through air, water or sediment), appears to be an important determinant of the level of agreement between modeling approaches. PMID:23707726

  18. The brush model - a new approach to numerical modeling of matrix diffusion in fractured clay stone

    International Nuclear Information System (INIS)

    Lege, T.; Shao, H.

    1998-01-01

    A special approach for numerical modeling of contaminant transport in fractured clay stone is presented. The rock matrix and the fractures are simulated with individual formulations for FE grids and transport, coupled into a single model. The capacity of the rock matrix to take up contaminants is taken into consideration with a discrete simulation of matrix diffusion. Thus, the natural process of retardation due to matrix diffusion can be better simulated than by a standard introduction of an empirical parameter into the transport equation. Transport in groundwater in fractured clay stone can be simulated using a model called a 'brush model'. The 'brush handle' is discretized by 2-D finite elements. Advective-dispersive transport in groundwater in the fractures is assumed. The contaminant diffuses into 1D finite elements perpendicular to the fractures, i.e., the 'bristles of the brush'. The conclusion is drawn that matrix diffusion is an important property of fractured clay stone for contaminant retardation. (author)

  19. The Matrix model, a driven state variables approach to non-equilibrium thermodynamics

    NARCIS (Netherlands)

    Jongschaap, R.J.J.

    2001-01-01

    One of the new approaches in non-equilibrium thermodynamics is the so-called matrix model of Jongschaap. In this paper some features of this model are discussed. We indicate the differences with the more common approach based upon internal variables and with the more sophisticated Hamiltonian and GENERIC approaches.

  20. Integrating exposure into chemical alternatives assessment using a qualitative approach

    DEFF Research Database (Denmark)

    Greggs, Bill; Arnold, Scott; Burns, T. E.

    2016-01-01

    … other attributes beyond hazard are also important, including exposure, risk, life-cycle impacts, performance, cost, and social responsibility. Building on the 2014 recommendations by the U.S. National Academy of Sciences to improve AA decisions by including comparative exposure assessment, the HESI Sustainable Chemical Alternatives Technical Committee, which consists of scientists from academia, industry, government, and NGOs, has developed a qualitative comparative exposure approach. Conducting such a comparison can screen for alternatives that are expected to have a higher human or environmental … not necessarily reflect the views or policies of the U.S. Environmental Protection Agency.

  1. Standardized approach for developing probabilistic exposure factor distributions

    Energy Technology Data Exchange (ETDEWEB)

    Maddalena, Randy L.; McKone, Thomas E.; Sohn, Michael D.

    2003-03-01

    The effectiveness of a probabilistic risk assessment (PRA) depends critically on the quality of input information that is available to the risk assessor and specifically on the probabilistic exposure factor distributions that are developed and used in the exposure and risk models. Deriving probabilistic distributions for model inputs can be time consuming and subjective. The absence of a standard approach for developing these distributions can result in PRAs that are inconsistent and difficult to review by regulatory agencies. We present an approach that reduces subjectivity in the distribution development process without limiting the flexibility needed to prepare relevant PRAs. The approach requires two steps. First, we analyze data pooled at a population scale to (1) identify the most robust demographic variables within the population for a given exposure factor, (2) partition the population data into subsets based on these variables, and (3) construct archetypal distributions for each subpopulation. Second, we sample from these archetypal distributions according to site- or scenario-specific conditions to simulate exposure factor values and use these values to construct the scenario-specific input distribution. It is envisaged that the archetypal distributions from step 1 will be generally applicable so risk assessors will not have to repeatedly collect and analyze raw data for each new assessment. We demonstrate the approach for two commonly used exposure factors--body weight (BW) and exposure duration (ED)--using data for the U.S. population. For these factors we provide a first set of subpopulation based archetypal distributions along with methodology for using these distributions to construct relevant scenario-specific probabilistic exposure factor distributions.
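
    The two-step procedure described above lends itself to a simple illustration. The sketch below (Python) assumes lognormal archetypal body-weight distributions for two hypothetical subpopulations and mixes them according to an assumed scenario-specific demographic split; all parameter values are placeholders rather than the U.S. population data used by the authors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1 (illustrative): archetypal body-weight distributions for two hypothetical
# subpopulations, parameterized as lognormals (mean and sigma of ln(BW in kg)).
archetypes = {
    "adult_male":   {"mu": np.log(85.0), "sigma": 0.18},
    "adult_female": {"mu": np.log(72.0), "sigma": 0.20},
}

# Step 2: build a scenario-specific input distribution by sampling from the
# archetypes according to the demographic mix assumed for the exposure scenario.
scenario_mix = {"adult_male": 0.4, "adult_female": 0.6}
n_draws = 10_000

groups = rng.choice(list(scenario_mix), size=n_draws, p=list(scenario_mix.values()))
body_weight = np.array([
    rng.lognormal(archetypes[g]["mu"], archetypes[g]["sigma"]) for g in groups
])

print(f"Scenario BW: median={np.median(body_weight):.1f} kg, "
      f"95th percentile={np.percentile(body_weight, 95):.1f} kg")
```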

  2. Developing Asbestos Job Exposure Matrix Using Occupation and Industry Specific Exposure Data (1984–2008) in Republic of Korea

    Directory of Open Access Journals (Sweden)

    Sangjun Choi

    2017-03-01

    Conclusion: The newly constructed GPJEM, which is generated from actual domestic quantitative exposure data, could be useful in evaluating historical exposure levels to asbestos and could contribute to improved prediction of asbestos-related diseases among Koreans.

  3. Creation of a retrospective job-exposure matrix using surrogate measures of exposure for a cohort of US career firefighters from San Francisco, Chicago and Philadelphia.

    Science.gov (United States)

    Dahm, Matthew M; Bertke, Stephen; Allee, Steve; Daniels, Robert D

    2015-09-01

    To construct a cohort-specific job-exposure matrix (JEM) using surrogate metrics of exposure for a cancer study on career firefighters from the Chicago, Philadelphia and San Francisco Fire Departments. Departmental work history records, along with data on historical annual fire-runs and hours, were collected from 1950 to 2009 and coded into separate databases. These data were used to create a JEM based on standardised job titles and fire apparatus assignments, using several surrogate exposure metrics to estimate firefighters' exposure to the combustion byproducts of fire. The metrics included duration of exposure (cumulative time with a standardised exposed job title and assignment), fire-runs (cumulative events of potential fire exposure) and time at fires (cumulative hours of potential fire exposure). The JEM consisted of 2298 unique job titles alongside 16,174 fire apparatus assignments from the three departments, which were collapsed into 15 standardised job titles and 15 standardised job assignments. Correlations were found between fire-runs and time at fires (Pearson coefficient=0.92), duration of exposure and time at fires (Pearson coefficient=0.85), and duration of exposure and fire-runs (Pearson coefficient=0.82). Total misclassification rates were found to be between 16% and 30% when using duration of employment as an exposure surrogate, which has traditionally been used in most epidemiological studies, compared with using the duration of exposure surrogate metric. The constructed JEM successfully differentiated firefighters based on gradient levels of potential exposure to the combustion byproducts of fire using multiple surrogate exposure metrics.

  4. Refining mortality estimates in shark demographic analyses: a Bayesian inverse matrix approach.

    Science.gov (United States)

    Smart, Jonathan J; Punt, André E; White, William T; Simpfendorfer, Colin A

    2018-01-18

    Leslie matrix models are an important analysis tool in conservation biology that are applied to a diversity of taxa. The standard approach estimates the finite rate of population growth (λ) from a set of vital rates. In some instances, an estimate of λ is available, but the vital rates are poorly understood and can be solved for using an inverse matrix approach. However, these approaches are rarely attempted due to prerequisites of information on the structure of age or stage classes. This study addressed this issue by using a combination of Monte Carlo simulations and the sample-importance-resampling (SIR) algorithm to solve the inverse matrix problem without data on population structure. This approach was applied to the grey reef shark (Carcharhinus amblyrhynchos) from the Great Barrier Reef (GBR) in Australia to determine the demography of this population. Additionally, these outputs were applied to another heavily fished population from Papua New Guinea (PNG) that requires estimates of λ for fisheries management. The SIR analysis determined that natural mortality (M) and total mortality (Z) based on indirect methods have previously been overestimated for C. amblyrhynchos, leading to an underestimated λ. The updated Z distributions determined using SIR provided λ estimates that matched an empirical λ for the GBR population and corrected obvious error in the demographic parameters for the PNG population. This approach provides opportunity for the inverse matrix approach to be applied more broadly to situations where information on population structure is lacking.
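
    A minimal sketch of the Monte Carlo plus sampling-importance-resampling (SIR) idea is given below, under strong simplifying assumptions: a three-age-class Leslie matrix, uniform placeholder priors on the vital rates, and a Gaussian weighting around an assumed empirical λ. It is not the authors' parameterization for C. amblyrhynchos.

```python
import numpy as np

rng = np.random.default_rng(2)

def leslie_lambda(fecundity, survival):
    """Finite rate of population growth from a 3-age-class Leslie matrix."""
    A = np.zeros((3, 3))
    A[0, :] = fecundity            # age-specific fecundities (top row)
    A[1, 0] = survival[0]          # survival into age class 2
    A[2, 1] = survival[1]          # survival into age class 3
    return np.max(np.real(np.linalg.eigvals(A)))

# Monte Carlo draws of poorly known vital rates from broad placeholder priors
n = 50_000
fec = rng.uniform([0.0, 0.5, 0.5], [0.0, 2.0, 3.0], size=(n, 3))
surv = rng.uniform(0.5, 0.95, size=(n, 2))
lambdas = np.array([leslie_lambda(f, s) for f, s in zip(fec, surv)])

# SIR step: weight each draw by how well it matches an assumed empirical lambda,
# then resample in proportion to those weights to approximate the posterior.
lambda_obs, sd_obs = 1.05, 0.02
weights = np.exp(-0.5 * ((lambdas - lambda_obs) / sd_obs) ** 2)
weights /= weights.sum()
keep = rng.choice(n, size=5_000, p=weights)

post_surv = surv[keep]
print("Posterior adult survival (median, 5th, 95th percentile):",
      np.percentile(post_surv[:, 1], [50, 5, 95]).round(3))
```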

  5. Linear models in matrix form a hands-on approach for the behavioral sciences

    CERN Document Server

    Brown, Jonathon D

    2014-01-01

    This textbook is an approachable introduction to statistical analysis using matrix algebra. Prior knowledge of matrix algebra is not necessary. Advanced topics are easy to follow through analyses that were performed on an open-source spreadsheet using a few built-in functions. These topics include ordinary linear regression, as well as maximum likelihood estimation, matrix decompositions, nonparametric smoothers and penalized cubic splines. Each data set (1) contains a limited number of observations to encourage readers to do the calculations themselves, and (2) tells a coherent story based on statistical significance and confidence intervals. In this way, students will learn how the numbers were generated and how they can be used to make cogent arguments about everyday matters. This textbook is designed for use in upper level undergraduate courses or first year graduate courses. The first chapter introduces students to linear equations, then covers matrix algebra, focusing on three essential operations: sum ...
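
    In the spirit of the hands-on matrix treatment described above, the short sketch below solves an ordinary linear regression directly from the normal equations; the data points are made up.

```python
import numpy as np

# Ordinary linear regression in matrix form: beta = (X'X)^{-1} X'y
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

X = np.column_stack([np.ones_like(x), x])      # design matrix with intercept column
beta = np.linalg.solve(X.T @ X, X.T @ y)       # solve the normal equations
residuals = y - X @ beta
sigma2 = residuals @ residuals / (len(y) - X.shape[1])
cov_beta = sigma2 * np.linalg.inv(X.T @ X)     # covariance matrix of the estimates

print("intercept, slope:", beta.round(3))
print("standard errors:", np.sqrt(np.diag(cov_beta)).round(3))
```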

  6. A real-space stochastic density matrix approach for density functional electronic structure.

    Science.gov (United States)

    Beck, Thomas L

    2015-12-21

    The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.

  7. Using the realist perspective to link theory from qualitative evidence synthesis to quantitative studies: broadening the matrix approach.

    NARCIS (Netherlands)

    Grootel, L. van; Wesel, F. van; O'Mara-Eves, A.; Thomas, J.; Hox, J.; Boeije, H.

    2017-01-01

    Background: This study describes an approach for the use of a specific type of qualitative evidence synthesis in the matrix approach, a mixed studies reviewing method. The matrix approach compares quantitative and qualitative data on the review level by juxtaposing concrete recommendations from the

  8. A Decision Analytic Approach to Exposure-Based Chemical ...

    Science.gov (United States)

    The manufacture of novel synthetic chemicals has increased in volume and variety, but often the environmental and health risks are not fully understood in terms of toxicity and, in particular, exposure. While efforts to assess risks have generally been effective when sufficient data are available, the hazard and exposure data necessary to assess risks adequately are unavailable for the vast majority of chemicals in commerce. The US Environmental Protection Agency has initiated the ExpoCast Program to develop tools for rapid chemical evaluation based on potential for exposure. In this context, a model is presented in which chemicals are evaluated based on inherent chemical properties and behaviorally-based usage characteristics over the chemical's life cycle. These criteria are assessed and integrated within a decision analytic framework, facilitating rapid assessment and prioritization for future targeted testing and systems modeling. A case study outlines the prioritization process using 51 chemicals. The results show a preliminary relative ranking of chemicals based on exposure potential. The strength of this approach is the ability to integrate relevant statistical and mechanistic data with expert judgment, allowing for an initial tier assessment that can further inform targeted testing and risk management strategies. The National Exposure Research Laboratory's (NERL's) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support …
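
    A minimal sketch of a weighted multi-criteria ranking of chemicals by exposure potential is shown below; the criteria, weights, and chemical names are hypothetical and are not the ExpoCast model itself.

```python
# Hypothetical weighted-criteria ranking of chemicals by exposure potential.
# Criteria scores are normalized to 0-1; weights stand in for expert judgment.
weights = {"production_volume": 0.4, "consumer_use": 0.35, "persistence": 0.25}

chemicals = {
    "chem_A": {"production_volume": 0.9, "consumer_use": 0.8, "persistence": 0.3},
    "chem_B": {"production_volume": 0.4, "consumer_use": 0.2, "persistence": 0.9},
    "chem_C": {"production_volume": 0.7, "consumer_use": 0.6, "persistence": 0.5},
}

# Weighted sum of criteria gives a relative exposure-potential score per chemical
scores = {
    name: sum(weights[c] * vals[c] for c in weights)
    for name, vals in chemicals.items()
}

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: exposure-potential score = {score:.2f}")
```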

  9. A Synthetic Approach to the Transfer Matrix Method in Classical and Quantum Physics

    Science.gov (United States)

    Pujol, O.; Perez, J. P.

    2007-01-01

    The aim of this paper is to propose a synthetic approach to the transfer matrix method in classical and quantum physics. This method is an efficient tool to deal with complicated physical systems of practical importance in geometrical light or charged particle optics, classical electronics, mechanics, electromagnetics and quantum physics. Teaching…
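
    As one concrete instance of the transfer matrix method, the sketch below multiplies 2×2 characteristic matrices of thin optical layers at normal incidence and reads off the reflectance of the stack; the refractive indices and thicknesses are illustrative, not taken from the paper.

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic 2x2 matrix of a homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d / wavelength          # phase thickness of the layer
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

# Illustrative quarter-wave stack between air (n0) and glass (ns)
wavelength = 550e-9
n0, ns = 1.0, 1.52
layers = [(2.35, wavelength / (4 * 2.35)), (1.38, wavelength / (4 * 1.38))] * 4

M = np.eye(2, dtype=complex)
for n, d in layers:
    M = M @ layer_matrix(n, d, wavelength)          # multiply layer matrices in order

# Reflection coefficient from the total transfer matrix (standard thin-film formula)
r = (n0 * M[0, 0] + n0 * ns * M[0, 1] - M[1, 0] - ns * M[1, 1]) / \
    (n0 * M[0, 0] + n0 * ns * M[0, 1] + M[1, 0] + ns * M[1, 1])
print(f"Reflectance of the stack: {abs(r)**2:.3f}")
```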

  10. Off-shell two-particle scattering amplitude in the P-matrix approach

    International Nuclear Information System (INIS)

    Babenko, V.A.; Petrov, N.M.

    1988-01-01

    A generalization of the P-matrix approach which makes it possible to describe the interaction of two particles off the energy shell is proposed. Explicit separation in the wave function of a part corresponding to free motion yields a compact expression for the off-shell scattering amplitude and gives directly a method for separable expansion of the amplitude

  11. Matrix approach to consistency of the additive efficient normalization of semivalues

    NARCIS (Netherlands)

    Xu, G.; Driessen, Theo; Sun, H.; Sun, H.

    2007-01-01

    In fact, the Shapley value is the unique efficient semivalue. This motivated Ruiz et al. to carry out the additive efficient normalization of semivalues. In this paper, using a matrix approach, we derive the relationship between the additive efficient normalization of semivalues and the Shapley value. Based on the …

  12. Upper arm elevation and repetitive shoulder movements: a general population job exposure matrix based on expert ratings and technical measurements.

    Science.gov (United States)

    Dalbøge, Annett; Hansson, Gert-Åke; Frost, Poul; Andersen, Johan Hviid; Heilskov-Hansen, Thomas; Svendsen, Susanne Wulff

    2016-08-01

    We recently constructed a general population job exposure matrix (JEM), The Shoulder JEM, based on expert ratings. The overall aim of this study was to convert expert-rated job exposures for upper arm elevation and repetitive shoulder movements to measurement scales. The Shoulder JEM covers all Danish occupational titles, divided into 172 job groups. For 36 of these job groups, we obtained technical measurements (inclinometry) of upper arm elevation and repetitive shoulder movements. To validate the expert-rated job exposures against the measured job exposures, we used Spearman rank correlations and the explained variance according to linear regression analyses (36 job groups). We used the linear regression equations to convert the expert-rated job exposures for all 172 job groups into predicted measured job exposures. Bland-Altman analyses were used to assess the agreement between the predicted and measured job exposures. The Spearman rank correlations were 0.63 for upper arm elevation and 0.64 for repetitive shoulder movements. The expert-rated job exposures explained 64% and 41% of the variance of the measured job exposures, respectively. The corresponding calibration equations were y = 0.5 %time + 0.16 × expert rating and y = 27°/s + 0.47 × expert rating. The mean differences between predicted and measured job exposures were zero due to calibration; the 95% limits of agreement were ±2.9 %time for upper arm elevation >90° and ±33°/s for repetitive shoulder movements. The updated Shoulder JEM can be used to present exposure-response relationships on measurement scales.
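
    The calibration equations reported above can be applied directly. The sketch below converts hypothetical expert ratings to the measurement scales using those two equations; the example job groups and ratings are made up.

```python
# Convert expert-rated job exposures to predicted measured exposures using the
# calibration equations reported above (example job groups and ratings are hypothetical).
def predict_arm_elevation(expert_rating):
    """Predicted % of time with the upper arm elevated >90 degrees."""
    return 0.5 + 0.16 * expert_rating

def predict_repetition(expert_rating):
    """Predicted angular velocity of shoulder movements (degrees/second)."""
    return 27.0 + 0.47 * expert_rating

example_jobs = {"job_group_A": (5.0, 40.0), "job_group_B": (12.0, 90.0)}
for job, (elev_rating, rep_rating) in example_jobs.items():
    print(job,
          f"arm elevation >90 deg: {predict_arm_elevation(elev_rating):.1f} %time,",
          f"repetition: {predict_repetition(rep_rating):.0f} deg/s")
```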

  13. Cloud-Based DDoS HTTP Attack Detection Using Covariance Matrix Approach

    Directory of Open Access Journals (Sweden)

    Abdulaziz Aborujilah

    2017-01-01

    Full Text Available In this era of technology, cloud computing has become an essential part of the IT services used in daily life. In this regard, website hosting services are gradually moving to the cloud. This adds new valued features to cloud-based websites and at the same time introduces new threats for such services. The DDoS attack is one such serious threat. A covariance matrix approach is used in this article to detect such attacks. The results were encouraging, according to confusion matrix and ROC descriptors.
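
    A minimal sketch of covariance-matrix-based anomaly detection is given below: a covariance matrix is estimated from feature vectors of normal traffic windows and new windows are flagged by a Mahalanobis-type distance. The features, data, and threshold are illustrative assumptions, not the detection scheme specified in the article.

```python
import numpy as np

rng = np.random.default_rng(3)

# Training: covariance matrix of feature vectors from normal HTTP traffic windows
# (e.g. request rate, number of unique IPs, mean payload size per time window).
normal = rng.multivariate_normal([100, 50, 800], np.diag([100, 25, 10000]), size=500)
mean = normal.mean(axis=0)
cov = np.cov(normal, rowvar=False)
cov_inv = np.linalg.inv(cov)

def mahalanobis2(x):
    """Squared Mahalanobis distance of a feature vector from the normal profile."""
    d = x - mean
    return d @ cov_inv @ d

# Detection: windows whose squared distance exceeds a threshold are flagged
threshold = 20.0                           # illustrative; tune on validation traffic
suspect = np.array([400.0, 45.0, 300.0])   # e.g. a flood of small requests
print("normal window flagged: ", mahalanobis2(normal[0]) > threshold)
print("suspect window flagged:", mahalanobis2(suspect) > threshold)
```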

  14. A Delphi-matrix approach to SEA and its application within the tourism sector in Taiwan

    International Nuclear Information System (INIS)

    Kuo, N.-W.; Hsiao, T.-Y.; Yu, Y.-H.

    2005-01-01

    Strategic Environmental Assessment (SEA) is a procedural tool and, within the framework of SEA, several different types of analytical methods can be used in the assessment. However, the impact matrix currently used in Taiwan has some disadvantages. Hence, a Delphi-matrix approach to SEA is proposed here to improve the performance of Taiwan's SEA. This new approach is based on the impact matrix combined with indicators of sustainability, and the Delphi method is then employed to collect experts' opinions. In addition, the assessment of the National Floriculture Park Plan and the Taiwan Flora 2008 Program is taken as an example to examine this new method. Although international exhibitions are an important tourism (economic) activity, SEA is seldom applied to the tourism sector. Finally, the Delphi-matrix approach to SEA for a tourism development plan is established, containing eight assessment topics and 26 corresponding categories. In summary, three major types of impacts are found: resource usage, pollution emissions, and changes to local cultures. Resource usages, such as water, electricity, and natural gas demand, are calculated on a per capita basis. Various forms of pollution resulting from this plan, such as air, water, soil, waste, and noise, are also identified.

  15. Characterization of the Vibrio cholerae extracellular matrix: a top-down solid-state NMR approach.

    Science.gov (United States)

    Reichhardt, Courtney; Fong, Jiunn C N; Yildiz, Fitnat; Cegelski, Lynette

    2015-01-01

    Bacterial biofilms are communities of bacterial cells surrounded by a self-secreted extracellular matrix. Biofilm formation by Vibrio cholerae, the human pathogen responsible for cholera, contributes to its environmental survival and infectivity. Important genetic and molecular requirements have been identified for V. cholerae biofilm formation, yet a compositional accounting of these parts in the intact biofilm or extracellular matrix has not been described. As insoluble and non-crystalline assemblies, determinations of biofilm composition pose a challenge to conventional biochemical and biophysical analyses. The V. cholerae extracellular matrix composition is particularly complex, with several proteins, complex polysaccharides, and other biomolecules having been identified as matrix parts. We developed a new top-down solid-state NMR approach to spectroscopically assign and quantify the carbon pools of the intact V. cholerae extracellular matrix using ¹³C CPMAS and ¹³C{¹⁵N}, ¹⁵N{³¹P}, and ¹³C{³¹P} REDOR. General sugar, lipid, and amino acid pools were first profiled and then further annotated and quantified as specific carbon types, including carbonyls, amides, glycyl carbons, and anomerics. In addition, ¹⁵N profiling revealed a large amine pool relative to amide contributions, reflecting the prevalence of molecular modifications with free amine groups. Our top-down approach could be implemented immediately to examine the extracellular matrix from mutant strains that might alter polysaccharide production or lipid release beyond the cell surface, or to monitor changes that may accompany environmental variations and stressors such as altered nutrient composition, oxidative stress or antibiotics. More generally, our analysis has demonstrated that solid-state NMR is a valuable tool to characterize complex biofilm systems.

  16. Shrinkage covariance matrix approach based on robust trimmed mean in gene sets detection

    Science.gov (United States)

    Karjanto, Suryaefiza; Ramli, Norazan Mohamed; Ghani, Nor Azura Md; Aripin, Rasimah; Yusop, Noorezatty Mohd

    2015-02-01

    A microarray consists of an orderly arrangement of thousands of gene sequences placed in a grid on a suitable surface. The technology has enabled novel discoveries since its development and has attracted increasing attention among researchers. The widespread adoption of microarray technology is largely due to its ability to perform simultaneous analysis of thousands of genes in a massively parallel manner in one experiment. Hence, it provides valuable knowledge on gene interaction and function. A microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples due to various constraints. Therefore, the sample covariance matrix in Hotelling's T2 statistic is not positive definite and becomes singular, so it cannot be inverted. In this research, the Hotelling's T2 statistic is combined with a shrinkage approach as an alternative estimation of the covariance matrix to detect significant gene sets. The use of a shrinkage covariance matrix overcomes the singularity problem by converting an unbiased estimator into an improved biased estimator of the covariance matrix. A robust trimmed mean is integrated into the shrinkage matrix to reduce the influence of outliers and consequently increase its efficiency. The performance of the proposed method is measured using several simulation designs. The results are expected to outperform existing techniques in many tested conditions.
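
    The shrinkage idea can be illustrated compactly: blend the singular sample covariance of a "many genes, few samples" matrix toward a well-conditioned diagonal target so that it becomes invertible for a Hotelling-type statistic. In the sketch below the data, target, and shrinkage intensity are illustrative, and the robust trimmed-mean refinement is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# "Many genes, few samples": p >> n, so the sample covariance is singular.
n_samples, n_genes = 20, 500
X = rng.standard_normal((n_samples, n_genes))

S = np.cov(X, rowvar=False)                      # p x p sample covariance (rank < p)
target = np.diag(np.diag(S))                     # well-conditioned diagonal target
alpha = 0.3                                      # illustrative shrinkage intensity

S_shrunk = (1 - alpha) * S + alpha * target      # shrinkage estimator of the covariance

print("rank of sample covariance:", np.linalg.matrix_rank(S))
print("condition number of shrunk covariance:", f"{np.linalg.cond(S_shrunk):.1e}")
```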

  17. An expert-based job exposure matrix for large scale epidemiologic studies of primary hip and knee osteoarthritis

    DEFF Research Database (Denmark)

    Rubak, Tine Steen; Svendsen, Susanne Wulff; Andersen, Johan Hviid

    2014-01-01

    BACKGROUND: When conducting large scale epidemiologic studies, it is a challenge to obtain quantitative exposure estimates which do not rely on self-report, where estimates may be influenced by symptoms and knowledge of disease status. In this study we developed a job exposure matrix (JEM) for use in population studies of the work-relatedness of hip and knee osteoarthritis. METHODS: Based on all 2227 occupational titles in the Danish version of the International Standard Classification of Occupations (D-ISCO 88), we constructed 121 job groups comprising occupational titles with expected homogeneous … /day), and frequency of lifting loads weighing ≥20 kg (times/day). Weighted kappa statistics were used to evaluate inter-rater agreement on rankings of the job groups for four of these exposures (whole-body vibration could not be evaluated due to few exposed job groups). Two external experts checked the face validity …

  18. Iterative approach as alternative to S-matrix in modal methods

    Science.gov (United States)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, an iterative approach potentially enables the reduction of the computational time required to solve Maxwell's equations by eigenmode expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are computed as a rule by the scattering matrix (S-matrix) approach or similar techniques, requiring on the order of M³ operations. In this work we consider alternatives to the S-matrix technique which are based on purely iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M³-order calculations on the overall time, and in some cases even of reducing the number of arithmetic operations to M², by applying iterative techniques is discussed. Numerical results are illustrated to discuss the validity and potential of the proposed approaches.

  19. Aligning Animal Models of Clinical Germinal Matrix Hemorrhage, From Basic Correlation to Therapeutic Approach.

    Science.gov (United States)

    Lekic, Tim; Klebe, Damon; Pichon, Pilar; Brankov, Katarina; Sultan, Sally; McBride, Devin; Casel, Darlene; Al-Bayati, Alhamza; Ding, Yan; Tang, Jiping; Zhang, John H

    2017-01-01

    Germinal matrix hemorrhage is a leading cause of mortality and morbidity from prematurity. This brain region is vulnerable to bleeding and re-bleeding within the first 72 hours of preterm life. Cerebroventricular expansion of blood products contributes to the mechanisms of brain injury. Consequences include lifelong hydrocephalus, cerebral palsy, and intellectual disability. Unfortunately little is known about the therapeutic needs of this patient population. This review discusses the mechanisms of germinal matrix hemorrhage, the animal models utilized, and the potential therapeutic targets. Potential therapeutic approaches identified in pre-clinical investigations include corticosteroid therapy, iron chelator administration, and transforming growth factor-β pathway modulation, which all warrant further investigation. Thus, effective preclinical modeling is essential for elucidating and evaluating novel therapeutic approaches, ahead of clinical consideration.

  20. Food allergy: practical approach on education and accidental exposure prevention.

    Science.gov (United States)

    Pádua, I; Moreira, A; Moreira, P; Barros, R

    2016-09-01

    Food allergies are a growing problem, and currently the primary treatment of food allergy is avoidance of culprit foods. However, given the lack of information and education and the ubiquitous nature of allergens, accidental exposures to food allergens are not uncommon. The fear of potentially fatal reactions and the need for proper avoidance lead in most cases to the limitation of leisure and social activities. This review aims to provide a practical approach to education and accidental exposure prevention regarding activities such as shopping, eating out, and travelling. The recommendations focus especially on proper reading of food labels and on management of the disease, namely in restaurants and on airplanes, concerning cross-contact and communication with other stakeholders. The implementation of effective tools is essential to manage food allergy outside the home, avoid serious allergic reactions and minimize the disease's impact on individuals' quality of life.

  1. Teaching the extracellular matrix and introducing online databases within a multidisciplinary course with i-cell-MATRIX: A student-centered approach.

    Science.gov (United States)

    Sousa, João Carlos; Costa, Manuel João; Palha, Joana Almeida

    2010-03-01

    The biochemistry and molecular biology of the extracellular matrix (ECM) is difficult to convey to students in a classroom setting in ways that capture their interest. The understanding of the matrix's roles in physiological and pathological conditions will presumably be hampered by insufficient knowledge of its molecular structure. Internet-available resources can bridge the divide between the molecular details and the ECM's biological properties and associated processes. This article presents an approach to teaching the ECM developed for first year medical undergraduates who, working in teams: (i) explore a specific molecular component of the matrix, (ii) identify a disease in which the component is implicated, (iii) investigate how the component's structure/function contributes to the ECM's supramolecular organization in physiological and in pathological conditions, and (iv) share their findings with colleagues. The approach, designated i-cell-MATRIX, is focused on the contribution of individual components to the overall organization and biological functions of the ECM. i-cell-MATRIX is student centered and uses 5 hours of class time. Summary of results and take-home message: A "1-minute paper" has been used to gather student feedback on the impact of i-cell-MATRIX. Qualitative analysis of student feedback gathered in three consecutive years revealed that students appreciate the approach's reliance on self-directed learning, the interactivity embedded and the demand for deeper insights on the ECM. Learning how to use internet biomedical resources is another positive outcome. Ninety percent of students recommend the activity for subsequent years. i-cell-MATRIX is adaptable by other medical schools which may be looking for an approach that achieves higher student engagement with the ECM.

  2. Hessian matrix approach for determining error field sensitivity to coil deviations

    Science.gov (United States)

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi

    2018-05-01

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
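
    The underlying idea can be illustrated with a toy cost function: build the Hessian with respect to a few 'coil' parameters by finite differences and read the sensitivities from its eigenvalues and eigenvectors. The cost below is a stand-in, not the FOCUS normal-field error functional.

```python
import numpy as np

def cost(p):
    """Stand-in cost: a toy surrogate for a normalized field-error functional."""
    return 2.0 * p[0]**2 + 0.5 * p[1]**2 + 0.1 * p[2]**2 + 0.3 * p[0] * p[1]

def numerical_hessian(f, p0, h=1e-4):
    """Second-order central finite-difference Hessian of f at p0."""
    n = len(p0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pp = p0.copy(); pp[i] += h; pp[j] += h
            pm = p0.copy(); pm[i] += h; pm[j] -= h
            mp = p0.copy(); mp[i] -= h; mp[j] += h
            mm = p0.copy(); mm[i] -= h; mm[j] -= h
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * h**2)
    return H

p0 = np.zeros(3)                    # nominal "coil" parameters
H = numerical_hessian(cost, p0)
eigvals, eigvecs = np.linalg.eigh(H)

# Large eigenvalues mark the parameter combinations the error field is most
# sensitive to; the corresponding eigenvectors are the deviations to control tightly.
for lam, v in zip(eigvals[::-1], eigvecs[:, ::-1].T):
    print(f"sensitivity {lam:.2f} along direction {np.round(v, 2)}")
```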

  3. Alexithymia tendencies and mere exposure alter social approachability judgments.

    Science.gov (United States)

    Campbell, Darren W; McKeen, Nancy A

    2011-04-01

    People have a fundamental motivation for social connection and social engagement, but how do they decide whom to approach in ambiguous social situations? Subjective feelings often influence such decisions, but people vary in awareness of their feelings. We evaluated two opposing hypotheses based on visual familiarity effects and emotional awareness on social approachability judgments. These hypotheses differ in their interpretation of the familiarity or mere exposure effect with either an affective or cognitive interpretation. The responses of our 128-student sample supported the cognitive interpretation. Lower emotional awareness or higher alexithymia was associated with higher approachability judgments to familiarized faces and lower approachability judgments to novel faces. These findings were independent of the Big Five personality factors. The results indicate that individual differences in emotional awareness should be integrated into social decision-making models. The results also suggest that cognitive-perceptual alterations may underlie the poorer social outcomes associated with alexithymia.

  4. Expression of cytoskeletal and matrix genes following exposure to ionizing radiation: Dose-rate effects and protein synthesis requirements

    International Nuclear Information System (INIS)

    Woloschak, G.E.

    1994-01-01

    Experiments were designed to examine the effects of radiation dose-rate and of the protein synthesis inhibitor cycloheximide on expression of cytoskeletal elements (γ- and β-actin and α-tubulin) and matrix elements (fibronectin) in Syrian hamster embryo cells. Past work from our laboratory had already demonstrated optimum time points and doses for examination of radiation effects on accumulation of specific transcripts. Our results here demonstrated little effect of dose-rate for JANUS fission spectrum neutrons when comparing expression of either α-tubulin or fibronectin genes. Past work had already documented similar results for expression of actin transcripts. Effects of cycloheximide revealed that: (1) cycloheximide repressed accumulation of α-tubulin following exposure to high dose-rate neutrons or γ rays, which did not occur following similar low dose-rate exposure; (2) cycloheximide did not affect accumulation of mRNA for actin genes; and (3) cycloheximide abrogated the moderate induction of fibronectin mRNA which occurred following exposure to γ rays and high dose-rate neutrons. These results suggest a role for labile proteins in the maintenance of α-tubulin and fibronectin mRNA accumulation following exposure to ionizing radiation. In addition, they suggest that the cellular/molecular response to low dose-rate neutrons may be different from the response to high dose-rate neutrons.

  5. Expression of cytoskeletal and matrix genes following exposure to ionizing radiation: Dose-rate effects and protein synthesis requirements

    International Nuclear Information System (INIS)

    Woloschak, G.E.; Felcher, P.; Chang-Liu, Chin-Mei

    1992-01-01

    Experiments were designed to examine the effects of radiation dose-rate and of the protein synthesis inhibitor cycloheximide on expression of cytoskeletal elements (γ- and β-actin and α-tubulin) and matrix elements (fibronectin) in Syrian hamster embryo cells. Past work from our laboratory had already demonstrated optimum time points and doses for examination of radiation effects on accumulation of specific transcripts. Our results here demonstrated little effect of dose-rate for JANUS fission spectrum neutrons when comparing expression of either α-tubulin or fibronectin genes. Past work had already documented similar results for expression of actin transcripts. Effects of cycloheximide, however, revealed several interesting and novel findings: (1) cycloheximide repressed accumulation of α-tubulin following exposure to high dose-rate neutrons or γ rays; this did not occur following similar low dose-rate exposure; (2) cycloheximide did not affect accumulation of mRNA for actin genes; and (3) cycloheximide abrogated the moderate induction of fibronectin mRNA which occurred following exposure to γ rays and high dose-rate neutrons. These results suggest a role for labile proteins in the maintenance of α-tubulin and fibronectin mRNA accumulation following exposure to ionizing radiation. In addition, they suggest that the cellular/molecular response to low dose-rate neutrons may be different from the response to high dose-rate neutrons.

  6. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    Science.gov (United States)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
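
    The key computational step can be sketched as follows: a separable 3-D correlation matrix is represented as the Kronecker product of small 1-D correlation matrices, so only the 1-D factors ever need to be decomposed. Grid sizes and correlation lengths below are illustrative placeholders.

```python
import numpy as np

def corr_1d(n, length_scale):
    """1-D Gaussian correlation matrix on a unit grid."""
    x = np.arange(n)
    return np.exp(-0.5 * ((x[:, None] - x[None, :]) / length_scale) ** 2)

# 1-D correlation matrices along the three coordinate directions
Cx, Cy, Cz = corr_1d(12, 3.0), corr_1d(10, 2.0), corr_1d(8, 1.5)

# Separable 3-D correlation matrix as a Kronecker product (12*10*8 = 960 points).
C3d = np.kron(np.kron(Cx, Cy), Cz)

# Only the small 1-D factors need to be decomposed
wx, Vx = np.linalg.eigh(Cx)
wy, Vy = np.linalg.eigh(Cy)
wz, Vz = np.linalg.eigh(Cz)

# Eigenvalues of the Kronecker product are products of the 1-D eigenvalues
lam_max_from_factors = wx.max() * wy.max() * wz.max()
print("largest eigenvalue via 1-D factors:", round(lam_max_from_factors, 3))
print("largest eigenvalue of full matrix: ", round(np.linalg.eigvalsh(C3d).max(), 3))
```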

  7. Quasi-particle entanglement: redefinition of the vacuum and reduced density matrix approach

    International Nuclear Information System (INIS)

    Samuelsson, P; Sukhorukov, E V; Buettiker, M

    2005-01-01

    A scattering approach to entanglement in mesoscopic conductors with independent fermionic quasi-particles is discussed. We focus on conductors in the tunnelling limit, where a redefinition of the quasi-particle vacuum transforms the wavefunction from a many-body product state of non-interacting particles to a state describing entangled two-particle excitations out of the new vacuum (Samuelsson, Sukhorukov and Buettiker 2003 Phys. Rev. Lett. 91 157002). The approach is illustrated with two examples: (i) a normal-superconducting system, where the transformation is made between Bogoliubov-de Gennes quasi-particles and Cooper pairs, and (ii) a normal system, where the transformation is made between electron quasi-particles and electron-hole pairs. This is compared to a scheme where an effective two-particle state is derived from the many-body scattering state by a reduced density matrix approach.

  8. Sex and Adolescent Ethanol Exposure Influence Pavlovian Conditioned Approach.

    Science.gov (United States)

    Madayag, Aric C; Stringfield, Sierra J; Reissner, Kathryn J; Boettiger, Charlotte A; Robinson, Donita L

    2017-04-01

    Alcohol use among adolescents is widespread and a growing concern due to long-term behavioral deficits, including altered Pavlovian behavior, that potentially contribute to addiction vulnerability. We tested the hypothesis that adolescent intermittent ethanol (AIE) exposure alters Pavlovian behavior in males and females as measured by a shift from goal-tracking to sign-tracking. Additionally, we investigated GLT-1, an astrocytic glutamate transporter, as a potential contributor to a sign-tracking phenotype. Male and female Sprague-Dawley rats were exposed to AIE (5 g/kg, intragastric) or water intermittently 2 days on and 2 days off from postnatal day (P) 25 to 54. Around P70, animals began 20 daily sessions of Pavlovian conditioned approach (PCA), where they learned that a cue predicted noncontingent reward delivery. Lever pressing indicated interaction with the cue, or sign-tracking, and receptacle entries indicated approach to the reward delivery location, or goal-tracking. To test for effects of AIE on nucleus accumbens (NAcc) excitatory signaling, we isolated membrane subfractions and measured protein levels of the glutamate transporter GLT-1 after animals completed behavior as a measure of glutamate homeostasis. Females exhibited elevated sign-tracking compared to males with significantly more lever presses, faster latency to first lever press, and greater probability to lever press in a trial. AIE significantly increased lever pressing while blunting goal-tracking, as indicated by fewer cue-evoked receptacle entries, slower latency to receptacle entry, and lower probability to enter the receptacle in a trial. No significant sex-by-exposure interactions were observed in sign- or goal-tracking metrics. Moreover, we found no significant effects of sex or exposure on membrane GLT-1 expression in the NAcc. Females exhibited enhanced sign-tracking compared to males, while AIE decreased goal-tracking compared to control exposure. Our findings support the

  9. Development of a matrix approach to estimate soil clean-up levels for BTEX compounds

    International Nuclear Information System (INIS)

    Erbas-White, I.; San Juan, C.

    1993-01-01

    A draft state-of-the-art matrix approach has been developed for the State of Washington to estimate clean-up levels for benzene, toluene, ethylbenzene and xylene (BTEX) in deep soils based on an endangerment approach to groundwater. Derived soil clean-up levels are estimated using a combination of two computer models, MULTIMED and VLEACH. The matrix uses a simple scoring system to assign a score to a given site based on parameters such as depth to groundwater, mean annual precipitation, type of soil, distance to potential groundwater receptor and the volume of contaminated soil. The total score is then used to obtain a soil clean-up level from a table. The general approach used involves the utilization of computer models to back-calculate soil contaminant levels in the vadose zone that would create that particular contaminant concentration in groundwater at a given receptor. This usually takes a few iterations of trial runs to estimate the clean-up levels since the models use the soil clean-up levels as "input" and the groundwater levels as "output." The selected contaminant levels in groundwater are Model Toxics Control Act (MTCA) values used in the State of Washington.
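
    The table-lookup logic of such a scoring matrix can be sketched as follows. All scores, bins and clean-up levels in this snippet are invented for illustration; the actual Washington State matrix derives its values from MULTIMED/VLEACH runs.

```python
# Hypothetical illustration of the matrix scoring idea: each site parameter is
# scored, the total score selects a soil clean-up level from a lookup table.
def score_site(depth_to_gw_m, precip_mm, soil, distance_to_receptor_m, volume_m3):
    s = 0
    s += 1 if depth_to_gw_m > 30 else 3 if depth_to_gw_m > 10 else 5
    s += 1 if precip_mm < 300 else 3 if precip_mm < 800 else 5
    s += {"clay": 1, "silt": 3, "sand": 5}[soil]
    s += 1 if distance_to_receptor_m > 500 else 3 if distance_to_receptor_m > 100 else 5
    s += 1 if volume_m3 < 100 else 3 if volume_m3 < 1000 else 5
    return s

# total score -> benzene clean-up level (mg/kg); illustrative numbers only
CLEANUP_TABLE = [(10, 5.0), (15, 1.0), (20, 0.5), (25, 0.1)]

def cleanup_level(score):
    for threshold, level in CLEANUP_TABLE:
        if score <= threshold:
            return level
    return 0.05  # most protective level for the highest-risk sites

s = score_site(depth_to_gw_m=8, precip_mm=900, soil="sand",
               distance_to_receptor_m=80, volume_m3=2000)
print("score:", s, "clean-up level (mg/kg):", cleanup_level(s))
```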

  10. Combinatorial theory of the semiclassical evaluation of transport moments. I. Equivalence with the random matrix approach

    Energy Technology Data Exchange (ETDEWEB)

    Berkolaiko, G., E-mail: berko@math.tamu.edu [Department of Mathematics, Texas A and M University, College Station, Texas 77843-3368 (United States); Kuipers, J., E-mail: Jack.Kuipers@physik.uni-regensburg.de [Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg (Germany)

    2013-11-15

    To study electronic transport through chaotic quantum dots, there are two main theoretical approaches. One involves substituting the quantum system with a random scattering matrix and performing appropriate ensemble averaging. The other treats the transport in the semiclassical approximation and studies correlations among sets of classical trajectories. There are established evaluation procedures within the semiclassical evaluation that, for several linear and nonlinear transport moments to which they were applied, have always resulted in agreement with random matrix predictions. We prove that this agreement is universal: any semiclassical evaluation within the accepted procedures is equivalent to the evaluation within random matrix theory. The equivalence is shown by developing a combinatorial interpretation of the trajectory sets as ribbon graphs (maps) with certain properties and exhibiting systematic cancellations among their contributions. Remaining trajectory sets can be identified with primitive (palindromic) factorisations whose number gives the coefficients in the corresponding expansion of the moments of random matrices. The equivalence is proved for systems with and without time reversal symmetry.

  11. Coupling-matrix approach to the Chern number calculation in disordered systems

    International Nuclear Information System (INIS)

    Zhang Yi-Fu; Ju Yan; Sheng Li; Shen Rui; Xing Ding-Yu; Yang Yun-You; Sheng Dong-Ning

    2013-01-01

    The Chern number is often used to distinguish different topological phases of matter in two-dimensional electron systems. A fast and efficient coupling-matrix method is designed to calculate the Chern number in finite crystalline and disordered systems. To show its effectiveness, we apply the approach to the Haldane model and the lattice Hofstadter model, and obtain the correct quantized Chern numbers. The disorder-induced topological phase transition is well reproduced, when the disorder strength is increased beyond the critical value. We expect the method to be widely applicable to the study of topological quantum numbers. (rapid communication)
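
    For readers who want to reproduce a quantized Chern number numerically, the sketch below uses the standard Fukui-Hatsugai-Suzuki lattice method on a simple two-band Chern insulator (the Qi-Wu-Zhang model). This is not the coupling-matrix method of the abstract; it is included only as the kind of reference calculation against which such methods are typically checked, and the model and grid size are arbitrary choices.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def h_qwz(kx, ky, u):
    """Two-band Qi-Wu-Zhang Chern insulator (illustrative model)."""
    return np.sin(kx) * SX + np.sin(ky) * SY + (u + np.cos(kx) + np.cos(ky)) * SZ

def lower_band(kx, ky, u):
    _, v = np.linalg.eigh(h_qwz(kx, ky, u))
    return v[:, 0]                       # eigenvector of the lower band

def chern_number(u, n=40):
    """Fukui-Hatsugai-Suzuki lattice evaluation of the lower-band Chern number."""
    ks = 2 * np.pi * np.arange(n) / n
    states = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            states[i, j] = lower_band(kx, ky, u)

    def link(a, b):                      # U(1) link variable <a|b>/|<a|b>|
        ov = np.vdot(a, b)
        return ov / abs(ov)

    c = 0.0
    for i in range(n):
        for j in range(n):
            ip, jp = (i + 1) % n, (j + 1) % n
            u1 = link(states[i, j], states[ip, j])
            u2 = link(states[ip, j], states[ip, jp])
            u3 = link(states[ip, jp], states[i, jp])
            u4 = link(states[i, jp], states[i, j])
            c += np.angle(u1 * u2 * u3 * u4)   # Berry flux through the plaquette
    return c / (2 * np.pi)

print(round(chern_number(u=1.0)))   # topological phase: |C| = 1 expected
print(round(chern_number(u=3.0)))   # trivial phase: C = 0 expected
```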

  12. Matrix Elements of One- and Two-Body Operators in the Unitary Group Approach (I)-Formalism

    Institute of Scientific and Technical Information of China (English)

    DAI Lian-Rong; PAN Feng

    2001-01-01

    The tensor algebraic method is used to derive general one- and two-body operator matrix elements within the U(n) representations, which are useful in the unitary group approach to the configuration interaction problems of quantum many-body systems.

  13. Matrix Population Model for Estimating Effects from Time-Varying Aquatic Exposures: Technical Documentation

    Science.gov (United States)

    The Office of Pesticide Programs models daily aquatic pesticide exposure values for 30 years in its risk assessments. However, only a fraction of that information is typically used in these assessments. The population model employed herein is a deterministic, density-dependent pe...
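
    A minimal sketch of a deterministic, density-dependent, stage-based matrix model driven by a time series of exposure effects is given below; the vital rates, the Beverton-Holt density dependence and the exposure scaling are invented for illustration and are not the Office of Pesticide Programs' parameterization.

```python
import numpy as np

fecundity = np.array([0.0, 2.0, 6.0])   # offspring per stage (juvenile, subadult, adult)
survival  = np.array([0.4, 0.7, 0.8])   # baseline stage survival
K = 500.0                               # density-dependence constant

def project(n, exposure_effect):
    """One annual projection step; exposure_effect in [0, 1] scales survival."""
    s = survival * (1.0 - exposure_effect)
    recruits = fecundity @ n
    recruits *= 1.0 / (1.0 + n.sum() / K)   # Beverton-Holt density dependence
    n_next = np.zeros_like(n)
    n_next[0] = recruits
    n_next[1] = s[0] * n[0]
    n_next[2] = s[1] * n[1] + s[2] * n[2]   # adults survive in place
    return n_next

n = np.array([50.0, 20.0, 10.0])
# Time-varying exposure: a pulse of exposure-driven mortality in years 10-14.
exposure = np.r_[np.zeros(10), 0.3 * np.ones(5), np.zeros(15)]
for e in exposure:
    n = project(n, e)
print("stage abundances:", np.round(n, 1), "total:", round(n.sum(), 1))
```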

  14. Ecological risk assessment of agricultural soils for the definition of soil screening values: A comparison between substance-based and matrix-based approaches.

    Science.gov (United States)

    Pivato, Alberto; Lavagnolo, Maria Cristina; Manachini, Barbara; Vanin, Stefano; Raga, Roberto; Beggio, Giovanni

    2017-04-01

    The Italian legislation on contaminated soils does not include the Ecological Risk Assessment (ERA) and this deficiency has important consequences for the sustainable management of agricultural soils. The present research compares the results of two ERA procedures applied to agriculture: (i) one based on the "substance-based" approach and (ii) a second based on the "matrix-based" approach. In the former the soil screening values (SVs) for individual substances were derived according to institutional foreign guidelines. In the latter, the SVs characterizing the whole-matrix were derived originally by the authors by means of experimental activity. The results indicate that the "matrix-based" approach can be efficiently implemented in the Italian legislation for the ERA of agricultural soils. This method, if compared to the institutionalized "substance based" approach, is (i) comparable in economic terms and in testing time, (ii) site specific, assessing the real effect of the investigated soil on a battery of bioassays, (iii) able to account for phenomena that may radically modify the exposure of the organisms to the totality of contaminants and (iv) sufficiently conservative.

  15. Ecological risk assessment of agricultural soils for the definition of soil screening values: A comparison between substance-based and matrix-based approaches

    Directory of Open Access Journals (Sweden)

    Alberto Pivato

    2017-04-01

    Full Text Available The Italian legislation on contaminated soils does not include the Ecological Risk Assessment (ERA) and this deficiency has important consequences for the sustainable management of agricultural soils. The present research compares the results of two ERA procedures applied to agriculture: (i) one based on the “substance-based” approach and (ii) a second based on the “matrix-based” approach. In the former the soil screening values (SVs) for individual substances were derived according to institutional foreign guidelines. In the latter, the SVs characterizing the whole-matrix were derived originally by the authors by means of experimental activity. The results indicate that the “matrix-based” approach can be efficiently implemented in the Italian legislation for the ERA of agricultural soils. This method, if compared to the institutionalized “substance based” approach, is (i) comparable in economic terms and in testing time, (ii) site specific, assessing the real effect of the investigated soil on a battery of bioassays, (iii) able to account for phenomena that may radically modify the exposure of the organisms to the totality of contaminants and (iv) sufficiently conservative. Keyword: Environmental science

  16. A different approach to evaluating health effects from radiation exposure

    International Nuclear Information System (INIS)

    Bond, V.P.; Sondhaus, C.A.; Feinendegen, L.E.

    1988-01-01

    Absorbed dose D is shown to be a composite variable, the product of the fraction of cells hit (I_H) and the mean "dose" (hit size) z̄ to those cells. D is suitable for use with high-level exposure (HLE) to radiation and its resulting acute organ effects because, since I_H = 1.0, D approximates closely enough the mean energy density in the cell as well as in the organ. However, with low-level exposure (LLE) to radiation and its consequent probability of cancer induction from a single cell, stochastic delivery of energy to cells results in a wide distribution of hit sizes z, and the expected mean value, z̄, is constant with exposure. Thus, with LLE, only I_H varies with D, so that the apparent proportionality between "dose" and the fraction of cells transformed is misleading. This proportionality therefore does not mean that any (cell) dose, no matter how small, can be lethal. Rather, it means that, in the exposure of a population of individual organisms consisting of the constituent relevant cells, there is a small probability of particle-cell interactions which transfer energy. The probability of a cell transforming and initiating a cancer can only be greater than zero if the hit size ("dose of energy") to the cell is large enough. Otherwise stated, if the "dose" is defined at the proper level of biological organization, namely, the cell and not the organ, only a large dose z to that cell is effective. The above precepts are utilized to develop a drastically different approach to evaluation of risk from LLE, which holds promise of obviating any requirement for the components of the present system: absorbed organ dose, LET, a standard radiation, RBE(Q), dose equivalent and rem. 12 refs., 11 figs.
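
    The relations stated in the abstract can be written compactly as follows (notation as in the abstract).

```latex
% D        : absorbed (organ) dose
% I_H      : fraction of cells hit
% \bar{z}  : mean hit size ("dose" to the hit cells)
\[
  D = I_H\,\bar{z}
\]
% High-level exposure: essentially every cell is hit,
\[
  I_H \approx 1 \;\Rightarrow\; D \approx \bar{z}.
\]
% Low-level exposure: the hit-size distribution, and hence \bar{z}, does not
% change with exposure, so only the fraction of hit cells grows with D,
\[
  \bar{z} \approx \mathrm{const} \;\Rightarrow\; D \propto I_H .
\]
```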

  17. A different approach to evaluating health effects from radiation exposure

    Energy Technology Data Exchange (ETDEWEB)

    Bond, V.P.; Sondhaus, C.A.; Feinendegen, L.E.

    1988-01-01

    Absorbed dose D is shown to be a composite variable, the product of the fraction of cells hit (I_H) and the mean "dose" (hit size) z̄ to those cells. D is suitable for use with high-level exposure (HLE) to radiation and its resulting acute organ effects because, since I_H = 1.0, D approximates closely enough the mean energy density in the cell as well as in the organ. However, with low-level exposure (LLE) to radiation and its consequent probability of cancer induction from a single cell, stochastic delivery of energy to cells results in a wide distribution of hit sizes z, and the expected mean value, z̄, is constant with exposure. Thus, with LLE, only I_H varies with D, so that the apparent proportionality between "dose" and the fraction of cells transformed is misleading. This proportionality therefore does not mean that any (cell) dose, no matter how small, can be lethal. Rather, it means that, in the exposure of a population of individual organisms consisting of the constituent relevant cells, there is a small probability of particle-cell interactions which transfer energy. The probability of a cell transforming and initiating a cancer can only be greater than zero if the hit size ("dose of energy") to the cell is large enough. Otherwise stated, if the "dose" is defined at the proper level of biological organization, namely, the cell and not the organ, only a large dose z to that cell is effective. The above precepts are utilized to develop a drastically different approach to evaluation of risk from LLE, which holds promise of obviating any requirement for the components of the present system: absorbed organ dose, LET, a standard radiation, RBE(Q), dose equivalent and rem. 12 refs., 11 figs.

  18. Benchmarking of computer codes and approaches for modeling exposure scenarios

    International Nuclear Information System (INIS)

    Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided

  19. A new approach to a global fit of the CKM matrix

    Energy Technology Data Exchange (ETDEWEB)

    Hoecker, A.; Lacker, H.; Laplace, S. [Laboratoire de l' Accelerateur Lineaire, 91 - Orsay (France); Le Diberder, F. [Laboratoire de Physique Nucleaire et des Hautes Energies, 75 - Paris (France)

    2001-05-01

    We report on a new approach to a global CKM matrix analysis taking into account most recent experimental and theoretical results. The statistical framework (Rfit) developed in this paper advocates frequentist statistics. Other approaches, such as Bayesian statistics or the 95% CL scan method are also discussed. We emphasize the distinction of a model testing and a model dependent, metrological phase in which the various parameters of the theory are estimated. Measurements and theoretical parameters entering the global fit are thoroughly discussed, in particular with respect to their theoretical uncertainties. Graphical results for confidence levels are drawn in various one and two-dimensional parameter spaces. Numerical results are provided for all relevant CKM parameterizations, the CKM elements and theoretical input parameters. Predictions for branching ratios of rare K and B meson decays are obtained. A simple, predictive SUSY extension of the Standard Model is discussed. (authors)

  20. Density-matrix approach for the electroluminescence of molecules in a scanning tunneling microscope.

    Science.gov (United States)

    Tian, Guangjun; Liu, Ji-Cai; Luo, Yi

    2011-04-29

    The electroluminescence (EL) of molecules confined inside a nanocavity in the scanning tunneling microscope possesses many intriguing but unexplained features. We present here a general theoretical approach based on the density-matrix formalism to describe the EL from molecules near a metal surface induced by both electron tunneling and localized surface plasmon excitations simultaneously. It reveals the underlying physical mechanism for the external bias dependent EL. The important role played by the localized surface plasmon on the EL is highlighted. Calculations for porphyrin derivatives have reproduced corresponding experimental spectra and nicely explained the observed unusual large variation of emission spectral profiles. This general theoretical approach can find many applications in the design of molecular electronic and photonic devices.

  1. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

    © 2014, Springer Science+Business Media New York. Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in 1990s a system with few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy and hence more parallelism in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix–matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
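
    The structure of SUMMA that the hierarchical variant optimizes can be sketched serially: at each outer step a column panel of A and a row panel of B are (conceptually) broadcast across the process grid and multiplied locally. The snippet below is a serial emulation under assumed grid and block sizes, not the authors' MPI implementation; the hierarchy in the paper reduces the cost of exactly these panel broadcasts.

```python
import numpy as np

p, nb = 4, 8                    # assumed process-grid dimension and block size
n = p * nb
rng = np.random.default_rng(0)
A, B = rng.random((n, n)), rng.random((n, n))
C = np.zeros((n, n))

for k in range(p):                              # one outer step per panel
    Acol = A[:, k*nb:(k+1)*nb]                  # panel broadcast along process rows
    Brow = B[k*nb:(k+1)*nb, :]                  # panel broadcast along process columns
    for i in range(p):                          # local rank-nb updates on the grid
        for j in range(p):
            C[i*nb:(i+1)*nb, j*nb:(j+1)*nb] += (
                Acol[i*nb:(i+1)*nb, :] @ Brow[:, j*nb:(j+1)*nb]
            )

print(np.allclose(C, A @ B))                    # True: same product, SUMMA schedule
```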

  2. Shaping the Future Landscape: Catchment Systems Engineering and the Decision Support Matrix Approach

    Science.gov (United States)

    Hewett, Caspar; Quinn, Paul; Wilkinson, Mark; Wainwright, John

    2017-04-01

    Land degradation is widely recognised as one of the great environmental challenges facing humanity today, much of which is directly associated with human activity. The negative impacts of climate change and of the way in which we have engineered the landscape through, for example, agriculture intensification and deforestation, need to be addressed. However, the answer is not a simple matter of doing the opposite of current practice. Nor is non-intervention a viable option. There is a need to bring together approaches from the natural and social sciences both to understand the issues and to act to solve real problems. We propose combining a Catchment Systems Engineering (CSE) approach that builds on existing approaches such as Natural Water Retention Measures, Green infrastructure and Nature-Based Solutions with a multi-scale framework for decision support that has been successfully applied to diffuse pollution and flood risk management. The CSE philosophy follows that of Earth Systems Engineering and Management, which aims to engineer and manage complex coupled human-natural systems in a highly integrated, rational manner. CSE is multi-disciplinary, and necessarily involves a wide range of subject areas including anthropology, engineering, environmental science, ethics and philosophy. It offers a rational approach which accepts the fact that we need to engineer and act to improve the functioning of the existing catchment entity on which we rely. The decision support framework proposed draws on physical and mathematical modelling; Participatory Action Research; and demonstration sites at which practical interventions are implemented. It is predicated on the need to work with stakeholders to co-produce knowledge that leads to proactive interventions to reverse the land degradation we observe today while sustaining the agriculture humanity needs. The philosophy behind CSE and examples of where it has been applied successfully are presented. The Decision Support Matrix

  3. Matrix shaped pulsed laser deposition: New approach to large area and homogeneous deposition

    Energy Technology Data Exchange (ETDEWEB)

    Akkan, C.K.; May, A. [INM – Leibniz Institute for New Materials, CVD/Biosurfaces Group, Campus D2 2, 66123 Saarbrücken (Germany); Hammadeh, M. [Department for Obstetrics, Gynecology and Reproductive Medicine, IVF Laboratory, Saarland University Medical Center and Faculty of Medicine, Building 9, 66421 Homburg, Saar (Germany); Abdul-Khaliq, H. [Clinic for Pediatric Cardiology, Saarland University Medical Center and Faculty of Medicine, Building 9, 66421 Homburg, Saar (Germany); Aktas, O.C., E-mail: cenk.aktas@inm-gmbh.de [INM – Leibniz Institute for New Materials, CVD/Biosurfaces Group, Campus D2 2, 66123 Saarbrücken (Germany)

    2014-05-01

    Pulsed laser deposition (PLD) is one of the well-established physical vapor deposition methods used for synthesis of ultra-thin layers. PLD is especially suitable for the preparation of thin films of complex alloys and ceramics where the conservation of the stoichiometry is critical. Besides its several advantages, inhomogeneity in thickness limits the use of PLD in some applications. There are several approaches, such as rotation of the substrate or scanning of the laser beam over the target, to achieve homogeneous layers. On the other hand, movement and transition create further complexity in the process parameters. Here we present a new approach, which we call Matrix Shaped PLD, to control the thickness and homogeneity of deposited layers precisely. This new approach is based on shaping of the incoming laser beam by a microlens array and a Fourier lens. The beam is split into an array of much smaller beams over the target, and this leads to a homogeneous plasma formation. The uniform intensity distribution over the target yields a very uniform deposit on the substrate. This approach is used to deposit carbide and oxide thin films for biomedical applications. As a case study, the coating of a stent, which has a complex geometry, is briefly presented.

  4. Electrically tunable spin polarization in silicene: A multi-terminal spin density matrix approach

    International Nuclear Information System (INIS)

    Chen, Son-Hsien

    2016-01-01

    The recently realized silicene field-effect transistor yields promising electronic applications. Using a multi-terminal spin density matrix approach, this paper presents an analysis of the spin polarizations in a silicene structure of the spin field-effect transistor by considering the intertwined intrinsic and Rashba spin–orbit couplings, gate voltage, Zeeman splitting, as well as disorder. Coexistence of the staggered potential and intrinsic spin–orbit coupling results in spin precession, making any in-plane polarization directions reachable by the gate voltage; specifically, the intrinsic coupling allows one to electrically adjust the in-plane components of the polarizations, while the Rashba coupling allows one to adjust the out-of-plane polarizations. Larger electrically tunable ranges of in-plane polarizations are found in oppositely gated silicene than in the uniformly gated silicene. Polarizations in different phases behave distinguishably in the weak disorder regime, while, independent of the phases, stronger disorder leads to a saturation value. - Highlights: • Density matrix with spin rotations enables multi-terminal arbitrary spin injections. • Gate-voltage tunable in-plane polarizations require intrinsic SO coupling. • Gate-voltage tunable out-of-plane polarizations require Rashba SO coupling. • Oppositely gated silicene yields a large tunable range of in-plane polarizations. • Polarizations in different phases behave distinguishably only in weak disorder.

  5. Self-consistent RPA and the time-dependent density matrix approach

    Energy Technology Data Exchange (ETDEWEB)

    Schuck, P. [Institut de Physique Nucleaire, Orsay (France); CNRS et Universite Joseph Fourier, Laboratoire de Physique et Modelisation des Milieux Condenses, Grenoble (France); Tohyama, M. [Kyorin University School of Medicine, Mitaka, Tokyo (Japan)

    2016-10-15

    The time-dependent density matrix (TDDM) or BBGKY (Bogoliubov, Born, Green, Kirkwood, Yvon) approach is decoupled and closed at the three-body level in finding a natural representation of the latter in terms of a quadratic form of two-body correlation functions. In the small amplitude limit an extended RPA coupled to an also extended second RPA is obtained. Since including two-body correlations means that the ground state cannot be a Hartree-Fock state, naturally the corresponding RPA is upgraded to Self-Consistent RPA (SCRPA) which was introduced independently earlier and which is built on a correlated ground state. SCRPA conserves all the properties of standard RPA. Applications to the exactly solvable Lipkin and the 1D Hubbard models show good performances of SCRPA and TDDM. (orig.)

  6. Resistance of a 1D random chain: Hamiltonian version of the transfer matrix approach

    Science.gov (United States)

    Dossetti-Romero, V.; Izrailev, F. M.; Krokhin, A. A.

    2004-01-01

    We study some mesoscopic properties of electron transport by employing one-dimensional chains and Anderson tight-binding model. Principal attention is paid to the resistance of finite-length chains with disordered white-noise potential. We develop a new version of the transfer matrix approach based on the equivalency of a discrete Schrödinger equation and a two-dimensional Hamiltonian map describing a parametric kicked oscillator. In the two limiting cases of ballistic and localized regime we demonstrate how analytical results for the mean resistance and its second moment can be derived directly from the averaging over classical trajectories of the Hamiltonian map. We also discuss the implication of the single parameter scaling hypothesis to the resistance.
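
    A conventional transfer-matrix calculation for the same model (a 1D Anderson chain with white-noise site energies) is sketched below; it estimates the Lyapunov exponent, i.e. the inverse localization length, from the growth of the transfer-matrix product. This is the standard matrix version rather than the Hamiltonian-map formulation of the paper, and the disorder strength and chain length are arbitrary choices.

```python
import numpy as np

def lyapunov_exponent(n_sites, disorder_w, energy=0.0, hopping=1.0, seed=0):
    """Inverse localization length of a 1D Anderson chain, from the growth of
    the product of 2x2 transfer matrices acting on an initial vector."""
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 0.0])
    log_growth = 0.0
    for _ in range(n_sites):
        eps = rng.uniform(-disorder_w / 2, disorder_w / 2)  # white-noise site energy
        T = np.array([[(energy - eps) / hopping, -1.0],
                      [1.0,                       0.0]])
        v = T @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)   # accumulate growth, renormalize to avoid overflow
        v /= norm
    return log_growth / n_sites      # gamma = 1 / localization length

gamma = lyapunov_exponent(n_sites=100_000, disorder_w=1.0)
# In the localized regime the typical resistance grows roughly as exp(2*gamma*L).
print(gamma, "rough weak-disorder estimate W^2/96:", 1.0**2 / 96)
```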

  7. IRMA iterative relaxation matrix approach for NMR structure determination application to DNA fragments

    International Nuclear Information System (INIS)

    Koning, M.M.G.

    1990-01-01

    The subject of this thesis is the structure determination of DNA molecules in solution with the use of NMR. For this purpose a new relaxation matrix approach is introduced. The emphasis is on the interpretation of nuclear Overhauser effects (NOEs) in terms of proton-proton distances and related three dimensional structures. The DNA molecules studied are oligonucleotides, both unmodified and modified by UV radiation. From comparison with unmodified molecules it turned out that UV irradiation scarcely influences the helical structure of the DNA string. At one place of the string a nucleotide is rotated towards the high-ANTI conformation, which results in a slight unwinding of the DNA string but is sufficient for blocking the normal reading of genetic information. (H.W.). 456 refs.; 50 figs.; 30 tabs

  8. An unprecedented multi attribute decision making using graph theory matrix approach

    Directory of Open Access Journals (Sweden)

    N.K. Geetha

    2018-02-01

    Full Text Available A framework for investigating the best combination of functioning parameters on a variable compression ratio diesel engine is proposed in the present study using a multi attribute optimization methodology, the Graph Theory Matrix Approach. The functioning parameters, attributes, sub attributes and functioning variables of sub attributes are chosen based on experts' opinion and literature review. The directed graphs are developed for attributes and sub attributes. The ‘Parameter Index’ was calculated for all trials to choose the best trial. The experimental results are verified with the theoretical data. The combination of functioning parameters with a compression ratio of 17, a fuel injection pressure of 20 N/mm2 and a fuel injection timing of 21° bTDC was found to be the best. The proposed method allows the decision maker to systematically and logically find the best combination of functioning parameters.
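
    In the graph theory matrix approach the 'Parameter Index' of a trial is usually obtained as the permanent of an attribute matrix whose diagonal holds the attribute scores of that trial and whose off-diagonal entries hold relative-importance values. The snippet below illustrates that calculation with invented numbers; it is not the study's matrices.

```python
from itertools import permutations
from math import prod

def permanent(M):
    """Permanent of a small square matrix (brute force is fine at this size)."""
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# Illustrative attribute matrix for one trial (all numbers invented): diagonal
# entries are normalized attribute scores, off-diagonal entries are relative
# importances r_ij of attribute i over attribute j (with r_ji = 1 - r_ij).
A = [
    [0.7, 0.6, 0.4],
    [0.4, 0.8, 0.5],
    [0.6, 0.5, 0.6],
]

print("Parameter Index (permanent of A):", permanent(A))
```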

  9. Application of Transfer Matrix Approach to Modeling and Decentralized Control of Lattice-Based Structures

    Science.gov (United States)

    Cramer, Nick; Swei, Sean Shan-Min; Cheung, Kenny; Teodorescu, Mircea

    2015-01-01

    This paper presents a modeling and control of aerostructure developed by lattice-based cellular materials/components. The proposed aerostructure concept leverages a building block strategy for lattice-based components which provide great adaptability to varying flight scenarios, the needs of which are essential for in-flight wing shaping control. A decentralized structural control design is proposed that utilizes discrete-time lumped mass transfer matrix method (DT-LM-TMM). The objective is to develop an effective reduced order model through DT-LM-TMM that can be used to design a decentralized controller for the structural control of a wing. The proposed approach developed in this paper shows that, as far as the performance of overall structural system is concerned, the reduced order model can be as effective as the full order model in designing an optimal stabilizing controller.

  10. Linear matrix inequality approach for synchronization control of fuzzy cellular neural networks with mixed time delays

    International Nuclear Information System (INIS)

    Balasubramaniam, P.; Kalpana, M.; Rakkiyappan, R.

    2012-01-01

    Fuzzy cellular neural networks (FCNNs) are special kinds of cellular neural networks (CNNs). Each cell in an FCNN contains fuzzy operating abilities. The entire network is governed by cellular computing laws. The design of FCNNs is based on fuzzy local rules. In this paper, a linear matrix inequality (LMI) approach for synchronization control of FCNNs with mixed delays is investigated. Mixed delays include discrete time-varying delays and unbounded distributed delays. A dynamic control scheme is proposed to achieve the synchronization between a drive network and a response network. By constructing a Lyapunov-Krasovskii functional which contains a triple-integral term and by using the free-weighting matrices method, an improved delay-dependent stability criterion is derived in terms of LMIs. The controller can be easily obtained by solving the derived LMIs. A numerical example and its simulations are presented to illustrate the effectiveness of the proposed method. (interdisciplinary physics and related areas of science and technology)
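
    To give a feel for the LMI machinery involved, the sketch below solves a basic Lyapunov stability LMI with cvxpy; it is deliberately much simpler than the delay-dependent synchronization criterion of the paper, and the system matrix is invented.

```python
import numpy as np
import cvxpy as cp

# Hypothetical error-dynamics matrix; a feasible P certifies asymptotic stability.
A = np.array([[-2.0, 1.0],
              [ 0.5, -3.0]])
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [
    P >> eps * np.eye(n),                    # P positive definite
    A.T @ P + P @ A << -eps * np.eye(n),     # Lyapunov LMI
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print(prob.status)   # 'optimal' -> a feasible P exists
print(P.value)
```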

  11. Resistance of a 1D random chain: Hamiltonian version of the transfer matrix approach

    International Nuclear Information System (INIS)

    Dossetti-Romero, V.; Izrailev, F.M.; Krokhin, A.A.

    2004-01-01

    We study some mesoscopic properties of electron transport by employing one-dimensional chains and Anderson tight-binding model. Principal attention is paid to the resistance of finite-length chains with disordered white-noise potential. We develop a new version of the transfer matrix approach based on the equivalency of a discrete Schroedinger equation and a two-dimensional Hamiltonian map describing a parametric kicked oscillator. In the two limiting cases of ballistic and localized regime we demonstrate how analytical results for the mean resistance and its second moment can be derived directly from the averaging over classical trajectories of the Hamiltonian map. We also discuss the implication of the single parameter scaling hypothesis to the resistance

  12. An linear matrix inequality approach to global synchronisation of non-parameter perturbations of multi-delay Hopfield neural network

    International Nuclear Information System (INIS)

    Shao Hai-Jian; Cai Guo-Liang; Wang Hao-Xiang

    2010-01-01

    In this study, a successful linear matrix inequality approach is used to analyse a non-parameter perturbation of a multi-delay Hopfield neural network by constructing an appropriate Lyapunov-Krasovskii functional. This paper presents a comprehensive discussion of the approach as well as extensive applications.

  13. Using the realist perspective to link theory from qualitative evidence synthesis to quantitative studies: Broadening the matrix approach.

    Science.gov (United States)

    van Grootel, Leonie; van Wesel, Floryt; O'Mara-Eves, Alison; Thomas, James; Hox, Joop; Boeije, Hennie

    2017-09-01

    This study describes an approach for the use of a specific type of qualitative evidence synthesis in the matrix approach, a mixed studies reviewing method. The matrix approach compares quantitative and qualitative data on the review level by juxtaposing concrete recommendations from the qualitative evidence synthesis against interventions in primary quantitative studies. However, types of qualitative evidence syntheses that are associated with theory building generate theoretical models instead of recommendations. Therefore, the output from these types of qualitative evidence syntheses cannot directly be used for the matrix approach but requires transformation. This approach allows for the transformation of these types of output. The approach enables the inference of moderation effects instead of direct effects from the theoretical model developed in a qualitative evidence synthesis. Recommendations for practice are formulated on the basis of interactional relations inferred from the qualitative evidence synthesis. In doing so, we apply the realist perspective to model variables from the qualitative evidence synthesis according to the context-mechanism-outcome configuration. A worked example shows that it is possible to identify recommendations from a theory-building qualitative evidence synthesis using the realist perspective. We created subsets of the interventions from primary quantitative studies based on whether they matched the recommendations or not and compared the weighted mean effect sizes of the subsets. The comparison shows a slight difference in effect sizes between the groups of studies. The study concludes that the approach enhances the applicability of the matrix approach. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Transient coupled calculations of the Molten Salt Fast Reactor using the Transient Fission Matrix approach

    Energy Technology Data Exchange (ETDEWEB)

    Laureau, A., E-mail: laureau.axel@gmail.com; Heuer, D.; Merle-Lucotte, E.; Rubiolo, P.R.; Allibert, M.; Aufiero, M.

    2017-05-15

    Highlights: • Neutronic ‘Transient Fission Matrix’ approach coupled to the CFD OpenFOAM code. • Fission Matrix interpolation model for fast spectrum homogeneous reactors. • Application for coupled calculations of the Molten Salt Fast Reactor. • Load following, over-cooling and reactivity insertion transient studies. • Validation of the reactor intrinsic stability for normal and accidental transients. - Abstract: In this paper we present transient studies of the Molten Salt Fast Reactor (MSFR). This generation IV reactor is characterized by a liquid fuel circulating in the core cavity, requiring specific simulation tools. An innovative neutronic approach called “Transient Fission Matrix” is used to perform spatial kinetic calculations with a reduced computational cost through a pre-calculation of the Monte Carlo spatial and temporal response of the system. Coupled to this neutronic approach, the Computational Fluid Dynamics code OpenFOAM is used to model the complex flow pattern in the core. An accurate interpolation model developed to take into account the thermal hydraulics feedback on the neutronics including reactivity and neutron flux variation is presented. Finally different transient studies of the reactor in normal and accidental operating conditions are detailed such as reactivity insertion and load following capacities. The results of these studies illustrate the excellent behavior of the MSFR during such transients.

  15. Transient coupled calculations of the Molten Salt Fast Reactor using the Transient Fission Matrix approach

    International Nuclear Information System (INIS)

    Laureau, A.; Heuer, D.; Merle-Lucotte, E.; Rubiolo, P.R.; Allibert, M.; Aufiero, M.

    2017-01-01

    Highlights: • Neutronic ‘Transient Fission Matrix’ approach coupled to the CFD OpenFOAM code. • Fission Matrix interpolation model for fast spectrum homogeneous reactors. • Application for coupled calculations of the Molten Salt Fast Reactor. • Load following, over-cooling and reactivity insertion transient studies. • Validation of the reactor intrinsic stability for normal and accidental transients. - Abstract: In this paper we present transient studies of the Molten Salt Fast Reactor (MSFR). This generation IV reactor is characterized by a liquid fuel circulating in the core cavity, requiring specific simulation tools. An innovative neutronic approach called “Transient Fission Matrix” is used to perform spatial kinetic calculations with a reduced computational cost through a pre-calculation of the Monte Carlo spatial and temporal response of the system. Coupled to this neutronic approach, the Computational Fluid Dynamics code OpenFOAM is used to model the complex flow pattern in the core. An accurate interpolation model developed to take into account the thermal hydraulics feedback on the neutronics including reactivity and neutron flux variation is presented. Finally different transient studies of the reactor in normal and accidental operating conditions are detailed such as reactivity insertion and load following capacities. The results of these studies illustrate the excellent behavior of the MSFR during such transients.

  16. Using a Similarity Matrix Approach to Evaluate the Accuracy of Rescaled Maps

    Directory of Open Access Journals (Sweden)

    Peijun Sun

    2018-03-01

    Full Text Available Rescaled maps have been extensively utilized to provide data at the appropriate spatial resolution for use in various Earth science models. However, a simple and easy way to evaluate these rescaled maps has not been developed. We propose a similarity matrix approach using a contingency table to compute three measures: overall similarity (OS), omission error (OE), and commission error (CE) to evaluate the rescaled maps. The Majority Rule Based aggregation (MRB) method was employed to produce the upscaled maps to demonstrate this approach. In addition, previously created, coarser resolution land cover maps from other research projects were also available for comparison. The question of which is better, a map initially produced at coarse resolution or a fine resolution map rescaled to a coarse resolution, has not been quantitatively investigated. To address these issues, we selected study sites at three different extent levels. First, we selected twelve regions covering the continental USA, then we selected nine states (from the whole continental USA), and finally we selected nine Agriculture Statistical Districts (ASDs) (from within the nine selected states) as study sites. Crop/non-crop maps derived from the USDA Crop Data Layer (CDL) at 30 m as base maps were used for the upscaling and existing maps at 250 m and 1 km were utilized for the comparison. The results showed that a similarity matrix can effectively provide the map user with the information needed to assess the rescaling. Additionally, the upscaled maps can provide higher accuracy and better represent landscape pattern compared to the existing coarser maps. Therefore, we strongly recommend that an evaluation of the upscaled map and the existing coarser resolution map using a similarity matrix should be conducted before deciding which dataset to use for the modelling. Overall, extending our understanding on how to perform an evaluation of the rescaled map and investigation of the applicability
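
    The similarity-matrix measures can be computed from a simple 2 x 2 contingency table, as sketched below for hypothetical crop/non-crop rasters; the definitions of OS, OE and CE used here are the usual contingency-table ones and may differ in detail from the paper's.

```python
import numpy as np

# Hypothetical crop/non-crop maps (1 = crop, 0 = non-crop); in the study the
# reference is the 30 m CDL-derived map and the evaluated map is its rescaled
# or pre-existing coarse-resolution counterpart.
rng = np.random.default_rng(1)
reference = rng.integers(0, 2, size=(100, 100))
evaluated = reference.copy()
flip = rng.random(reference.shape) < 0.1          # disagree on ~10% of cells
evaluated[flip] = 1 - evaluated[flip]

# 2x2 contingency (similarity) matrix
n11 = np.sum((evaluated == 1) & (reference == 1))
n10 = np.sum((evaluated == 1) & (reference == 0))
n01 = np.sum((evaluated == 0) & (reference == 1))
n00 = np.sum((evaluated == 0) & (reference == 0))
total = reference.size

overall_similarity = (n11 + n00) / total          # OS
omission_error     = n01 / (n01 + n11)            # OE: reference crop missed
commission_error   = n10 / (n10 + n11)            # CE: crop wrongly added

print(overall_similarity, omission_error, commission_error)
```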

  17. Towards an integrated approach of pedestrian behaviour and exposure.

    Science.gov (United States)

    Papadimitriou, Eleonora

    2016-07-01

    In this paper, an integrated methodology for the analysis of pedestrian behaviour and exposure is proposed, allowing to identify and quantify the effect of pedestrian behaviour, road and traffic characteristics on pedestrian risk exposure, for each pedestrian and for populations of pedestrians. The paper builds on existing research on pedestrian exposure, namely the Routledge microscopic indicator, proposes adjustments to take into account road, traffic and human factors and extends the use of this indicator on area-wide level. Moreover, this paper uses integrated choice and latent variables (ICLV) models of pedestrian behaviour, taking into account road, traffic and human factors. Finally, a methodology is proposed for the integrated estimation of pedestrian behaviour and exposure on the basis of road, traffic and human factors. The method is tested with data from a field survey in Athens, Greece, which used pedestrian behaviour observations as well as a questionnaire on human factors of pedestrian behaviour. The data were used (i) to develop ICLV models of pedestrian behaviour and (ii) to estimate the behaviour and exposure of pedestrians for different road, traffic and behavioural scenarios. The results suggest that both pedestrian behaviour and exposure are largely defined by a small number of factors: road type, traffic volume and pedestrian risk-taking. The probability for risk-taking behaviour and the related exposure decrease in less demanding road and traffic environments. A synthesis of the results allows to enhance the understanding of the interactions between behaviour and exposure of pedestrians and to identify conditions of increased risk exposure. These conditions include principal urban arterials (where risk-taking behaviour is low but the related exposure is very high) and minor arterials (where risk-taking behaviour is more frequent, and the related exposure is still high). A "paradox" of increased risk-taking behaviour of pedestrians with low

  18. HESI pilot project: Testing a qualitative approach for incorporating exposure into alternatives assessment

    DEFF Research Database (Denmark)

    Greggs, Bill; Arnold, Scott; Burns, Thomas J.

    -quantitative exposure assessment on the alternatives being considered. This talk will demonstrate an approach for including chemical and product exposure information in a qualitative AA comparison. Starting from existing hazard AAs, a series of four exposure examples were examined to test the concept, to understand...

  19. Rapid Chondrocyte Isolation for Tissue Engineering Applications: The Effect of Enzyme Concentration and Temporal Exposure on the Matrix Forming Capacity of Nasal Derived Chondrocytes

    Directory of Open Access Journals (Sweden)

    Srujana Vedicherla

    2017-01-01

    Full Text Available Laboratory based processing and expansion to yield adequate cell numbers had been the standard in Autologous Disc Chondrocyte Transplantation (ADCT), Allogeneic Juvenile Chondrocyte Implantation (NuQu®), and Matrix-Induced Autologous Chondrocyte Implantation (MACI). Optimizing cell isolation is a key challenge in terms of obtaining adequate cell numbers while maintaining a vibrant cell population capable of subsequent proliferation and matrix elaboration. However, typical cell yields from a cartilage digest are highly variable between donors and based on user competency. The overall objective of this study was to optimize chondrocyte isolation from cartilaginous nasal tissue through modulation of enzyme concentration exposure (750 and 3000 U/ml) and incubation time (1 and 12 h), combined with physical agitation cycles, and to assess subsequent cell viability and matrix forming capacity. Overall, increasing enzyme exposure time was found to be more detrimental than collagenase concentration for subsequent viability, proliferation, and matrix forming capacity (sGAG and collagen) of these cells, resulting in nonuniform cartilaginous matrix deposition. Taken together, consolidating a 3000 U/ml collagenase digest of 1 h at a ratio of 10 ml/g of cartilage tissue with physical agitation cycles can improve efficiency of chondrocyte isolation, yielding robust, more uniform matrix formation.

  20. Quantitative evaluation of the matrix effect in bioanalytical methods based on LC-MS: A comparison of two approaches.

    Science.gov (United States)

    Rudzki, Piotr J; Gniazdowska, Elżbieta; Buś-Kwaśnik, Katarzyna

    2018-06-05

    Liquid chromatography coupled to mass spectrometry (LC-MS) is a powerful tool for studying pharmacokinetics and toxicokinetics. Reliable bioanalysis requires the characterization of the matrix effect, i.e. influence of the endogenous or exogenous compounds on the analyte signal intensity. We have compared two methods for the quantitation of matrix effect. The CVs(%) of internal standard normalized matrix factors recommended by the European Medicines Agency were evaluated against internal standard normalized relative matrix effects derived from Matuszewski et al. (2003). Both methods use post-extraction spiked samples, but matrix factors require also neat solutions. We have tested both approaches using analytes of diverse chemical structures. The study did not reveal relevant differences in the results obtained with both calculation methods. After normalization with the internal standard, the CV(%) of the matrix factor was on average 0.5% higher than the corresponding relative matrix effect. The method adopted by the European Medicines Agency seems to be slightly more conservative in the analyzed datasets. Nine analytes of different structures enabled a general overview of the problem, still, further studies are encouraged to confirm our observations. Copyright © 2018 Elsevier B.V. All rights reserved.
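
    A minimal sketch of the EMA-style calculation is given below with invented peak areas: internal-standard-normalized matrix factors are computed lot by lot and summarized by their CV(%). (The Matuszewski relative matrix effect is based on the variability of standard-line slopes across matrix lots and is not reproduced here.)

```python
import numpy as np

# Invented peak areas (arbitrary units) for one analyte and its internal
# standard in six lots of blank matrix spiked after extraction, and the mean
# areas of neat solutions at the same concentration.
analyte_matrix = np.array([980.0, 1015.0, 950.0, 1002.0, 970.0, 990.0])
is_matrix      = np.array([505.0, 512.0, 498.0, 508.0, 500.0, 503.0])
analyte_neat_mean = 1000.0
is_neat_mean      = 500.0

mf_analyte = analyte_matrix / analyte_neat_mean      # matrix factor per lot
mf_is      = is_matrix / is_neat_mean                # matrix factor of the IS
mf_norm    = mf_analyte / mf_is                      # IS-normalized matrix factor

cv_percent = 100 * np.std(mf_norm, ddof=1) / np.mean(mf_norm)
print("IS-normalized matrix factors:", np.round(mf_norm, 3))
print(f"CV% of IS-normalized matrix factor: {cv_percent:.2f}")   # acceptance often <= 15%
```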

  1. Intercomparison of the GOS approach, superposition T-matrix method, and laboratory measurements for black carbon optical properties during aging

    International Nuclear Information System (INIS)

    He, Cenlin; Takano, Yoshi; Liou, Kuo-Nan; Yang, Ping; Li, Qinbin; Mackowski, Daniel W.

    2016-01-01

    We perform a comprehensive intercomparison of the geometric-optics surface-wave (GOS) approach, the superposition T-matrix method, and laboratory measurements for optical properties of fresh and coated/aged black carbon (BC) particles with complex structures. GOS and T-matrix calculations capture the measured optical (i.e., extinction, absorption, and scattering) cross sections of fresh BC aggregates, with 5–20% differences depending on particle size. We find that the T-matrix results tend to be lower than the measurements, due to uncertainty in theoretical approximations of realistic BC structures, particle property measurements, and numerical computations in the method. On the contrary, the GOS results are higher than the measurements (hence the T-matrix results) for BC radii 100 nm. We find good agreement (differences 100 nm. We find small deviations (≤10%) in asymmetry factors computed from the two methods for most BC coating structures and sizes, but several complex structures have 10–30% differences. This study provides the foundation for downstream application of the GOS approach in radiative transfer and climate studies. - Highlights: • The GOS and T-matrix methods capture laboratory measurements of BC optical properties. • The GOS results are consistent with the T-matrix results for BC optical properties. • BC optical properties vary remarkably with coating structures and sizes during aging.

  2. The Regional-Matrix Approach to the Training of Highly Qualified Personnel for the Sustainable Development of the Mining Region

    Science.gov (United States)

    Zhernov, Evgeny; Nehoda, Evgenia

    2017-11-01

    The state, regional and industry approaches to the problem of personnel training for building an innovative knowledge economy at all levels that ensures sustainable development of the region are analyzed in the article using the cases of the Kemerovo region and the coal industry. A new regional-matrix approach to the training of highly qualified personnel is proposed, which makes it possible to link the training systems with the regional economic matrix "natural resources - cognitive resources" developed by the author. A special feature of the new approach is the consideration of objective conditions and contradictions of regional systems of personnel training, which have formed as part of economic systems of regions differentiated in the matrix. The methodology of the research is based on the statement about the interconnectivity of general and local knowledge, from which the understanding of the need for a combination of regional, industry and state approaches to personnel training is derived. The results of the research can be implemented in the practice of modernization of professional education of workers in the coal industry of the natural resources extractive region.

  3. On matrix-model approach to simplified Khovanov-Rozansky calculus

    Science.gov (United States)

    Morozov, A.; Morozov, And.; Popolitov, A.

    2015-10-01

    Wilson-loop averages in Chern-Simons theory (HOMFLY polynomials) can be evaluated in different ways - the most difficult, but most interesting of them is the hypercube calculus, the only one applicable to virtual knots and used also for categorification (higher-dimensional extension) of the theory. We continue the study of quantum dimensions, associated with hypercube vertices, in the drastically simplified version of this approach to knot polynomials. At q = 1 the problem is reformulated in terms of fat (ribbon) graphs, where Seifert cycles play the role of vertices. Ward identities in associated matrix model provide a set of recursions between classical dimensions. For q ≠ 1 most of these relations are broken (i.e. deformed in a still uncontrollable way), and only few are protected by Reidemeister invariance of Chern-Simons theory. Still they are helpful for systematic evaluation of entire series of quantum dimensions, including negative ones, which are relevant for virtual link diagrams. To illustrate the effectiveness of developed formalism we derive explicit expressions for the 2-cabled HOMFLY of virtual trefoil and virtual 3.2 knot, which involve respectively 12 and 14 intersections - far beyond any dreams with alternative methods. As a more conceptual application, we describe a relation between the genus of fat graph and Turaev genus of original link diagram, which is currently the most effective tool for the search of thin knots.

  4. On matrix-model approach to simplified Khovanov–Rozansky calculus

    Directory of Open Access Journals (Sweden)

    A. Morozov

    2015-10-01

    Full Text Available Wilson-loop averages in Chern–Simons theory (HOMFLY polynomials) can be evaluated in different ways – the most difficult, but most interesting of them is the hypercube calculus, the only one applicable to virtual knots and used also for categorification (higher-dimensional extension) of the theory. We continue the study of quantum dimensions, associated with hypercube vertices, in the drastically simplified version of this approach to knot polynomials. At q=1 the problem is reformulated in terms of fat (ribbon) graphs, where Seifert cycles play the role of vertices. Ward identities in associated matrix model provide a set of recursions between classical dimensions. For q≠1 most of these relations are broken (i.e. deformed in a still uncontrollable way), and only few are protected by Reidemeister invariance of Chern–Simons theory. Still they are helpful for systematic evaluation of entire series of quantum dimensions, including negative ones, which are relevant for virtual link diagrams. To illustrate the effectiveness of developed formalism we derive explicit expressions for the 2-cabled HOMFLY of virtual trefoil and virtual 3.2 knot, which involve respectively 12 and 14 intersections – far beyond any dreams with alternative methods. As a more conceptual application, we describe a relation between the genus of fat graph and Turaev genus of original link diagram, which is currently the most effective tool for the search of thin knots.

  5. Convergence analysis of directed signed networks via an M-matrix approach

    Science.gov (United States)

    Meng, Deyuan

    2018-04-01

    This paper aims at solving convergence problems on directed signed networks with multiple nodes, where interactions among nodes are described by signed digraphs. The convergence analysis is achieved by matrix-theoretic and graph-theoretic tools, in which M-matrices play a central role. The fundamental digon sign-symmetry assumption upon signed digraphs can be removed with the proposed analysis approach. Furthermore, necessary and sufficient conditions are established for semi-positive and positive stabilities of Laplacian matrices of signed digraphs, respectively. A benefit of this result is that given strong connectivity, a directed signed network can achieve bipartite consensus (or state stability) if and only if the signed digraph associated with it is structurally balanced (or unbalanced). If the interactions between nodes are described by a signed digraph only with spanning trees, a directed signed network can achieve interval bipartite consensus (or state stability) if and only if the signed digraph contains a structurally balanced (or unbalanced) rooted subgraph. Simulations are given to illustrate the developed results by considering signed networks associated with digon sign-unsymmetric signed digraphs.
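
    The bipartite-consensus statement can be checked numerically on a small structurally balanced signed graph, as in the sketch below; the graph, weights and initial state are invented, and the dynamics are the usual signed Laplacian flow x' = -Lx with L = C - A, C = diag(sum_j |a_ij|).

```python
import numpy as np

# Invented structurally balanced signed graph: positive arcs inside the groups
# {0,1} and {2,3}, negative arcs between the groups; the graph is strongly connected.
A = np.array([[ 0.0,  1.0,  0.0, -1.0],
              [ 1.0,  0.0, -1.0,  0.0],
              [ 0.0, -1.0,  0.0,  1.0],
              [-1.0,  0.0,  1.0,  0.0]])
C = np.diag(np.abs(A).sum(axis=1))
L = C - A                                    # signed Laplacian

x = np.array([2.0, -1.0, 0.5, 3.0])          # arbitrary initial states
dt = 0.01
for _ in range(5000):                        # forward-Euler integration of x' = -L x
    x = x - dt * (L @ x)

# The two groups converge to values of equal magnitude and opposite sign
# (bipartite consensus), as predicted for structurally balanced graphs.
print(np.round(x, 3))
```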

  6. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
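
    The weighted binary matrix sampling step can be sketched as follows; this is a toy reconstruction of the idea described in the abstract (binary rows select variables, the best sub-models update the inclusion weights, so the variable space shrinks), not the authors' released MATLAB code, and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_vars = 80, 30
X = rng.normal(size=(n_samples, n_vars))
# Only the first five variables are informative in this synthetic example.
y = X[:, :5] @ np.array([3.0, -2.0, 1.5, 1.0, -1.0]) + rng.normal(scale=0.5, size=n_samples)

weights = np.full(n_vars, 0.5)                  # inclusion probability per variable
for _ in range(8):                              # iterations of variable-space shrinkage
    B = rng.random((200, n_vars)) < weights     # weighted binary sampling matrix
    scores = np.full(200, -np.inf)
    for i, row in enumerate(B):
        if row.sum() == 0:
            continue
        scores[i] = cross_val_score(LinearRegression(), X[:, row], y, cv=5).mean()
    best = B[np.argsort(scores)[-20:]]          # keep the best 10% of sub-models
    weights = best.mean(axis=0)                 # new inclusion frequencies

print(np.round(weights, 2))                     # informative variables -> weights near 1
```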

  7. Numerical Methods Application for Reinforced Concrete Elements-Theoretical Approach for Direct Stiffness Matrix Method

    Directory of Open Access Journals (Sweden)

    Sergiu Ciprian Catinas

    2015-07-01

    Full Text Available A detailed theoretical and practical investigation of reinforced concrete elements is needed because of the recent techniques and methods implemented in the construction market. Moreover, a theoretical study is in demand for a better and faster approach, given the rapid development of computational techniques. This paper presents a study of implementing the direct stiffness matrix method in a static analysis, capable of addressing phenomena related to different stages of loading and rapid changes of cross-section area and physical properties. The method is in demand because, at present, the FEM (Finite Element Method) is the only alternative for such an analysis, and FEM is considered expensive in terms of time and computational resources. The main goal of the method is to create the moment-curvature diagram for the analyzed cross section. The paper also presents some of the most important techniques and new ideas for creating the moment-curvature diagram in the cross sections considered.
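
    A minimal sketch of the direct stiffness assembly for a 1D bar with a change of cross-section is given below; the element properties, load and boundary conditions are invented for illustration and are not taken from the paper.

```python
import numpy as np

nodes = np.array([0.0, 1.0, 2.0, 3.0])                  # node coordinates (m)
elements = [(0, 1), (1, 2), (2, 3)]                     # two-node axial elements
EA = [210e9 * 4e-4, 210e9 * 4e-4, 210e9 * 2e-4]         # E*A per element (N); section change

ndof = len(nodes)
K = np.zeros((ndof, ndof))
for (i, j), ea in zip(elements, EA):
    L = nodes[j] - nodes[i]
    k = ea / L * np.array([[1, -1], [-1, 1]])           # element stiffness matrix
    dofs = [i, j]
    K[np.ix_(dofs, dofs)] += k                          # assemble into global matrix

F = np.zeros(ndof)
F[-1] = 10e3                                            # 10 kN axial load at the free end
fixed, free = [0], [1, 2, 3]                            # node 0 is clamped
u = np.zeros(ndof)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
reactions = K @ u - F
print("displacements (m):", u)
print("support reaction (N):", reactions[fixed])
```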

  8. Immunoassay approach for diagnosis of exposure to pyrrolizidine alkaloids.

    Science.gov (United States)

    Li, Na; Zhang, Fan; Lian, Wei; Wang, Huali; Zheng, Jiang; Lin, Ge

    2017-07-03

    Numerous pyrrolizidine alkaloid (PA) poisoning cases have been documented worldwide. Protein covalent binding with reactive metabolites generated from metabolic activation of PAs to form pyrrole-protein adducts is suggested to be a primary mechanism of PA-induced toxicities. The present study aimed to develop antibodies for diagnosis of PA exposure. Polyclonal antibodies were raised in rabbits and proven to specifically recognize pyrrole-protein adducts regardless of amino acid residues modified by the reactive metabolites of PAs. The developed antibodies were successfully applied to detect pyrrole-protein adducts in blood samples obtained from PA-treated rats and exhibited a potential for the clinical diagnosis of PA exposure.

  9. Risks for the development of outcomes related to occupational allergies: an application of the asthma-specific job exposure matrix compared with self-reports and investigator scores on job-training-related exposure.

    Science.gov (United States)

    Suarthana, E; Heederik, D; Ghezzo, H; Malo, J-L; Kennedy, S M; Gautrin, D

    2009-04-01

    Risks for development of occupational sensitisation, bronchial hyper-responsiveness, rhinoconjunctival and chest symptoms at work associated with continued exposure to high molecular weight (HMW) allergens were estimated with three exposure assessment methods. A Cox regression analysis with adjustment for atopy and smoking habit was carried out in 408 apprentices in animal health technology, pastry making, and dental hygiene technology with an 8-year follow-up after training. The risk of continued exposure after training, estimated by the asthma-specific job exposure matrix (JEM), was compared with self-reports and investigator scores on job-training-related exposure. Associations between outcomes and work duration in job(s) related to training were also evaluated. Exposure to animal-derived HMW allergens, subsequent to the apprenticeship period, as estimated by the JEM, was associated with a significantly increased risk for occupational sensitisation (hazard ratio (HR) 6.4; 95% CI 2.3 to 18.2) and rhinoconjunctival symptoms at work (HR 2.6; 95% CI 1.1 to 6.2). Exposure to low molecular weight (LMW) agents significantly increased the risk of developing bronchial hyper-responsiveness (HR 2.3; 95% CI 1.1 to 5.4). Exposure verification appeared to be important to optimise the sensitivity and the specificity, as well as HRs produced by the JEM. Self-reports and investigator scores also indicated that further exposure to HMW allergens increased the risk of developing occupational allergies. The agreement between self-reports, investigator scores, and the JEM were moderate to good. There was no significant association between respiratory outcomes and work duration in jobs related to training. The asthma-specific JEM could estimate the risk of various outcomes of occupational allergies associated with exposure to HMW and LMW allergens, but it is relatively labour intensive. Exposure verification is an important integrated step in the JEM that optimised the performance of

  10. Study of the nuclear-coulomb low-energy scattering parameters on the basis of the p-matrix approach

    International Nuclear Information System (INIS)

    Babenko, V.A.; Petrov, N.M.

    1993-01-01

    The application of the P-matrix approach to the description of the nuclear-Coulomb scattering parameters of two charged, strongly interacting particles is considered. Explicit expressions for the nuclear-Coulomb scattering length and effective range in terms of the P-matrix parameters are found. Expansions of the nuclear-Coulomb low-energy parameters in powers of the small parameter β ≡ R/a_B, involving terms with large logarithms, are obtained. The nuclear-Coulomb scattering length and effective range for the square-well and the delta-shell short-range potentials are found in explicit form. (author). 21 refs

  11. A random matrix approach to the crossover of energy-level statistics from Wigner to Poisson

    International Nuclear Information System (INIS)

    Datta, Nilanjana; Kunz, Herve

    2004-01-01

    We analyze a class of parametrized random matrix models, introduced by Rosenzweig and Porter, which is expected to describe the energy level statistics of quantum systems whose classical dynamics varies from regular to chaotic as a function of a parameter. We compute the generating function for the correlations of energy levels, in the limit of infinite matrix size. The crossover between Poisson and Wigner statistics is measured by a renormalized coupling constant. The model is exactly solved in the sense that, in the limit of infinite matrix size, the energy-level correlation functions and their generating function are given in terms of a finite set of integrals

  12. A variational approach to operator and matrix Pade approximation. Applications to potential scattering and field theory

    International Nuclear Information System (INIS)

    Mery, P.

    1977-01-01

    The operator and matrix Pade approximations are defined. The fact that these approximants can be derived from the Schwinger variational principle is emphasized. In potential theory, using this variational aspect, it is shown that the matrix Pade approximation allows one to reproduce the exact solution of the Lippmann-Schwinger equation with any required accuracy, taking into account only the first two coefficients of the Born expansion. The deep analytic structure of this variational matrix Pade approximation (hyper Pade approximation) is discussed.

  13. Reduced density matrix functional theory via a wave function based approach

    Energy Technology Data Exchange (ETDEWEB)

    Schade, Robert; Bloechl, Peter [Institute for Theoretical Physics, Clausthal University of Technology, Clausthal (Germany); Pruschke, Thomas [Institute for Theoretical Physics, University of Goettingen, Goettingen (Germany)

    2016-07-01

    We propose a new method for the calculation of the electronic and atomic structure of correlated electron systems based on reduced density matrix functional theory (rDMFT). The density-matrix functional is evaluated on the fly using Levy's constrained search formalism. The present implementation rests on a local approximation of the interaction reminiscent of that of dynamical mean field theory (DMFT). We focus here on additional approximations to the exact density-matrix functional in the local approximation and evaluate their performance.

  14. Strong, Weak and Branching Bisimulation for Transition Systems and Markov Reward Chains: A Unifying Matrix Approach

    Directory of Open Access Journals (Sweden)

    Nikola Trčka

    2009-12-01

    Full Text Available We first study labeled transition systems with explicit successful termination. We establish the notions of strong, weak, and branching bisimulation in terms of boolean matrix theory, introducing thus a novel and powerful algebraic apparatus. Next we consider Markov reward chains which are standardly presented in real matrix theory. By interpreting the obtained matrix conditions for bisimulations in this setting, we automatically obtain the definitions of strong, weak, and branching bisimulation for Markov reward chains. The obtained strong and weak bisimulations are shown to coincide with some existing notions, while the obtained branching bisimulation is new, but its usefulness is questionable.

  15. Assessing REDD+ performance of countries with low monitoring capacities: the matrix approach

    Science.gov (United States)

    Bucki, M.; Cuypers, D.; Mayaux, P.; Achard, F.; Estreguil, C.; Grassi, G.

    2012-03-01

    Estimating emissions from deforestation and degradation of forests in many developing countries is so uncertain that the effects of changes in forest management could remain within error ranges (i.e. undetectable) for several years. Meanwhile UNFCCC Parties need consistent time series of meaningful performance indicators to set credible benchmarks and allocate REDD+ incentives to the countries, programs and activities that actually reduce emissions, while providing social and environmental benefits. Introducing widespread measuring of carbon in forest land (which would be required to estimate more accurately changes in emissions from degradation and forest management) will take time and considerable resources. To ensure the overall credibility and effectiveness of REDD+, parties must consider the design of cost-effective systems which can provide reliable and comparable data on anthropogenic forest emissions. Remote sensing can provide consistent time series of land cover maps for most non-Annex-I countries, retrospectively. These maps can be analyzed to identify the forests that are intact (i.e. beyond significant human influence), and whose fragmentation could be a proxy for degradation. This binary stratification of forests biomes (intact/non-intact), a transition matrix and the use of default carbon stock change factors can then be used to provide initial estimates of trends in emission changes. A proof-of-concept is provided for one biome of the Democratic Republic of the Congo over a virtual commitment period (2005-2010). This approach could allow assessment of the performance of the five REDD+ activities (deforestation, degradation, conservation, management and enhancement of forest carbon stocks) in a spatially explicit, verifiable manner. Incentives could then be tailored to prioritize activities depending on the national context and objectives.
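
    The bookkeeping behind the proposed matrix approach can be illustrated with a toy transition matrix between intact-forest, non-intact-forest and non-forest strata combined with default carbon stock change factors; all figures below are invented placeholders, not data for the Democratic Republic of the Congo.

    ```python
    # Toy illustration of the intact/non-intact matrix bookkeeping; areas and
    # carbon-stock factors are invented placeholders.
    import numpy as np

    classes = ["intact forest", "non-intact forest", "non-forest"]

    # Transition matrix for one commitment period: area (Mha) moving from the
    # row class to the column class.
    T = np.array([[9.0, 0.6, 0.2],
                  [0.0, 5.0, 0.5],
                  [0.0, 0.1, 4.6]])

    # Default carbon stock per class (tC/ha); emissions arise from stock losses.
    stock = np.array([180.0, 120.0, 10.0])

    emissions = 0.0
    for i in range(3):
        for j in range(3):
            delta = stock[i] - stock[j]              # carbon lost per ha for this transition
            if delta > 0:
                emissions += T[i, j] * 1e6 * delta   # Mha -> ha
    print(f"gross emissions over the period: {emissions / 1e6:.1f} MtC")
    ```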

  16. Assessing REDD+ performance of countries with low monitoring capacities: the matrix approach

    International Nuclear Information System (INIS)

    Bucki, M; Cuypers, D; Mayaux, P; Achard, F; Estreguil, C; Grassi, G

    2012-01-01

    Estimating emissions from deforestation and degradation of forests in many developing countries is so uncertain that the effects of changes in forest management could remain within error ranges (i.e. undetectable) for several years. Meanwhile UNFCCC Parties need consistent time series of meaningful performance indicators to set credible benchmarks and allocate REDD+ incentives to the countries, programs and activities that actually reduce emissions, while providing social and environmental benefits. Introducing widespread measuring of carbon in forest land (which would be required to estimate more accurately changes in emissions from degradation and forest management) will take time and considerable resources. To ensure the overall credibility and effectiveness of REDD+, parties must consider the design of cost-effective systems which can provide reliable and comparable data on anthropogenic forest emissions. Remote sensing can provide consistent time series of land cover maps for most non-Annex-I countries, retrospectively. These maps can be analyzed to identify the forests that are intact (i.e. beyond significant human influence), and whose fragmentation could be a proxy for degradation. This binary stratification of forests biomes (intact/non-intact), a transition matrix and the use of default carbon stock change factors can then be used to provide initial estimates of trends in emission changes. A proof-of-concept is provided for one biome of the Democratic Republic of the Congo over a virtual commitment period (2005–2010). This approach could allow assessment of the performance of the five REDD+ activities (deforestation, degradation, conservation, management and enhancement of forest carbon stocks) in a spatially explicit, verifiable manner. Incentives could then be tailored to prioritize activities depending on the national context and objectives. (letter)

  17. Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach

    Science.gov (United States)

    Tsai, Bi-Huei; Chang, Chih-Huei

    2009-08-01

    Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator must shift with economic prosperity rather than remain fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based upon a two-stage method. First, the study employs firm-specific financial ratios and market factors to measure the probability of financial distress with discrete-time hazard models. Second, it further considers macroeconomic factors and applies a rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using a training sample from 1987 to 2004, and their levels of accuracy are compared on a test sample from 2005 to 2007. For the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than the one without them, suggesting that accuracy is not improved for one-stage models that pool firm-specific and macroeconomic factors together. As for the two-stage models, the negative credit cycle index implies worse economic conditions during the test period, so the distressed cut-off point is adjusted upward based on this negative credit cycle index. When the two-stage models use this adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error becomes lower than that of the one-stage models. The two-stage models presented in this paper thus have incremental usefulness in predicting financial distress.
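
    A minimal sketch of the first stage only (a discrete-time hazard estimated as a logistic regression on firm-period observations) is given below; the predictors, coefficients and data are synthetic placeholders, and the second-stage adjustment of the cut-off by a credit cycle index is only indicated in a comment.

    ```python
    # Stage-one sketch: discrete-time hazard fitted as a logistic regression on
    # firm-year rows; column names and data are placeholders, not the paper's sample.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 3000
    panel = pd.DataFrame({
        "roa":        rng.normal(0.05, 0.1, n),    # firm-specific financial ratio
        "leverage":   rng.uniform(0.1, 0.9, n),
        "excess_ret": rng.normal(0.0, 0.2, n),     # market factor
    })
    # Synthetic distress indicator for each firm-year (1 = distress event).
    logit = -3.0 - 8.0 * panel["roa"] + 3.0 * panel["leverage"] - 1.5 * panel["excess_ret"]
    panel["distress"] = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    features = ["roa", "leverage", "excess_ret"]
    hazard = LogisticRegression().fit(panel[features], panel["distress"])
    panel["p_distress"] = hazard.predict_proba(panel[features])[:, 1]

    # Stage two (not shown) would move this cut-off up or down with a
    # macroeconomic credit-cycle index instead of keeping it constant.
    cutoff = panel["p_distress"].quantile(0.9)     # placeholder constant cut-off
    print("flagged distressed firm-years:", (panel["p_distress"] > cutoff).sum())
    ```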

  18. Approaches for occupational exposures during the decontamination of urban areas

    International Nuclear Information System (INIS)

    Silva, D.N.G da.; Guimarães, J.R.D.; Rochedo, E.R.R.

    2015-01-01

    The occurrence of various accidents involving radioactive material, and the performance of the staff responsible for the radiological protection of the public, have highlighted the need for prior planning of public exposure assessment and for pre-defined guidelines on the application of the most appropriate protective and remediation measures. This work is part of a project that aims to develop a multi-criteria tool to support decision-making processes in cases of nuclear or radiological accidents in Brazil. It describes the development of a model to assess occupational exposure related to decontamination procedures for the remediation of urban areas. Numerical values for the model parameters were mainly based on previously developed work within the same project, which includes a database describing the main features of different procedures that may be used during the remediation phase after accidents, and on the definition of standard scenarios for simulating accident consequences with a focus on doses to members of the public. The model defined for estimating occupational doses due to decontamination procedures will be included in the multi-criteria tool under development, in order to weigh the occupational exposure caused by applying a decontamination procedure against the public doses averted by the same procedure. (authors)

  19. A Ranking Analysis/An Interlinking Approach of New Triangular Fuzzy Cognitive Maps and Combined Effective Time Dependent Matrix

    Science.gov (United States)

    Adiga, Shreemathi; Saraswathi, A.; Praveen Prakash, A.

    2018-04-01

    This paper presents an interlinking approach of new Triangular Fuzzy Cognitive Maps (TrFCM) and the Combined Effective Time Dependent (CETD) matrix to rank the problems faced by transgender people. Section 1 begins with an introduction that briefly describes the scope of TrFCM and the CETD matrix. Section 2 identifies the causes of the problems faced by transgender people using the TrFCM method and performs the calculations on data collected among transgender respondents. Section 3 discusses the main causes of these problems. Section 4 describes the Charles Spearman rank correlation coefficient method used to interlink the TrFCM method and the CETD matrix. Section 5 presents the results of the study.

  20. Evidence at a glance: error matrix approach for overviewing available evidence

    DEFF Research Database (Denmark)

    Keus, Frederik; Wetterslev, Jørn; Gluud, Christian

    2010-01-01

    Clinical evidence continues to expand and is increasingly difficult to overview. We aimed at conceptualizing a visual assessment tool, i.e., a matrix for overviewing studies and their data in order to assess the clinical evidence at a glance....

  1. S-matrix approach to the equation of state of dilute nuclear matter

    Indian Academy of Sciences (India)

    2014-04-01

    Within the S-matrix framework, a method is presented to calculate the equation of state of dilute warm nuclear matter. The result is a model-independent virial series for the pressure and density that systematically includes contributions from ...

  2. Matrix approach to the Shapley value and dual similar associated consistency

    NARCIS (Netherlands)

    Xu, G.; Driessen, Theo

    Replacing associated consistency in Hamiache's axiom system by dual similar associated consistency, we axiomatize the Shapley value as the unique value verifying the inessential game property, continuity and dual similar associated consistency. Continuing the matrix analysis for Hamiache's

  3. A Comparative Study of Collagen Matrix Density Effect on Endothelial Sprout Formation Using Experimental and Computational Approaches.

    Science.gov (United States)

    Shamloo, Amir; Mohammadaliha, Negar; Heilshorn, Sarah C; Bauer, Amy L

    2016-04-01

    A thorough understanding of determining factors in angiogenesis is a necessary step to control the development of new blood vessels. Extracellular matrix density is known to have a significant influence on cellular behaviors and consequently can regulate vessel formation. The utilization of experimental platforms in combination with numerical models can be a powerful method to explore the mechanisms of new capillary sprout formation. In this study, using an integrative method, the interplay between the matrix density and angiogenesis was investigated. Owing to the fact that the extracellular matrix density is a global parameter that can affect other parameters such as pore size, stiffness, cell-matrix adhesion and cross-linking, a deeper understanding of the most important biomechanical or biochemical properties of the ECM causing changes in sprout morphogenesis is crucial. Here, we implemented both computational and experimental methods to analyze the mechanisms responsible for the influence of ECM density on sprout formation, which is difficult to investigate comprehensively using either of these methods alone. For this purpose, we first utilized an innovative approach to quantify the correspondence of the simulated collagen fibril density to the collagen density in the experimental part. Comparing the results of the experimental study and computational model led to some considerable achievements. First, we verified the results of the computational model using the experimental results. Then, we reported parameters such as the ratio of proliferating cells to migrating cells that were difficult to obtain from the experimental study. Finally, this integrative system led to an understanding of the possible mechanisms responsible for the effect of ECM density on angiogenesis. The results showed that stable and long sprouts were observed at intermediate collagen matrix densities of 1.2 and 1.9 mg/ml due to a balance between the number of migrating and proliferating

  4. An approach for assessing human exposures to chemical mixtures in the environment

    International Nuclear Information System (INIS)

    Rice, Glenn; MacDonell, Margaret; Hertzberg, Richard C.; Teuschler, Linda; Picel, Kurt; Butler, Jim; Chang, Young-Soo; Hartmann, Heidi

    2008-01-01

    Humans are exposed daily to multiple chemicals, including incidental exposures to complex chemical mixtures released into the environment and to combinations of chemicals that already co-exist in the environment because of previous releases from various sources. Exposures to chemical mixtures can occur through multiple pathways and across multiple routes. In this paper, we propose an iterative approach for assessing exposures to environmental chemical mixtures; it is similar to single-chemical approaches. Our approach encompasses two elements of the Risk Assessment Paradigm: Problem Formulation and Exposure Assessment. Multiple phases of the assessment occur in each element of the paradigm. During Problem Formulation, analysts identify and characterize the source(s) of the chemical mixture, ensure that dose-response and exposure assessment measures are concordant, and develop a preliminary evaluation of the mixture's fate. During Exposure Assessment, analysts evaluate the fate of the chemicals comprising the mixture using appropriate models and measurement data, characterize the exposure scenario, and estimate human exposure to the mixture. We also describe the utility of grouping the chemicals to be analyzed based on both physical-chemical properties and an understanding of environmental fate. We further highlight the need to understand changes in the mixture composition in the environment due to differential transport, differential degradation, and differential partitioning to other media. Finally, we describe the application of the method to various chemical mixtures, highlighting issues associated with assessing exposures to chemical mixtures in the environment.

  5. A Unique Mathematical Derivation of the Fundamental Laws of Nature Based on a New Algebraic-Axiomatic (Matrix) Approach

    Directory of Open Access Journals (Sweden)

    Ramin Zahedi

    2017-09-01

    Full Text Available In this article, as a new mathematical approach to the origin of the laws of nature, using a new basic algebraic axiomatic (matrix) formalism based on ring theory and Clifford algebras (presented in Section 2), “it is shown that certain mathematical forms of fundamental laws of nature, including laws governing the fundamental forces of nature (represented by a set of two definite classes of general covariant massive field equations, with new matrix formalisms), are derived uniquely from only a very few axioms.” In agreement with the rational Lorentz group, it is also basically assumed that the components of relativistic energy-momentum can only take rational values. In essence, the main scheme of this new mathematical axiomatic approach to the fundamental laws of nature is as follows: First, based on the assumption of the rationality of D-momentum and by linearization (along with a parameterization procedure) of the Lorentz invariant energy-momentum quadratic relation, a unique set of Lorentz invariant systems of homogeneous linear equations (with matrix formalisms compatible with certain Clifford and symmetric algebras) is derived. Then by an initial quantization (followed by a basic procedure of minimal coupling to space-time geometry) of these determined systems of linear equations, a set of two classes of general covariant massive (tensor) field equations (with matrix formalisms compatible with certain Clifford and Weyl algebras) is derived uniquely as well.

  6. Simple and practical approach for computing the ray Hessian matrix in geometrical optics.

    Science.gov (United States)

    Lin, Psang Dain

    2018-02-01

    A method is proposed for simplifying the computation of the ray Hessian matrix in geometrical optics by replacing the angular variables in the system variable vector with their equivalent cosine and sine functions. The variable vector of a boundary surface is similarly defined in such a way as to exclude any angular variables. It is shown that the proposed formulations reduce the computation time of the Hessian matrix by around 10 times compared to the previous method reported by the current group in Advanced Geometrical Optics (2016). Notably, the method proposed in this study involves only polynomial differentiation, i.e., trigonometric function calls are not required. As a consequence, the computation complexity is significantly reduced. Five illustrative examples are given. The first three examples show that the proposed method is applicable to the determination of the Hessian matrix for any pose matrix, irrespective of the order in which the rotation and translation motions are specified. The last two examples demonstrate the use of the proposed Hessian matrix in determining the axial and lateral chromatic aberrations of a typical optical system.

  7. Monitoring of the radon exposure in workplaces: Regulatory approaches

    International Nuclear Information System (INIS)

    Ettenhuber, E.

    2002-01-01

    Germany has a reference level of 2 × 10⁶ Bq·h/m³ for radon in workplaces, corresponding to an annual dose of 6 mSv, and a limit of 6 × 10⁶ Bq·h/m³, corresponding to 10 mSv/y. If the reference level is exceeded remedial action has to be taken and a new radon measurement should be carried out. If it is not possible to reduce the radon concentration below the reference level the competent authority has to be notified and monitoring of the radon concentrations performed. Germany has performed a study to investigate the exposure by natural radionuclides in workplaces in a large number of industrial activities, with a dose assessment of the workers under normal circumstances. They made a categorization of NORM activities in dose ranges of 20 mSv/y. Most of the NORM activities fall in the category <1 mSv/y when normal occupational hygiene measures are taken.

  8. Harmonization of risk management approaches: radiation and chemical exposures

    Energy Technology Data Exchange (ETDEWEB)

    Srinivasan, P. [Bhabha Atomic Research Centre, Radiation Safety Systems Div., Mumbai (India)

    2006-07-01

    Assessment of occupational and public risk from environmental pollutants such as chemicals and radiation demands that the effects be considered not only from each individual pollutant, but from the combination of all the pollutants. An integrated risk assessment system needs to be in place to have an overall risk perspective for the benefit of policy makers and decision makers trying to achieve risk reduction in totality. The basis for risk-based radiation dose limits is derived from epidemiological studies, which provide a rich source of data largely unavailable to chemical risk assessors. In addition, use of the principle of optimization as expressed in the ALARA concept has resulted in a safety culture, which is much more than just complying with stipulated limits. The conservative hypothesis of a no-threshold dose-effect relation (ICRP) is universally assumed. The end-points and the severity of different classes of pollutants, and even of different pollutants in the same class, vary over a wide range. Hence, it is difficult to arrive at a quantitative value for the net detriment that weighs the various types of end-points and various classes of pollutants. Once the risk due to other pollutants is quantified by some acceptable methodology, it can be expressed in terms of the Risk Equivalent Radiation Dose (R.E.R.D.) for easy comparison with options involving radiation exposure. This paper is an effort to quantify and present the risk due to exposure to chemicals and radiation on a common scale for easy comparison, in order to facilitate decision making. (authors)

  9. Harmonization of risk management approaches: radiation and chemical exposures

    International Nuclear Information System (INIS)

    Srinivasan, P.

    2006-01-01

    Assessment of occupational and public risk from environmental pollutants such as chemicals and radiation demands that the effects be considered not only from each individual pollutant, but from the combination of all the pollutants. An integrated risk assessment system needs to be in place to have an overall risk perspective for the benefit of policy makers and decision makers trying to achieve risk reduction in totality. The basis for risk-based radiation dose limits is derived from epidemiological studies, which provide a rich source of data largely unavailable to chemical risk assessors. In addition, use of the principle of optimization as expressed in the ALARA concept has resulted in a safety culture, which is much more than just complying with stipulated limits. The conservative hypothesis of a no-threshold dose-effect relation (ICRP) is universally assumed. The end-points and the severity of different classes of pollutants, and even of different pollutants in the same class, vary over a wide range. Hence, it is difficult to arrive at a quantitative value for the net detriment that weighs the various types of end-points and various classes of pollutants. Once the risk due to other pollutants is quantified by some acceptable methodology, it can be expressed in terms of the Risk Equivalent Radiation Dose (R.E.R.D.) for easy comparison with options involving radiation exposure. This paper is an effort to quantify and present the risk due to exposure to chemicals and radiation on a common scale for easy comparison, in order to facilitate decision making. (authors)

  10. A matrix approach to the statistics of longevity in heterogeneous frailty models

    Directory of Open Access Journals (Sweden)

    Hal Caswell

    2014-09-01

    Full Text Available Background: The gamma-Gompertz model is a fixed frailty model in which baseline mortality increases exponentially with age, frailty has a proportional effect on mortality, and frailty at birth follows a gamma distribution. Mortality selects against the more frail, so the marginal mortality rate decelerates, eventually reaching an asymptote. The gamma-Gompertz is one of a wider class of frailty models, characterized by the choice of baseline mortality, effects of frailty, distributions of frailty, and assumptions about the dynamics of frailty. Objective: To develop a matrix model to compute all the statistical properties of longevity from the gamma-Gompertz and related models. Methods: I use the vec-permutation matrix formulation to develop a model in which individuals are jointly classified by age and frailty. The matrix is used to project the age and frailty dynamics of a cohort and the fundamental matrix is used to obtain the statistics of longevity. Results: The model permits calculation of the mean, variance, coefficient of variation, skewness and all moments of longevity, the marginal mortality and survivorship functions, the dynamics of the frailty distribution, and other quantities. The matrix formulation extends naturally to other frailty models. I apply the analysis to the gamma-Gompertz model (for humans and laboratory animals), the gamma-Makeham model, and the gamma-Siler model, and to a hypothetical dynamic frailty model characterized by diffusion of frailty with reflecting boundaries. The matrix model permits partitioning the variance in longevity into components due to heterogeneity and to individual stochasticity. In several published human data sets, heterogeneity accounts for less than 10% of the variance in longevity. In laboratory populations of five invertebrate animal species, heterogeneity accounts for 46% to 83% of the total variance in longevity.
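
    The fundamental-matrix machinery at the core of the paper can be sketched, for the purely age-classified case without frailty heterogeneity, as follows; the Gompertz parameters are placeholders, and the moment formulas are the standard absorbing-Markov-chain results rather than the paper's full vec-permutation model.

    ```python
    # Sketch of the fundamental-matrix calculation of longevity statistics in a
    # purely age-classified model (frailty heterogeneity omitted for brevity).
    import numpy as np

    ages = np.arange(0, 110)
    a, b = 1e-4, 0.085                       # assumed Gompertz baseline parameters
    mu = a * np.exp(b * ages)                # hazard at each age
    p = np.exp(-mu)                          # one-year survival probabilities

    n = len(ages)
    U = np.zeros((n, n))
    U[np.arange(1, n), np.arange(0, n - 1)] = p[:-1]   # survivors advance one age class

    N = np.linalg.inv(np.eye(n) - U)         # fundamental matrix
    eta1 = N.sum(axis=0)                     # expected time remaining, by starting age
    eta2 = eta1 @ (2 * N - np.eye(n))        # second moments of remaining longevity
    var = eta2 - eta1**2

    print(f"life expectancy at birth (discrete approx.): {eta1[0]:.1f} years")
    print(f"variance of longevity at birth: {var[0]:.1f}")
    ```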

  11. Approaches for the generation of a covariance matrix for the Cf-252 fission-neutron spectrum

    International Nuclear Information System (INIS)

    Mannhart, W.

    1983-01-01

    After a brief retrospective glance is cast at the situation, the evaluation of the Cf-252 neutron spectrum with a complete covariance matrix based on the results of integral experiments is proposed. The different steps already taken in such an evaluation and work in progress are reviewed. It is shown that special attention should be given to the normalization of the neutron spectrum which must be reflected in the covariance matrix. The result of the least-squares adjustment procedure applied can easily be combined with the results of direct spectrum measurements and should be regarded as the first step in a new evaluation of the Cf-252 fission-neutron spectrum. (author)

  12. Direct calculation of resonance energies and widths using an R-matrix approach

    International Nuclear Information System (INIS)

    Schneider, B.I.

    1981-01-01

    A modified R-matrix technique is presented which determines the eigenvalues and widths of resonant states by the direct diagonalization of a complex, non-Hermitian matrix. The method utilizes only real basis sets and requires a minimum of complex arithmetic. The method is applied to two problems, a set of coupled square wells and the Π_g resonance of N₂ in the static-exchange approximation. The results of the calculation are in good agreement with other methods and converge very quickly with basis-set size.

  13. Determination of acrylamide levels in potato crisps and other snacks and exposure risk assessment through a Margin of Exposure approach.

    Science.gov (United States)

    Esposito, Francesco; Nardone, Antonio; Fasano, Evelina; Triassi, Maria; Cirillo, Teresa

    2017-10-01

    Potato crisps, corn-based extruded snacks and other savoury snacks are very popular products, especially among younger generations. These products could be a potential source of acrylamide (AA), a toxic compound which can develop during frying and baking processes. The purpose of this study was to assess the dietary intake of AA, through the consumption of potato crisps and other snacks, across six groups of consumers divided according to age, in order to evaluate the margin of exposure (MOE) related to neurotoxic and carcinogenic critical endpoints. Different brands of potato crisps and other popular snacks were analyzed by a matrix solid-phase dispersion method followed by a bromination step and GC-MS quantification. The concentration of detected AA ranged from 21 to 3444 ng g⁻¹, and the highest levels occurred in potato crisp samples, which showed a median value of 968 ng g⁻¹. The risk characterization through MOE assessment revealed that five out of six consumer groups showed exposure values associated with an increased carcinogenic risk. Copyright © 2017 Elsevier Ltd. All rights reserved.
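
    The margin-of-exposure arithmetic itself is simple, as the sketch below illustrates; the body weight, portion size and benchmark-dose reference points are assumptions inserted for illustration and should be replaced by the values actually used in the study.

    ```python
    # Margin-of-exposure arithmetic sketch; reference points and consumption
    # figures are placeholders, not the study's inputs.
    body_weight = 60.0            # kg
    consumption = 30.0 / 1000     # kg of crisps per day (30 g)
    concentration = 968.0         # ng acrylamide per g (median found in crisps)

    intake_ng = concentration * 1000 * consumption / body_weight   # ng/kg bw/day
    intake_mg = intake_ng / 1e6                                    # mg/kg bw/day

    bmdl10_cancer = 0.17          # mg/kg bw/day, assumed carcinogenic reference point
    bmdl10_neuro = 0.43           # mg/kg bw/day, assumed neurotoxic reference point

    print(f"intake: {intake_mg:.5f} mg/kg bw/day")
    print(f"MOE (carcinogenic endpoint): {bmdl10_cancer / intake_mg:,.0f}")
    print(f"MOE (neurotoxic endpoint):   {bmdl10_neuro / intake_mg:,.0f}")
    # An MOE below roughly 10,000 for a genotoxic carcinogen is commonly read
    # as indicating a possible health concern.
    ```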

  14. Formal scattering theory approach to S-matrix relations in supersymmetric quantum mechanics

    International Nuclear Information System (INIS)

    Amado, R.D.; Cannata, F.; Dedonder, J.P.

    1988-01-01

    Combining the methods of scattering theory and supersymmetric quantum mechanics we obtain relations between the S matrix and its supersymmetric partner. These relations involve only asymptotic quantities and do not require knowledge of the dynamical details. For example, for coupled channels with no threshold differences the relations involve the asymptotic normalization constant of the bound state removed by supersymmetry

  15. Evidence of rock matrix back-diffusion and abiotic dechlorination using a field testing approach

    Science.gov (United States)

    Schaefer, Charles E.; Lippincott, David R.; Klammler, Harald; Hatfield, Kirk

    2018-02-01

    An in situ field demonstration was performed in fractured rock impacted with trichloroethene (TCE) and cis-1,2-dichloroethene (DCE) to assess the impacts of contaminant rebound after removing dissolved contaminants within hydraulically conductive fractures. Using a bedrock well pair spaced 2.4 m apart, TCE and DCE were first flushed with water to create a decrease in dissolved contaminant concentrations. While hydraulically isolating the well pair from upgradient contaminant impacts, contaminant rebound then was observed between the well pair over 151 days. The magnitude, but not trend, of TCE rebound was reasonably described by a matrix back-diffusion screening model that employed an effective diffusion coefficient and first-order abiotic TCE dechlorination rate constant that was based on bench-scale testing. Furthermore, a shift in the TCE:DCE ratio and carbon isotopic enrichment was observed during the rebound, suggesting that both biotic and abiotic dechlorination were occurring within the rock matrix. The isotopic data and back-diffusion model together served as a convincing argument that matrix back-diffusion was the mechanism responsible for the observed contaminant rebound. Results of this field demonstration highlight the importance and applicability of rock matrix parameters determined at the bench-scale, and suggest that carbon isotopic enrichment can be used as a line of evidence for abiotic dechlorination within rock matrices.

  16. A Transfer Learning Approach for Applying Matrix Factorization to Small ITS Datasets

    Science.gov (United States)

    Voß, Lydia; Schatten, Carlotta; Mazziotti, Claudia; Schmidt-Thieme, Lars

    2015-01-01

    Machine Learning methods for Performance Prediction in Intelligent Tutoring Systems (ITS) have proven their efficacy; specific methods, e.g. Matrix Factorization (MF), however suffer from the lack of available information about new tasks or new students. In this paper we show how this problem could be solved by applying Transfer Learning (TL),…

  17. Strong, weak and branching bisimulation for transition systems and Markov reward chains: A unifying matrix approach

    NARCIS (Netherlands)

    Trcka, N.; Andova, S.; McIver, A.; D'Argenio, P.; Cuijpers, P.J.L.; Markovski, J.; Morgan, C.; Núñez, M.

    2009-01-01

    We first study labeled transition systems with explicit successful termination. We establish the notions of strong, weak, and branching bisimulation in terms of boolean matrix theory, introducing thus a novel and powerful algebraic apparatus. Next we consider Markov reward chains which are

  18. Information Architecture for the Web: The IA Matrix Approach to Designing Children's Portals.

    Science.gov (United States)

    Large, Andrew; Beheshti, Jamshid; Cole, Charles

    2002-01-01

    Presents a matrix that can serve as a tool for designing the information architecture of a Web portal in a logical and systematic manner. Highlights include interfaces; metaphors; navigation; interaction; information retrieval; and an example of a children's Web portal to provide access to museum information. (Author/LRW)

  19. Modifying exposure to smoking depicted in movies: a novel approach to preventing adolescent smoking.

    Science.gov (United States)

    Sargent, James D; Dalton, Madeline A; Heatherton, Todd; Beach, Mike

    2003-07-01

    Most behavioral approaches to adolescent smoking address the behavior directly. We explore an indirect approach: modifying exposure to portrayals of smoking in movies. To describe adolescents' exposure to smoking in movies and to examine factors that could modify such exposure. Occurrences of smoking were counted in each of 601 popular movies. Four thousand nine hundred ten northern New England junior high school students were asked to report which movies they had seen from a randomly generated subsample of 50 films, and responses were used to estimate exposure to the entire sample. Analysis: The outcome variable was exposure to movie smoking, defined as the number of smoking occurrences seen. Risk factors for exposure included access to movies (movie channels, videotape use, and movie theater); parenting (R [restricted]-rated movie restrictions, television restrictions, parenting style); and characteristics of the child (age, sex, school performance, sensation-seeking propensity, rebelliousness, and self-esteem). We used multiple regression to assess the association between risk factors and exposure to movie smoking. Subjects had seen an average of 30% of the movie sample (interquartile range, 20%-44%), from which they were exposed to 1160 (interquartile range, 640-1970) occurrences of smoking. In a multivariate model, exposure to movie smoking increased with several of these risk factors (all P values significant). Parent restriction on viewing R-rated movies resulted in a 50% reduction in exposure to movie smoking. There was no association between parenting style and exposure to movie smoking. Much of the protective effect of parent R-rated movie restriction on adolescent smoking was mediated through lower exposure to movie smoking. Adolescents see thousands of smoking depictions in movies, and this influences their attitudes and behavior. Exposure to movie smoking is reduced when parents limit movie access. Teaching parents to monitor and enforce movie access guidelines could reduce adolescent smoking in an

  20. Modelisation of transport in fractured media with a smeared fractures modeling approach: special focus on matrix diffusion process.

    Science.gov (United States)

    Fourno, A.; Grenier, C.; Benabderrahmane, H.

    2003-04-01

    Modeling flow and transport in natural fractured media is a difficult issue due, among other things, to the complexity of the system, the particularities of the geometrical features, and the strong parameter-value contrasts between the fracture zones (flow zones) and the matrix zones (no-flow zones). This has led to the development of dedicated tools such as discrete fracture network (DFN) models. We follow here another line, applicable to classical continuous modeling codes. The fracture network is not meshed; instead, the presence of fractures is taken into account by means of continuous heterogeneous fields (permeability, porosity, head, velocity, concentration ...). This line, followed by different authors, is referred to as the smeared fracture approach and presents the following advantages: the approach is very versatile because no dedicated spatial discretization effort is required (we use a basic regular mesh, and simulations can be done on a coarse mesh, saving computer time). This makes this kind of approach very promising for taking heterogeneity of properties, as well as uncertainties, into account within a Monte Carlo framework, for instance. Furthermore, the geometry of the matrix blocks where transfers proceed by diffusion is fully taken into account, contrary to classical simplified 1D approaches. Nevertheless, a continuous heterogeneous field representation of a fractured medium requires a homogenization process at the scale of the mesh considered. The literature shows that this homogenization step for transport is still a challenging task. Consequently, the level of precision of the results has to be estimated. We previously proposed a new approach based on the Mixed and Hybrid Finite Element scheme. This numerical scheme is well suited to such highly heterogeneous media and in particular guarantees exact conservation of mass flux for each cell, leading to good transport results. We developed a smeared fractures approach to model flow and transport limited to

  1. Network trending; leadership, followership and neutrality among companies: A random matrix approach

    Science.gov (United States)

    Mobarhan, N. S. Safavi; Saeedi, A.; Roodposhti, F. Rahnamay; Jafari, G. R.

    2016-11-01

    In this article, we analyze the cross-correlation between returns of different stocks to answer the following important questions. The first is: if there exists collective behavior in a financial market, how can we detect it? The second is: is there a particular company in a market that leads the collective behavior, or is there no specified leadership governing the system, as in some other complex systems? We use random matrix theory to answer these questions. The cross-correlation matrix of index returns of four different markets is analyzed. The participation ratio associated with each matrix's eigenvectors and the eigenvalue spectrum are calculated. We introduce a shuffled matrix, created from the cross-correlation matrix in such a way that the elements of the latter are displaced randomly. Comparing the participation ratios obtained from a market's correlation matrix and from its shuffled counterpart over the bulk region of the eigenvalue distribution, we detect a meaningful deviation between these quantities, indicating collective behavior of the companies forming the market. By calculating the relative deviation of participation ratios, we obtain a measure to compare markets according to their collective behavior. Answering the second question, we show that there are three groups of companies: the first group, having the greatest impact on the market trend, are the leaders; the second group are followers; and the third consists of companies that play no considerable role in the trend. The results can be utilized in portfolio construction.
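
    A schematic version of the participation-ratio comparison is sketched below, with synthetic one-factor returns standing in for real index data; the shuffling step and the participation-ratio definition follow the usual random-matrix-theory conventions rather than the paper's exact recipe.

    ```python
    # Sketch: participation ratios of a return-correlation matrix vs a shuffled
    # surrogate; the returns are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_stocks, n_days = 50, 500
    market = rng.normal(size=n_days)
    returns = 0.3 * market[:, None] + rng.normal(size=(n_days, n_stocks))  # common factor

    C = np.corrcoef(returns, rowvar=False)

    # Shuffled matrix: displace the off-diagonal elements randomly (destroys the
    # correlation structure while keeping the element distribution), re-symmetrize.
    iu = np.triu_indices(n_stocks, k=1)
    vals = C[iu].copy()
    rng.shuffle(vals)
    C_shuf = np.eye(n_stocks)
    C_shuf[iu] = vals
    C_shuf = C_shuf + C_shuf.T - np.eye(n_stocks)

    def participation_ratios(M):
        _, vecs = np.linalg.eigh(M)
        return 1.0 / np.sum(vecs**4, axis=0)     # participation ratio per eigenvector

    pr = participation_ratios(C)
    pr_shuf = participation_ratios(C_shuf)
    # A systematic gap in the bulk of the spectrum signals collective behaviour.
    print("mean PR (market):  ", pr[:-1].mean())   # excludes the market-wide mode
    print("mean PR (shuffled):", pr_shuf[:-1].mean())
    ```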

  2. Study of the validity of a job-exposure matrix for psychosocial work factors: results from the national French SUMER survey.

    Science.gov (United States)

    Niedhammer, Isabelle; Chastang, Jean-François; Levy, David; David, Simone; Degioanni, Stéphanie; Theorell, Töres

    2008-10-01

    To construct and evaluate the validity of a job-exposure matrix (JEM) for psychosocial work factors defined by Karasek's model using national representative data of the French working population. National sample of 24,486 men and women who filled in the Job Content Questionnaire (JCQ) by Karasek measuring the scores of psychological demands, decision latitude, and social support (individual scores) in 2003 (response rate 96.5%). Median values of the three scores in the total sample of men and women were used to define high demands, low latitude, and low support (individual binary exposures). Job title was defined by both occupation and economic activity that were coded using detailed national classifications (PCS and NAF/NACE). Two JEM measures were calculated from the individual scores of demands, latitude and support for each job title: JEM scores (mean of the individual score) and JEM binary exposures (JEM score dichotomized at the median). The analysis of the variance of the individual scores of demands, latitude, and support explained by occupations and economic activities, of the correlation and agreement between individual measures and JEM measures, and of the sensitivity and specificity of JEM exposures, as well as the study of the associations with self-reported health showed a low validity of JEM measures for psychological demands and social support, and a relatively higher validity for decision latitude compared with individual measures. Job-exposure matrix measure for decision latitude might be used as a complementary exposure assessment. Further research is needed to evaluate the validity of JEM for psychosocial work factors.
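
    The JEM construction described above (mean individual JCQ score per job title, then dichotomization at the median) can be sketched as follows; the job titles, scores and column names are invented placeholders, not SUMER data.

    ```python
    # Sketch of turning individual JCQ scores into JEM measures; the records
    # below are placeholders, not the SUMER survey data.
    import pandas as pd

    survey = pd.DataFrame({
        "occupation": ["nurse", "nurse", "clerk", "clerk", "welder", "welder"],
        "activity":   ["hospital", "hospital", "bank", "bank", "shipyard", "shipyard"],
        "decision_latitude": [62, 58, 80, 76, 55, 60],   # individual scores
    })

    # JEM score: mean individual score within each job title (occupation x activity).
    jem = (survey.groupby(["occupation", "activity"])["decision_latitude"]
                 .mean()
                 .rename("jem_latitude_score")
                 .reset_index())

    # JEM binary exposure: dichotomize the JEM score at its median across job titles.
    median = jem["jem_latitude_score"].median()
    jem["low_latitude"] = jem["jem_latitude_score"] < median

    # Individuals are then assigned the exposure of their job title.
    survey = survey.merge(jem, on=["occupation", "activity"])
    print(survey)
    ```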

  3. Active Approach Does not Add to the Effects of in Vivo Exposure

    NARCIS (Netherlands)

    van Uijen, Sophie; van den Hout, Marcel; Engelhard, Iris

    2015-01-01

    In exposure therapy, anxiety patients actively approach feared stimuli to violate their expectations of danger and reduce fear. Prior research has shown that stimulus evaluation and behavior are reciprocally related. This suggests that approach behavior itself may decrease fear. This study tested

  4. Chromium liquid waste inertization in an inorganic alkali activated matrix: Leaching and NMR multinuclear approach

    International Nuclear Information System (INIS)

    Ponzoni, Chiara; Lancellotti, Isabella; Barbieri, Luisa; Spinella, Alberto; Saladino, Maria Luisa; Martino, Delia Chillura; Caponetti, Eugenio; Armetta, Francesco; Leonelli, Cristina

    2015-01-01

    Highlights: • Inertization of chromium liquid waste in an aluminosilicate matrix. • Waterless inertization technique exploiting the waste water content. • Liquid waste inertization without a drying step. • Long-term stabilization study through leaching tests. • SEM analysis and ²⁹Si and ²⁷Al MAS NMR in relation with long curing time. - Abstract: A class of inorganic binders, also known as geopolymers, can be obtained by alkali activation of aluminosilicate powders at room temperature. The process is affected by many parameters (curing time, curing temperature, relative humidity etc.) and leads to a resistant matrix usable for inertization of hazardous waste. In this study an industrial liquid waste containing a high amount of chromium (≈2.3 wt%) in the form of metalorganic salts is inertized into a metakaolin based geopolymer matrix. One of the innovative aspects is the exploitation of the water contained in the waste for the geopolymerization process. This avoided any drying treatment, a common step in the management of liquid hazardous waste. The evolution of the process - from the precursor dissolution to the final geopolymer matrix hardening - of different geopolymers containing a waste amount ranging from 3 to 20 wt% and their capability to inertize chromium cations were studied by: i) the leaching tests, according to the EN 12457 regulation, at different curing times (15, 28, 90 and 540 days) monitoring releases of chromium ions (Cr(III) and Cr(VI)) and the cations constituting the aluminosilicate matrix (Na, Si, Al); ii) the humidity variation for different curing times (15 and 540 days); iii) SEM characterization at different curing times (28 and 540 days); iv) the trend of the solution conductivity and pH during the leaching test; v) the characterization of the short-range ordering in terms of T−O−T bonds (where T is Al or Si) by ²⁹Si and ²⁷Al solid state magic-angle spinning nuclear magnetic resonance (ss MAS NMR) for geopolymers

  5. Chromium liquid waste inertization in an inorganic alkali activated matrix: Leaching and NMR multinuclear approach

    Energy Technology Data Exchange (ETDEWEB)

    Ponzoni, Chiara, E-mail: chiara.ponzoni@unimore.it [University of Modena and Reggio Emilia, Department of Engineering “Enzo Ferrari”, Modena (Italy); Lancellotti, Isabella; Barbieri, Luisa [University of Modena and Reggio Emilia, Department of Engineering “Enzo Ferrari”, Modena (Italy); Spinella, Alberto; Saladino, Maria Luisa [University of Palermo CGA-UniNetLab, Palermo (Italy); Martino, Delia Chillura [University of Palermo, Department STEBICEF, Palermo (Italy); Caponetti, Eugenio [University of Palermo CGA-UniNetLab, Palermo (Italy); University of Palermo, Department STEBICEF, Palermo (Italy); Armetta, Francesco [University of Palermo, Department STEBICEF, Palermo (Italy); Leonelli, Cristina [University of Modena and Reggio Emilia, Department of Engineering “Enzo Ferrari”, Modena (Italy)

    2015-04-09

    Highlights: • Inertization of chromium liquid waste in an aluminosilicate matrix. • Waterless inertization technique exploiting the waste water content. • Liquid waste inertization without a drying step. • Long-term stabilization study through leaching tests. • SEM analysis and ²⁹Si and ²⁷Al MAS NMR in relation with long curing time. - Abstract: A class of inorganic binders, also known as geopolymers, can be obtained by alkali activation of aluminosilicate powders at room temperature. The process is affected by many parameters (curing time, curing temperature, relative humidity etc.) and leads to a resistant matrix usable for inertization of hazardous waste. In this study an industrial liquid waste containing a high amount of chromium (≈2.3 wt%) in the form of metalorganic salts is inertized into a metakaolin based geopolymer matrix. One of the innovative aspects is the exploitation of the water contained in the waste for the geopolymerization process. This avoided any drying treatment, a common step in the management of liquid hazardous waste. The evolution of the process - from the precursor dissolution to the final geopolymer matrix hardening - of different geopolymers containing a waste amount ranging from 3 to 20 wt% and their capability to inertize chromium cations were studied by: i) the leaching tests, according to the EN 12457 regulation, at different curing times (15, 28, 90 and 540 days) monitoring releases of chromium ions (Cr(III) and Cr(VI)) and the cations constituting the aluminosilicate matrix (Na, Si, Al); ii) the humidity variation for different curing times (15 and 540 days); iii) SEM characterization at different curing times (28 and 540 days); iv) the trend of the solution conductivity and pH during the leaching test; v) the characterization of the short-range ordering in terms of T−O−T bonds (where T is Al or Si) by ²⁹Si and ²⁷Al solid state magic-angle spinning nuclear magnetic resonance (ss MAS NMR) for

  6. A Chemical Activity Approach to Exposure and Risk Assessment of Chemicals

    DEFF Research Database (Denmark)

    Gobas, Frank A. P. C.; Mayer, Philipp; Parkerton, Thomas F.

    2018-01-01

    To support the goals articulated in the vision for exposure and risk assessment in the twenty-first century, we highlight the application of a thermodynamic chemical activity approach for the exposure and risk assessment of chemicals in the environment. The present article describes the chemical activity approach, its strengths and limitations, and provides examples of how this concept may be applied to the management of single chemicals and chemical mixtures. The examples demonstrate that the chemical activity approach provides a useful framework for 1) compiling and evaluating exposure … assessment. The article further illustrates that the chemical activity approach can support an adaptive management strategy for environmental stewardship of chemicals where “safe” chemical activities are established based on toxicological studies and presented as guidelines for environmental quality...

  7. Matrix-variational method: an efficient approach to bound state eigenproblems

    International Nuclear Information System (INIS)

    Gerck, E.; d'Oliveira, A.B.

    1978-11-01

    A new matrix-variational method for solving the radial Schroedinger equation is described. It consists in obtaining an adjustable matrix formulation for the boundary value differential equation, using a set of three functions that obey the boundary conditions. These functions are linearly combined at every three adjacent points to fit the true unknown eigenfunction by a variational technique. With the use of a new class of central differences, the exponential differences, tridiagonal or bidiagonal matrices are obtained. In the bidiagonal case, closed form expressions for the eigenvalues are given for the Coulomb, harmonic, linear, square-root and logarithmic potentials. The values obtained are within 0.1% of the true numerical value. The eigenfunction can be calculated using the eigenvectors to reconstruct the linear combination of the set functions [pt
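
    For orientation, the sketch below solves the same type of radial boundary-value eigenproblem with plain central differences on a uniform grid; it does not reproduce the paper's exponential differences or its closed-form bidiagonal expressions, but it shows the tridiagonal-matrix eigenvalue formulation for the Coulomb potential.

    ```python
    # Sketch of the underlying boundary-value eigenproblem with ordinary central
    # differences on a uniform grid (not the exponential differences of the paper).
    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    # Radial equation for hydrogen, l = 0, in atomic units: -u''/2 - u/r = E u.
    n, rmax = 4000, 200.0
    r = np.linspace(rmax / n, rmax, n)   # u(0) = u(rmax) = 0 implied
    h = r[1] - r[0]

    diag = 1.0 / h**2 - 1.0 / r          # kinetic diagonal + Coulomb potential
    off = -0.5 / h**2 * np.ones(n - 1)

    energies = eigh_tridiagonal(diag, off, eigvals_only=True, select="i",
                                select_range=(0, 3))
    print("lowest eigenvalues (a.u.):", energies)   # exact: -0.5, -0.125, -0.0556, ...
    ```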

  8. Forbidden transitions in excitation by electron impact in Co3+: an R-matrix approach

    International Nuclear Information System (INIS)

    Stancalie, V

    2011-01-01

    Collision strengths for the electron-impact excitation of forbidden transitions between 136 terms arising from the 3d⁶, 3d⁵4s and 3d⁵4p configurations of Co³⁺ have been calculated using the R-matrix method. The accuracy of a series of models for the target terms was considered, which form the basis for R-matrix collision calculations. The importance of including configuration interaction wave functions both in the target-state expansion and in the (N+1)-electron quadratically integrable function expansion is discussed. Collision strengths were calculated for incident electron energies up to 6 Ryd. These results are believed to be the first such values for this system and will be important for plasma modelling.

  9. Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach

    Science.gov (United States)

    Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun

    2015-02-01

    The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and when the dynamic images vary rapidly. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on low-rank matrix factorization of unknown images and enforced sparsity constraints for representing both coefficients and bases. The proposed model is solved via an alternating iterative scheme for which each subproblem is convex and involves the efficient alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for the nonconvex problem relies upon the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). Finally, our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method compared to conventional methods.

  10. Systematic Correlation Matrix Evaluation (SCoMaE) - a bottom-up, science-led approach to identifying indicators

    Science.gov (United States)

    Mengis, Nadine; Keller, David P.; Oschlies, Andreas

    2018-01-01

    This study introduces the Systematic Correlation Matrix Evaluation (SCoMaE) method, a bottom-up approach which combines expert judgment and statistical information to systematically select transparent, nonredundant indicators for a comprehensive assessment of the state of the Earth system. The method consists of two basic steps: (1) the calculation of a correlation matrix among variables relevant for a given research question and (2) the systematic evaluation of the matrix, to identify clusters of variables with similar behavior and respective mutually independent indicators. Optional further analysis steps include (3) the interpretation of the identified clusters, enabling a learning effect from the selection of indicators, (4) testing the robustness of identified clusters with respect to changes in forcing or boundary conditions, (5) enabling a comparative assessment of varying scenarios by constructing and evaluating a common correlation matrix, and (6) the inclusion of expert judgment, for example, to prescribe indicators, to allow for considerations other than statistical consistency. The example application of the SCoMaE method to Earth system model output forced by different CO2 emission scenarios reveals the necessity of reevaluating indicators identified in a historical scenario simulation for an accurate assessment of an intermediate-high, as well as a business-as-usual, climate change scenario simulation. This necessity arises from changes in prevailing correlations in the Earth system under varying climate forcing. For a comparative assessment of the three climate change scenarios, we construct and evaluate a common correlation matrix, in which we identify robust correlations between variables across the three considered scenarios.
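
    A minimal Python sketch of steps (1) and (2) is given below, assuming the model output is already collected in a pandas DataFrame `df` of candidate variables; the hierarchical clustering and the 0.7 correlation threshold are illustrative choices, not part of the published method.

        import numpy as np
        import pandas as pd
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        def select_indicators(df, threshold=0.7):
            """Step 1: correlation matrix; step 2: cluster variables whose
            mutual |r| exceeds `threshold` and keep one indicator per cluster."""
            corr = df.corr().abs()
            dist = 1.0 - corr.values          # similarity -> distance
            np.fill_diagonal(dist, 0.0)
            links = linkage(squareform(dist, checks=False), method='average')
            labels = fcluster(links, t=1.0 - threshold, criterion='distance')
            indicators = []
            for lab in np.unique(labels):
                members = corr.columns[labels == lab]
                # Representative: the member best correlated with its cluster.
                indicators.append(corr.loc[members, members].sum().idxmax())
            return labels, indicators

        # Toy usage: six synthetic variables forming three correlated pairs.
        rng = np.random.default_rng(0)
        base = rng.standard_normal((100, 3))
        data = np.hstack([base + 0.1 * rng.standard_normal((100, 3)) for _ in range(2)])
        df = pd.DataFrame(data, columns=[f'v{i}' for i in range(6)])
        print(select_indicators(df))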

  11. Truncation scheme of time-dependent density-matrix approach II

    Energy Technology Data Exchange (ETDEWEB)

    Tohyama, Mitsuru [Kyorin University School of Medicine, Mitaka, Tokyo (Japan); Schuck, Peter [Institut de Physique Nucleaire, IN2P3-CNRS, Universite Paris-Sud, Orsay (France); Laboratoire de Physique et de Modelisation des Milieux Condenses, CNRS et Universite Joseph Fourier, Grenoble (France)

    2017-09-15

    A truncation scheme of the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy for reduced density matrices, where a three-body density matrix is approximated by two-body density matrices, is improved to take into account a normalization effect. The truncation scheme is tested for the Lipkin model. It is shown that the obtained results are in good agreement with the exact solutions. (orig.)

  12. A unified approach to fixed-order controller design via linear matrix inequalities

    Directory of Open Access Journals (Sweden)

    Iwasaki T.

    1995-01-01

    Full Text Available We consider the design of fixed-order (or low-order) linear controllers which meet certain performance and/or robustness specifications. The following three problems are considered: covariance control as a nominal performance problem, 𝒬-stabilization as a robust stabilization problem, and the robust L∞ control problem as a robust performance problem. All three control problems are converted to a single linear algebra problem of solving a linear matrix inequality (LMI) of the type BGC + (BGC)ᵀ + Q < 0 for the unknown matrix G. Thus this paper addresses the fixed-order controller design problem in a unified way. Necessary and sufficient conditions for the existence of a fixed-order controller which satisfies the design specifications for each problem are derived, and an explicit controller formula is given. In any case, the resulting problem is shown to be a search for a (structured) positive definite matrix X such that X ∈ 𝒞₁ and X⁻¹ ∈ 𝒞₂, where 𝒞₁ and 𝒞₂ are convex sets defined by LMIs. Computational aspects of the nonconvex LMI problem are discussed.

  13. A unified approach to fixed-order controller design via linear matrix inequalities

    Directory of Open Access Journals (Sweden)

    T. Iwasaki

    1995-01-01

    Full Text Available We consider the design of fixed-order (or low-order) linear controllers which meet certain performance and/or robustness specifications. The following three problems are considered: covariance control as a nominal performance problem, 𝒬-stabilization as a robust stabilization problem, and the robust L∞ control problem as a robust performance problem. All three control problems are converted to a single linear algebra problem of solving a linear matrix inequality (LMI) of the type BGC + (BGC)ᵀ + Q < 0 for the unknown matrix G. Thus this paper addresses the fixed-order controller design problem in a unified way. Necessary and sufficient conditions for the existence of a fixed-order controller which satisfies the design specifications for each problem are derived, and an explicit controller formula is given. In any case, the resulting problem is shown to be a search for a (structured) positive definite matrix X such that X ∈ 𝒞₁ and X⁻¹ ∈ 𝒞₂, where 𝒞₁ and 𝒞₂ are convex sets defined by LMIs. Computational aspects of the nonconvex LMI problem are discussed.
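
    To illustrate the mechanics of the central LMI, the following Python sketch sets up a feasibility problem of the same form BGC + (BGC)ᵀ + Q < 0 with the generic convex-optimisation package CVXPY; the matrices B, C and Q are random illustrative data (Q is shifted to be comfortably negative definite so the problem is feasible), and this is not the structured X ∈ 𝒞₁, X⁻¹ ∈ 𝒞₂ search analysed in the papers.

        import numpy as np
        import cvxpy as cp

        # Illustrative problem data (not from the papers).
        rng = np.random.default_rng(0)
        n, m, p = 4, 2, 2
        B = rng.standard_normal((n, m))
        C = rng.standard_normal((p, n))
        W = rng.standard_normal((n, n))
        Q = W + W.T - 10.0 * np.eye(n)   # symmetric, comfortably negative definite

        # Unknown static gain G and the LMI  B G C + (B G C)^T + Q < 0,
        # relaxed to "<= -eps*I" so the strict inequality is solver friendly.
        G = cp.Variable((m, p))
        S = cp.Variable((n, n), symmetric=True)   # symmetric slack for the LMI block
        M = B @ G @ C
        eps = 1e-6
        constraints = [S == M + M.T + Q, S << -eps * np.eye(n)]
        prob = cp.Problem(cp.Minimize(0), constraints)
        prob.solve()
        print(prob.status, G.value)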

  14. A low-rank matrix recovery approach for energy efficient EEG acquisition for a wireless body area network.

    Science.gov (United States)

    Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab

    2014-08-25

    We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBAN) in an energy efficient fashion. In WBANs, the energy is consumed by three operations: sensing (sampling), processing and transmission. Previous studies only addressed the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that the reconstruction accuracy of our method is significantly better than state-of-the-art techniques; and we achieve this while saving sensing, processing and transmission energy. Simple power analysis shows that our proposed methodology consumes considerably less power compared to previous CS based techniques.
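
    As a stand-in for the matrix-completion step (the paper derives its own algorithm, which is not reproduced here), the following Python sketch uses the generic Soft-Impute iteration on a toy low-rank "channel-by-sample" matrix; the matrix size, rank, sampling rate and shrinkage level are illustrative assumptions.

        import numpy as np

        def soft_impute(Y, mask, lam=1.0, iters=300):
            """Low-rank completion from the entries where mask is True,
            by iterative singular-value soft-thresholding (Soft-Impute)."""
            X = np.zeros_like(Y)
            for _ in range(iters):
                Z = np.where(mask, Y, X)   # fill unobserved entries with the estimate
                U, s, Vt = np.linalg.svd(Z, full_matrices=False)
                X = (U * np.maximum(s - lam, 0.0)) @ Vt
            return X

        # Toy example: a rank-3 16-channel recording, 40% of the samples kept.
        rng = np.random.default_rng(0)
        true = rng.standard_normal((16, 3)) @ rng.standard_normal((3, 200))
        mask = rng.random(true.shape) < 0.4
        recovered = soft_impute(np.where(mask, true, 0.0), mask)
        print(np.linalg.norm(recovered - true) / np.linalg.norm(true))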

  15. Comprehensive proteome analysis of nasal lavage samples after controlled exposure to welding nanoparticles shows an induced acute phase and a nuclear receptor, LXR/RXR, activation that influence the status of the extracellular matrix.

    Science.gov (United States)

    Ali, Neserin; Ljunggren, Stefan; Karlsson, Helen M; Wierzbicka, Aneta; Pagels, Joakim; Isaxon, Christina; Gudmundsson, Anders; Rissler, Jenny; Nielsen, Jörn; Lindh, Christian H; Kåredal, Monica

    2018-01-01

    Epidemiological studies have shown that many welders experience respiratory symptoms. During the welding process a large number of airborne nanosized particles are generated, which might be inhaled and deposited in the respiratory tract. Knowledge of the underlying mechanisms behind observed symptoms is still partly lacking, although inflammation is suggested to play a central role. The aim of this study was to investigate the effects of welding fume particle exposure on the proteome expression level in welders suffering from respiratory symptoms, and changes in protein mediators in nasal lavage samples were analyzed. Such mediators will be helpful to clarify the pathomechanisms behind welding fume particle-induced effects. In an exposure chamber, 11 welders with work-related symptoms in the lower airways during the last month were exposed to mild-steel welding fume particles (1 mg/m³) and to filtered air, respectively, in a double-blind manner. Nasal lavage samples were collected before, immediately after, and the day after exposure. The proteins in the nasal lavage were analyzed with two different mass spectrometry approaches, label-free discovery shotgun LC-MS/MS and a targeted selected reaction monitoring LC-MS/MS analyzing 130 proteins and four in vivo peptide degradation products. The analysis revealed 30 significantly changed proteins that were associated with two main pathways: activation of acute phase response signaling and activation of LXR/RXR, which is a nuclear receptor family involved in lipid signaling. Connective tissue proteins and proteins controlling the degradation of such tissues, including two different matrix metalloprotease proteins, MMP8 and MMP9, were among the significantly changed enzymes and were identified as important key players in the pathways. Exposure to mild-steel welding fume particles causes measurable changes on the proteome level in nasal lavage matrix in exposed welders, although no clinical symptoms were manifested. The

  16. Can the CFO Trust the FX Exposure Quantification from a Stock Market Approach?

    DEFF Research Database (Denmark)

    Aabo, Tom; Brodin, Danielle

    This study examines the sensitivity of detected exchange rate exposures at the firm specific level to changes in methodological choices using a traditional two factor stock market approach for exposure quantification. We primarily focus on two methodological choices: the choice of market index...... and the choice of observation frequency. We investigate to which extent the detected exchange rate exposures for a given firm can be confirmed when the choice of market index and/or the choice of observation frequency are changed. Applying our sensitivity analysis to Scandinavian non-financial firms, we...... thirds of the number of detected exposures using weekly data and 2) there is no economic rationale that the detected exposures at the firm-specific level should change when going from the use of weekly data to the use of monthly data. In relation to a change in the choice of market index, we find...

  17. Teaching the Extracellular Matrix and Introducing Online Databases within a Multidisciplinary Course with i-Cell-MATRIX: A Student-Centered Approach

    Science.gov (United States)

    Sousa, Joao Carlos; Costa, Manuel Joao; Palha, Joana Almeida

    2010-01-01

    The biochemistry and molecular biology of the extracellular matrix (ECM) is difficult to convey to students in a classroom setting in ways that capture their interest. The understanding of the matrix's roles in physiological and pathological conditions will presumably be hampered by insufficient knowledge of its molecular structure.…

  18. Comparison of approaches to deal with matrix effects in LC-MS/MS based determinations of mycotoxins in food and feed

    NARCIS (Netherlands)

    Fabregat-Cabello, N.; Zomer, P.; Sancho, J.V.; Roig-Navarro, A.F.; Mol, H.G.J.

    2016-01-01

    This study deals with one of the major concerns in mycotoxin determinations: the matrix effect related to LC-MS/MS systems with electrospray ionization sources. To this end, in a first approach, the matrix effect has been evaluated in two ways: monitoring the signal of a compound (added to the

  19. Modelling of human exposure to air pollution in the urban environment: a GPS-based approach.

    Science.gov (United States)

    Dias, Daniela; Tchepel, Oxana

    2014-03-01

    The main objective of this work was the development of a new modelling tool for quantification of human exposure to traffic-related air pollution within distinct microenvironments by using a novel approach for trajectory analysis of the individuals. For this purpose, mobile phones with Global Positioning System technology have been used to collect daily trajectories of the individuals with higher temporal resolution, and a trajectory data mining and geo-spatial analysis algorithm was developed and implemented within a Geographical Information System to obtain time-activity patterns. These data were combined with air pollutant concentrations estimated for several microenvironments. In addition to outdoor, pollutant concentrations in distinct indoor microenvironments are characterised using a probabilistic approach. An example of the application for PM2.5 is presented and discussed. The results obtained for daily average individual exposure correspond to a mean value of 10.6 μg m⁻³, with 5th-95th percentiles of 6.0-16.4 μg m⁻³. Analysis of the results shows that the use of point air quality measurements for exposure assessment will not explain the intra- and inter-variability of individuals' exposure levels. The methodology developed and implemented in this work provides a time-sequence of the exposure events, thus making it possible to associate the exposure with the individual's activities, and delivers the main statistics on individual air pollution exposure with high spatio-temporal resolution.

  20. Occupational exposure to radon progeny: Importance, experience with control, regulatory approaches

    International Nuclear Information System (INIS)

    Kraus, W.; Schwedt, J.

    2002-01-01

    An overview of possible occupational exposures to enhanced natural radiation in Germany is given, based on an analysis of the German Radiological Protection Commission. So far, the most significant exposure source is radon at underground and above ground workplaces. As a result of relevant regulations, in East Germany since the 1970s a systematic monitoring of exposures to radon progeny has been introduced step by step in the uranium industry, in conventional ore mining, in show caves and mines, in enterprises for securing mining areas against subsidence, in radon spas and in water works in radon affected areas. Individual exposures have been assessed using workplace monitoring results and registered occupancy times. The monitoring results for the period 1975-1998 are presented. Successful protection measures leading to a significant reduction of the exposures are discussed. In West Germany no regulations in this area were in force. Nevertheless, voluntary measuring programmes at similar workplaces were carried out. In case of unacceptable exposures successful protection measures were implemented. At present a systematic approach to control occupational exposures to radon is laid down in the European Directive 96/29/Euratom, which has to be transposed into the forthcoming national legislation. The expected number of workplaces to be included in the radiation protection system in Germany, the recommendable way of including different workplace types taking into account appropriate reference levels, and possible approaches to a graded system of workplace and individual monitoring are discussed in detail. (author)

  1. A new approach based on transfer matrix formalism to characterize porous silicon layers by reflectometry

    Energy Technology Data Exchange (ETDEWEB)

    Pirasteh, P. [RESO Laboratory (EA 3380), ENIB, CS 73862, 29238 Brest Cedex 3 (France); Optronics Laboratory, ENSSAT, UMR 6082, BP 80518, 6 rue de Kerampont, 22305 Lannion Cedex (France); Boucher, Y.G. [RESO Laboratory (EA 3380), ENIB, CS 73862, 29238 Brest Cedex 3 (France); Charrier, J.; Dumeige, Y. [Optronics Laboratory, ENSSAT, UMR 6082, BP 80518, 6 rue de Kerampont, 22305 Lannion Cedex (France)

    2007-07-01

    We use reflectometry coupled to transfer matrix formalism in order to investigate the comparative effect of surface (localized) and volume (distributed) losses inside a porous silicon monolayer. Both are modeled as fictive absorption. Surface losses are described as a Dirac-like singularity of permittivity localized at an interface, whereas volume losses are described through the imaginary part of the porous silicon complex permittivity. A good agreement with experimental data is obtained with this formalism. (copyright 2007 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  2. Density matrix renormalization group for a highly degenerate quantum system: Sliding environment block approach

    Science.gov (United States)

    Schmitteckert, Peter

    2018-04-01

    We present an infinite lattice density matrix renormalization group sweeping procedure which can be used as a replacement for the standard infinite lattice blocking schemes. Although the scheme is generally applicable to any system, its main advantages are the correct representation of commensurability issues and the treatment of degenerate systems. As an example we apply the method to a spin chain featuring a highly degenerate ground-state space where the new sweeping scheme provides an increase in performance as well as accuracy by many orders of magnitude compared to a recently published work.

  3. Developing a Health Information Technology Systems Matrix: A Qualitative Participatory Approach.

    Science.gov (United States)

    Haun, Jolie N; Chavez, Margeaux; Nazi, Kim M; Antinori, Nicole

    2016-10-06

    The US Department of Veterans Affairs (VA) has developed various health information technology (HIT) resources to provide accessible veteran-centered health care. Currently, the VA is undergoing a major reorganization of VA HIT to develop a fully integrated system to meet consumer needs. Although extensive system documentation exists for various VA HIT systems, a more centralized and integrated perspective with clear documentation is needed in order to support effective analysis, strategy, planning, and use. Such a tool would enable a novel view of what is currently available and support identifying and effectively capturing the consumer's vision for the future. The objective of this study was to develop the VA HIT Systems Matrix, a novel tool designed to describe the existing VA HIT system and identify consumers' vision for the future of an integrated VA HIT system. This study utilized an expert panel and veteran informant focus groups with self-administered surveys. The study employed participatory research methods to define the current system and understand how stakeholders and veterans envision the future of VA HIT and interface design (eg, look, feel, and function). Directed content analysis was used to analyze focus group data. The HIT Systems Matrix was developed with input from 47 veterans, an informal caregiver, and an expert panel to provide a descriptive inventory of existing and emerging VA HIT in four worksheets: (1) access and function, (2) benefits and barriers, (3) system preferences, and (4) tasks. Within each worksheet is a two-axis inventory. The VA's existing and emerging HIT platforms (eg, My HealtheVet, Mobile Health, VetLink Kiosks, Telehealth), My HealtheVet features (eg, Blue Button, secure messaging, appointment reminders, prescription refill, vet library, spotlight, vitals tracker), and non-VA platforms (eg, phone/mobile phone, texting, non-VA mobile apps, non-VA mobile electronic devices, non-VA websites) are organized by row. Columns

  4. Random matrix approach to plasmon resonances in the random impedance network model of disordered nanocomposites

    Science.gov (United States)

    Olekhno, N. A.; Beltukov, Y. M.

    2018-05-01

    Random impedance networks are widely used as a model to describe plasmon resonances in disordered metal-dielectric and other two-component nanocomposites. In the present work, the spectral properties of resonances in random networks are studied within the framework of the random matrix theory. We have shown that the appropriate ensemble of random matrices for the considered problem is the Jacobi ensemble (the MANOVA ensemble). The obtained analytical expressions for the density of states in such resonant networks show a good agreement with the results of numerical simulations in a wide range of metal filling fractions 0

  5. Developing a Health Information Technology Systems Matrix: A Qualitative Participatory Approach

    Science.gov (United States)

    Chavez, Margeaux; Nazi, Kim M; Antinori, Nicole

    2016-01-01

    Background The US Department of Veterans Affairs (VA) has developed various health information technology (HIT) resources to provide accessible veteran-centered health care. Currently, the VA is undergoing a major reorganization of VA HIT to develop a fully integrated system to meet consumer needs. Although extensive system documentation exists for various VA HIT systems, a more centralized and integrated perspective with clear documentation is needed in order to support effective analysis, strategy, planning, and use. Such a tool would enable a novel view of what is currently available and support identifying and effectively capturing the consumer’s vision for the future. Objective The objective of this study was to develop the VA HIT Systems Matrix, a novel tool designed to describe the existing VA HIT system and identify consumers’ vision for the future of an integrated VA HIT system. Methods This study utilized an expert panel and veteran informant focus groups with self-administered surveys. The study employed participatory research methods to define the current system and understand how stakeholders and veterans envision the future of VA HIT and interface design (eg, look, feel, and function). Directed content analysis was used to analyze focus group data. Results The HIT Systems Matrix was developed with input from 47 veterans, an informal caregiver, and an expert panel to provide a descriptive inventory of existing and emerging VA HIT in four worksheets: (1) access and function, (2) benefits and barriers, (3) system preferences, and (4) tasks. Within each worksheet is a two-axis inventory. The VA’s existing and emerging HIT platforms (eg, My HealtheVet, Mobile Health, VetLink Kiosks, Telehealth), My HealtheVet features (eg, Blue Button, secure messaging, appointment reminders, prescription refill, vet library, spotlight, vitals tracker), and non-VA platforms (eg, phone/mobile phone, texting, non-VA mobile apps, non-VA mobile electronic devices, non

  6. Time-dependent B-spline R-matrix approach to double ionization of atoms by XUV laser pulses

    Energy Technology Data Exchange (ETDEWEB)

    Guan Xiaoxu; Zatsarinny, Oleg; Bartschat, Klaus [Department of Physics and Astronomy, Drake University, Des Moines, Iowa 50311 (United States); Noble, Clifford J [Computational Science and Engineering Department, Daresbury Laboratory, Warrington WA4 4AD (United Kingdom); Schneider, Barry I, E-mail: xiaoxu.guan@drake.ed, E-mail: klaus.bartschat@drake.ed, E-mail: bschneid@nsf.go [Physics Division, National Science Foundation, Arlington, Virgina 22230 (United States)

    2009-11-01

    We present an ab initio and non-perturbative time-dependent approach to the problem of double ionization of a general atom driven by intense XUV laser pulses. After using a highly flexible B-spline R-matrix method to generate field-free Hamiltonian and electric dipole matrices, the initial state is propagated in time using an efficient Arnoldi-Lanczos scheme. Example results for momentum and energy distributions of the two outgoing electrons in two-color pump-probe processes of He are presented.

  7. A time-dependent B-spline R-matrix approach to double ionization of atoms by XUV laser pulses

    Energy Technology Data Exchange (ETDEWEB)

    Guan Xiaoxu; Zatsarinny, O; Noble, C J; Bartschat, K [Department of Physics and Astronomy, Drake University, Des Moines, IA 50311 (United States); Schneider, B I [Physics Division, National Science Foundation, Arlington, Virgina 22230 (United States)], E-mail: xiaoxu.guan@drake.edu, E-mail: oleg.zatsarinny@drake.edu, E-mail: cjn@maxnet.co.nz, E-mail: klaus.bartschat@drake.edu, E-mail: bschneid@nsf.gov

    2009-07-14

    We present an ab initio and non-perturbative time-dependent approach to the problem of double ionization of a general atom driven by intense XUV laser pulses. After using a highly flexible B-spline R-matrix method to generate field-free Hamiltonian and electric dipole matrices, the initial state is propagated in time using an efficient Arnoldi-Lanczos scheme. Test calculations for double ionization of He by a single laser pulse yield good agreement with benchmark results obtained with other methods. The method is then applied to two-colour pump-probe processes, for which momentum and energy distributions of the two outgoing electrons are presented.
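
    The propagation step described above can be illustrated with a small Python sketch that uses SciPy's expm_multiply for the action of the matrix exponential, as a generic stand-in for the Arnoldi-Lanczos propagator used in the papers; the random sparse Hermitian matrix and the pulse shape are made-up placeholders for the actual B-spline R-matrix Hamiltonian and dipole matrices.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import expm_multiply

        # Toy Hermitian "field-free Hamiltonian" plus a dipole coupling term.
        rng = np.random.default_rng(0)
        n = 400
        H0 = sp.diags(np.sort(rng.random(n)))          # diagonal field-free part
        D = sp.random(n, n, density=0.01, random_state=0)
        D = 0.5 * (D + D.T)                            # symmetric "dipole" matrix

        def propagate(psi, t_final, steps, E_field):
            """Short-time stepping psi <- exp(-i H dt) psi, with
            H = H0 + E(t) * D evaluated at the midpoint of each step."""
            dt = t_final / steps
            t = 0.0
            for _ in range(steps):
                H = H0 + E_field(t + 0.5 * dt) * D
                psi = expm_multiply(-1j * dt * H.tocsc(), psi)
                t += dt
            return psi

        psi0 = np.zeros(n, dtype=complex)
        psi0[0] = 1.0                                  # start in the "ground state"
        pulse = lambda t: 0.05 * np.sin(0.5 * t) * np.exp(-((t - 20.0) / 8.0) ** 2)
        psi_T = propagate(psi0, t_final=40.0, steps=400, E_field=pulse)
        print(np.vdot(psi_T, psi_T).real)              # norm-conservation check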

  8. Dynamical simulation of electron transfer processes in self-assembled monolayers at metal surfaces using a density matrix approach

    Science.gov (United States)

    Prucker, V.; Bockstedte, M.; Thoss, M.; Coto, P. B.

    2018-03-01

    A single-particle density matrix approach is introduced to simulate the dynamics of heterogeneous electron transfer (ET) processes at interfaces. The characterization of the systems is based on a model Hamiltonian parametrized by electronic structure calculations and a partitioning method. The method is applied to investigate ET in a series of nitrile-substituted (poly)(p-phenylene)thiolate self-assembled monolayers adsorbed at the Au(111) surface. The results show a significant dependence of the ET on the orbital symmetry of the donor state and on the molecular and electronic structure of the spacer.

  9. Dynamical simulation of electron transfer processes in self-assembled monolayers at metal surfaces using a density matrix approach.

    Science.gov (United States)

    Prucker, V; Bockstedte, M; Thoss, M; Coto, P B

    2018-03-28

    A single-particle density matrix approach is introduced to simulate the dynamics of heterogeneous electron transfer (ET) processes at interfaces. The characterization of the systems is based on a model Hamiltonian parametrized by electronic structure calculations and a partitioning method. The method is applied to investigate ET in a series of nitrile-substituted (poly)(p-phenylene)thiolate self-assembled monolayers adsorbed at the Au(111) surface. The results show a significant dependence of the ET on the orbital symmetry of the donor state and on the molecular and electronic structure of the spacer.

  10. Interior and exterior resonances in acoustic scattering. pt. 2 - Targets of arbitrary shape (T-matrix approach)

    International Nuclear Information System (INIS)

    Uberall, H.; Gaunaurd, G.C.; Tanglis, E.

    1983-01-01

    The T-matrix approach, which describes the scattering of acoustic waves (or of other waves) from objects of arbitrary shape and geometry, is here 'married' to the resonance scattering theory in order to obtain the (complex) resonance frequencies of an arbitrarily shaped target. For the case of nearly impenetrable targets the partial-wave scattering amplitudes are split into terms corresponding to 'internal' resonances, plus an apparently nonresonant background amplitude which, however, contains the broad resonances caused by 'external' diffracted (or Franz-type, creeping) waves, in addition to geometrically reflected and refracted (ray) contributions

  11. H-/H∞ structural damage detection filter design using an iterative linear matrix inequality approach

    International Nuclear Information System (INIS)

    Chen, B; Nagarajaiah, S

    2008-01-01

    The existence of damage in different members of a structure can be posed as a fault detection problem. It is also necessary to isolate structural members in which damage exists, which can be posed as a fault isolation problem. It is also important to detect the time instants of occurrence of the faults/damage. The structural damage detection filter developed in this paper is a model-based fault detection and isolation (FDI) observer suitable for detecting and isolating structural damage. In systems, possible faults, disturbances and noise are coupled together. When system disturbances and sensor noise cannot be decoupled from faults/damage, the detection filter needs to be designed to be robust to disturbances as well as sensitive to faults/damage. In this paper, a new H−/H∞ and iterative linear matrix inequality (LMI) technique is developed and a new stabilizing FDI filter is proposed, which bounds the H∞ norm of the transfer function from disturbances to the output residual and simultaneously does not degrade the component of the output residual due to damage. The reduced-order error dynamic system is adopted to form bilinear matrix inequalities (BMIs), then an iterative LMI algorithm is developed to solve the BMIs. The numerical example and experimental verification demonstrate that the proposed algorithm can successfully detect and isolate structural damage in the presence of measurement noise

  12. Effect of reinforcement on the cutting forces while machining metal matrix composites–An experimental approach

    Directory of Open Access Journals (Sweden)

    Ch. Shoba

    2015-12-01

    Full Text Available Hybrid metal matrix composites are of great interest for researchers in recent years, because of their attractive superior properties over traditional materials and single reinforced composites. The machinability of hybrid composites becomes vital for manufacturing industries. The need to study the influence of process parameters on the cutting forces in turning such hybrid composites under a dry environment is essentially required. In the present study, the influence of machining parameters, e.g. cutting speed, feed and depth of cut, on the cutting force components, namely feed force (Ff), cutting force (Fc), and radial force (Fd), has been investigated. Investigations were performed on 0, 2, 4, 6 and 8 wt% silicon carbide (SiC) and rice husk ash (RHA) reinforced composite specimens. A comparison was made between the reinforced and unreinforced composites. The results proved that all the cutting force components decrease with the increase in the weight percentage of the reinforcement: this was probably due to the dislocation densities generated from the thermal mismatch between the reinforcement and the matrix. Experimental evidence also showed that built-up edge (BUE) is formed during machining of low percentage reinforced composites at high speed and high depth of cut. The formation of BUE was captured by SEM, therefore confirming the result. The decrease of cutting force components with lower cutting speed and higher feed and depth of cut was also highlighted. The related mechanisms are explained and presented.

  13. Current state of knowledge when it comes to consumer exposure to nanomaterial embedded in a solid matrix

    DEFF Research Database (Denmark)

    Mackevica, Aiga; Hansen, Steffen Foss

    2015-01-01

    Little is known about consumer exposure to engineered nanomaterials (ENMs) stemming from NM-containing consumer products. Here, we focus especially on studies that have investigated the release of ENMs from consumer products, investigating to what extent the information in the open literature can be used to fulfill the requirements outlined in the European chemical legislation, REACH. In total, we have identified about 75 publications of relevance and the number of publications is increasing every year. The most studied materials include silver and titanium dioxide NPs, CNTs and SiO2. If reported ... form. For studies that report enough information, we developed potential exposure scenarios and derived exposure estimates according to REACH R.16 using the Tier 1 equations for consumer exposure estimation and Tier 1 tools i.e. ECETOX TRA and Consexpo. In general, we find that the information and data...

  14. Quantification of uncertainties in turbulence modeling: A comparison of physics-based and random matrix theoretic approaches

    International Nuclear Information System (INIS)

    Wang, Jian-Xun; Sun, Rui; Xiao, Heng

    2016-01-01

    Highlights: • Compared physics-based and random matrix methods to quantify RANS model uncertainty. • Demonstrated applications of both methods in channel flow over periodic hills. • Examined the amount of information introduced in the physics-based approach. • Discussed implications to modeling turbulence in both near-wall and separated regions. - Abstract: Numerical models based on Reynolds-Averaged Navier-Stokes (RANS) equations are widely used in engineering turbulence modeling. However, the RANS predictions have large model-form uncertainties for many complex flows, e.g., those with non-parallel shear layers or strong mean flow curvature. Quantification of these large uncertainties originating from the modeled Reynolds stresses has attracted attention in the turbulence modeling community. Recently, a physics-based Bayesian framework for quantifying model-form uncertainties has been proposed with successful applications to several flows. Nonetheless, how to specify proper priors without introducing unwarranted, artificial information remains challenging to the current form of the physics-based approach. Another recently proposed method based on random matrix theory provides the prior distributions with maximum entropy, which is an alternative for model-form uncertainty quantification in RANS simulations. This method has better mathematical rigor and provides the most non-committal prior distributions without introducing artificial constraints. On the other hand, the physics-based approach has the advantages of being more flexible to incorporate available physical insights. In this work, we compare and discuss the advantages and disadvantages of the two approaches on model-form uncertainty quantification. In addition, we utilize the random matrix theoretic approach to assess and possibly improve the specification of priors used in the physics-based approach. The comparison is conducted through a test case using a canonical flow, the flow past

  15. An enhanced matrix-free edge-based finite volume approach to model structures

    CSIR Research Space (South Africa)

    Suliman, Ridhwaan

    2010-01-01

    Full Text Available application to a number of test-cases. As will be demonstrated, the finite volume approach exhibits distinct advantages over the Q4 finite element formulation. This provides an alternative approach to the analysis of solid mechanics and allows...

  16. An NDE Approach for Characterizing Quality Problems in Polymer Matrix Composites

    Science.gov (United States)

    Roth, Don J.; Baaklini, George Y.; Sutter, James K.; Bodis, James R.; Leonhardt, Todd A.; Crane, Elizabeth A.

    1994-01-01

    Polymer matrix composite (PMC) materials are periodically identified that appear optically uniform but contain a higher than normal level of global nonuniformity, as indicated by preliminary ultrasonic scanning. One such panel was thoroughly examined by nondestructive evaluation (NDE) and destructive methods to quantitatively characterize the nonuniformity. The NDE analysis of the panel was complicated by the fact that the panel was not uniformly thick. Mapping of ultrasonic velocity across a region of the panel in conjunction with an error analysis was necessary to (1) characterize properly the porosity gradient that was discovered during destructive analyses and (2) account for the thickness variation effects. Based on this study, a plan for future NDE characterization of PMCs is presented to the PMC community.

  17. Transfer matrix approach for the Kerr and Faraday rotation in layered nanostructures.

    Science.gov (United States)

    Széchenyi, Gábor; Vigh, Máté; Kormányos, Andor; Cserti, József

    2016-09-21

    To study the optical rotation of the polarization of light incident on multilayer systems consisting of atomically thin conductors and dielectric multilayers we present a general method based on transfer matrices. The transfer matrix of the atomically thin conducting layer is obtained using the Maxwell equations. We derive expressions for the Kerr (Faraday) rotation angle and for the ellipticity of the reflected (transmitted) light as a function of the incident angle and polarization of the light. The method is demonstrated by calculating the Kerr (Faraday) angle for bilayer graphene in the quantum anomalous Hall state placed on the top of dielectric multilayers. The optical conductivity of the bilayer graphene is calculated in the framework of a four-band model.

  18. Transfer matrix approach for the Kerr and Faraday rotation in layered nanostructures

    International Nuclear Information System (INIS)

    Széchenyi, Gábor; Vigh, Máté; Cserti, József; Kormányos, Andor

    2016-01-01

    To study the optical rotation of the polarization of light incident on multilayer systems consisting of atomically thin conductors and dielectric multilayers we present a general method based on transfer matrices. The transfer matrix of the atomically thin conducting layer is obtained using the Maxwell equations. We derive expressions for the Kerr (Faraday) rotation angle and for the ellipticity of the reflected (transmitted) light as a function of the incident angle and polarization of the light. The method is demonstrated by calculating the Kerr (Faraday) angle for bilayer graphene in the quantum anomalous Hall state placed on the top of dielectric multilayers. The optical conductivity of the bilayer graphene is calculated in the framework of a four-band model. (paper)
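
    The matrix bookkeeping behind such calculations can be sketched for the much simpler case of a purely dielectric stack at normal incidence (no atomically thin conductor and no magneto-optics, so no Kerr or Faraday rotation appears); the layer indices and thicknesses below are illustrative, not taken from the papers.

        import numpy as np

        def stack_reflectance(n_layers, d_layers, n_in, n_sub, wavelength):
            """Reflectance of a dielectric multilayer at normal incidence using
            the standard 2x2 characteristic (transfer) matrix method."""
            M = np.eye(2, dtype=complex)
            for n, d in zip(n_layers, d_layers):
                delta = 2.0 * np.pi * n * d / wavelength
                layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                  [1j * n * np.sin(delta), np.cos(delta)]])
                M = M @ layer
            B, C = M @ np.array([1.0, n_sub])
            r = (n_in * B - C) / (n_in * B + C)
            return abs(r) ** 2

        # Quarter-wave Bragg mirror at 600 nm: 8 periods of high/low index layers.
        lam = 600.0
        nH, nL = 2.3, 1.45
        layers = [nH, nL] * 8
        thick = [lam / (4 * nH), lam / (4 * nL)] * 8
        print(stack_reflectance(layers, thick, n_in=1.0, n_sub=1.5, wavelength=lam))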

  19. The Public Health Exposome: A Population-Based, Exposure Science Approach to Health Disparities Research

    Science.gov (United States)

    Juarez, Paul D.; Matthews-Juarez, Patricia; Hood, Darryl B.; Im, Wansoo; Levine, Robert S.; Kilbourne, Barbara J.; Langston, Michael A.; Al-Hamdan, Mohammad Z.; Crosson, William L.; Estes, Maurice G.; Estes, Sue M.; Agboto, Vincent K.; Robinson, Paul; Wilson, Sacoby; Lichtveld, Maureen Y.

    2014-01-01

    The lack of progress in reducing health disparities suggests that new approaches are needed if we are to achieve meaningful, equitable, and lasting reductions. Current scientific paradigms do not adequately capture the complexity of the relationships between environment, personal health and population level disparities. The public health exposome is presented as a universal exposure tracking framework for integrating complex relationships between exogenous and endogenous exposures across the lifespan from conception to death. It uses a social-ecological framework that builds on the exposome paradigm for conceptualizing how exogenous exposures “get under the skin”. The public health exposome approach has led our team to develop a taxonomy and bioinformatics infrastructure to integrate health outcomes data with thousands of sources of exogenous exposure, organized in four broad domains: natural, built, social, and policy environments. With the input of a transdisciplinary team, we have borrowed and applied the methods, tools and terms from various disciplines to measure the effects of environmental exposures on personal and population health outcomes and disparities, many of which may not manifest until many years later. As is customary with a paradigm shift, this approach has far reaching implications for research methods and design, analytics, community engagement strategies, and research training. PMID:25514145

  20. The Public Health Exposome: A Population-Based, Exposure Science Approach to Health Disparities Research

    Directory of Open Access Journals (Sweden)

    Paul D. Juarez

    2014-12-01

    Full Text Available The lack of progress in reducing health disparities suggests that new approaches are needed if we are to achieve meaningful, equitable, and lasting reductions. Current scientific paradigms do not adequately capture the complexity of the relationships between environment, personal health and population level disparities. The public health exposome is presented as a universal exposure tracking framework for integrating complex relationships between exogenous and endogenous exposures across the lifespan from conception to death. It uses a social-ecological framework that builds on the exposome paradigm for conceptualizing how exogenous exposures “get under the skin”. The public health exposome approach has led our team to develop a taxonomy and bioinformatics infrastructure to integrate health outcomes data with thousands of sources of exogenous exposure, organized in four broad domains: natural, built, social, and policy environments. With the input of a transdisciplinary team, we have borrowed and applied the methods, tools and terms from various disciplines to measure the effects of environmental exposures on personal and population health outcomes and disparities, many of which may not manifest until many years later. As is customary with a paradigm shift, this approach has far reaching implications for research methods and design, analytics, community engagement strategies, and research training.

  1. Chemical Exposure Assessment Program at Los Alamos National Laboratory: A risk based approach

    International Nuclear Information System (INIS)

    Stephenson, D.J.

    1996-01-01

    The University of California Contract and DOE Order 5480.10 require that Los Alamos National Laboratory (LANL) perform health hazard assessments/inventories of all employee workplaces. In response, LANL has developed the Chemical Exposure Assessment Program. This program provides a systematic risk-based approach to anticipation, recognition, evaluation and control of chemical workplace exposures. Program implementation focuses resources on exposures with the highest risks for causing adverse health effects. Implementation guidance includes procedures for basic characterization, qualitative risk assessment, quantitative validation, and recommendations and reevaluation. Each component of the program is described. It is shown how a systematic method of assessment improves documentation, retrieval, and use of generated exposure information

  2. Task to Training Matrix Design for Decommissioning Engineer on the basis of Systematic Approach to Training Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Kwak, Jeong Keun [KHNP, Ulsan (Korea, Republic of)

    2016-10-15

    Before the Chernobyl accident, the Three Mile Island (TMI) accident was the most severe in nuclear history. To address the disclosed and potential causes of nuclear accidents, more than one hundred countermeasures were proposed by the United States Nuclear Regulatory Commission (USNRC). Among these recommendations, one concerned training: the Systematic Approach to Training (SAT), and this event marked the worldwide introduction of the SAT methodology. In Korea, the Kori Unit 1 NPP is scheduled to be shut down in June 2017, and it will be the country's first experience of NPP decommissioning. The present study aims to establish a concrete training foundation for NPP decommissioning engineers based on the SAT methodology, in particular the Task to Training Matrix (TTM). The objective of this paper is to organize a TTM for decommissioning engineers on the basis of SAT. To this end, eighteen tasks are identified through a Job and Task Analysis (JTA) process. For the settlement of the TTM, various data are then determined for each task, such as elements, conditions, standards, knowledge and skills, learning objectives and training settings. In the Korean nuclear industry, the SAT methodology has been the unwavering principle for training since the export of NPPs to the UAE.

  3. Authorship matrix: a rational approach to quantify individual contributions and responsibilities in multi-author scientific articles.

    Science.gov (United States)

    Clement, T Prabhakar

    2014-06-01

    We propose a rational method for addressing an important question: who deserves to be an author of a scientific article? We review various contentious issues associated with this question and recommend that the scientific community should view authorship in terms of contributions and responsibilities, rather than credits. We propose a new paradigm that conceptually divides a scientific article into four basic elements: ideas, work, writing, and stewardship. We employ these four fundamental elements to modify the well-known International Committee of Medical Journal Editors (ICMJE) authorship guidelines. The modified ICMJE guidelines are then used as the basis to develop an approach to quantify individual contributions and responsibilities in multi-author articles. The outcome of the approach is an authorship matrix, which can be used to answer several nagging questions related to authorship.
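
    A toy sketch of how such a matrix might be tabulated is shown below; the authors, percentages and equal weighting of the four elements are hypothetical illustrations, not values or rules prescribed by the article.

        import pandas as pd

        # Rows: the four basic elements of an article; columns: authors.
        # Each row records the share (%) of that element contributed by each
        # author and sums to 100 for the article as a whole.
        matrix = pd.DataFrame(
            {"Author A": [60, 40, 70, 80],
             "Author B": [30, 45, 20, 10],
             "Author C": [10, 15, 10, 10]},
            index=["ideas", "work", "writing", "stewardship"])

        assert (matrix.sum(axis=1) == 100).all()

        # One possible overall contribution score: equal weight to the four
        # elements (the weighting scheme itself is a judgement call).
        print(matrix.mean(axis=0).sort_values(ascending=False))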

  4. Comparison of the iterated equation of motion approach and the density matrix formalism for the quantum Rabi model

    Science.gov (United States)

    Kalthoff, Mona; Keim, Frederik; Krull, Holger; Uhrig, Götz S.

    2017-05-01

    The density matrix formalism and the equation of motion approach are two semi-analytical methods that can be used to compute the non-equilibrium dynamics of correlated systems. While for a bilinear Hamiltonian both formalisms yield the exact result, for any non-bilinear Hamiltonian a truncation is necessary. Due to the fact that the commonly used truncation schemes differ for these two methods, the accuracy of the obtained results depends significantly on the chosen approach. In this paper, both formalisms are applied to the quantum Rabi model. This allows us to compare the approximate results and the exact dynamics of the system and enables us to discuss the accuracy of the approximations as well as the advantages and the disadvantages of both methods. It is shown to which extent the results fulfill physical requirements for the observables and which properties of the methods lead to unphysical results.

  5. A geographic approach to modelling human exposure to traffic air pollution using GIS. Separate appendix report

    Energy Technology Data Exchange (ETDEWEB)

    Solvang Jensen, S.

    1998-10-01

    A new exposure model has been developed that is based on a physical, single-media (air) and single-source (traffic) microenvironmental approach that estimates traffic-related exposures geographically with the postal address as exposure indicator. The following microenvironments may be considered: residence, workplace and street (road user exposure). The model estimates outdoor levels for selected ambient air pollutants (benzene, CO, NO{sub 2} and O{sub 3}). The influence of outdoor air pollution on indoor levels can be estimated using average indoor/outdoor (I/O) ratios. The model has a very high spatial resolution (the address), a high temporal resolution (one hour) and may be used to predict past, present and future exposures. The model may be used for impact assessment of control measures provided that the changes to the model inputs are obtained. The exposure model takes advantage of a standard Geographic Information System (GIS) (ArcView and Avenue) for generation of inputs and for visualisation of input and output, and uses available digital maps, national administrative registers and a local traffic database, and the Danish Operational Street Pollution Model (OSPM). The exposure model presents a new approach to exposure determination by integration of digital maps, administrative registers, a street pollution model and GIS. New methods have been developed to generate the required input parameters for the OSPM model: to geocode buildings using cadastral maps and address points; to automatically generate street configuration data based on digital maps, the BBR and GIS; to predict the temporal variation in traffic and related parameters; and to provide hourly background levels for the OSPM model. (EG)

  6. A geographic approach to modelling human exposure to traffic air pollution using GIS

    Energy Technology Data Exchange (ETDEWEB)

    Solvang Jensen, S.

    1998-10-01

    A new exposure model has been developed that is based on a physical, single-media (air) and single-source (traffic) microenvironmental approach that estimates traffic-related exposures geographically with the postal address as exposure indicator. The following microenvironments may be considered: residence, workplace and street (road user exposure). The model estimates outdoor levels for selected ambient air pollutants (benzene, CO, NO{sub 2} and O{sub 3}). The influence of outdoor air pollution on indoor levels can be estimated using average indoor/outdoor (I/O) ratios. The model has a very high spatial resolution (the address), a high temporal resolution (one hour) and may be used to predict past, present and future exposures. The model may be used for impact assessment of control measures provided that the changes to the model inputs are obtained. The exposure model takes advantage of a standard Geographic Information System (GIS) (ArcView and Avenue) for generation of inputs and for visualisation of input and output, and uses available digital maps, national administrative registers and a local traffic database, and the Danish Operational Street Pollution Model (OSPM). The exposure model presents a new approach to exposure determination by integration of digital maps, administrative registers, a street pollution model and GIS. New methods have been developed to generate the required input parameters for the OSPM model: to geocode buildings using cadastral maps and address points; to automatically generate street configuration data based on digital maps, the BBR and GIS; to predict the temporal variation in traffic and related parameters; and to provide hourly background levels for the OSPM model. (EG) 109 refs.

  7. Aberrant approach-avoidance conflict resolution following repeated cocaine pre-exposure.

    Science.gov (United States)

    Nguyen, David; Schumacher, Anett; Erb, Suzanne; Ito, Rutsuko

    2015-10-01

    Addiction is characterized by persistence to seek drug reinforcement despite negative consequences. Drug-induced aberrations in approach and avoidance processing likely facilitate the sustenance of addiction pathology. Currently, the effects of repeated drug exposure on the resolution of conflicting approach and avoidance motivational signals have yet to be thoroughly investigated. The present study sought to investigate the effects of cocaine pre-exposure on conflict resolution using novel approach-avoidance paradigms. We used a novel mixed-valence conditioning paradigm to condition cocaine-pre-exposed rats to associate visuo-tactile cues with either the delivery of sucrose reward or shock punishment in the arms in which the cues were presented. Following training, exploration of an arm containing a superimposition of the cues was assessed as a measure of conflict resolution behavior. We also used a mixed-valence runway paradigm wherein cocaine-pre-exposed rats traversed an alleyway toward a goal compartment to receive a pairing of sucrose reward and shock punishment. Latency to enter the goal compartment across trials was taken as a measure of motivational conflict. Our results reveal that cocaine pre-exposure attenuated learning for the aversive cue association in our conditioning paradigm and enhanced preference for mixed-valence stimuli in both paradigms. Repeated cocaine pre-exposure allows appetitive approach motivations to gain greater influence over behavioral output in the context of motivational conflict, due to aberrant positive and negative incentive motivational processing.

  8. A tiered asthma hazard characterization and exposure assessment approach for evaluation of consumer product ingredients.

    Science.gov (United States)

    Maier, Andrew; Vincent, Melissa J; Parker, Ann; Gadagbui, Bernard K; Jayjock, Michael

    2015-12-01

    Asthma is a complex syndrome with significant consequences for those affected. The number of individuals affected is growing, although the reasons for the increase are uncertain. Ensuring the effective management of potential exposures follows from substantial evidence that exposure to some chemicals can increase the likelihood of asthma responses. We have developed a safety assessment approach tailored to the screening of asthma risks from residential consumer product ingredients as a proactive risk management tool. Several key features of the proposed approach advance the assessment resources often used for asthma issues. First, a quantitative health benchmark for asthma or related endpoints (irritation and sensitization) is provided that extends qualitative hazard classification methods. Second, a parallel structure is employed to include dose-response methods for asthma endpoints and methods for scenario specific exposure estimation. The two parallel tracks are integrated in a risk characterization step. Third, a tiered assessment structure is provided to accommodate different amounts of data for both the dose-response assessment (i.e., use of existing benchmarks, hazard banding, or the threshold of toxicological concern) and exposure estimation (i.e., use of empirical data, model estimates, or exposure categories). Tools building from traditional methods and resources have been adapted to address specific issues pertinent to asthma toxicology (e.g., mode-of-action and dose-response features) and the nature of residential consumer product use scenarios (e.g., product use patterns and exposure durations). A case study for acetic acid as used in various sentinel products and residential cleaning scenarios was developed to test the safety assessment methodology. In particular, the results were used to refine and verify relationships among tiered approaches such that each lower data tier in the approach provides a similar or greater margin of safety for a given
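
    The risk-characterisation arithmetic referred to above can be sketched in a few lines of Python; the benchmark and exposure values are made-up placeholders, not figures from the acetic acid case study.

        def margin_of_safety(benchmark_mg_m3, exposure_mg_m3):
            """Risk characterisation step: MOS = health benchmark / estimated exposure.
            MOS >= 1 indicates the screening criterion is met at this tier."""
            return benchmark_mg_m3 / exposure_mg_m3

        # Hypothetical Tier 1 screen for a cleaning-product ingredient: a modelled
        # event concentration compared against a placeholder irritation benchmark.
        benchmark = 2.5   # mg/m3, illustrative benchmark for irritation/asthma endpoints
        exposure = 0.4    # mg/m3, illustrative near-field event concentration
        mos = margin_of_safety(benchmark, exposure)
        print(f"MOS = {mos:.1f} -> "
              f"{'acceptable at this tier' if mos >= 1 else 'refine at a higher tier'}")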

  9. Long-term dietary exposure to lead in young European children: Comparing a pan-European approach with a national exposure assessment

    DEFF Research Database (Denmark)

    Boon, P.E.; Te Biesebeek, J.D.; van Klaveren, J.D.

    2012-01-01

    Long-term dietary exposures to lead in young children were calculated by combining food consumption data of 11 European countries categorised using harmonised broad food categories with occurrence data on lead from different Member States (pan-European approach). The results of the assessment...... in children living in the Netherlands were compared with a long-term lead intake assessment in the same group using Dutch lead concentration data and linking the consumption and concentration data at the highest possible level of detail. Exposures obtained with the pan-European approach were higher than...... the national exposure calculations. For both assessments cereals contributed most to the exposure. The lower dietary exposure in the national study was due to the use of lower lead concentrations and a more optimal linkage of food consumption and concentration data. When a pan-European approach, using...

  10. Density matrix-based time-dependent configuration interaction approach to ultrafast spin-flip dynamics

    Science.gov (United States)

    Wang, Huihui; Bokarev, Sergey I.; Aziz, Saadullah G.; Kühn, Oliver

    2017-08-01

    Recent developments in attosecond spectroscopy yield access to the correlated motion of electrons on their intrinsic timescales. Spin-flip dynamics is usually considered in the context of valence electronic states, where spin-orbit coupling is weak and processes related to the electron spin are usually driven by nuclear motion. However, for core-excited states, where the core-hole has a nonzero angular momentum, spin-orbit coupling is strong enough to drive spin-flips on a much shorter timescale. Using density matrix-based time-dependent restricted active space configuration interaction including spin-orbit coupling, we address an unprecedentedly short spin-crossover for the example of L-edge (2p→3d) excited states of a prototypical Fe(II) complex. This process occurs on a timescale, which is faster than that of Auger decay (∼4 fs) treated here explicitly. Modest variations of carrier frequency and pulse duration can lead to substantial changes in the spin-state yield, suggesting its control by soft X-ray light.

  11. Performance of hybrid nano-micro reinforced mg metal matrix composites brake calliper: simulation approach

    Science.gov (United States)

    Fatchurrohman, N.; Chia, S. T.

    2017-10-01

    Most commercial vehicles use brake calliper made of grey cast iron (GCI) which possesses heavy weight. This contributes to the total weight of the vehicle which can lead to higher fuel consumption. Another major problem is GCI calliper tends to deflect during clamping action, known as “bending of bridge”. This will result in extended pedal travel. Magnesium metal matrix composites (Mg-MMC) has a potential application in the automotive industry since it having a lower density, higher strength and very good modulus of elasticity as compared to GCI. This paper proposed initial development of hybrid Mg-MMC brake calliper. This was achieved by analyzing the performance of hybrid nano-micro reinforced Mg-MMC and comparing with the conventional GCI brake calliper. It was performed using simulation in ANSYS, a finite element analysis (FEA) software. The results show that hybrid Mg-MMC has better performance in terms of reduction the weight of the brake calliper, reduction in total deformation/deflection and better ability to withstand equivalent elastic strain.

  12. SURVEY DESIGN FOR SPECTRAL ENERGY DISTRIBUTION FITTING: A FISHER MATRIX APPROACH

    International Nuclear Information System (INIS)

    Acquaviva, Viviana; Gawiser, Eric; Bickerton, Steven J.; Grogin, Norman A.; Guo Yicheng; Lee, Seong-Kook

    2012-01-01

    The spectral energy distribution (SED) of a galaxy contains information on the galaxy's physical properties, and multi-wavelength observations are needed in order to measure these properties via SED fitting. In planning these surveys, optimization of the resources is essential. The Fisher Matrix (FM) formalism can be used to quickly determine the best possible experimental setup to achieve the desired constraints on the SED-fitting parameters. However, because it relies on the assumption of a Gaussian likelihood function, it is in general less accurate than other slower techniques that reconstruct the probability distribution function (PDF) from the direct comparison between models and data. We compare the uncertainties on SED-fitting parameters predicted by the FM to the ones obtained using the more thorough PDF-fitting techniques. We use both simulated spectra and real data, and consider a large variety of target galaxies differing in redshift, mass, age, star formation history, dust content, and wavelength coverage. We find that the uncertainties reported by the two methods agree within a factor of two in the vast majority (∼90%) of cases. If the age determination is uncertain, the top-hat prior in age used in PDF fitting to prevent each galaxy from being older than the universe needs to be incorporated in the FM, at least approximately, before the two methods can be properly compared. We conclude that the FM is a useful tool for astronomical survey design.
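
    As a concrete illustration of the forecasting step described above, the sketch below computes Fisher-matrix parameter uncertainties under the Gaussian-likelihood assumption for a linearized model; the band sensitivities and flux errors are invented for the example and are not taken from the paper.

```python
import numpy as np

def fisher_forecast(jacobian, cov):
    """Fisher matrix F = J^T C^-1 J for a Gaussian likelihood; the forecast
    1-sigma parameter uncertainties are sqrt(diag(F^-1))."""
    F = jacobian.T @ np.linalg.inv(cov) @ jacobian
    return np.sqrt(np.diag(np.linalg.inv(F)))

# Toy survey: 5 photometric bands, 2 SED parameters (say, age and dust).
# J holds assumed flux sensitivities dF_band/dparam; C holds 5% flux errors.
J = np.array([[0.8, 0.2],
              [0.6, 0.4],
              [0.5, 0.7],
              [0.3, 0.9],
              [0.2, 1.0]])
C = np.diag([0.05 ** 2] * 5)
print(fisher_forecast(J, C))  # forecast uncertainties on the two parameters
```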

  13. An introduction to the indirect exposure assessment approach: modeling human exposure using microenvironmental measurements and the recent National Human Activity Pattern Survey.

    Science.gov (United States)

    Klepeis, N E

    1999-01-01

    Indirect exposure approaches offer a feasible and accurate method for estimating population exposures to indoor pollutants, including environmental tobacco smoke (ETS). In an effort to make the indirect exposure assessment approach more accessible to people in the health and risk assessment fields, this paper provides examples using real data from (a) a week-long personal carbon monoxide monitoring survey conducted by the author; and (b) the 1992 to 1994 National Human Activity Pattern Survey (NHAPS) for the United States. The indirect approach uses measurements of exposures in specific microenvironments (e.g., homes, bars, offices), validated microenvironmental models (based on the mass balance equation), and human activity pattern data obtained from questionnaires to predict frequency distributions of exposure for entire populations. This approach requires fewer resources than the direct approach to exposure assessment, for which the distribution of monitors to a representative sample of a given population is necessary. In the indirect exposure assessment approach, average microenvironmental concentrations are multiplied by the total time spent in each microenvironment to give total integrated exposure. By assuming that the concentrations encountered in each of 10 location categories are the same for different members of the U.S. population (i.e., the NHAPS respondents), the hypothetical contribution that ETS makes to the average 24-hr respirable suspended particle exposure for Americans working their main job is calculated in this paper to be 18 microg/m3. This article is an illustrative review and does not contain an actual exposure assessment or model validation. PMID:10350522
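
    The core bookkeeping of the indirect approach (microenvironmental concentration multiplied by time spent, summed over microenvironments) is easy to write down. Below is a minimal sketch with invented concentrations and a hypothetical 24-hour time budget.

```python
# Hypothetical microenvironmental concentrations (ug/m3) and a 24-h time budget (hours).
micro_conc = {"home": 15.0, "office": 8.0, "bar": 60.0, "outdoors": 5.0}
time_spent = {"home": 14.0, "office": 8.0, "bar": 1.0, "outdoors": 1.0}

integrated = sum(micro_conc[m] * time_spent[m] for m in micro_conc)  # ug/m3 * h
time_weighted_average = integrated / sum(time_spent.values())        # ug/m3
print(f"24-h time-weighted average exposure: {time_weighted_average:.1f} ug/m3")
```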

  14. Patient radiation exposure in right versus left trans-radial approach for coronary procedures

    Energy Technology Data Exchange (ETDEWEB)

    Rigattieri, Stefano; Di Russo, Cristian; Cera, Maria; Fedele, Silvio; Sciahbasi, Alessandro [Interventional Cardiology Unit, Sandro Pertini Hospital, Rome (Italy); Pugliese, Francesco Rocco [Emergency Department Sandro Pertini Hospital, Rome (Italy)

    2015-01-15

    Objectives: The aim of this study was to compare radiation exposure, assessed by dose-area product (DAP), in right trans-radial approach (RR) versus left trans-radial approach (LR) for coronary procedures. Background: In LR the catheter course is more similar to trans-femoral approach, thus allowing an easier negotiation of coronary ostia which, in turn, might translate into reduced fluoroscopy time (FT) and radiation exposure as compared to RR. Methods: We retrospectively selected diagnostic and interventional procedures (PCI) performed by RR or LR at our center from May 2009 to May 2014. We only included in the analysis the procedures in which DAP values were available. Results: We analyzed 1464 procedures, 1175 of which performed by RR (80.3%) and 289 by LR (19.7%). Median DAP values were significantly higher in RR as compared to LR for diagnostic and interventional procedures (4482 vs. 3540 cGy·cm² and 11523 vs. 10086 cGy·cm², respectively; p < 0.05). No significant differences were observed in FT and in contrast volume (CV). In the propensity-matched cohort, consisting of 269 procedures for each group, no significant differences between LR and RR were observed in median DAP values for both diagnostic and interventional procedures (3990 vs. 3542 cGy·cm² and 9964 vs. 10216 cGy·cm², respectively; p = ns); FT and CV were also similar. At multiple linear regression analysis laterality of trans-radial approach was not associated with DAP. Conclusions: In an experienced trans-radial center LR is not associated with a reduction in radiation exposure, FT or CV as compared to RR. - Highlights: • Right trans-radial approach is by far more commonly used than left trans-radial approach. • Left trans-radial approach has the advantage of an easier catheter manipulation, more similar to trans-femoral approach. • This could reduce fluoroscopy time and radiation exposure. • We conducted a retrospective study to investigate patient radiation

  15. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noë l; Lastovetsky, Alexey

    2014-01-01

    -scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel
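
    The record's abstract is truncated, so the following is only a generic, serial illustration of the blocking idea behind hierarchical parallel matrix multiplication: the product is split into sub-block products that, in a message-passing setting, could be assigned to groups of processes. It is not the authors' algorithm.

```python
import numpy as np

def blocked_matmul(A, B, block=64):
    """Compute C = A @ B block by block; each (i, j, p) block product is an
    independent unit of work that a hierarchical scheme could distribute."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(0, n, block):
        for j in range(0, m, block):
            for p in range(0, k, block):
                C[i:i + block, j:j + block] += (
                    A[i:i + block, p:p + block] @ B[p:p + block, j:j + block]
                )
    return C

A, B = np.random.rand(200, 150), np.random.rand(150, 120)
assert np.allclose(blocked_matmul(A, B), A @ B)
```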

  16. Evaluation of 2,4-dichlorophenol exposure of Japanese medaka, Oryzias latipes, using a metabolomics approach.

    Science.gov (United States)

    Kokushi, Emiko; Shintoyo, Aoi; Koyama, Jiro; Uno, Seiichi

    2017-12-01

    In this study, the metabolic effects of waterborne exposure of medaka (Oryzias latipes) to nominal concentrations of 20 (L group) and 2000 μg/L (H group) 2,4-dichlorophenol (DCP) were examined using a gas chromatography/mass spectrometry (GC/MS) metabolomics approach. A principal component analysis (PCA) separated the L, H, and control groups along PC1 to explain the toxic effects of DCP at 24 h of exposure. Furthermore, the L and H groups were separated along PC1 at 96 h on the PCA score plots. These results suggest that the effects of DCP depended on exposure concentration and time. Changes in tricarboxylic acid cycle metabolites suggested that fish exposed to 2,4-DCP require more energy to metabolize and eliminate DCP, particularly at 96 h of exposure. A time-dependent response in the fish exposed to DCP was observed in the GC/MS data, suggesting that the higher DCP concentration had greater effects at 24 h than those observed in response to the lower concentration. In addition, several essential amino acids (arginine, histidine, lysine, isoleucine, leucine, methionine, phenylalanine, threonine, tryptophan, and valine) decreased after DCP exposure in the H group, and the starvation condition combined with high-concentration DCP exposure could consume excess energy derived from amino acids.
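
    The group separation described above rests on an ordinary principal component analysis of the metabolite peak-area table. A purely illustrative sketch (simulated peak areas, not the study's GC/MS data) of producing such score values:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Simulated peak-area matrix: rows = fish, columns = metabolites.
rng = np.random.default_rng(0)
control = rng.normal(10.0, 1.0, size=(8, 50))
low = rng.normal(11.0, 1.0, size=(8, 50))   # stand-in for the L group
high = rng.normal(13.0, 1.0, size=(8, 50))  # stand-in for the H group
X = np.vstack([control, low, high])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
# A PC1 gradient across control, L and H samples would indicate a
# concentration-dependent effect of the exposure.
print(scores[:, 0].round(2))
```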

  17. A Systematic Approach for Obtaining Performance on Matrix-Like Operations

    Science.gov (United States)

    Veras, Richard Michael

    Scientific Computation plays a critical role in the scientific process because it allows us to ask complex queries and test predictions that would otherwise be infeasible to perform experimentally. Because of its power, Scientific Computing has helped drive advances in many fields ranging from Engineering and Physics to Biology and Sociology to Economics and Drug Development and even to Machine Learning and Artificial Intelligence. Common among these domains is the desire for timely computational results, thus a considerable amount of human expert effort is spent towards obtaining performance for these scientific codes. However, this is no easy task because each of these domains presents its own unique set of challenges to software developers, such as domain-specific operations, structurally complex data and ever-growing datasets. Compounding these problems is the myriad of constantly changing, complex and unique hardware platforms that an expert must target. Unfortunately, an expert is typically forced to reproduce their effort across multiple problem domains and hardware platforms. In this thesis, we demonstrate the automatic generation of expert-level high-performance scientific codes for Dense Linear Algebra (DLA), Structured Mesh (Stencil), Sparse Linear Algebra and Graph Analytics. In particular, this thesis seeks to address the issue of obtaining performance on many complex platforms for a certain class of matrix-like operations that span many scientific, engineering and social fields. We do this by automating a method used for obtaining high performance in DLA and extending it to structured, sparse and scale-free domains. We argue that it is the use of the underlying structure found in the data from these domains that enables this process. Thus, obtaining performance for most operations does not occur in isolation of the data being operated on, but instead depends significantly on the structure of the data.

  18. Hybrid Air Quality Modeling Approach for use in the Near-road Exposures to Urban air pollutant Study (NEXUS)

    Science.gov (United States)

    The paper presents a hybrid air quality modeling approach and its application in NEXUS in order to provide spatial and temporally varying exposure estimates and identification of the mobile source contribution to the total pollutant exposure. Model-based exposure metrics, associa...

  19. Violence in context: Embracing an ecological approach to violent media exposure.

    Science.gov (United States)

    Glackin, Erin; Gray, Sarah A O

    2016-12-01

    This commentary expands on Anderson, Bushman, Donnerstein, Hummer, and Warburton's agenda for minimizing the impacts of violent media exposure (VME) on youth aggression. We argue that in order to effectively intervene in the development of aggression and other maladaptive traits, researchers and policymakers should take an ecological, developmental psychopathology approach to understanding children's exposure to VME within developmental, relational, environmental, and cultural contexts. Such a framework holds the most promise for identifying at-risk groups, establishing targets of intervention, and testing mechanisms of change.

  20. A Quantitative Exposure Planning Tool for Surgical Approaches to the Sacroiliac Joint.

    Science.gov (United States)

    Phelps, Kevin D; Ming, Bryan W; Fox, Wade E; Bellamy, Nelly; Sims, Stephen H; Karunakar, Madhav A; Hsu, Joseph R

    2016-06-01

    To aid in surgical planning by quantifying and comparing the osseous exposure between the anterior and posterior approaches to the sacroiliac joint. Anterior and posterior approaches were performed on 12 sacroiliac joints in 6 fresh-frozen torsos. Visual and palpable access to relevant surgical landmarks was recorded. Calibrated digital photographs were taken of each approach and analyzed using Image J. The average surface areas of exposed bone were 44 and 33 cm² for the anterior and posterior approaches, respectively. The anterior iliolumbar ligament footprint could be visualized in all anterior approaches, whereas the posterior aspect could be visualized in all but one posterior approach. The anterior approach provided visual and palpable access to the anterior superior edge of the sacroiliac joint in all specimens, the posterior superior edge in 75% of specimens, and the inferior margin in 25% and 50% of specimens, respectively. The inferior sacroiliac joint was easily visualized and palpated in all posterior approaches, although access to the anterior and posterior superior edges was more limited. The anterior S1 neuroforamen was not visualized with either approach and was more consistently palpated when going posterior (33% vs. 92%). Both anterior and posterior approaches can be used for open reduction of pure sacroiliac dislocations, each with specific areas for assessing reduction. In light of current plate dimensions, fractures more than 2.5 cm lateral to the anterior iliolumbar ligament footprint are amenable to anterior plate fixation, whereas those more medial may be better addressed through a posterior approach.

  1. Structural differences of matrix metalloproteinases with potential implications for inhibitor selectivity examined by the GRID/CPCA approach

    DEFF Research Database (Denmark)

    Terp, Gitte Elgaard; Cruciani, Gabriele; Christensen, Inge Thøger

    2002-01-01

    The matrix metalloproteinases (MMPs) are a family of proteolytic enzymes, which have been the focus of a lot of research in recent years because of their involvement in various disease conditions. In this study, structures of 10 enzymes (MMP1, MMP2, MMP3, MMP7, MMP8, MMP9, MMP12, MMP13, MMP14......, and MMP20) were examined with the intention of highlighting regions that could be potential sites for obtaining selectivity. For this purpose, the GRID/CPCA approach as implemented in GOLPE was used. Counterions were included to take into account the different electrostatic properties of the proteins......, and the GRID calculations were performed, allowing the protein side chains to move in response to interaction with the probes. In the search for selectivity, the MMPs are known to be a very difficult case because the enzymes of this family are very similar. The well-known differences in the S1' pocket were...

  2. Polarization observables in the longitudinal basis for pseudo-scalar meson photoproduction using a density matrix approach

    Energy Technology Data Exchange (ETDEWEB)

    Biplab Dey, Michael E. McCracken, David G. Ireland, Curtis A. Meyer

    2011-05-01

    The complete expression for the intensity in pseudo-scalar meson photoproduction with a polarized beam, target, and recoil baryon is derived using a density matrix approach that offers great economy of notation. A Cartesian basis with spins for all particles quantized along a single direction, the longitudinal beam direction, is used for consistency and clarity in interpretation. A single spin-quantization axis for all particles enables the amplitudes to be written in a manifestly covariant fashion with simple relations to those of the well-known CGLN formalism. Possible sign discrepancies between theoretical amplitude-level expressions and experimentally measurable intensity profiles are dealt with carefully. Our motivation is to provide a coherent framework for coupled-channel partial-wave analysis of several meson photoproduction reactions, incorporating recently published and forthcoming polarization data from Jefferson Lab.

  3. SVD and Hankel matrix based de-noising approach for ball bearing fault detection and its assessment using artificial faults

    Science.gov (United States)

    Golafshan, Reza; Yuce Sanliturk, Kenan

    2016-03-01

    Ball bearings remain one of the most crucial components in industrial machines and due to their critical role, it is of great importance to monitor their conditions under operation. However, due to the background noise in acquired signals, it is not always possible to identify probable faults. This incapability in identifying the faults makes the de-noising process one of the most essential steps in the field of Condition Monitoring (CM) and fault detection. In the present study, Singular Value Decomposition (SVD) and Hankel matrix based de-noising process is successfully applied to the ball bearing time domain vibration signals as well as to their spectrums for the elimination of the background noise and the improvement the reliability of the fault detection process. The test cases conducted using experimental as well as the simulated vibration signals demonstrate the effectiveness of the proposed de-noising approach for the ball bearing fault detection.
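
    A minimal sketch of the generic Hankel/SVD de-noising scheme described above: embed the signal in a Hankel matrix, keep the dominant singular components, and reconstruct by anti-diagonal averaging. The window length, rank and test signal are arbitrary choices for illustration, not the authors' settings.

```python
import numpy as np

def hankel_svd_denoise(signal, window, rank):
    """Hankel-matrix embedding, rank truncation via SVD, and reconstruction
    by averaging along anti-diagonals."""
    n = len(signal)
    rows = n - window + 1
    H = np.array([signal[i:i + window] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    denoised = np.zeros(n)
    counts = np.zeros(n)
    for i in range(rows):
        denoised[i:i + window] += H_low[i]
        counts[i:i + window] += 1
    return denoised / counts

t = np.linspace(0.0, 1.0, 1000)
clean = np.sin(2 * np.pi * 50 * t)            # stand-in for a bearing fault tone
noisy = clean + 0.5 * np.random.randn(t.size)  # background noise
recovered = hankel_svd_denoise(noisy, window=100, rank=2)
```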

  4. Cumulative risk assessment of phthalate exposure of Danish children and adolescents using the hazard index approach

    DEFF Research Database (Denmark)

    Søeborg, T; Frederiksen, H; Andersson, Anna-Maria

    2012-01-01

    Human risk assessment of chemicals is traditionally presented as the ratio between the actual level of exposure and an acceptable level of exposure, with the acceptable level of exposure most often being estimated by appropriate authorities. This approach is generally sound when assessing the risk...... of individual chemicals. However, several chemicals may concurrently target the same receptor, work through the same mechanism or in other ways induce the same effect(s) in the body. In these cases, cumulative risk assessment should be applied. The present study uses biomonitoring data from 129 Danish children...... and adolescents and resulting estimated daily intakes of four different phthalates. These daily intake estimates are used for a cumulative risk assessment with anti-androgenic effects as the endpoint using Tolerable Daily Intake (TDI) values determined by the European Food Safety Authorities (EFSA) or Reference...
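
    The hazard index arithmetic itself is straightforward: each phthalate's estimated daily intake is divided by a reference value (such as an EFSA TDI) to give a hazard quotient, and the quotients are summed. The numbers below are placeholders, not the study's intake estimates or the exact reference values it applied.

```python
# Placeholder daily intakes and reference values, both in ug/kg bw/day.
intake = {"DEHP": 4.0, "DBP": 1.5, "BBzP": 2.0, "DiNP": 10.0}
reference = {"DEHP": 50.0, "DBP": 10.0, "BBzP": 500.0, "DiNP": 150.0}

hazard_quotients = {p: intake[p] / reference[p] for p in intake}
hazard_index = sum(hazard_quotients.values())
print(hazard_quotients)
print("HI =", round(hazard_index, 3))  # HI > 1 flags a potential cumulative risk
```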

  5. The Decision Support Matrix (DSM) Approach to Reducing Risk of Flooding and Water Pollution in Farmed Landscapes

    Science.gov (United States)

    Hewett, Caspar J. M.; Quinn, Paul; Wilkinson, Mark

    2014-05-01

    Intense farming plays a key role in contributing to problems such as increased flood risk, soil erosion and poor water quality. This means that there is great potential for agricultural practitioners to play a major part in reducing multiple risks through better land-use management. Greater understanding by farmers, land managers, practitioners and policy-makers of the ways in which farmed landscapes contribute to risks and the ways in which those risks might be mitigated can be an essential component in improving practice. The Decision Support Matrix (DSM) approach involves the development of a range of visualization and communication tools to help compare the risks associated with different farming practices and explore options to manage runoff. DSMs are simple decision support systems intended for use by the non-expert which combine expert hydrological evidence with local knowledge of runoff patterns. They are developed through direct engagement with stakeholders, ensuring that the examples and language used makes sense to end-users. A key element of the tools is that they show the current conditions of the land and describe extremes of land-use management within a hydrological and agricultural land-management context. The tools include conceptual models of a series of pre-determined runoff scenarios, providing the end-user with a variety of potential land management practices and runoff management options. Visual examples of different farming practices are used to illustrate the impact of good and bad practice on specific problems such as nutrient export or risk of flooding. These show both how current conditions cause problems downstream and how systems are vulnerable to changes in climate and land-use intensification. The level of risk associated with a particular land management option is represented by a mapping on a two- or three-dimensional matrix. Interactive spreadsheet-based tools are developed in which multiple questions allow the user to explore

  6. Xplicit, a novel approach in probabilistic spatiotemporally explicit exposure and risk assessment for plant protection products.

    Science.gov (United States)

    Schad, Thorsten; Schulz, Ralf

    2011-10-01

    The quantification of risk (the likelihood and extent of adverse effects) is a prerequisite in regulatory decision making for plant protection products and is the goal of the Xplicit project. In its present development stage, realism is increased in the exposure assessment (EA), first by using real-world data on, e.g., landscape factors affecting exposure, and second, by taking the variability of key factors into account. Spatial and temporal variability is explicitly addressed. Scale dependencies are taken into account, which allows for risk quantification at different scales, for example, at landscape scale, an overall picture of the potential exposure of nontarget organisms can be derived (e.g., for all off-crop habitats in a given landscape); at local scale, exposure might be relevant to assess recovery and recolonization potential; intermediate scales might best refer to population level and hence might be relevant for risk management decisions (e.g., individual off-crop habitats). The Xplicit approach is designed to comply with a central paradigm of probabilistic approaches, namely, that each individual case that is derived from the variability functions employed should represent a potential real-world case. This is mainly achieved by operating in a spatiotemporally explicit fashion. Landscape factors affecting the local exposure of habitats of nontarget species (i.e., receptors) are derived from geodatabases. Variability in time is resolved by operating at discrete time steps, with the probability of events (e.g., application) or conditions (e.g., wind conditions) defined in probability density functions (PDFs). The propagation of variability of parameters into variability of exposure and risk is done using a Monte Carlo approach. Among the outcomes are expectancy values on the realistic worst-case exposure (predicted environmental concentration [PEC]), the probability p that the PEC exceeds the ecologically acceptable concentration (EAC) for a given
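
    The probabilistic core of such an approach can be sketched generically: draw the variable landscape and event factors from assumed distributions, propagate each draw to a predicted environmental concentration (PEC), and report the fraction of cases in which the PEC exceeds the ecologically acceptable concentration (EAC). The distributions and numbers below are invented placeholders (in arbitrary but consistent units), not Xplicit's PDFs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo draws over spatial and temporal variability

applied_rate = 100.0                                                  # in-field loading
drift_fraction = rng.lognormal(mean=np.log(0.02), sigma=0.6, size=n)  # variable drift
dilution = rng.uniform(0.2, 1.0, size=n)                              # habitat-specific factor

pec = applied_rate * drift_fraction * dilution   # one PEC per simulated case
eac = 5.0                                        # acceptable concentration, same units
print("P(PEC > EAC) =", np.mean(pec > eac))
```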

  7. Exposure to BPA in Children—Media-Based and Biomonitoring-Based Approaches

    Directory of Open Access Journals (Sweden)

    Krista L.Y. Christensen

    2014-04-01

    Full Text Available Bisphenol A (BPA) is used in numerous industrial and consumer product applications resulting in ubiquitous exposure. Children’s exposure is of particular concern because of evidence of developmental effects. Childhood exposure is estimated for different age groups in two ways. The “forward” approach uses information on BPA concentrations in food and other environmental media (air, water, etc.) combined with average contact rates for each medium. The “backward” approach relies on urinary biomonitoring, extrapolating backward to the intake which would have led to the observed biomarker level. The forward analysis shows that BPA intakes are dominated by canned food consumption, and that intakes are higher for younger ages. Mean intake estimates ranged from ~125 ng/kg-day for 1-year-olds to ~73 ng/kg-day among 16–20-year-olds. Biomonitoring-based intakes show the same trend of lower intakes for older children, with an estimate of 121 (median) to 153 (mean) ng/kg-day for 2–6 years, compared with 33 (median) to 53–66 (mean) ng/kg-day for 16–20 years. Infant intakes were estimated to range from ~46 to 137 ng/kg-day. Recognizing uncertainties and limitations, this analysis suggests that the “forward” and “backward” methods provide comparable results and identify canned foods as a potentially important source of BPA exposure for children.
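
    A minimal sketch of the “forward” intake calculation (concentration in each medium times the daily contact rate, summed over media and normalised by body weight); all concentrations, contact rates and the body weight below are placeholders, not the values used in the study.

```python
# Placeholder media data: (BPA concentration, daily contact rate).
media = {
    "canned food": (10.0, 150.0),  # ng/g, g/day
    "other food": (1.0, 700.0),    # ng/g, g/day
    "air": (0.5, 8.0),             # ng/m3, m3/day
}
body_weight_kg = 20.0              # e.g. a young child

intake = sum(conc * rate for conc, rate in media.values()) / body_weight_kg
print(f"estimated BPA intake: {intake:.0f} ng/kg-day")
```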

  8. Need for an integrated approach towards the assessment of radon, thoron and their progeny exposures

    International Nuclear Information System (INIS)

    Mayya, Y.S.

    2008-01-01

    Recent publications dealing with epidemiological studies on North American and European populations have indicated statistically significant lung cancer risk coefficients attributable to residential radon exposures. These are essentially based on radon gas itself as the quantitative measure of exposures. However, considering that true exposures depend upon the intricate mechanisms of decay product deposition in the lung, it is necessary to go for the assessment of decay products, including their size distributions and deposition velocities. This approach is essential for assessing the risks of thoron and its decay products, which is of considerable importance in the public domain and in the thorium fuel cycle. The recently developed deposition-based progeny concentration measurement techniques appear to be best suited for radiological risk assessments both among occupational workers and general study populations. These provide an easy-to-wear alternative for radon inhalation dosimetry similar to TLDs for external gamma radiations. It is urgently required to characterize their performance under a variety of residential indoor and workplace conditions. This may be achieved through an integrated multi-parametric study programme involving measurements of radon, thoron and their progeny concentrations along with fine and coarse fractions and indoor source terms. This will not only delineate the true exposure profiles and indoor parameters (e.g. deposition velocities and air exchange rates) in the country, but also will help in establishing deposition dosimetry as a basic technique for inhalation exposure estimations for occupational workers and subjects living in high background radiation areas.

  9. An improved multi-exposure approach for high quality holographic femtosecond laser patterning

    International Nuclear Information System (INIS)

    Zhang, Chenchu; Hu, Yanlei; Li, Jiawen; Lao, Zhaoxin; Ni, Jincheng; Chu, Jiaru; Huang, Wenhao; Wu, Dong

    2014-01-01

    High-efficiency two-photon polymerization through single exposure via a spatial light modulator (SLM) has been used to decrease the fabrication time and rapidly realize various micro/nanostructures, but the surface quality remains a big problem due to the speckle noise of the optical intensity distribution at the defocused plane. Here, a multi-exposure approach which uses tens of computer-generated holograms successively loaded on the SLM is presented to significantly improve the optical uniformity without losing efficiency. By applying multi-exposure, we found that the uniformity at the defocused plane was increased from ∼0.02 to ∼0.6 according to our simulation. The two series of letters “HELLO” and “USTC” fabricated under single- and multi-exposure in our experiment also verified that the surface quality was greatly improved. Moreover, by this method, several kinds of beam splitters, e.g., 2 × 2 and 5 × 5 Dammann gratings and complex non-separable 5 × 5 gratings, were fabricated with both high quality and short fabrication time (<1 min, 95% time-saving). This multi-exposure SLM two-photon polymerization method shows promising prospects for rapidly fabricating and integrating various binary optical devices and their systems.

  10. An improved multi-exposure approach for high quality holographic femtosecond laser patterning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Chenchu; Hu, Yanlei, E-mail: huyl@ustc.edu.cn, E-mail: jwl@ustc.edu.cn; Li, Jiawen, E-mail: huyl@ustc.edu.cn, E-mail: jwl@ustc.edu.cn; Lao, Zhaoxin; Ni, Jincheng; Chu, Jiaru; Huang, Wenhao; Wu, Dong [Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei 230026 (China)

    2014-12-01

    High-efficiency two-photon polymerization through single exposure via a spatial light modulator (SLM) has been used to decrease the fabrication time and rapidly realize various micro/nanostructures, but the surface quality remains a big problem due to the speckle noise of the optical intensity distribution at the defocused plane. Here, a multi-exposure approach which uses tens of computer-generated holograms successively loaded on the SLM is presented to significantly improve the optical uniformity without losing efficiency. By applying multi-exposure, we found that the uniformity at the defocused plane was increased from ∼0.02 to ∼0.6 according to our simulation. The two series of letters “HELLO” and “USTC” fabricated under single- and multi-exposure in our experiment also verified that the surface quality was greatly improved. Moreover, by this method, several kinds of beam splitters, e.g., 2 × 2 and 5 × 5 Dammann gratings and complex non-separable 5 × 5 gratings, were fabricated with both high quality and short fabrication time (<1 min, 95% time-saving). This multi-exposure SLM two-photon polymerization method shows promising prospects for rapidly fabricating and integrating various binary optical devices and their systems.

  11. An approach to controlling radiation exposures of probabilities less than one

    International Nuclear Information System (INIS)

    Ahmed, J.U.; Gonzalez, A.J.

    1988-01-01

    IAEA efforts to develop guidelines for a unified approach for the application of radiation protection principles to radiation exposures assumed to occur with certainty and exposures which are not certain to occur are discussed. A useful criterion is that of a limit on individual risk. A simple approach would be to define separate limits for normal and accident situations. For waste disposal ICRP has suggested a risk limit for accident situations to be of the same magnitude as that for normal operation. The IAEA is considering a risk limit of 10⁻⁵ in a year for consistency with general safety standards of dose limitation. A source-related upper bound is needed which has to be apportioned from the risk limit in order to take into account the presence of other sources

  12. Radiofrequency exposure on fast patrol boats in the Royal Norwegian Navy--an approach to a dose assessment.

    Science.gov (United States)

    Baste, Valborg; Mild, Kjell Hansson; Moen, Bente E

    2010-07-01

    Epidemiological studies related to radiofrequency (RF) electromagnetic fields (EMF) have mainly used crude proxies for exposure, such as job titles, distance to, or use of different equipment emitting RF EMF. The Royal Norwegian Navy (RNoN) has measured the RF fields emitted from high-frequency antennas and radars at several spots where the crew would most likely be located aboard fast patrol boats (FPB). These boats are small, with a short distance between the crew and the equipment emitting RF fields. We have described the measured RF exposure aboard FPB and suggested different methods for calculating total exposure and annual dose. Linear and spatial averages were used, in addition to the percentage of the ICNIRP limit and the squared deviation from the ICNIRP limit. The methods will form the basis of a job exposure matrix where relative differences in exposure between groups of crew members can be used in further epidemiological studies of reproductive health. 2010 Wiley-Liss, Inc.

  13. Work characteristics and pesticide exposures among migrant agricultural families: a community-based research approach.

    OpenAIRE

    McCauley, L A; Lasarev, M R; Higgins, G; Rothlein, J; Muniz, J; Ebbert, C; Phillips, J

    2001-01-01

    There are few data on pesticide exposures of migrant Latino farmworker children, and access to this vulnerable population is often difficult. In this paper we describe a community-based approach to implement culturally appropriate research methods with a migrant Latino farmworker community in Oregon. Assessments were conducted in 96 farmworker homes and 24 grower homes in two agricultural communities in Oregon. Measurements included surveys of pesticide use and work protection practices and a...

  14. Impacts of sporulation temperature, exposure to compost matrix and temperature on survival of Bacillus cereus spores during livestock mortality composting.

    Science.gov (United States)

    Stanford, K; Reuter, T; Gilroyed, B H; McAllister, T A

    2015-04-01

    To investigate the impact of sporulation and compost temperatures on the feasibility of composting for disposal of carcasses contaminated with Bacillus anthracis. Two strains of B. cereus, 805 and 1391, were sporulated at either 20 or 37°C (Sporulation temperature, ST) and 7 log10 CFU g⁻¹ spores were added to autoclaved manure in nylon bags (pore size 50 μm) or in sealed vials. Vials and nylon bags were embedded into compost in either a sawdust or manure matrix, each containing 16 bovine mortalities (average weight 617 ± 33 kg), retrieved from compost at intervals over 217 days, and survival of B. cereus spores was assessed. A ST of 20°C decreased spore survival by 1.4 log10 CFU g⁻¹ (P Compost temperatures >55°C reduced spore survival (P compost temperatures were key factors influencing survival of B. cereus spores in mortality compost. Composting may be most appropriate for the disposal of carcasses infected with B. anthracis at ambient temperatures ≤20°C under thermophilic composting conditions (>55°C). © 2015 The Society for Applied Microbiology.

  15. A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem

    DEFF Research Database (Denmark)

    Montoya-Martinez, Jair; Artes-Rodriguez, Antonio; Pontil, Massimiliano

    2014-01-01

    We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly named as the EEG inverse problem. We propose a new method to induce neurophysiological meaningful solutions, which takes into account the smoothness, structured...... sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ21-norm of the coding...... matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios...
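
    As an illustration of the structured sparse-low-rank idea (not the authors' algorithm), the sketch below alternates a closed-form ridge update of the dense latent source matrix with a proximal-gradient step and row-wise (ℓ21) soft-thresholding of the coding matrix; the leadfield, dimensions and penalty weights are arbitrary.

```python
import numpy as np

def l21_prox(C, tau):
    """Row-wise soft-thresholding: proximal operator of tau * ||C||_{2,1}."""
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    return C * np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))

def factorized_inverse(Y, L, n_latent=5, lam1=0.1, lam2=0.1, n_iter=200):
    """Estimate X ~ C @ S from Y ~ L @ X, with sparse rows of C (ell_21 penalty)
    and a dense latent source matrix S (squared Frobenius penalty)."""
    rng = np.random.default_rng(0)
    C = 0.01 * rng.standard_normal((L.shape[1], n_latent))
    S = 0.01 * rng.standard_normal((n_latent, Y.shape[1]))
    for _ in range(n_iter):
        # S-step: ridge regression with design matrix L @ C (closed form).
        A = L @ C
        S = np.linalg.solve(A.T @ A + lam2 * np.eye(n_latent), A.T @ Y)
        # C-step: one gradient step on the data-fit term, then the ell_21 prox.
        grad = L.T @ (L @ C @ S - Y) @ S.T
        step = 1.0 / (np.linalg.norm(L, 2) ** 2 * np.linalg.norm(S, 2) ** 2 + 1e-12)
        C = l21_prox(C - step * grad, step * lam1)
    return C @ S  # estimated brain electrical sources matrix

# Toy usage: 32 sensors, 200 candidate sources, 100 time samples.
rng = np.random.default_rng(1)
L = rng.standard_normal((32, 200))
Y = rng.standard_normal((32, 100))
X_hat = factorized_inverse(Y, L)
```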

  16. Effects of prior amphetamine exposure on approach strategy in appetitive Pavlovian conditioning in rats.

    Science.gov (United States)

    Simon, Nicholas W; Mendez, Ian A; Setlow, Barry

    2009-03-01

    Pavlovian conditioning with a discrete reward-predictive visual cue can elicit two classes of behaviors: "sign-tracking" (approach toward and contact with the cue) and "goal-tracking" (approach toward the site of reward delivery). Sign-tracking has been proposed to be linked to behavioral disorders involving compulsive reward-seeking, such as addiction. Prior exposure to psychostimulant drugs of abuse can facilitate reward-seeking behaviors through enhancements in incentive salience attribution. Thus, it was predicted that a sensitizing regimen of amphetamine exposure would increase sign-tracking behavior. The purpose of these experiments was to determine how a regimen of exposure to amphetamine affects subsequent sign-tracking behavior. Male Long-Evans rats were given daily injections of d-amphetamine (2.0 mg/kg) or saline for 5 days, then given a 7-day drug-free period followed by testing in a Pavlovian conditioning task. In experiment 1, rats were presented with a visual cue (simultaneous illumination of a light and extension of a lever) located either to the left or right of a centrally located food trough. One cue (CS+) was always followed by food delivery, whereas the other (CS-) was not. In experiment 2, rats were tested in a nondiscriminative (CS+ only) version of the task. In both experiments, amphetamine-exposed rats showed less sign-tracking and more goal-tracking compared to saline controls. Contrary to predictions, prior amphetamine exposure decreased sign-tracking and increased goal-tracking behavior. However, these results do support the hypothesis that psychostimulant exposure and incentive sensitization enhance behavior directed toward reward-proximal cues at the expense of reward-distal cues.

  17. Theoretical approach to embed nanocrystallites into a bulk crystalline matrix and the embedding influence on the electronic band structure and optical properties of the resulting heterostructures.

    Science.gov (United States)

    Balagan, Semyon Anatolyevich; Nazarov, Vladimir U; Shevlyagin, Alexander Vladimirovich; Goroshko, Dmitrii L; Galkin, N G

    2018-05-03

    We develop an approach and present results of the combined molecular dynamics and density functional theory calculations of the structural and optical properties of the nanometer-sized crystallites embedded in a bulk crystalline matrix. The method is designed and implemented for both compatible and incompatible lattices of the nanocrystallite (NC) and the host matrix, when determining the NC optimal orientation relative to the matrix constitutes a challenging problem. We suggest and substantiate an expression for the cost function of the search algorithm, which is the energy per supercell generalized for varying number of atoms in the latter. The epitaxial relationships at the Si/NC interfaces and the optical properties are obtained and found to be in a reasonable agreement with experimental data. Dielectric functions show significant sensitivity to the NC's orientation relative to the matrix at energies below 0.5 eV. © 2018 IOP Publishing Ltd.

  18. Theoretical approach to embed nanocrystallites into a bulk crystalline matrix and the embedding influence on the electronic band structure and optical properties of the resulting heterostructures

    Science.gov (United States)

    Balagan, Semyon A.; Nazarov, Vladimir U.; Shevlyagin, Alexander V.; Goroshko, Dmitrii L.; Galkin, Nikolay G.

    2018-06-01

    We develop an approach and present results of the combined molecular dynamics and density functional theory calculations of the structural and optical properties of the nanometer-sized crystallites embedded in a bulk crystalline matrix. The method is designed and implemented for both compatible and incompatible lattices of the nanocrystallite (NC) and the host matrix, when determining the NC optimal orientation relative to the matrix constitutes a challenging problem. We suggest and substantiate an expression for the cost function of the search algorithm, which is the energy per supercell generalized for varying number of atoms in the latter. The epitaxial relationships at the Si/NC interfaces and the optical properties are obtained and found to be in a reasonable agreement with experimental data. Dielectric functions show significant sensitivity to the NC’s orientation relative to the matrix at energies below 0.5 eV.

  19. Offering pre-exposure prophylaxis for HIV prevention to pregnant and postpartum women: a clinical approach.

    Science.gov (United States)

    Seidman, Dominika L; Weber, Shannon; Cohan, Deborah

    2017-03-08

    HIV prevention during pregnancy and lactation is critical for both maternal and child health. Pregnancy provides a critical opportunity for clinicians to elicit women's vulnerabilities to HIV and offer HIV testing, treatment and referral and/or comprehensive HIV prevention options for the current pregnancy, the postpartum period and safer conception options for future pregnancies. In this commentary, we review the safety of oral pre-exposure prophylaxis with tenofovir/emtricitabine in pregnant and lactating women and suggest opportunities to identify pregnant and postpartum women at substantial risk of HIV. We then describe a clinical approach to caring for women who both choose and decline pre-exposure prophylaxis during pregnancy and postpartum, highlighting areas for future research. Evidence suggests that pre-exposure prophylaxis with tenofovir/emtricitabine is safe in pregnancy and lactation. Identifying women vulnerable to HIV and eligible for pre-exposure prophylaxis is challenging in light of the myriad of individual, community, and structural forces impacting HIV acquisition. Validated risk calculators exist for specific populations but have not been used to screen and offer HIV prevention methods. Partner testing and engagement of men living with HIV are additional means of reaching at-risk women. However, women's vulnerabilities to HIV change over time. Combining screening for HIV vulnerability with HIV and/or STI testing at standard intervals during pregnancy is a practical way to prompt providers to incorporate HIV screening and prevention counselling. We suggest using shared decision-making to offer women pre-exposure prophylaxis as one of multiple HIV prevention strategies during pregnancy and postpartum, facilitating open conversations about HIV vulnerabilities, preferences about HIV prevention strategies, and choosing a method that best meets the needs of each woman. Growing evidence suggests that pre-exposure prophylaxis with tenofovir

  20. Combinational approach using solid dispersion and semi-solid matrix technology to enhance in vitro dissolution of telmisartan

    Directory of Open Access Journals (Sweden)

    Syed Faisal Ali

    2016-02-01

    Full Text Available The present investigation focused on formulating semi-solid capsules (SSCs) of the hydrophobic drug telmisartan (TLMS) by encapsulating a semi-solid matrix of its solid dispersion (SD) in HPMC capsules. The combinational approach was used to reduce the lag time in drug release and improve its dissolution. SDs of TLMS were prepared using a hot fusion method by varying the combinations of Pluronic-F68, Gelucire 50/13 and Plasdone S630. A total of nine batches (SD1-SD9) were characterized for micromeritic properties, in vitro dissolution behavior and surface characterization. SD4, with 52.43% cumulative drug release (CDR) in phosphate buffer, pH 7.4, in 120 min, t50% 44.2 min and DE30min 96.76%, was selected for the development of semi-solid capsules. Differential scanning calorimetry of SD4 revealed molecular dispersion of TLMS in Pluronic-F68. SD4 was formulated into SSCs using Gelucire 44/14 and PEG 400 as semi-solid components and PEG 6000 as a suspending agent to achieve a reduction in lag time for effective drug dissolution. SSC6 showed maximum in vitro drug dissolution of 97.49% in phosphate buffer, pH 7.4, within 20 min, which was almost a three-fold reduction in the time required to achieve similar dissolution by SD. Thus, SSCs present an excellent approach to enhance in vitro dissolution as well as to reduce the lag time of dissolution for poorly water-soluble drugs, especially for those therapeutic classes that are intended for faster onset of action. The developed approach based on HPMC capsules provides a better alternative for targeted delivery of telmisartan to the vegetarian population.

  1. Radiation dose optimization research: Exposure technique approaches in CR imaging – A literature review

    International Nuclear Information System (INIS)

    Seeram, Euclid; Davidson, Rob; Bushong, Stewart; Swan, Hans

    2013-01-01

    The purpose of this paper is to review the literature on exposure technique approaches as a means of radiation dose optimization in Computed Radiography (CR) imaging. Specifically, the review assessed three approaches: optimization of kVp; optimization of mAs; and optimization of the Exposure Indicator (EI) in practice. Only papers dating back to 2005 were described in this review. The major themes, patterns, and common findings from the literature reviewed showed that important features related to radiation dose management strategies for digital radiography include identification of the EI as a dose control mechanism and as a “surrogate for dose management”. In addition, the use of the EI has been viewed as an opportunity for dose optimization. Furthermore, optimization research has focussed mainly on optimizing the kVp in CR imaging as a means of implementing the ALARA philosophy, and studies have concentrated mainly on chest imaging using different CR systems such as those commercially available from Fuji, Agfa, Kodak, and Konica-Minolta. These studies have produced “conflicting results”. In addition, a common pattern was the use of automatic exposure control (AEC) and the measurement of constant effective dose, and the use of a dose-area product (DAP) meter

  2. Novel approach to integrated DNA adductomics for the assessment of in vitro and in vivo environmental exposures.

    Science.gov (United States)

    Chang, Yuan-Jhe; Cooke, Marcus S; Hu, Chiung-Wen; Chao, Mu-Rong

    2018-06-25

    Adductomics is expected to be useful in the characterization of the exposome, which is a new paradigm for studying the sum of environmental causes of diseases. DNA adductomics is emerging as a powerful method for detecting DNA adducts, but reliable assays for its widespread, routine use are currently lacking. We propose a novel integrated strategy for the establishment of a DNA adductomic approach, using liquid chromatography-triple quadrupole tandem mass spectrometry (LC-QqQ-MS/MS), operating in constant neutral loss scan mode, screening for both known and unknown DNA adducts in a single injection. The LC-QqQ-MS/MS was optimized using a representative sample of 23 modified 2'-deoxyribonucleosides reflecting a range of biologically relevant DNA lesions. Six internal standards (ISTDs) were evaluated for their ability to normalize, and hence correct, possible variation in peak intensities arising from matrix effects, and the quantities of DNA injected. The results revealed that, with appropriate ISTDs adjustment, any bias can be dramatically reduced from 370 to 8.4%. Identification of the informative DNA adducts was achieved by triggering fragmentation spectra of target ions. The LC-QqQ-MS/MS method was successfully applied to in vitro and in vivo studies to screen for DNA adducts formed following representative environmental exposures: methyl methanesulfonate (MMS) and five N-nitrosamines. Interestingly, five new DNA adducts, induced by MMS, were discovered using our adductomic approach-an added strength. The proposed integrated strategy provides a path forward for DNA adductomics to become a standard method to discover differences in DNA adduct fingerprints between populations exposed to genotoxins, and facilitate the field of exposomics.

  3. Integrated approach for characterizing and comparing exposure-based impacts with life cycle impacts

    DEFF Research Database (Denmark)

    Fantke, Peter; Jolliet, Olivier

    2016-01-01

    ions that involve burden shifting or that result in only incremental improvement. Focusing in the life cycle impacts on widely accepted and applied impact categories like global warming potential or cumulative energy demand aggregating several impact categories will lead to underestimations of life...... to the environment from product-related processes along the product life cycle. We build on a flexible mass balance-based modeling system yielding cumulative multimedia transfer fractions and exposure pathway-specific Product Intake Fractions defined as chemical mass taken in by humans per unit mass of chemical...... in a product. When combined with chemical masses in products and further with toxicity information, this approach is a resourceful way to inform CAA and minimize human exposure to toxic chemicals in consumer products through both product use and environmental emissions. We use an example of chemicals in consumer...

  4. Information technology-based approaches to reducing repeat drug exposure in patients with known drug allergies.

    Science.gov (United States)

    Cresswell, Kathrin M; Sheikh, Aziz

    2008-05-01

    There is increasing interest internationally in ways of reducing the high disease burden resulting from errors in medicine management. Repeat exposure to drugs to which patients have a known allergy has been a repeatedly identified error, often with disastrous consequences. Drug allergies are immunologically mediated reactions that are characterized by specificity and recurrence on reexposure. These repeat reactions should therefore be preventable. We argue that there is insufficient attention being paid to studying and implementing system-based approaches to reducing the risk of such accidental reexposure. Drawing on recent and ongoing research, we discuss a number of information technology-based interventions that can be used to reduce the risk of recurrent exposure. Proven to be effective in this respect are interventions that provide real-time clinical decision support; also promising are interventions aiming to enhance patient recognition, such as bar coding, radiofrequency identification, and biometric technologies.

  5. A quantitative approach for pesticide analysis in grape juice by direct interfacing of a matrix compatible SPME phase to dielectric barrier discharge ionization-mass spectrometry.

    Science.gov (United States)

    Mirabelli, Mario F; Gionfriddo, Emanuela; Pawliszyn, Janusz; Zenobi, Renato

    2018-02-12

    We evaluated the performance of a dielectric barrier discharge ionization (DBDI) source for pesticide analysis in grape juice, a fairly complex matrix due to the high content of sugars (≈20% w/w) and pigments. A fast sample preparation method based on direct immersion solid-phase microextraction (SPME) was developed, and novel matrix compatible SPME fibers were used to reduce in-source matrix suppression effects. A high resolution LTQ Orbitrap mass spectrometer allowed for rapid quantification in full scan mode. This direct SPME-DBDI-MS approach was proven to be effective for the rapid and direct analysis of complex sample matrices, with limits of detection in the parts-per-trillion (ppt) range and inter- and intra-day precision below 30% relative standard deviation (RSD) for samples spiked at 1, 10 and 10 ng ml⁻¹, with overall performance comparable or even superior to existing chromatographic approaches.

  6. Aquatic exposures of chemical mixtures in urban environments: Approaches to impact assessment.

    Science.gov (United States)

    de Zwart, Dick; Adams, William; Galay Burgos, Malyka; Hollender, Juliane; Junghans, Marion; Merrington, Graham; Muir, Derek; Parkerton, Thomas; De Schamphelaere, Karel A C; Whale, Graham; Williams, Richard

    2018-03-01

    Urban regions of the world are expanding rapidly, placing additional stress on water resources. Urban water bodies serve many purposes, from washing and sources of drinking water to transport and conduits for storm drainage and effluent discharge. These water bodies receive chemical emissions arising from either single or multiple point sources, diffuse sources which can be continuous, intermittent, or seasonal. Thus, aquatic organisms in these water bodies are exposed to temporally and compositionally variable mixtures. We have delineated source-specific signatures of these mixtures for diffuse urban runoff and urban point source exposure scenarios to support risk assessment and management of these mixtures. The first step in a tiered approach to assessing chemical exposure has been developed based on the event mean concentration concept, with chemical concentrations in runoff defined by volumes of water leaving each surface and the chemical exposure mixture profiles for different urban scenarios. Although generalizations can be made about the chemical composition of urban sources and event mean exposure predictions for initial prioritization, such modeling needs to be complemented with biological monitoring data. It is highly unlikely that the current paradigm of routine regulatory chemical monitoring alone will provide a realistic appraisal of urban aquatic chemical mixture exposures. Future consideration is also needed of the role of nonchemical stressors in such highly modified urban water bodies. Environ Toxicol Chem 2018;37:703-714. © 2017 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC. © 2017 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC.
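
    As a rough illustration of the event mean concentration (EMC) bookkeeping described above, the sketch below computes flow-weighted mixture concentrations for a single runoff event from per-surface runoff volumes and assumed EMCs; the surface areas, runoff coefficients and EMC values are placeholders.

```python
# Hypothetical urban surfaces: area (m2), runoff coefficient and per-chemical
# event mean concentrations (ug/L). All values are placeholders.
rain_mm = 10.0
surfaces = {
    "roofs": {"area": 5e4, "runoff_coeff": 0.9, "emc": {"zinc": 200.0, "PAH": 0.5}},
    "roads": {"area": 3e4, "runoff_coeff": 0.8, "emc": {"zinc": 120.0, "PAH": 2.0}},
    "gardens": {"area": 2e4, "runoff_coeff": 0.2, "emc": {"zinc": 20.0, "PAH": 0.1}},
}

total_volume = 0.0                  # litres
loads = {"zinc": 0.0, "PAH": 0.0}   # micrograms
for s in surfaces.values():
    # m of rain -> m3 of runoff -> litres, scaled by the surface's runoff coefficient
    volume = rain_mm / 1000.0 * s["area"] * s["runoff_coeff"] * 1000.0
    total_volume += volume
    for chem, conc in s["emc"].items():
        loads[chem] += conc * volume

mixture = {chem: load / total_volume for chem, load in loads.items()}  # ug/L
print(mixture)  # flow-weighted concentrations entering the receiving water body
```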

  7. Comparative biology approaches for charged particle exposures and cancer development processes

    Science.gov (United States)

    Kronenberg, Amy; Gauny, Stacey; Kwoh, Ely; Sudo, Hiroko; Wiese, Claudia; Dan, Cristian; Turker, Mitchell

    Comparative biology studies can provide useful information for the extrapolation of results between cells in culture and the more complex environment of the tissue. In other circumstances, they provide a method to guide the interpretation of results obtained for cells from different species. We have considered several key cancer development processes following charged particle exposures using comparative biology approaches. Our particular emphases have been mutagenesis and genomic instability. Carcinogenesis requires the accumulation of mutations and most of these mutations occur on autosomes. Two loci provide the greatest avenue for the consideration of charged particle-induced mutation involving autosomes: the TK1 locus in human cells and the APRT locus in mouse cells. Each locus can provide information on a wide variety of mutational changes, from small intragenic mutations through multilocus deletions and extensive tracts of mitotic recombination. In addition, the mouse model can provide a direct measurement of chromosome loss which cannot be accomplished in the human cell system. Another feature of the mouse APRT model is the ability to examine effects for cells exposed in vitro with those obtained for cells exposed in situ. We will provide a comparison of the results obtained for the TK1 locus following 1 GeV/amu Fe ion exposures to the human lymphoid cells with those obtained for the APRT locus for mouse kidney epithelial cells (in vitro or in situ). Substantial conservation of mechanisms is found amongst these three exposure scenarios, with some differences attributable to the specific conditions of exposure. A similar approach will be applied to the consideration of proton-induced autosomal mutations in the three model systems. A comparison of the results obtained for Fe ions vs. protons in each case will highlight LET-specific differences in response. Another cancer development process that is receiving considerable interest is genomic instability. We

  8. Structural exploration for the refinement of anticancer matrix metalloproteinase-2 inhibitor designing approaches through robust validated multi-QSARs

    Science.gov (United States)

    Adhikari, Nilanjan; Amin, Sk. Abdul; Saha, Achintya; Jha, Tarun

    2018-03-01

    Matrix metalloproteinase-2 (MMP-2) is a promising pharmacological target for designing potential anticancer drugs. MMP-2 plays a critical role in apoptosis by cleaving the DNA repair enzyme poly(ADP-ribose) polymerase (PARP). Moreover, MMP-2 expression triggers vascular endothelial growth factor (VEGF), which has a positive influence on tumor size, invasion, and angiogenesis. Therefore, there is an urgent need to develop potential MMP-2 inhibitors without toxicity but with better pharmacokinetic properties. In this article, robust validated multi-quantitative structure-activity relationship (QSAR) modeling approaches were attempted on a dataset of 222 MMP-2 inhibitors to explore the important structural and pharmacophoric requirements for higher MMP-2 inhibition. Different validated regression and classification-based QSARs, pharmacophore mapping and 3D-QSAR techniques were performed. These results were challenged and subjected to further validation to explain 24 in-house MMP-2 inhibitors, to judge the reliability of these models further. All these models were individually validated internally as well as externally and were supported and validated by each other. These results were further justified by molecular docking analysis. The modeling techniques adopted here not only help to explore the necessary structural and pharmacophoric requirements but also provide overall validation and refinement techniques for designing potential MMP-2 inhibitors.

  9. Using a matrix-analytical approach to synthesizing evidence solved incompatibility problem in the hierarchy of evidence.

    Science.gov (United States)

    Walach, Harald; Loef, Martin

    2015-11-01

    The hierarchy of evidence presupposes linearity and additivity of effects, as well as commutativity of knowledge structures. It thereby implicitly assumes a classical theoretical model. This is an argumentative article that uses theoretical analysis based on pertinent literature and known facts to examine the standard view of methodology. We show that the assumptions of the hierarchical model are wrong. The knowledge structures gained by various types of studies are not sequentially indifferent, that is, do not commute. External validity and internal validity are at least partially incompatible concepts. Therefore, one needs a different theoretical structure, typical of quantum-type theories, to model this situation. The consequence of this situation is that the implicit assumptions of the hierarchical model are wrong, if generalized to the concept of evidence in total. The problem can be solved by using a matrix-analytical approach to synthesizing evidence. Here, research methods that produce different types of evidence that complement each other are synthesized to yield the full knowledge. We show by an example how this might work. We conclude that the hierarchical model should be complemented by a broader reasoning in methodology. Copyright © 2015 Elsevier Inc. All rights reserved.
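
    The central formal claim above, that knowledge structures from different study types do not commute, can be illustrated with a toy calculation. The matrices below are arbitrary placeholders chosen only to demonstrate non-commutativity; they are not the authors' operators or any validated model of evidence synthesis.

        # Toy illustration of non-commuting operations (arbitrary matrices).
        import numpy as np

        A = np.array([[1.0, 1.0], [0.0, 1.0]])  # stand-in for one knowledge operation
        B = np.array([[1.0, 0.0], [1.0, 1.0]])  # stand-in for another

        print(A @ B)                      # applying B first, then A
        print(B @ A)                      # applying A first, then B: a different result
        print(np.allclose(A @ B, B @ A))  # False: the two orders do not commute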

  10. A comparative study of first and all-author co-citation counting, and two different matrix generation approaches applied for author co-citation analyses

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg; Larsen, Birger; Ingwersen, Peter

    2009-01-01

    XML documents extracted from the IEEE collection. These data allow the construction of ad-hoc citation indexes, which enables us to carry out the hitherto largest all-author co-citation study. Four ACA are made, combining the different units of analyses with the different matrix generation approaches...

  11. Development and application of a 2-electron reduced density matrix approach to electron transport via molecular junctions

    Science.gov (United States)

    Hoy, Erik P.; Mazziotti, David A.; Seideman, Tamar

    2017-11-01

    Can an electronic device be constructed using only a single molecule? Since this question was first asked by Aviram and Ratner in the 1970s [Chem. Phys. Lett. 29, 277 (1974)], the field of molecular electronics has exploded with significant experimental advancements in the understanding of the charge transport properties of single molecule devices. Efforts to explain the results of these experiments and identify promising new candidate molecules for molecular devices have led to the development of numerous new theoretical methods including the current standard theoretical approach for studying single molecule charge transport, i.e., the non-equilibrium Green's function formalism (NEGF). By pairing this formalism with density functional theory (DFT), a wide variety of transport problems in molecular junctions have been successfully treated. For some systems though, the conductance and current-voltage curves predicted by common DFT functionals can be several orders of magnitude above experimental results. In addition, since density functional theory relies on approximations to the exact exchange-correlation functional, the predicted transport properties can show significant variation depending on the functional chosen. As a first step to addressing this issue, the authors have replaced density functional theory in the NEGF formalism with a 2-electron reduced density matrix (2-RDM) method, creating a new approach known as the NEGF-RDM method. 2-RDM methods provide a more accurate description of electron correlation compared to density functional theory, and they have lower computational scaling compared to wavefunction based methods of similar accuracy. Additionally, 2-RDM methods are capable of capturing static electron correlation which is untreatable by existing NEGF-DFT methods. When studying dithiol alkane chains and dithiol benzene in model junctions, the authors found that the NEGF-RDM predicts conductances and currents that are 1-2 orders of magnitude below
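
    Whatever electronic-structure method is plugged into the NEGF machinery, the quantity it ultimately produces is a transmission function that feeds a Landauer-type current integral. The sketch below is a generic single-level, wide-band-limit textbook toy model, not the NEGF-RDM implementation described above; the level energy and lead couplings are illustrative values only.

        # Toy Landauer transport through a single molecular level (wide-band limit).
        import numpy as np

        eps0 = 0.5                 # level energy relative to the Fermi level (eV), illustrative
        gamma_L = gamma_R = 0.05   # lead couplings (eV), illustrative

        def transmission(E):
            """Breit-Wigner transmission for one level coupled to two leads."""
            gamma = gamma_L + gamma_R
            return gamma_L * gamma_R / ((E - eps0) ** 2 + (gamma / 2) ** 2)

        def current(bias, n=2000):
            """Zero-temperature Landauer current: integrate T(E) over the bias window."""
            E = np.linspace(-bias / 2, bias / 2, n)
            dE = E[1] - E[0]
            G0 = 7.748e-5          # conductance quantum 2e^2/h in siemens
            # With energies expressed in eV (numerically volts), I = G0 * integral of T dE.
            return G0 * np.sum(transmission(E)) * dE

        for V in (0.1, 0.5, 1.0):
            print(f"V = {V:.1f} V, I = {current(V):.3e} A")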

  12. Vapor-liquid phase behavior of a size-asymmetric model of ionic fluids confined in a disordered matrix: The collective-variables-based approach

    Science.gov (United States)

    Patsahan, O. V.; Patsahan, T. M.; Holovko, M. F.

    2018-02-01

    We develop a theory based on the method of collective variables to study the vapor-liquid equilibrium of asymmetric ionic fluids confined in a disordered porous matrix. The approach allows us to formulate the perturbation theory using an extension of the scaled particle theory for a description of a reference system presented as a two-component hard-sphere fluid confined in a hard-sphere matrix. Treating an ionic fluid as a size- and charge-asymmetric primitive model (PM), we derive an explicit expression for the relevant chemical potential of a confined ionic system which takes into account the third-order correlations between ions. Using this expression, the phase diagrams for a size-asymmetric PM are calculated for different matrix porosities as well as for different sizes of matrix and fluid particles. It is observed that general trends of the coexistence curves with the matrix porosity are similar to those of simple fluids under disordered confinement, i.e., the coexistence region gets narrower with a decrease of porosity and, simultaneously, the reduced critical temperature Tc* and the critical density ρi,c* become lower. At the same time, our results suggest that an increase in size asymmetry of oppositely charged ions considerably affects the vapor-liquid diagrams, leading to a faster decrease of Tc* and ρi,c* and even to a disappearance of the phase transition, especially for the case of small matrix particles.

  13. Job strain and ischemic heart disease: a prospective study using a new approach for exposure assessment

    DEFF Research Database (Denmark)

    Bonde, Jens Peter; Munch-Hansen, Torsten; Agerbo, Esben

    2009-01-01

    BACKGROUND: Prolonged psychosocial load at the workplace may increase the risk of ischemic heart disease (IHD), but the issue is still unsettled. We analyzed the association between psychosocial workload and risk of IHD using a new approach allocating measures of psychosocial load to individuals...... based on the average exposure level in minor work units. METHODS: Cohort study of 18,258 Danish public service workers in 1106 work units; 79% were women; 108 subjects with history of cardiovascular disease were excluded from the follow-up. The outcome was hospitalization due to IHD (angina pectoris...

  14. Study of the validity of a job-exposure matrix for the job strain model factors: an update and a study of changes over time.

    Science.gov (United States)

    Niedhammer, Isabelle; Milner, Allison; LaMontagne, Anthony D; Chastang, Jean-François

    2018-03-08

    The objectives of the study were to construct a job-exposure matrix (JEM) for psychosocial work factors of the job strain model, to evaluate its validity, and to compare the results over time. The study was based on nationally representative data of the French working population with samples of 46,962 employees (2010 SUMER survey) and 24,486 employees (2003 SUMER survey). Psychosocial work factors included the job strain model factors (Job Content Questionnaire): psychological demands, decision latitude, social support, job strain and iso-strain. Job title was defined by three variables: occupation and economic activity coded using standard classifications, and company size. A JEM was constructed using a segmentation method (Classification and Regression Tree-CART) and cross-validation. The best quality JEM was found using occupation and company size for social support. For decision latitude and psychological demands, there was not much difference using occupation and company size with or without economic activity. The validity of the JEM estimates was higher for decision latitude, job strain and iso-strain, and lower for social support and psychological demands. Differential changes over time were observed for psychosocial work factors according to occupation, economic activity and company size. This study demonstrated that company size in addition to occupation may improve the validity of JEMs for psychosocial work factors. These matrices may be time-dependent and may need to be updated over time. More research is needed to assess the validity of JEMs given that these matrices may be able to provide exposure assessments to study a range of health outcomes.
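
    A job-exposure matrix built by segmenting job titles with CART, in the spirit described above, can be prototyped as follows. This is a minimal, hypothetical sketch: the occupation codes, company-size classes, 'decision latitude' scores and tree settings are synthetic stand-ins, not the SUMER data or the authors' modelling choices, and real nominal occupation codes would need proper categorical encoding.

        # Minimal CART-based job-exposure matrix sketch (synthetic data).
        import numpy as np
        import pandas as pd
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(1)
        n = 500
        df = pd.DataFrame({
            "occupation": rng.integers(0, 10, n),    # coded occupation (synthetic)
            "company_size": rng.integers(0, 4, n),   # company-size class (synthetic)
        })
        # Synthetic individual 'decision latitude' scores driven by the job title.
        df["decision_latitude"] = (
            60 + 2.0 * df["occupation"] - 3.0 * df["company_size"]
            + rng.normal(0, 5, n)
        )

        X = df[["occupation", "company_size"]]
        tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20).fit(
            X, df["decision_latitude"]
        )
        # The JEM assigns each job-title cell the mean exposure of its tree leaf.
        df["jem_estimate"] = tree.predict(X)
        print(df.groupby(["occupation", "company_size"])["jem_estimate"].mean().head())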

  15. New exposure-based metric approach for evaluating O3 risk to North American aspen forests

    International Nuclear Information System (INIS)

    Percy, K.E.; Nosal, M.; Heilman, W.; Dann, T.; Sober, J.; Legge, A.H.; Karnosky, D.F.

    2007-01-01

    The United States and Canada currently use exposure-based metrics to protect vegetation from O3. Using 5 years (1999-2003) of co-measured O3, meteorology and growth response, we have developed exposure-based regression models that predict Populus tremuloides growth change within the North American ambient air quality context. The models comprised growing season fourth-highest daily maximum 8-h average O3 concentration, growing degree days, and wind speed. They had high statistical significance, high goodness of fit, include 95% confidence intervals for tree growth change, and are simple to use. Averaged across a wide range of clonal sensitivity, historical 2001-2003 growth change over most of the 26 M ha P. tremuloides distribution was estimated to have ranged from no impact (0%) to strong negative impacts (-31%). With four aspen clones responding negatively (one responded positively) to O3, the growing season fourth-highest daily maximum 8-h average O3 concentration performed much better than growing season SUM06, AOT40 or maximum 1 h average O3 concentration metrics as a single indicator of aspen stem cross-sectional area growth. - A new exposure-based metric approach to predict O3 risk to North American aspen forests has been developed
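
    The form of model described above (growth change regressed on an O3 exposure metric, growing degree days and wind speed) can be sketched generically. The data and coefficients below are synthetic placeholders for illustration only; they are not the fitted model or the response values reported in the study.

        # Generic multiple-regression sketch: growth change vs. exposure metrics.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 60
        o3_4th_max = rng.uniform(50, 100, n)    # ppb, synthetic
        gdd = rng.uniform(1200, 1800, n)        # growing degree days, synthetic
        wind = rng.uniform(1.0, 4.0, n)         # m/s, synthetic
        growth = -0.3 * o3_4th_max + 0.01 * gdd + 2.0 * wind + rng.normal(0, 3, n)

        # Ordinary least squares with an intercept term.
        X = np.column_stack([np.ones(n), o3_4th_max, gdd, wind])
        beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
        print("intercept and coefficients:", beta)
        print("predicted growth change at 80 ppb, 1500 GDD, 2 m/s:",
              beta @ np.array([1.0, 80.0, 1500.0, 2.0]))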

  16. A Margin-of-Exposure Approach to Assessment of Noncancer Risks of Dioxins Based on Human Exposure and Response Data

    OpenAIRE

    Aylward, Lesa L.; Goodman, Julie E.; Charnley, Gail; Rhomberg, Lorenz R.

    2008-01-01

    Background Risk assessment of human environmental exposure to polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/PCDFs) and other dioxin-like compounds is complicated by several factors, including limitations in measuring intakes because of the low concentrations of these compounds in foods and the environment and interspecies differences in pharmacokinetics and responses. Objectives We examined the feasibility of relying directly on human studies of exposure and potential responses to...

  17. The conversion of exposures due to radon into the effective dose: the epidemiological approach

    Energy Technology Data Exchange (ETDEWEB)

    Beck, T.R. [Federal Office for Radiation Protection, Berlin (Germany)

    2017-11-15

    The risks and dose conversion coefficients for residential and occupational exposures due to radon were determined by applying the epidemiological risk models to ICRP representative populations. The dose conversion coefficient for residential radon was estimated at 1.6 mSv year^-1 per 100 Bq m^-3 (3.6 mSv per WLM), which is significantly lower than the corresponding value derived from the biokinetic and dosimetric models. The dose conversion coefficient for occupational exposures, obtained by applying the risk models for miners, was estimated at 14 mSv per WLM, which is in good accordance with the results of the dosimetric models. To resolve the discrepancy regarding residential radon, the ICRP approaches for the determination of risks and doses were reviewed. It could be shown that ICRP overestimates the risk for lung cancer caused by residential radon. This can be attributed to an incorrect population weighting of the radon-induced risks in its epidemiological approach. With the approach in this work, the average risks for lung cancer were determined, taking into account the age-specific risk contributions of all individuals in the population. As a result, a lower risk coefficient for residential radon was obtained. The results from the ICRP biokinetic and dosimetric models for both the occupationally exposed working-age population and the whole population exposed to residential radon can be brought into better accordance with the corresponding results of the epidemiological approach if the respective relative radiation detriments and a radiation-weighting factor for alpha particles of about ten are used. (orig.)

  18. The conversion of exposures due to radon into the effective dose: the epidemiological approach

    International Nuclear Information System (INIS)

    Beck, T.R.

    2017-01-01

    The risks and dose conversion coefficients for residential and occupational exposures due to radon were determined by applying the epidemiological risk models to ICRP representative populations. The dose conversion coefficient for residential radon was estimated at 1.6 mSv year^-1 per 100 Bq m^-3 (3.6 mSv per WLM), which is significantly lower than the corresponding value derived from the biokinetic and dosimetric models. The dose conversion coefficient for occupational exposures, obtained by applying the risk models for miners, was estimated at 14 mSv per WLM, which is in good accordance with the results of the dosimetric models. To resolve the discrepancy regarding residential radon, the ICRP approaches for the determination of risks and doses were reviewed. It could be shown that ICRP overestimates the risk for lung cancer caused by residential radon. This can be attributed to an incorrect population weighting of the radon-induced risks in its epidemiological approach. With the approach in this work, the average risks for lung cancer were determined, taking into account the age-specific risk contributions of all individuals in the population. As a result, a lower risk coefficient for residential radon was obtained. The results from the ICRP biokinetic and dosimetric models for both the occupationally exposed working-age population and the whole population exposed to residential radon can be brought into better accordance with the corresponding results of the epidemiological approach if the respective relative radiation detriments and a radiation-weighting factor for alpha particles of about ten are used. (orig.)

  19. A margin-of-exposure approach to assessment of noncancer risks of dioxins based on human exposure and response data.

    Science.gov (United States)

    Aylward, Lesa L; Goodman, Julie E; Charnley, Gail; Rhomberg, Lorenz R

    2008-10-01

    Risk assessment of human environmental exposure to polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/PCDFs) and other dioxin-like compounds is complicated by several factors, including limitations in measuring intakes because of the low concentrations of these compounds in foods and the environment and interspecies differences in pharmacokinetics and responses. We examined the feasibility of relying directly on human studies of exposure and potential responses to PCDD/PCDFs and related compounds in terms of measured lipid-adjusted concentrations to assess margin of exposure (MOE) in a quantitative, benchmark dose (BMD)-based framework using representative exposure and selected response data sets. We characterize estimated central tendency and upper-bound general U.S. population lipid-adjusted concentrations of PCDD/PCDFs from the 1970s and early 2000s based on available data sets. Estimates of benchmark concentrations for three example responses of interest (induction of cytochrome P4501A2 activity, dental anomalies, and neonatal thyroid hormone alterations) were derived based on selected human studies. The exposure data sets indicate that current serum lipid concentrations in young adults are approximately 6- to 7-fold lower than 1970s-era concentrations. Estimated MOEs for each end point based on current serum lipid concentrations range from 100 for dental anomalies-approximately 6-fold greater than would have existed during the 1970s. Human studies of dioxin exposure and outcomes can be used in a BMD framework for quantitative assessments of MOE. Incomplete exposure characterization can complicate the use of such studies in a BMD framework.

  20. A volume of intersection approach for on-the-fly system matrix calculation in 3D PET image reconstruction

    International Nuclear Information System (INIS)

    Lougovski, A; Hofheinz, F; Maus, J; Schramm, G; Will, E; Hoff, J van den

    2014-01-01

    The aim of this study is the evaluation of on-the-fly volume of intersection computation for modelling the system geometry in 3D PET image reconstruction. For this purpose we propose a simple geometrical model in which the cubic image voxels on the given Cartesian grid are approximated with spheres and the rectangular tubes of response (ToRs) are approximated with cylinders. The model was integrated into a fully 3D list-mode PET reconstruction for performance evaluation. In our model the volume of intersection between a voxel and the ToR is only a function of the impact parameter (the distance from the voxel centre to the ToR axis) but is independent of the relative orientation of voxel and ToR. This substantially reduces the computational complexity of the system matrix calculation. Based on phantom measurements it was determined that adjusting the diameters of the spherical voxel and the ToR in such a way that the actual voxel and ToR volumes are conserved leads to the best compromise between high spatial resolution, low noise, and suppression of Gibbs artefacts in the reconstructed images. Phantom as well as clinical datasets from two different PET systems (Siemens ECAT HR+ and Philips Ingenuity-TF PET/MR) were processed using the developed and the respective vendor-provided (line of intersection related) reconstruction algorithms. A comparison of the reconstructed images demonstrated very good performance of the new approach. The evaluation showed the respective vendor-provided reconstruction algorithms to possess 34–41% lower resolution compared to the developed one while exhibiting comparable noise levels. Contrary to explicit point spread function modelling, our model has a simple straightforward implementation and it should be easy to integrate into existing reconstruction software, making it competitive to other existing resolution recovery techniques. (paper)
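
    The key simplification above is that the voxel-ToR overlap depends only on the impact parameter. One quick way to tabulate that overlap for the sphere-cylinder geometry described is Monte Carlo integration; the sketch below uses illustrative radii and is only a generic numerical check, not the reconstruction code evaluated in the study.

        # Monte Carlo estimate of sphere-cylinder intersection volume vs. impact parameter.
        import numpy as np

        def intersection_volume(r_voxel, r_tor, impact, n=200_000, seed=0):
            """Volume of a sphere (radius r_voxel) intersected with an infinite cylinder
            (radius r_tor) whose axis passes at distance `impact` from the sphere centre."""
            rng = np.random.default_rng(seed)
            # Sample points uniformly inside the sphere via rejection from the bounding cube.
            pts = rng.uniform(-r_voxel, r_voxel, size=(n, 3))
            pts = pts[(pts ** 2).sum(axis=1) <= r_voxel ** 2]
            # Cylinder axis taken along z, offset to x = impact, y = 0.
            inside_cyl = (pts[:, 0] - impact) ** 2 + pts[:, 1] ** 2 <= r_tor ** 2
            sphere_volume = 4.0 / 3.0 * np.pi * r_voxel ** 3
            return sphere_volume * inside_cyl.mean()

        for b in (0.0, 1.0, 2.0, 3.0):   # impact parameters in mm, illustrative
            print(f"impact = {b:.1f} mm -> overlap = {intersection_volume(2.0, 2.5, b):.2f} mm^3")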

  1. On using of R-matrix approach for description of nucleon scattering by potential with diffuse edge

    International Nuclear Information System (INIS)

    Tertychnyj, G.Ya.; Yadrovskij, E.L.

    1982-01-01

    Problems of convergence of the R-matrix method for the calculation of scattering phases and bound states of neutrons in the Woods-Saxon potential are investigated. It is found that the convergence with respect to the number of R-matrix poles turns out to be faster if the value of the boundary-condition parameter b_ej^0 is close to the value of the logarithmic derivative of the continuous-spectrum function at the given energy E and matching radius a. Bound states are satisfactorily described in the unipolar (single-pole) approximation over a wide range of energies and values of the b_ej^0 parameter. The comparison of the R-matrix method with the method of numerical integration testifies to their equivalence irrespective of the choice of the a and b_ej^0 parameters, provided that the R-matrix series comprises a large number of terms

  2. A Bayesian Approach for Summarizing and Modeling Time-Series Exposure Data with Left Censoring.

    Science.gov (United States)

    Houseman, E Andres; Virji, M Abbas

    2017-08-01

    Direct reading instruments are valuable tools for measuring exposure as they provide real-time measurements for rapid decision making. However, their use is limited to general survey applications in part due to issues related to their performance. Moreover, statistical analysis of real-time data is complicated by autocorrelation among successive measurements, non-stationary time series, and the presence of left-censoring due to limit-of-detection (LOD). A Bayesian framework is proposed that accounts for non-stationary autocorrelation and LOD issues in exposure time-series data in order to model workplace factors that affect exposure and estimate summary statistics for tasks or other covariates of interest. A spline-based approach is used to model non-stationary autocorrelation with relatively few assumptions about autocorrelation structure. Left-censoring is addressed by integrating over the left tail of the distribution. The model is fit using Markov-Chain Monte Carlo within a Bayesian paradigm. The method can flexibly account for hierarchical relationships, random effects and fixed effects of covariates. The method is implemented using the rjags package in R, and is illustrated by applying it to real-time exposure data. Estimates for task means and covariates from the Bayesian model are compared to those from conventional frequentist models including linear regression, mixed-effects, and time-series models with different autocorrelation structures. Simulation studies are also conducted to evaluate method performance. Simulation studies with percent of measurements below the LOD ranging from 0 to 50% showed lowest root mean squared errors for task means and the least biased standard deviations from the Bayesian model compared to the frequentist models across all levels of LOD. In the application, task means from the Bayesian model were similar to means from the frequentist models, while the standard deviations were different. Parameter estimates for covariates
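
    The LOD handling mentioned above ("integrating over the left tail of the distribution") can be illustrated without the full Bayesian spline machinery. The sketch below is a simplified frequentist analogue, assuming an ordinary lognormal exposure distribution and synthetic data: censored observations contribute the CDF at the LOD to the likelihood, detected observations contribute the density.

        # Left-censored lognormal fit: censored points contribute CDF(LOD), detects contribute PDF.
        import numpy as np
        from scipy import stats, optimize

        rng = np.random.default_rng(3)
        true_mu, true_sigma, lod = 0.0, 1.0, 0.5
        x = rng.lognormal(true_mu, true_sigma, 300)
        detected = x >= lod

        def neg_log_lik(params):
            mu, log_sigma = params
            sigma = np.exp(log_sigma)
            # Lognormal density in terms of the normal density of log-values.
            ll_det = stats.norm.logpdf(np.log(x[detected]), mu, sigma) - np.log(x[detected])
            # Left tail integrated up to the LOD for censored observations.
            ll_cens = stats.norm.logcdf(np.log(lod), mu, sigma)
            return -(ll_det.sum() + (~detected).sum() * ll_cens)

        res = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
        mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
        print(f"estimated mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f} "
              f"({(~detected).mean():.0%} of values below LOD)")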

  3. Basal exposure therapy: A new approach for treatment resistant patients with severe and comorbid mental disorders

    Directory of Open Access Journals (Sweden)

    Didrik Heggdal

    2016-12-01

    New treatment approaches are needed for patients with severe and composite mental disorders who appear resistant to conventional treatments. Such treatment resistant patients often have diagnoses of psychotic or bipolar disorders or severe personality disorders and comorbid conditions. Here we evaluate Basal Exposure Therapy (BET), a novel ward-integrated psychotherapeutic approach for these patients. Central to BET is the conceptualization of undifferentiated existential fear as basic to the patients’ problem, exposure to this fear, and the therapeutic platform Complementary External Regulation (CER), which integrates and governs the totality of interventions throughout the treatment process. BET is administered at a locked-door ward with six patient beds and 13.5 full-time employees, including a psychiatrist and two psychologists. Thirty-eight patients who had completed BET were included, all but two being female, mean age 29.9 years. Fourteen patients had a diagnosis of schizophrenia or schizoaffective disorder (F20/25), eight had bipolar disorder or recurrent depressive disorder (F31/33), eight had diagnoses in the F40-49 domain (anxiety, stress, dissociation), five were diagnosed with emotionally unstable personality disorder (F60.3), and three patients had other diagnoses. Twenty of the patients (53%) had more than one ICD-10 diagnosis. Average treatment time in BET was 13 months, ranging from 2 to 72 months. Time-series data show significant improvements in symptoms and functioning from enrolment to discharge, with effect sizes at 0.76 for the Dissociation Experience Scale, 0.93 for the Brief Symptom Inventory, 1.47 for the Avoidance and Action Questionnaire, and 1.42 and 1.56, respectively, for the functioning and symptom subscales of the Global Assessment of Functioning Scale. In addition, the patients used significantly fewer antiepileptic, antipsychotic, anxiolytic and antidepressant medications at discharge than at treatment enrolment

  4. Ambient Ozone Exposure in Czech Forests: A GIS-Based Approach to Spatial Distribution Assessment

    Science.gov (United States)

    Hůnová, I.; Horálek, J.; Schreiberová, M.; Zapletal, M.

    2012-01-01

    Ambient ozone (O3) is an important phytotoxic pollutant, and detailed knowledge of its spatial distribution is becoming increasingly important. The aim of the paper is to compare different spatial interpolation techniques and to recommend the best approach for producing a reliable map for O3 with respect to its phytotoxic potential. For evaluation we used real-time ambient O3 concentrations measured by UV absorbance from 24 Czech rural sites in the 2007 and 2008 vegetation seasons. We considered eleven approaches for spatial interpolation used for the development of maps for mean vegetation season O3 concentrations and the AOT40F exposure index for forests. The uncertainty of maps was assessed by cross-validation analysis. The root mean square error (RMSE) of the map was used as a criterion. Our results indicate that the optimal interpolation approach is linear regression of O3 data on altitude with subsequent interpolation of its residuals by ordinary kriging. The relative uncertainty of the map of the O3 mean for the vegetation season is less than 10% using the optimal method for both explored years, which is a very acceptable value. In the case of AOT40F, however, the relative uncertainty of the map is notably worse, reaching nearly 20% in both examined years. PMID:22566757
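
    The cross-validation criterion used above (map RMSE from leaving stations out) is straightforward to prototype. The sketch below compares plain inverse-distance weighting with the regression-plus-residual-interpolation idea on synthetic station data; it substitutes simple IDW for the ordinary kriging used in the study and all station values are invented.

        # Leave-one-out RMSE for two simple interpolation schemes (synthetic stations).
        import numpy as np

        rng = np.random.default_rng(4)
        n = 24
        xy = rng.uniform(0, 100, size=(n, 2))            # station coordinates, km
        alt = rng.uniform(200, 900, n)                   # altitude, m
        o3 = 30 + 0.02 * alt + rng.normal(0, 3, n)       # synthetic seasonal means, ppb

        def idw(train_xy, train_val, target_xy, power=2.0):
            d = np.linalg.norm(train_xy - target_xy, axis=1)
            w = 1.0 / np.maximum(d, 1e-9) ** power
            return np.sum(w * train_val) / np.sum(w)

        def loo_rmse(use_altitude):
            errors = []
            for i in range(n):
                keep = np.arange(n) != i
                if use_altitude:
                    # Linear regression on altitude, then interpolation of the residuals.
                    slope, intercept = np.polyfit(alt[keep], o3[keep], 1)
                    resid = o3[keep] - (slope * alt[keep] + intercept)
                    pred = slope * alt[i] + intercept + idw(xy[keep], resid, xy[i])
                else:
                    pred = idw(xy[keep], o3[keep], xy[i])
                errors.append(pred - o3[i])
            return np.sqrt(np.mean(np.square(errors)))

        print("LOO RMSE, IDW only:            %.2f ppb" % loo_rmse(False))
        print("LOO RMSE, altitude + residuals: %.2f ppb" % loo_rmse(True))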

  5. A New Approach to Study Properties of Isolated Preadipocytes Following In Vivo Exposure to Hypoxia

    Science.gov (United States)

    Chowdhury, Helena H.; Velebit Markovic, Jelena; Radic, Natasa; Francic, Vito; Mekjavic, Igor B.; Eiken, Ola; Zorec, Robert

    2013-02-01

    In the present study we developed a novel approach to study the properties of isolated human preadipocytes from subjects exposed to conditions of hypoxia equivalent to an altitude of 4000 m. By using confocal microscopy we studied the expression of dipeptidyl peptidase 4 (DPP4) in preadipocytes from adult normal-weight males. DPP4 is a transmembrane glycoprotein with enzymatic activity that cleaves N-terminal dipeptides from a diverse range of substrates. The activity of DPP4 is implicated in immune response as well as in glucose homeostasis. To gain insights into the pathophysiological role of DPP4 in insulin resistance, we explored DPP4 expression during prolonged exposure to hypoxia, an experimental model of obesity onset. We used a rapid method to isolate cells from biopsies and immunolabelled them with antibodies. The cells were then prepared for analysis by confocal microscopy. The results show that prolonged exposure to a hypoxic environment appears to increase the expression of DPP4 on preadipocytes.

  6. Work characteristics and pesticide exposures among migrant agricultural families: a community-based research approach.

    Science.gov (United States)

    McCauley, L A; Lasarev, M R; Higgins, G; Rothlein, J; Muniz, J; Ebbert, C; Phillips, J

    2001-05-01

    There are few data on pesticide exposures of migrant Latino farmworker children, and access to this vulnerable population is often difficult. In this paper we describe a community-based approach to implement culturally appropriate research methods with a migrant Latino farmworker community in Oregon. Assessments were conducted in 96 farmworker homes and 24 grower homes in two agricultural communities in Oregon. Measurements included surveys of pesticide use and work protection practices and analyses of home-dust samples for pesticide residues of major organophosphates used in area crops. Results indicate that migrant farmworker housing is diverse, and the amounts and types of pesticide residues found in homes differ. Azinphos-methyl (AZM) was the pesticide residue found most often in both farmworker and grower homes. The median level of AZM in farmworker homes was 1.45 ppm compared to 1.64 ppm in the entry area of grower homes. The median level of AZM in the play areas of grower homes was 0.71 ppm. The levels of AZM in migrant farmworker homes were most associated with the distance from fields and the number of agricultural workers in the home. Although the levels of AZM in grower and farmworker homes were comparable in certain areas, the potential for disproportionate exposures occurs in areas of the homes where children are most likely to play. The relationship between home resident density, levels of pesticide residues, and play behaviors of children merits further attention.

  7. Bus drivers' exposure to bullying at work: an occupation-specific approach.

    Science.gov (United States)

    Glasø, Lars; Bele, Edvard; Nielsen, Morten Birkeland; Einarsen, Ståle

    2011-10-01

    The present study employs an occupation-specific approach to examine bus drivers' exposure to bullying and their trait anger, job engagement, job satisfaction and turnover intentions. A total of 1,023 bus drivers from a large public transport organization participated in the study. The findings show that bus driving can be a high risk occupation with regard to bullying, since 70% of the bus drivers had experienced one or more acts typical of bullying during the last six months. As many as 11% defined themselves as victims of bullying, 33% of whom (i.e. 3.6% of the total sample) see themselves as victims of frequent bullying. Colleagues were most frequently reported as perpetrators. Exposure to bullying was negatively related to job engagement and job satisfaction and positively related to turnover intentions. Job engagement and job satisfaction mediated the relationship between bullying and intention to leave, respectively. Trait anger had an interaction effect on the relationship between bullying and turnover intentions. This study indicates that workplace bullying has context-specific aspects that require increased use of context-specific policies and intervention methods. © 2011 The Authors. Scandinavian Journal of Psychology © 2011 The Scandinavian Psychological Associations.

  8. Reduction of occupational radiation exposure to staff - a quality management approach

    International Nuclear Information System (INIS)

    Crouch, J.

    2007-01-01

    Positron Emission Tomography (PET) imaging has expanded in Australia in recent years and is a recognised technique to diagnose cancer, neurological and heart disease. The high-energy gamma rays (511 keV) produced from the annihilation reaction in PET, and their increased penetration compared to Tc-99m (140 keV) emissions, result in a higher radiation exposure to staff compared to other types of imaging such as X-ray, CT (computed tomography) and MRI (magnetic resonance imaging) and general nuclear medicine. The project scope was to reduce the occupational radiation exposure to staff working within the imaging section of the WA PET/Cyclotron Service by utilising a continuous quality improvement process. According to the Australian Council on Healthcare Standards (ACHS), continual quality improvement is critical for healthcare in Australia (The EQUIP Guide, 2002, p. 1-1). The continuous quality improvement approach selected is appropriate for the organisation and the PET imaging process, based on the Evaluation and Quality Improvement Program (EQUIP), which is the recognised standard for the health care industry in Australia

  9. Cumulative health risk assessment: integrated approaches for multiple contaminants, exposures, and effects

    International Nuclear Information System (INIS)

    Rice, Glenn; Teuschler, Linda; MacDonel, Margaret; Butler, Jim; Finster, Molly; Hertzberg, Rick; Harou, Lynne

    2007-01-01

    Available in abstract form only. Full text of publication follows: As information about environmental contamination has increased in recent years, so has public interest in the combined effects of multiple contaminants. This interest has been highlighted by recent tragedies such as the World Trade Center disaster and hurricane Katrina. In fact, assessing multiple contaminants, exposures, and effects has long been an issue for contaminated sites, including U.S. Department of Energy (DOE) legacy waste sites. Local citizens have explicitly asked the federal government to account for cumulative risks, with contaminants moving offsite via groundwater flow, surface runoff, and air dispersal being a common emphasis. Multiple exposures range from ingestion and inhalation to dermal absorption and external gamma irradiation. Three types of concerns can lead to cumulative assessments: (1) specific sources or releases - e.g., industrial facilities or accidental discharges; (2) contaminant levels - in environmental media or human tissues; and (3) elevated rates of disease - e.g., asthma or cancer. The specific initiator frames the assessment strategy, including a determination of appropriate models to be used. Approaches are being developed to better integrate a variety of data, extending from environmental to internal co-location of contaminants and combined effects, to support more practical assessments of cumulative health risks. (authors)

  10. Experiencing a probabilistic approach to clarify and disclose uncertainties when setting occupational exposure limits.

    Science.gov (United States)

    Vernez, David; Fraize-Frontier, Sandrine; Vincent, Raymond; Binet, Stéphane; Rousselle, Christophe

    2018-03-15

    Assessment factors (AFs) are commonly used for deriving reference concentrations for chemicals. These factors take into account variabilities as well as uncertainties in the dataset, such as inter-species and intra-species variabilities, exposure duration extrapolation, or extrapolation from the lowest-observed-adverse-effect level (LOAEL) to the no-observed-adverse-effect level (NOAEL). In a deterministic approach, the value of an AF is the result of a debate among experts, and often a conservative value is used as a default choice. A probabilistic framework to better take into account uncertainties and/or variability when setting occupational exposure limits (OELs) is presented and discussed in this paper. Each AF is considered as a random variable with a probabilistic distribution. A short literature review was conducted before setting default distribution ranges and shapes for each AF commonly used. A random sampling, using Monte Carlo techniques, is then used for propagating the identified uncertainties and computing the final OEL distribution. Starting from the broad default distributions obtained, experts narrow them to the most likely range, according to the scientific knowledge available for a specific chemical. Introducing distributions rather than single deterministic values allows disclosing and clarifying the variability and/or uncertainties inherent in the OEL construction process. This probabilistic approach yields quantitative insight into both the possible range and the relative likelihood of values for model outputs. It thereby provides better support in decision-making and improves transparency. This work is available under an Open Access model and licensed under a CC BY-NC 3.0 PL license.
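
    The probabilistic construction described above, propagating assessment-factor distributions by Monte Carlo to obtain an OEL distribution, can be sketched as follows. The point of departure and all distribution choices are illustrative assumptions, not the default distributions proposed by the authors.

        # Monte Carlo propagation of assessment factors to an OEL distribution.
        import numpy as np

        rng = np.random.default_rng(5)
        n = 100_000
        pod = 50.0   # point of departure, e.g. a NOAEL in mg/m3 (illustrative)

        # Each assessment factor treated as a random variable (illustrative distributions).
        af_interspecies = rng.lognormal(mean=np.log(2.5), sigma=0.4, size=n)
        af_intraspecies = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=n)
        af_loael_to_noael = rng.uniform(1.0, 3.0, size=n)

        oel = pod / (af_interspecies * af_intraspecies * af_loael_to_noael)
        p5, p50, p95 = np.percentile(oel, [5, 50, 95])
        print(f"OEL distribution: median {p50:.1f} mg/m3, 90% interval [{p5:.1f}, {p95:.1f}]")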

  11. Comparative risk assessment of carcinogens in alcoholic beverages using the margin of exposure approach.

    Science.gov (United States)

    Lachenmeier, Dirk W; Przybylski, Maria C; Rehm, Jürgen

    2012-09-15

    Alcoholic beverages have been classified as carcinogenic to humans. As alcoholic beverages are multicomponent mixtures containing several carcinogenic compounds, a quantitative approach is necessary to compare the risks. Fifteen known and suspected human carcinogens (acetaldehyde, acrylamide, aflatoxins, arsenic, benzene, cadmium, ethanol, ethyl carbamate, formaldehyde, furan, lead, 4-methylimidazole, N-nitrosodimethylamine, ochratoxin A and safrole) occurring in alcoholic beverages were identified based on monograph reviews by the International Agency for Research on Cancer. The margin of exposure (MOE) approach was used for comparative risk assessment. MOE compares a toxicological threshold with the exposure. MOEs above 10,000 are judged as low priority for risk management action. MOEs were calculated for different drinking scenarios (low risk and heavy drinking) and different levels of contamination for four beverage groups (beer, wine, spirits and unrecorded alcohol). The lowest MOEs were found for ethanol (3.1 for low risk and 0.8 for heavy drinking). Inorganic lead and arsenic have average MOEs between 10 and 300, followed by acetaldehyde, cadmium and ethyl carbamate between 1,000 and 10,000. All other compounds had average MOEs above 10,000 independent of beverage type. Ethanol was identified as the most important carcinogen in alcoholic beverages, with clear dose response. Some other compounds (lead, arsenic, ethyl carbamate, acetaldehyde) may pose risks below thresholds normally tolerated for food contaminants, but from a cost-effectiveness point of view, the focus should be on reducing alcohol consumption in general rather than on mitigative measures for some contaminants that contribute only to a limited extent (if at all) to the total health risk. Copyright © 2012 UICC.
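
    The MOE metric itself is a simple ratio of a toxicological reference point to the estimated exposure, compared against the 10,000 screening threshold mentioned above. The sketch below reproduces the style of calculation with purely illustrative numbers; they are not the published values for any specific compound or beverage.

        # Margin of exposure: reference point divided by estimated daily exposure.
        bmdl = 0.85            # benchmark dose lower bound, mg/kg bw/day (illustrative)
        exposure = 0.00005     # estimated intake, mg/kg bw/day (illustrative)

        moe = bmdl / exposure
        print(f"MOE = {moe:.0f}")
        if moe >= 10_000:
            print("Low priority for risk management action (MOE >= 10,000).")
        else:
            print("Higher priority: MOE is below the 10,000 screening threshold.")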

  12. Two-tier Haddon matrix approach to fault analysis of accidents and cybernetic search for relationship to effect operational control: a case study at a large construction site.

    Science.gov (United States)

    Mazumdar, Atmadeep; Sen, Krishna Nirmalya; Lahiri, Balendra Nath

    2007-01-01

    The Haddon matrix is a potential tool for recognizing hazards in any operating engineering system. This paper presents a case study of operational hazards at a large construction site. The fishbone structure helps to visualize and relate the chain of events which led to the failure of the system. The two-tier Haddon matrix approach helps to analyze the problem and subsequently prescribes preventive steps. The cybernetic approach has been undertaken to establish the relationship among event variables and to identify the ones with the most potential. The most potent event variables in this case study, identified using cybernetic concepts such as control responsiveness and controllability salience, are (a) uncontrolled swing of the sheet contributing to energy, (b) slippage of the sheet from the anchor, (c) restricted longitudinal and transverse swing or rotation about the suspension, (d) guilt or uncertainty of the crane driver, and (e) safe working practices and environment.

  13. Unified approach to numerical transfer matrix methods for disordered systems: applications to mixed crystals and to elasticity percolation

    International Nuclear Information System (INIS)

    Lemieux, M.A.; Breton, P.; Tremblay, A.M.S.

    1985-01-01

    It is shown that the Negative Eigenvalue Theorem and transfer matrix methods may be considered within a unified framework and generalized to compute projected densities of states or, more generally, any linear combination of matrix elements of the inverse of large symmetric random matrices. As examples of applications, extensive simulations for one- and two-mode behaviour in the Raman spectrum of one-dimensional mixed crystals and a finite-size analysis of critical exponents for the central force percolation universality class are presented
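
    As a concrete, much simpler relative of the transfer-matrix and Negative Eigenvalue Theorem calculations mentioned above, the vibrational density of states of a disordered one-dimensional chain (the mixed-crystal application cited) can also be obtained by direct diagonalisation of the dynamical matrix. The masses, force constant and chain length below are arbitrary illustrative values, and direct diagonalisation is used here only for brevity.

        # Density of states of a 1D mixed-mass harmonic chain by direct diagonalisation.
        import numpy as np

        rng = np.random.default_rng(6)
        n, k = 400, 1.0                                  # sites and force constant (arbitrary)
        mass = np.where(rng.random(n) < 0.5, 1.0, 2.0)   # binary mass disorder

        # Mass-weighted dynamical matrix: 2k/m_i on the diagonal,
        # -k/sqrt(m_i m_j) between nearest neighbours (periodic boundaries).
        D = np.zeros((n, n))
        for i in range(n):
            D[i, i] = 2 * k / mass[i]
            j = (i + 1) % n
            D[i, j] = D[j, i] = -k / np.sqrt(mass[i] * mass[j])

        omega2 = np.linalg.eigvalsh(D)                   # squared vibrational frequencies
        omega = np.sqrt(np.clip(omega2, 0, None))
        hist, edges = np.histogram(omega, bins=40)
        print("density of states (mode counts per frequency bin):")
        print(hist)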

  14. Endoscopically Assisted Drilling, Exposure of the Fundus through a Presigmoid Retrolabyrinthine Approach: A Cadaveric Feasibility Study.

    Science.gov (United States)

    Muelleman, Thomas; Shew, Matthew; Alvi, Sameer; Shah, Kushal; Staecker, Hinrich; Chamoun, Roukouz; Lin, James

    2018-01-01

    The presigmoid retrolabyrinthine approach to the cerebellopontine angle is traditionally described as not providing access to the internal auditory canal (IAC). We aimed to evaluate the extent of the IAC that could be exposed with endoscopically assisted drilling and to measure the percentage of the IAC that could be visualized with the microscope and various endoscopes after drilling had been completed. Presigmoid retrolabyrinthine approaches were performed bilaterally on 4 fresh cadaveric heads. We performed endoscopically assisted drilling to expose the fundus of the IAC, which resulted in exposure of the entire IAC in 8 of 8 temporal bone specimens. The microscope afforded a mean view of 83% (n = 8) of the IAC. The 0°, 30°, 45°, and 70° endoscopes each afforded a view of 100% of the IAC in 8 of 8 temporal bone specimens. In conclusion, endoscopic drilling of the IAC can provide an extradural means of exposing the entire length of the IAC while preserving the labyrinth.

  15. Optical excitation and electron relaxation dynamics at semiconductor surfaces: a combined approach of density functional and density matrix theory applied to the silicon (001) surface

    Energy Technology Data Exchange (ETDEWEB)

    Buecking, N

    2007-11-05

    In this work a new theoretical formalism is introduced in order to simulate numerically the phonon-induced relaxation of a non-equilibrium distribution to equilibrium at a semiconductor surface. The non-equilibrium distribution is created by an optical excitation. The approach in this thesis is to link two conventional but proven methods into a new, more global description: while semiconductor surfaces can be investigated accurately by density-functional theory, the dynamical processes in semiconductor heterostructures are successfully described by density matrix theory. In this work, the parameters for density-matrix theory are determined from the results of density-functional calculations. This work is organized in two parts. In Part I, the general fundamentals of the theory are elaborated, covering the fundamentals of canonical quantization as well as density-functional theory and density-matrix theory in 2nd-order Born approximation. While the formalism of density functional theory for structure investigation has been established for a long time and many different codes exist, the requirements of the density-matrix formalism concerning the geometry and the number of implemented bands exceed the usual capabilities of existing codes in this field. Special attention is therefore devoted to the development of extensions to existing formulations of this theory, where geometrical and fundamental symmetries of the structure and the equations are used. In Part II, the newly developed formalism is applied to a silicon (001) surface in a 2 x 1 reconstruction. As a first step, density-functional calculations using the LDA functional are completed, from which the Kohn-Sham wave functions and eigenvalues are used to calculate interaction matrix elements for the electron-phonon coupling and the optical excitation. These matrix elements are determined for the optical transitions from valence to conduction bands and for electron-phonon processes inside the

  16. Legislating for occupational exposure to sources of natural radiation- the UK approach

    International Nuclear Information System (INIS)

    Higham, N.; Walker, S.; Thomas, G.

    2004-01-01

    Title VII of EC Directive 96/29/Euratom (the 1996 BSS Directive) for the first time requires Member States to take action in relation to work activities within which the presence of natural radiation sources leads to a significant increase in the exposure of workers or members of the public which cannot be disregarded from the radiation protection point of view. The UK in fact has had legal requirements relating to occupational exposure to natural radiation sources since 1985, in the Ionising Radiations Regulations 1985, made to implement the bulk of the provisions of the previous BSS Directive (80/836/Euratom, as amended by 84/467/Euratom). The Ionising Radiations Regulations 1999, which implement the worker protection requirements of the 1996 Euratom BSS Directive, include similar provisions. The definition of radioactive substance includes any substance which contains one or more radionuclides whose activity cannot be disregarded for the purposes of radiation protection. This means that some low specific activity ores and sands fall within this definition and are therefore subject to relevant requirements of the Regulations. Further advice is given on circumstances in which this may apply. Radon is covered more explicitly by applying the regulations to any work carried out in an atmosphere containing radon-222 gas at a concentration in air, averaged over any 24 hour period, exceeding 400 Bq m^-3, except where the concentration of the short-lived daughters of radon-222 in air, averaged over any 8 hour working period, does not exceed 6.24 x 10^-7 J m^-3. The Health and Safety Executive pursues a policy of raising awareness of the potential for exposure to radon in the workplace and targeting those employers likely to have a radon problem (based on the use of existing information on homes). The regulatory approach has been to seek remedial building measures so that the workplace is removed from control. HSE is able to offer advice about getting their workplace tested and

  17. Exposure to pesticides of fruit growers and effects on reproduction : an epidemiological approach

    NARCIS (Netherlands)

    Cock, de J.S.

    1995-01-01

    In this thesis the exposure to pesticides of fruit growers in The Netherlands was studied as well as its relation to reproductive health effects. The most commonly used fungicide, captan, was used as a marker for exposure. Several exposure studies were carried out during application of

  18. Linear matrix inequality approach to exponential synchronization of a class of chaotic neural networks with time-varying delays

    Science.gov (United States)

    Wu, Wei; Cui, Bao-Tong

    2007-07-01

    In this paper, a synchronization scheme for a class of chaotic neural networks with time-varying delays is presented. This class of chaotic neural networks covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks, and bidirectional associative memory networks. The obtained criteria are expressed in terms of linear matrix inequalities, thus they can be efficiently verified. A comparison between our results and the previous results shows that our results are less restrictive.

  19. Matrix theory

    CERN Document Server

    Franklin, Joel N

    2003-01-01

    Mathematically rigorous introduction covers vector and matrix norms, the condition-number of a matrix, positive and irreducible matrices, much more. Only elementary algebra and calculus required. Includes problem-solving exercises. 1968 edition.

  20. Modelling the diffusion-available pore space of an unaltered granitic rock matrix using a micro-DFN approach

    Science.gov (United States)

    Svensson, Urban; Löfgren, Martin; Trinchero, Paolo; Selroos, Jan-Olof

    2018-04-01

    In sparsely fractured rock, the ubiquitous heterogeneity of the matrix, which has been observed in different laboratory and in situ experiments, has been shown to have a significant influence on retardation mechanisms that are of importance for the safety of deep geological repositories for nuclear waste. Here, we propose a conceptualisation of a typical heterogeneous granitic rock matrix based on micro-Discrete Fracture Networks (micro-DFN). Different sets of fractures are used to represent grain-boundary pores as well as micro fractures that transect different mineral grains. The micro-DFN model offers a great flexibility in the way inter- and intra-granular space is represented as the different parameters that characterise each fracture set can be fine tuned to represent samples of different characteristics. Here, the parameters of the model have been calibrated against experimental observations from granitic rock samples taken at Forsmark (Sweden) and different variant cases have been used to illustrate how the model can be tied to rock samples with different attributes. Numerical through-diffusion simulations have been carried out to infer the bulk properties of the model as well as to compare the computed mass flux with the experimental data from an analogous laboratory experiment. The general good agreement between the model results and the experimental observations shows that the model presented here is a reliable tool for the understanding of retardation mechanisms occurring at the mm-scale in the matrix.

  1. Sol-gel approach to the novel organic-inorganic hybrid composite films with ternary europium complex covalently bonded with silica matrix

    International Nuclear Information System (INIS)

    Dong Dewen; Yang Yongsheng; Jiang Bingzheng

    2006-01-01

    Novel organic-inorganic hybrid composite films with a ternary lanthanide complex covalently bonded to the silica matrix were prepared in situ via co-ordination of N-(3-propyltriethoxysilane)-4-carboxyphthalimide (TAT) and 1,10-phenanthroline (Phen) with the europium ion (Eu3+) during a sol-gel approach, and characterized by means of a spectrofluorimeter, a phosphorimeter and an infrared spectrophotometer (FTIR). The resulting transparent films showed improved photophysical properties, i.e. increased luminescence intensity and longer luminescence lifetime, compared with the corresponding binary composite films without Phen. All the results revealed that the intense luminescence of the composite film was attributed to the efficient energy transfer from the ligands, especially Phen, to the chelated Eu3+ and to the reduced non-radiative losses provided by the rigid silica matrix and 'site isolation'

  2. Radiological monitoring of workers exposure - White book. A multidisciplinary collective approach for a shared vision

    International Nuclear Information System (INIS)

    Barbey, Pierre; Gauron, Christine; LAHAYE, Thierry; Le-Sourd-Thebaud, Viviane; Godet, Jean-Luc; Bardelay, Chantal; PETIT, Sylvain; Vial, Eric; Vallet, Jeremie; Michel Dit Laboelle, Nicolas; Samain, Jean-Paul; Roy, Catherine; Gonin, Michele; Lallier, Michel

    2015-06-01

    access to qualified people in radiation protection (PCR), in order to promote their reactivity and enhance their role in risk prevention; this involves redefining its legal status leading to increased responsibility; - An opening toward more relevant, appropriate radiological exposure monitoring methods, ensuring their operational, applicable and manageable nature; - The implementation of sector guides - developed by the relevant radiation protection stakeholders and approved by the Authorities - defining the means of achieving the general objectives. This graduated approach is part of the general simplification process led by the French Government. Its regulatory declination must enable stakeholders to identify implementation ways that do not call into question the employer's primary responsibility in the occupational risk prevention. (authors)

  3. In situ exposures using caged organisms: a multi-compartment approach to detect aquatic toxicity and bioaccumulation

    International Nuclear Information System (INIS)

    Burton, G. Allen; Greenberg, Marc S.; Rowland, Carolyn D.; Irvine, Cameron A.; Lavoie, Daniel R.; Brooker, John A.; Moore, Laurie; Raymer, Delia F.N.; McWilliam, Ruth A.

    2005-01-01

    An in situ toxicity and bioaccumulation assessment approach is described to assess stressor exposure and effects in surface waters (low and high flow), the sediment-water interface, surficial sediments and pore waters (including groundwater upwellings). This approach can be used for exposing species representing major functional and taxonomic groups. Pimephales promelas, Daphnia magna, Ceriodaphnia dubia, Hyalella azteca, Hyalella sp., Chironomus tentans, Lumbriculus variegatus, Hydra attenuata, Hexagenia sp. and Baetis tibialis were successfully used to measure effects on survival, growth, feeding, and/or uptake. Stressors identified included chemical toxicants, suspended solids, photo-induced toxicity, indigenous predators, and flow. Responses varied between laboratory and in situ exposures in many cases and were attributed to differing exposure dynamics and sample-processing artifacts. These in situ exposure approaches provide unique assessment information that is complementary to traditional laboratory-based toxicity and bioaccumulation testing and reduce the uncertainties of extrapolating from laboratory to field responses. - In situ exposures provide unique information that is complementary to traditional lab-based toxicity results

  4. Use of the Materials Genome Initiative (MGI) approach in the design of improved-performance fiber-reinforced SiC/SiC ceramic-matrix composites (CMCs)

    Directory of Open Access Journals (Sweden)

    Jennifer S. Snipes

    2016-07-01

    New materials are traditionally developed using costly and time-consuming trial-and-error experimental efforts. This is followed by an even lengthier material-certification process. Consequently, it takes 10 to 20 years before a newly-discovered material is commercially employed. An alternative approach to the development of new materials is the so-called materials-by-design approach within which a material is treated as a complex hierarchical system, and its design and optimization is carried out by employing computer-aided engineering analyses, predictive tools and available material databases. In the present work, the materials-by-design approach is utilized to design a grade of fiber-reinforced (FR) SiC/SiC ceramic matrix composites (CMCs), the type of materials which are currently being used in stationary components, and are considered for use in rotating components, of the hot sections of gas-turbine engines. Towards that end, a number of mathematical functions and numerical models are developed which relate the CMC constituents' (fibers, fiber coating and matrix) microstructure and their properties to the properties and performance of the CMC as a whole. To validate the newly-developed materials-by-design approach, comparisons are made between experimentally measured and computationally predicted selected CMC mechanical properties. Then an optimization procedure is employed to determine the chemical makeup and processing routes for the CMC constituents so that the selected mechanical properties of the CMCs are increased to a preset target level.

  5. An Integrated Approach to Assess Exposure and Health-Risk from Polycyclic Aromatic Hydrocarbons (PAHs in a Fastener Manufacturing Industry

    Directory of Open Access Journals (Sweden)

    Hsin-I Hsu

    2014-09-01

    Full Text Available An integrated approach was developed to assess exposure and health-risk from polycyclic aromatic hydrocarbons (PAHs contained in oil mists in a fastener manufacturing industry. One previously developed model and one new model were adopted for predicting oil mist exposure concentrations emitted from metal work fluid (MWF and PAHs contained in MWF by using the fastener production rate (Pr and cumulative fastener production rate (CPr as predictors, respectively. By applying the annual Pr and CPr records to the above two models, long-term workplace PAH exposure concentrations were predicted. In addition, true exposure data was also collected from the field. The predicted and measured concentrations respectively served as the prior and likelihood distributions in the Bayesian decision analysis (BDA, and the resultant posterior distributions were used to determine the long-term exposure and health-risks posed on workers. Results show that long term exposures to PAHs would result in a 3.1%, 96.7%, and 73.4% chance of exceeding the PEL-TWA (0.2 mg/m3, action level (0.1 mg/m3, and acceptable health risk (10−3, respectively. In conclusion, preventive measures should be taken immediately to reduce workers’ PAH exposures.

  6. Modeling approaches for characterizing and evaluating environmental exposure to engineered nanomaterials in support of risk-based decision making.

    Science.gov (United States)

    Hendren, Christine Ogilvie; Lowry, Michael; Grieger, Khara D; Money, Eric S; Johnston, John M; Wiesner, Mark R; Beaulieu, Stephen M

    2013-02-05

    As the use of engineered nanomaterials becomes more prevalent, the likelihood of unintended exposure to these materials also increases. Given the current scarcity of experimental data regarding fate, transport, and bioavailability, determining potential environmental exposure to these materials requires an in-depth analysis of modeling techniques that can be used in both the near and long term. Here, we provide a critical review of traditional and emerging exposure modeling approaches to highlight the challenges that scientists and decision-makers face when developing environmental exposure and risk assessments for nanomaterials. We find that accounting for nanospecific properties, overcoming data gaps, realizing model limitations, and handling uncertainty are key to developing informative and reliable environmental exposure and risk assessments for engineered nanomaterials. We find methods suited to recognizing and addressing significant uncertainty to be most appropriate for near-term environmental exposure modeling, given the current state of information and the current insufficiency of established deterministic models to address environmental exposure to engineered nanomaterials.

  7. Electronic Cigarettes and Indoor Air Quality: A Simple Approach to Modeling Potential Bystander Exposures to Nicotine

    Science.gov (United States)

    Colard, Stéphane; O’Connell, Grant; Verron, Thomas; Cahours, Xavier; Pritchard, John D.

    2014-01-01

    There has been rapid growth in the use of electronic cigarettes (“vaping”) in Europe, North America and elsewhere. With such increased prevalence, there is currently a debate on whether the aerosol exhaled following the use of e-cigarettes has implications for the quality of air breathed by bystanders. Conducting chemical analysis of the indoor environment can be costly and resource intensive, limiting the number of studies which can be conducted. However, this can be modelled reasonably accurately based on empirical emissions data and using some basic assumptions. Here, we present a simplified model, based on physical principles, which considers aerosol propagation, dilution and extraction to determine the potential contribution of a single puff from an e-cigarette to indoor air. From this, it was then possible to simulate the cumulative effect of vaping over time. The model was applied to a virtual, but plausible, scenario considering an e-cigarette user and a non-user working in the same office space. The model was also used to reproduce published experimental studies and showed good agreement with the published values of indoor air nicotine concentration. With some additional refinements, such an approach may be a cost-effective and rapid way of assessing the potential exposure of bystanders to exhaled e-cigarette aerosol constituents. PMID:25547398
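
    As a rough illustration of the kind of simplified, physics-based model described here, the sketch below treats the office as a single well-mixed zone: each puff releases a fixed exhaled nicotine mass that is diluted into the room volume and removed by ventilation, and the contributions of successive puffs are superposed over time. The room volume, air-exchange rate, per-puff mass and puffing pattern are assumptions for illustration, not values from the study.

        import numpy as np

        V = 50.0                                 # office volume, m3 (assumed)
        ach = 2.0                                # air changes per hour (assumed ventilation rate)
        m_puff = 0.005                           # exhaled nicotine per puff, mg (assumed)
        puff_times = np.arange(0.0, 8 * 60, 30)  # one puff every 30 min over a working day, minutes

        t = np.arange(0.0, 10 * 60, 1.0)         # simulation grid, minutes
        k = ach / 60.0                           # first-order removal rate per minute
        conc = np.zeros_like(t)                  # nicotine concentration, mg/m3

        # Superpose the exponentially decaying, well-mixed contribution of each puff
        for tp in puff_times:
            later = t >= tp
            conc[later] += (m_puff / V) * np.exp(-k * (t[later] - tp))

        print(f"peak concentration:        {conc.max() * 1000:.3f} ug/m3")
        print(f"8-h time-weighted average: {conc[t <= 480].mean() * 1000:.3f} ug/m3")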

  8. Influence of the temperature and oxygen exposure in red Port wine: A kinetic approach.

    Science.gov (United States)

    Oliveira, Carla Maria; Barros, António S; Silva Ferreira, António César; Silva, Artur M S

    2015-09-01

    Although phenolics are recognized to be related to health benefits by limiting lipid oxidation, in wine they are the primary substrates for oxidation, resulting in quinone by-products with the participation of transition metal ions. Nevertheless, high quality Port wines require a period of aging in either bottle or barrels. During this time, a modification of sensory properties of wines, such as the decrease of astringency or the stabilization of color, is attributed to phenolic compounds, mainly anthocyanins and derived pigments. The present work aims to illustrate the oxidation of red Port wine, based on its phenolic composition, under the effect of both thermal and oxygen exposures. A kinetic approach to anthocyanin degradation was also carried out. For this purpose, a forced red Port wine aging protocol was performed at four different storage temperatures (20, 30, 35 and 40°C) and two adjusted oxygen saturation levels: no oxygen addition (treatment I) and oxygen addition (treatment II). Three hydroxycinnamic esters, three hydroxycinnamic acids, three hydroxybenzoic acids, two flavan-3-ols, and six anthocyanins were quantitated weekly during 63 days, along with oxygen consumption. The most relevant phenolic oxidation markers were anthocyanins and catechin-type flavonoids, which showed the largest decreases during the thermal and oxidative treatment of red Port wine. Both temperature and oxygen treatments affected the rate of phenolic degradation; in addition, temperature seems to have the stronger influence on the degradation kinetics of the phenolics. Copyright © 2015 Elsevier Ltd. All rights reserved.
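
    The kinetic treatment mentioned above can be illustrated with a minimal first-order/Arrhenius fit. The weekly concentrations below are synthetic placeholders, and the fitted rate constants and activation energy are for illustration only, not results from the study.

        import numpy as np

        R = 8.314                                    # gas constant, J mol-1 K-1
        weeks = np.arange(0, 10, dtype=float)
        temps_C = np.array([20.0, 30.0, 35.0, 40.0])
        true_k = np.array([0.03, 0.06, 0.09, 0.13])  # per week, synthetic rate constants

        # Synthetic weekly anthocyanin concentrations following first-order decay
        data = {T: 100.0 * np.exp(-k * weeks) for T, k in zip(temps_C, true_k)}

        # Estimate k at each temperature from the slope of ln(C/C0) versus time
        k_fit = np.array([-np.polyfit(weeks, np.log(data[T] / data[T][0]), 1)[0] for T in temps_C])

        # Arrhenius plot: ln k = ln A - Ea / (R T)
        slope, intercept = np.polyfit(1.0 / (temps_C + 273.15), np.log(k_fit), 1)
        print("fitted rate constants (1/week):", np.round(k_fit, 3))
        print(f"apparent activation energy Ea = {-slope * R / 1000:.0f} kJ/mol")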

  9. Electronic Cigarettes and Indoor Air Quality: A Simple Approach to Modeling Potential Bystander Exposures to Nicotine

    Directory of Open Access Journals (Sweden)

    Stéphane Colard

    2014-12-01

    Full Text Available There has been rapid growth in the use of electronic cigarettes (“vaping”) in Europe, North America and elsewhere. With such increased prevalence, there is currently a debate on whether the aerosol exhaled following the use of e-cigarettes has implications for the quality of air breathed by bystanders. Conducting chemical analysis of the indoor environment can be costly and resource intensive, limiting the number of studies which can be conducted. However, this can be modelled reasonably accurately based on empirical emissions data and using some basic assumptions. Here, we present a simplified model, based on physical principles, which considers aerosol propagation, dilution and extraction to determine the potential contribution of a single puff from an e-cigarette to indoor air. From this, it was then possible to simulate the cumulative effect of vaping over time. The model was applied to a virtual, but plausible, scenario considering an e-cigarette user and a non-user working in the same office space. The model was also used to reproduce published experimental studies and showed good agreement with the published values of indoor air nicotine concentration. With some additional refinements, such an approach may be a cost-effective and rapid way of assessing the potential exposure of bystanders to exhaled e-cigarette aerosol constituents.

  10. Measuring combined exposure to environmental pressures in urban areas: an air quality and noise pollution assessment approach.

    Science.gov (United States)

    Vlachokostas, Ch; Achillas, Ch; Michailidou, A V; Moussiopoulos, Nu

    2012-02-01

    This study presents a methodological scheme developed to provide a combined air and noise pollution exposure assessment based on measurements from personal portable monitors. Air and noise pollution, considered here in a co-exposure approach, represent a significant environmental hazard to public health. The methodology is demonstrated for the city of Thessaloniki, Greece. The results of an extensive field campaign are presented and the variations in personal exposure between modes of transport, routes, streets and transport microenvironments are evaluated. Air pollution and noise measurements were performed simultaneously along several commuting routes, during the morning and evening rush hours. Combined exposure to environmental pollutants is highlighted based on the Combined Exposure Factor (CEF) and the Combined Dose and Exposure Factor (CDEF); the CDEF takes into account the potential relative uptake of each pollutant by considering the physical activities of each citizen. Rather than viewing environmental pollutants separately for planning and environmental sustainability considerations, the possibility of an easy-to-comprehend co-exposure approach based on these two indices is demonstrated. Furthermore, they provide for the first time a combined exposure assessment to these environmental pollutants for Thessaloniki and, in this sense, could be of importance for local public authorities and decision makers. A considerable environmental burden for the citizens of Thessaloniki, especially for VOCs and noise pollution levels, is observed. The material herein points out the importance of measuring public health stressors and the necessity of considering urban environmental pollution in a holistic way. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Photo double-ionization of helium: a new approach combining R matrix and semiclassical techniques in an hyperspherical framework

    International Nuclear Information System (INIS)

    Malegat, L.; Kazansky, A.; Selles, P.

    1999-01-01

    We introduce a new method for computing photo double ionization (PDI) cross sections for two-electron atoms. It is formulated in terms of the hyperspherical radius R and relies upon a combination of R-matrix techniques in the inner region R ≤ R0 with a semiclassical approximation for the R motion in the outer region. We present a first application of this method to the PDI of He within a model of reduced dimensionality where r1 = r2. It demonstrates the validity of our numerical scheme and provides a first quantitative estimate of the energy domain of validity of the Wannier mechanism. (orig.)

  12. Workshop. Assessment of Occupational and Environmental Exposure to Genotoxic Substances - a Methodological Approach

    International Nuclear Information System (INIS)

    2000-01-01

    During the workshop, various works concerning radiobiology and environmental and occupational medicine were presented. Exposure to genotoxic and carcinogenic agents, such as ionizing radiation, aromatic hydrocarbons, herbicides and pesticides, was investigated.

  13. Risks for the development of outcomes related to occupational allergies: an application of the asthma-specific job exposure matrix compared with self-reports and investigator scores on job-training-related exposure.

    NARCIS (Netherlands)

    Suarthana, E.; Heederik, D.J.J.; Ghezzo, H.; Malo, J.L.; Kennedy, S.M.; Gautrin, D.

    2009-01-01

    BACKGROUND AND AIM: Risks for development of occupational sensitisation, bronchial hyper-responsiveness, rhinoconjunctival and chest symptoms at work associated with continued exposure to high molecular weight (HMW) allergens were estimated with three exposure assessment methods. METHODS: A Cox

  14. New approach to nonleptonic weak interactions. I. Derivation of asymptotic selection rules for the two-particle weak ground-state-hadron matrix elements

    International Nuclear Information System (INIS)

    Tanuma, T.; Oneda, S.; Terasaki, K.

    1984-01-01

    A new approach to nonleptonic weak interactions is presented. It is argued that the presence and violation of the |ΔI| = 1/2 rule as well as those of the quark-line selection rules can be explained in a unified way, along with other fundamental physical quantities [such as the value of g_A(0) and the smallness of the isoscalar nucleon magnetic moments], in terms of a single dynamical asymptotic ansatz imposed at the level of observable hadrons. The ansatz prescribes a way in which asymptotic flavor SU(N) symmetry is secured levelwise for a certain class of chiral algebras in the standard QCD model. It yields severe asymptotic constraints upon the two-particle hadronic matrix elements of nonleptonic weak Hamiltonians as well as QCD currents and their charges. It produces for weak matrix elements the asymptotic |ΔI| = 1/2 rule and its charm counterpart for the ground-state hadrons, while for strong matrix elements quark-line-like approximate selection rules. However, for the less important weak two-particle vertices involving higher excited states, the |ΔI| = 1/2 rule and its charm counterpart are in general violated, providing us with an explicit source of the violation of these selection rules in physical processes

  15. Model of a tunneling current in a p-n junction based on armchair graphene nanoribbons - an Airy function approach and a transfer matrix method

    International Nuclear Information System (INIS)

    Suhendi, Endi; Syariati, Rifki; Noor, Fatimah A.; Khairurrijal; Kurniasih, Neny

    2014-01-01

    We modeled a tunneling current in a p-n junction based on armchair graphene nanoribbons (AGNRs) by using an Airy function approach (AFA) and a transfer matrix method (TMM). We used β-type AGNRs, in which the band gap energy and electron effective mass depend on the ribbon width, as given by the extended Huckel theory. It was shown that the tunneling currents evaluated by employing the AFA are the same as those obtained under the TMM. Moreover, the calculated tunneling current was proportional to the bias voltage and inversely proportional to the temperature
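
    As a generic illustration of the transfer matrix method referenced here (not the AGNR model of the paper), the sketch below computes the transmission probability through a one-dimensional stack of piecewise-constant potential layers with position-dependent effective mass, using interface and propagation matrices; the layer stack, masses and energies are invented for the example.

        import numpy as np

        HBAR = 1.054571817e-34       # J s
        ME = 9.1093837015e-31        # free-electron mass, kg
        EV = 1.602176634e-19         # J per eV

        def transmission(E_eV, layers):
            """layers: list of (V_eV, relative_effective_mass, width_m); first/last are semi-infinite leads."""
            E = E_eV * EV
            k = [np.sqrt(2.0 * m * ME * (E - V * EV) + 0j) / HBAR for V, m, _ in layers]
            M = np.eye(2, dtype=complex)
            for j in range(len(layers) - 1):
                if j > 0:                                   # propagate across interior layer j
                    phase = k[j] * layers[j][2]
                    M = M @ np.diag([np.exp(-1j * phase), np.exp(1j * phase)])
                # interface j -> j+1, matching psi and psi'/m (BenDaniel-Duke condition)
                rho = (k[j + 1] * layers[j][1]) / (k[j] * layers[j + 1][1])
                M = M @ (0.5 * np.array([[1 + rho, 1 - rho], [1 - rho, 1 + rho]]))
            t = 1.0 / M[0, 0]
            flux = (k[-1].real * layers[0][1]) / (k[0].real * layers[-1][1])
            return flux * abs(t) ** 2

        # Single 0.3 eV, 2 nm barrier between identical leads (all numbers illustrative)
        stack = [(0.0, 0.1, 0.0), (0.3, 0.2, 2e-9), (0.0, 0.1, 0.0)]
        for E in (0.05, 0.15, 0.25, 0.35):
            print(f"E = {E:.2f} eV  ->  T = {transmission(E, stack):.4f}")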

  16. Statistical Analysis of the Figure of Merit of a Two-Level Thermoelectric System: A Random Matrix Approach

    KAUST Repository

    Abbout, Adel

    2016-08-05

    Using the tools of random matrix theory we develop a statistical analysis of the transport properties of thermoelectric low-dimensional systems made of two electron reservoirs set at different temperatures and chemical potentials, and connected through a low-density-of-states two-level quantum dot that acts as a conducting chaotic cavity. Our exact treatment of the chaotic behavior in such devices relies on the scattering matrix formalism and yields analytical expressions for the joint probability distribution functions of the Seebeck coefficient and the transmission profile, as well as the marginal distributions, at arbitrary Fermi energy. The scattering matrices belong to circular ensembles which we sample to numerically compute the transmission function, the Seebeck coefficient, and their relationship. The exact probability distributions of the transport coefficients are found to be highly non-Gaussian for small numbers of conduction modes, and the analytical and numerical results are in excellent agreement. The system performance is also studied, and we find that the optimum performance is obtained for half-transparent quantum dots; further, this optimum may be enhanced for systems with few conduction modes.
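
    A minimal sketch of the sampling step described here is given below: Haar-random unitary scattering matrices are drawn from the circular unitary ensemble and the resulting transmission distribution is tabulated. It covers only the ensemble sampling, not the Seebeck-coefficient or figure-of-merit analysis, and the 2x2 matrix size (one conduction mode per lead) is an illustrative choice.

        import numpy as np

        rng = np.random.default_rng(0)

        def cue_sample(n, rng):
            """Draw an n x n Haar-random unitary via QR decomposition of a complex Ginibre matrix."""
            z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2.0)
            q, r = np.linalg.qr(z)
            return q * (np.diag(r) / np.abs(np.diag(r)))    # phase fix to obtain the Haar measure

        # Transmission T = |S_12|^2 for a 2x2 scattering matrix (one mode per lead)
        T = np.array([abs(cue_sample(2, rng)[0, 1]) ** 2 for _ in range(20000)])
        print(f"mean transmission = {T.mean():.3f}  (CUE expectation for one mode per lead: 0.5)")
        hist, _ = np.histogram(T, bins=10, range=(0.0, 1.0), density=True)
        print("P(T) per bin:", np.round(hist, 2))           # roughly flat for this ensemble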

  17. Statistical Analysis of the Figure of Merit of a Two-Level Thermoelectric System: A Random Matrix Approach

    KAUST Repository

    Abbout, Adel; Ouerdane, Henni; Goupil, Christophe

    2016-01-01

    Using the tools of random matrix theory we develop a statistical analysis of the transport properties of thermoelectric low-dimensional systems made of two electron reservoirs set at different temperatures and chemical potentials, and connected through a low-density-of-states two-level quantum dot that acts as a conducting chaotic cavity. Our exact treatment of the chaotic behavior in such devices relies on the scattering matrix formalism and yields analytical expressions for the joint probability distribution functions of the Seebeck coefficient and the transmission profile, as well as the marginal distributions, at arbitrary Fermi energy. The scattering matrices belong to circular ensembles which we sample to numerically compute the transmission function, the Seebeck coefficient, and their relationship. The exact probability distributions of the transport coefficients are found to be highly non-Gaussian for small numbers of conduction modes, and the analytical and numerical results are in excellent agreement. The system performance is also studied, and we find that the optimum performance is obtained for half-transparent quantum dots; further, this optimum may be enhanced for systems with few conduction modes.

  18. Noninvasive Biomonitoring Approaches to Determine Dosimetry and Risk Following Acute Chemical Exposure: Analysis of Lead or Organophosphate Insecticide in Saliva

    International Nuclear Information System (INIS)

    Timchalk, Chuck; Poet, Torka S.; Kousba, Ahmed A.; Campbell, James A.; Lin, Yuehe

    2004-01-01

    There is a need to develop approaches for assessing risk associated with acute exposures to a broad range of chemical agents and to rapidly determine the potential implications to human health. Non-invasive biomonitoring approaches are being developed using reliable portable analytical systems to quantitate dosimetry utilizing readily obtainable body fluids, such as saliva. Saliva has been used to evaluate a broad range of biomarkers, drugs, and environmental contaminants including heavy metals and pesticides. To advance the application of non-invasive biomonitoring, a microfluidic/electrochemical device has also been developed for the analysis of lead (Pb), using square wave anodic stripping voltammetry. The system demonstrates a linear response over a broad concentration range (1-2000 ppb) and is capable of quantitating saliva Pb in rats orally administered acute doses of Pb-acetate. Appropriate pharmacokinetic analyses have been used to quantitate systemic dosimetry based on determination of saliva Pb concentrations. In addition, saliva has recently been used to quantitate dosimetry following exposure to the organophosphate insecticide chlorpyrifos in a rodent model system by measuring the major metabolite, trichloropyridinol, and saliva cholinesterase inhibition following acute exposures. These results suggest that technology developed for non-invasive biomonitoring can provide a sensitive and portable analytical tool capable of assessing exposure and risk in real-time. By coupling these non-invasive technologies with pharmacokinetic modeling it is feasible to rapidly quantitate acute exposure to a broad range of chemical agents. In summary, it is envisioned that once fully developed, these monitoring and modeling approaches will be useful for assessing acute exposure and health risk

  19. How to statistically analyze nano exposure measurement results: Using an ARIMA time series approach

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fransman, W.; Brouwer, D.H.

    2011-01-01

    Measurement strategies for exposure to nano-sized particles differ from traditional integrated sampling methods for exposure assessment by the use of real-time instruments. The resulting measurement series is a time series, where typically the sequential measurements are not independent from each

  20. Children's exposure to harmful elements in toys and low-cost jewelry: Characterizing risks and developing a comprehensive approach

    International Nuclear Information System (INIS)

    Guney, Mert; Zagury, Gerald J.

    2014-01-01

    Highlights: • Risk for children up to 3 years old was characterized considering oral exposure. • Saliva mobilization, ingestion of parts and ingestion of scraped-off material were considered. • Ingestion of parts caused hazard index (HI) values >1 for Cd, Ni, and Pb exposure (up to 75, 5.8, and 43, respectively). • HI values were lower (but >1 in three samples, two for Cd and one for Ni) for saliva mobilization and <1 for the ingestion of scraped-off material scenario. • Risk characterization identified different potentially hazardous items compared to United States, Canadian, and European Union approaches. • A comprehensive approach was also developed to deal with complexity and drawbacks caused by various toy/jewelry definitions, test methods, exposure scenarios, and elements considered in different regulatory approaches. It includes bioaccessible limits for eight priority elements (As, Cd, Cr, Cu, Hg, Ni, Pb, and Sb). • Research is recommended on metals bioaccessibility determination in toys/jewelry, in vitro bioaccessibility test development, estimation of material ingestion rates and frequency, presence of hexavalent Cr and organic Sn, and assessment of prolonged exposure to MJ

  1. Children's exposure to harmful elements in toys and low-cost jewelry: characterizing risks and developing a comprehensive approach.

    Science.gov (United States)

    Guney, Mert; Zagury, Gerald J

    2014-04-30

    The contamination problem in jewelry and toys and the possibility of children's exposure have been previously demonstrated. For this study, risk from oral exposure has been characterized for highly contaminated metallic toys and jewelry ((MJ), n=16) considering three scenarios. Total and bioaccessible concentrations of Cd, Cu, Ni, and Pb were high in selected MJ. The first scenario (ingestion of parts or pieces) caused unacceptable risk for eight items for Cd, Ni, and/or Pb (hazard index (HI) >1, up to 75, 5.8, and 43, respectively). HI for the ingestion of scraped-off material scenario was always <1, while HI exceeded 1 in three samples (two for Cd, one for Ni) for the saliva mobilization scenario. Risk characterization identified different potentially hazardous items compared to United States, Canadian, and European Union approaches. A comprehensive approach was also developed to deal with complexity and drawbacks caused by various toy/jewelry definitions, test methods, exposure scenarios, and elements considered in different regulatory approaches. It includes bioaccessible limits for eight priority elements (As, Cd, Cr, Cu, Hg, Ni, Pb, and Sb). Research is recommended on metals bioaccessibility determination in toys/jewelry, in vitro bioaccessibility test development, estimation of material ingestion rates and frequency, presence of hexavalent Cr and organic Sn, and assessment of prolonged exposure to MJ. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Concentrations versus amounts of biomarkers in urine: a comparison of approaches to assess pyrethroid exposure

    Directory of Open Access Journals (Sweden)

    Bouchard Michèle

    2008-11-01

    Full Text Available Abstract Background Assessment of human exposure to non-persistent pesticides such as pyrethroids is often based on urinary biomarker measurements. Urinary metabolite levels of these pesticides are usually reported as volume-weighted concentrations or creatinine-adjusted concentrations measured in spot urine samples. It is known that these units are subject to intra- and inter-individual variations. This research aimed at studying the impact of these variations on the assessment of pyrethroid absorbed doses at individual and population levels. Methods Using data obtained from various adult and infantile populations, the intra- and inter-individual variability in the urinary flow rate and creatinine excretion rate was first estimated. Individual absorbed doses were then calculated using volume-weighted or creatinine-adjusted concentrations according to published approaches and compared to those estimated from the amounts of biomarkers excreted in 15- or 24-h urine collections, the latter serving as a benchmark unit. The effect of the units of measurement (volume-weighted or creatinine-adjusted concentrations or 24-h amounts) on the outcome of a comparison of pyrethroid biomarker levels between two populations was also evaluated. Results Estimation of daily absorbed doses of permethrin from volume-weighted or creatinine-adjusted concentrations of biomarkers was found to potentially lead to substantial under- or overestimation when compared to doses reconstructed directly from amounts excreted in urine during a given period of time (-70 to +573% and -83 to +167%, respectively). It was also shown that the variability in creatinine excretion rate and urinary flow rate may introduce a bias in the case of between-population comparisons. Conclusion The unit chosen to express biomonitoring data may influence the validity of estimated individual absorbed doses as well as the outcome of between-population comparisons.
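
    To illustrate the three units being compared, the sketch below converts a spot-sample volume-weighted concentration, a creatinine-adjusted concentration, and a complete 24-h excreted amount into a daily absorbed dose using a simple mass-balance formula; the excretion fraction, urine flow, creatinine excretion rate and measured values are all assumed for illustration, not taken from the study.

        # All inputs below are illustrative assumptions, not values from the study
        bw = 70.0            # body weight, kg
        fue = 0.5            # assumed fraction of the absorbed dose excreted as this metabolite
        mw_ratio = 1.0       # parent-to-metabolite molecular-weight correction (set to 1 for simplicity)

        amount_24h = 8.0     # ug excreted in a complete 24-h collection (the benchmark unit)
        conc_spot = 6.0      # ug/L, volume-weighted concentration in a spot sample
        urine_flow = 1.5     # assumed daily urine volume, L/day
        conc_creat = 5.5     # ug/g creatinine in a spot sample
        creat_rate = 1.4     # assumed daily creatinine excretion, g/day

        dose_benchmark = amount_24h * mw_ratio / (fue * bw)               # ug/kg bw/day
        dose_volume = conc_spot * urine_flow * mw_ratio / (fue * bw)
        dose_creatinine = conc_creat * creat_rate * mw_ratio / (fue * bw)

        for name, d in [("24-h amount (benchmark)", dose_benchmark),
                        ("volume-weighted concentration", dose_volume),
                        ("creatinine-adjusted concentration", dose_creatinine)]:
            print(f"{name:34s} -> {d:.3f} ug/kg bw/day")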

  3. Multi-parametric approach towards the assessment of radon and thoron progeny exposures

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Rosaline, E-mail: rosaline@barc.gov.in, E-mail: rosaline.mishra@gmail.com; Sapra, B. K. [Radiological Physics and Advisory Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India); Mayya, Y. S. [Indian Institute of Technology, Mumbai (India)

    2014-02-15

    Conventionally, dosimetry is carried out using radon and thoron gas concentration measurements, and doses have been assigned using assumed equilibrium factors for the progeny species; this is inadequate given the variations in equilibrium factors and the possibly significant thoron contribution. In fact, since the true exposures depend upon the intricate mechanisms of progeny deposition in the lung, an integrated approach for the assessment of progeny is essential. In this context, the recently developed deposition-based progeny concentration measurement techniques (DTPS: Direct Thoron Progeny Sensors and DRPS: Direct Radon Progeny Sensors) appear to be best suited for radiological risk assessments both among occupational workers and general study populations. DTPS and DRPS consist of aluminized-mylar-mounted LR115-type passive detectors, which essentially detect the alpha particles emitted from the progeny atoms deposited on the detector surface. They give a direct measure of progeny activity concentrations in air. DTPS has a lower detection limit of 0.1 Bq/m3 whereas that for DRPS is 1 Bq/m3; hence they are well suited for indoor environments. These DTPS and DRPS can be capped with a 200-mesh wire screen to measure the coarse fraction of the progeny concentration and the corresponding coarse-fraction deposition velocities, as well as the time-integrated fine fraction. DTPS and DRPS can also be lodged in an integrated sampler wherein the wire mesh and filter paper are arranged in an array in flow mode, to measure the fine and coarse fraction concentrations separately and simultaneously. The details are further discussed in the paper.

  4. The current status of exposure-driven approaches for chemical safety assessment: A cross-sector perspective.

    Science.gov (United States)

    Sewell, Fiona; Aggarwal, Manoj; Bachler, Gerald; Broadmeadow, Alan; Gellatly, Nichola; Moore, Emma; Robinson, Sally; Rooseboom, Martijn; Stevens, Alexander; Terry, Claire; Burden, Natalie

    2017-08-15

    For the purposes of chemical safety assessment, the value of using non-animal (in silico and in vitro) approaches and generating mechanistic information on toxic effects is being increasingly recognised. For sectors where in vivo toxicity tests continue to be a regulatory requirement, there has been a parallel focus on how to refine studies (i.e. reduce suffering and improve animal welfare) and increase the value that in vivo data adds to the safety assessment process, as well as where to reduce animal numbers where possible. A key element necessary to ensure the transition towards successfully utilising both non-animal and refined safety testing is the better understanding of chemical exposure. This includes approaches such as measuring chemical concentrations within cell-based assays and during in vivo studies, understanding how predicted human exposures relate to levels tested, and using existing information on human exposures to aid in toxicity study design. Such approaches promise to increase the human relevance of safety assessment, and shift the focus from hazard-driven to risk-driven strategies similar to those used in the pharmaceutical sectors. Human exposure-based safety assessment offers scientific and 3Rs benefits across all sectors marketing chemical or medicinal products. The UK's National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs) convened an expert working group of scientists across the agrochemical, industrial chemical and pharmaceutical industries plus a contract research organisation (CRO) to discuss the current status of the utilisation of exposure-driven approaches, and the challenges and potential next steps for wider uptake and acceptance. This paper summarises these discussions, highlights the challenges - particularly those identified by industry - and proposes initial steps for moving the field forward. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  5. Using exposure bands for rapid decision making in the ...

    Science.gov (United States)

    The ILSI Health and Environmental Sciences Institute (HESI) Risk Assessment in the 21st Century (RISK21) project was initiated to address and catalyze improvements in human health risk assessment. RISK21 is a problem formulation-based conceptual roadmap and risk matrix visualization tool, facilitating transparent evaluation of both hazard and exposure components. The RISK21 roadmap is exposure-driven, i.e. exposure is used as the second step (after problem formulation) to define and focus the assessment. This paper describes the exposure tiers of the RISK21 matrix and the approaches to adapt readily available information to more quickly inform exposure at a screening level. In particular, exposure look-up tables developed from available exposure tools (European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) Targeted Risk Assessment (TRA) for worker exposure, ECETOC TRA and the European Solvents Industry Group (ESIG) Generic Exposure Scenario (GES) Risk and Exposure Tool (EGRET) for consumer exposure, and USEtox for indirect exposure to humans via the environment) were tested in a hypothetical mosquito bed netting case study. A detailed WHO risk assessment for a similar mosquito net use served as a benchmark for the performance of the RISK21 approach. The case study demonstrated that the screening methodologies provided suitable conservative exposure estimates for risk assessment. The results of this effort showed that the RISK21 approach is useful f

  6. Vectors, Change of Basis and Matrix Representation: Onto-Semiotic Approach in the Analysis of Creating Meaning

    Science.gov (United States)

    Montiel, Mariana; Wilhelmi, Miguel R.; Vidakovic, Draga; Elstak, Iwan

    2012-01-01

    In a previous study, the onto-semiotic approach was employed to analyse the mathematical notion of different coordinate systems, as well as some situations and university students' actions related to these coordinate systems in the context of multivariate calculus. This study approaches different coordinate systems through the process of change of…

  7. Chirality dependence of dipole matrix element of carbon nanotubes in axial magnetic field: A third neighbor tight binding approach

    Science.gov (United States)

    Chegel, Raad; Behzad, Somayeh

    2014-02-01

    We have studied the electronic structure and dipole matrix element, D, of carbon nanotubes (CNTs) under a magnetic field, using the third-nearest-neighbor tight binding model. It is shown that the 1NN and 3NN-TB band structures show differences such as the spacing and mixing of neighbor subbands. Applying the magnetic field leads to breaking of the degeneracy in the D transitions and creates new allowed transitions corresponding to the band modifications. It is found that |D| is proportional to the inverse tube radius and chiral angle. Our numerical results show that the amount of field-induced splitting for the first optical peak is proportional to the magnetic field, with splitting rate ν11. It is shown that ν11 changes linearly and parabolically with the chiral angle and radius, respectively.

  8. Study of (U,Pu)O2 spent fuel matrix alteration under geological disposal conditions: Experimental approach and geochemical modeling

    International Nuclear Information System (INIS)

    Odorowski, Melina

    2015-01-01

    To assess the performance of direct disposal of spent fuel in a nuclear waste repository, research is performed on the long-term behavior of spent fuel (UOx and MOx) under environmental conditions close to those of the French disposal site. The objective of this study is to determine whether the geochemistry of the Callovian-Oxfordian (COx) clay geological formation and the corrosion of the steel overpack (producing iron and hydrogen) have an impact on the oxidative dissolution of the (U,Pu)O2 matrix under alpha radiolysis of water. Leaching experiments have been performed with UO2 pellets doped with alpha emitters (Pu) and MIMAS MOx fuel (un-irradiated or spent fuel) to study the effect of the COx groundwater and of the presence of metallic iron upon the oxidative dissolution of these materials induced by the radiolysis of water. Results indicate an inhibiting effect of the COx water on the oxidative dissolution. In the presence of iron, two different behaviors are observed. Under alpha irradiation such as that expected in the geological disposal, the alteration of the UO2 matrix and MOx fuel is very strongly inhibited because of the consumption of radiolytic oxidative species by iron in solution, leading to the precipitation of Fe(III)-hydroxides on the pellet surface. On the contrary, under a strong beta/gamma irradiation field, alteration tracers indicate that the oxidative dissolution goes on and that the uranium concentration in solution is controlled by the solubility of UO2(am,hyd). This is explained by the shifting of the redox front from the fuel surface to the bulk solution, which no longer protects the fuel. The developed geochemical (CHESS) and reactive transport (HYTEC) models correctly represent the main results and the mechanisms involved. (author) [fr]

  9. A multiple objective test assembly approach for exposure control problems in Computerized Adaptive Testing

    Directory of Open Access Journals (Sweden)

    Theo J.H.M. Eggen

    2010-01-01

    Full Text Available Overexposure and underexposure of items in the bank are serious problems in operational computerized adaptive testing (CAT) systems. These exposure problems might result in item compromise, or point to a waste of investments. The exposure control problem can be viewed as a test assembly problem with multiple objectives. Information in the test has to be maximized, item compromise has to be minimized, and pool usage has to be optimized. In this paper, a multiple objectives method is developed to deal with both types of exposure problems. In this method, exposure control parameters based on observed exposure rates are implemented as weights for the information in the item selection procedure. The method does not need time-consuming simulation studies, and it can be implemented conditional on ability level. The method is compared with the Sympson-Hetter method for exposure control, with the Progressive method and with alpha-stratified testing. The results show that the method is successful in dealing with both kinds of exposure problems.
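
    A minimal sketch of the weighting idea is given below: candidate items are scored by Fisher information under a 2PL model multiplied by an exposure-control weight that shrinks toward zero as an item's observed exposure rate approaches a target maximum. The item parameters, the linear weight function and the fixed ability value are illustrative assumptions rather than the method's exact specification; in a real CAT the ability estimate would be updated after every response.

        import numpy as np

        rng = np.random.default_rng(1)
        n_items = 200
        a = rng.uniform(0.8, 2.0, n_items)          # 2PL discrimination parameters
        b = rng.normal(0.0, 1.0, n_items)           # 2PL difficulty parameters
        exposure_count = np.zeros(n_items)          # times each item has been administered
        tests_administered = 0
        r_max = 0.25                                # target maximum exposure rate

        def fisher_info(theta):
            p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
            return a ** 2 * p * (1.0 - p)

        def select_item(theta, used):
            rate = exposure_count / max(tests_administered, 1)
            weight = np.clip((r_max - rate) / r_max, 0.0, 1.0)   # shrinks to 0 at the target rate
            score = fisher_info(theta) * weight
            score[list(used)] = -np.inf                          # no repeats within one test
            return int(np.argmax(score))

        # Administer one 20-item test at a fixed ability of 0.3 (illustrative)
        used = set()
        for _ in range(20):
            item = select_item(0.3, used)
            used.add(item)
            exposure_count[item] += 1
        tests_administered += 1
        print("items administered:", sorted(used))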

  10. Does the Watson-Jones or Modified Smith-Petersen Approach Provide Superior Exposure for Femoral Neck Fracture Fixation?

    Science.gov (United States)

    Lichstein, Paul M; Kleimeyer, John P; Githens, Michael; Vorhies, John S; Gardner, Michael J; Bellino, Michael; Bishop, Julius

    2018-04-24

    A well-reduced femoral neck fracture is more likely to heal than a poorly reduced one, and increasing the quality of the surgical exposure makes it easier to achieve anatomic fracture reduction. Two open approaches are in common use for femoral neck fractures, the modified Smith-Petersen and Watson-Jones; however, to our knowledge, the quality of the femoral neck exposure provided by each approach has not been investigated. (1) What is the respective area of exposed femoral neck afforded by the Watson-Jones and modified Smith-Petersen approaches? (2) Is there a difference in the ability to visualize and/or palpate important anatomic landmarks provided by the Watson-Jones and modified Smith-Petersen approaches? Ten fresh-frozen human pelves underwent both modified Smith-Petersen (utilizing the caudal extent of the standard Smith-Petersen interval distal to the anterosuperior iliac spine and parallel to the palpable interval between the tensor fascia lata and the sartorius) and Watson-Jones approaches. Dissections were performed by three fellowship-trained orthopaedic traumatologists with extensive experience in both approaches. Exposure (in cm) was quantified with calibrated digital photographs and specialized software. Modified Smith-Petersen approaches were analyzed before and after rectus femoris tenotomy. The ability to visualize and palpate seven clinically relevant anatomic structures (the labrum, femoral head, subcapital femoral neck, basicervical femoral neck, greater trochanter, lesser trochanter, and medial femoral neck) was also recorded. The quantified area of the exposed proximal femur was utilized to compare which approach afforded the largest field of view of the femoral neck and articular surface for assessment of femoral neck fracture and associated femoral head injury. The ability to visualize and palpate surrounding structures was assessed so that we could better understand which approach afforded the ability to assess structures that

  11. Advanced Computational Approaches for Characterizing Stochastic Cellular Responses to Low Dose, Low Dose Rate Exposures

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Bobby, R., Ph.D.

    2003-06-27

    OAK - B135 This project final report summarizes modeling research conducted in the U.S. Department of Energy (DOE), Low Dose Radiation Research Program at the Lovelace Respiratory Research Institute from October 1998 through June 2003. The modeling research described involves critically evaluating the validity of the linear nonthreshold (LNT) risk model as it relates to stochastic effects induced in cells by low doses of ionizing radiation and genotoxic chemicals. The LNT model plays a central role in low-dose risk assessment for humans. With the LNT model, any radiation (or genotoxic chemical) exposure is assumed to increase one's risk of cancer. Based on the LNT model, others have predicted tens of thousands of cancer deaths related to environmental exposure to radioactive material from nuclear accidents (e.g., Chernobyl) and fallout from nuclear weapons testing. Our research has focused on developing biologically based models that explain the shape of dose-response curves for low-dose radiation and genotoxic chemical-induced stochastic effects in cells. Understanding the shape of the dose-response curve for radiation and genotoxic chemical-induced stochastic effects in cells helps to better understand the shape of the dose-response curve for cancer induction in humans. We have used a modeling approach that facilitated model revisions over time, allowing for timely incorporation of new knowledge gained related to the biological basis for low-dose-induced stochastic effects in cells. Both deleterious (e.g., genomic instability, mutations, and neoplastic transformation) and protective (e.g., DNA repair and apoptosis) effects have been included in our modeling. Our most advanced model, NEOTRANS2, involves differing levels of genomic instability. Persistent genomic instability is presumed to be associated with nonspecific, nonlethal mutations and to increase both the risk for neoplastic transformation and for cancer occurrence. Our research results, based on

  12. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
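
    A minimal sketch of the comparison is given below, using synthetic two-group data in place of the TSFS spectra: route (1) projects the data on the leading eigenvectors of the covariance matrix, while route (2) eigen-decomposes the double-centred pairwise squared-distance matrix (classical-MDS style), which is the dissimilarity-based decomposition the authors favour. With a plain Euclidean distance the two give equivalent scores up to sign; the dissimilarity route becomes advantageous when a non-Euclidean dissimilarity measure better separates the groups.

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic two-group "spectra" standing in for the cumin / non-cumin TSFS data
        group_a = rng.normal(0.0, 1.0, (20, 50)) + np.linspace(0.0, 1.0, 50)
        group_b = rng.normal(0.0, 1.0, (20, 50)) + np.linspace(1.0, 0.0, 50)
        X = np.vstack([group_a, group_b])

        # (1) Conventional route: eigen-decomposition of the covariance matrix (PCA scores)
        Xc = X - X.mean(axis=0)
        cov_vals, cov_vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        scores_cov = Xc @ cov_vecs[:, ::-1][:, :2]

        # (2) Dissimilarity route: eigen-decomposition of the double-centred squared-distance matrix
        D2 = np.square(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
        J = np.eye(len(X)) - np.ones((len(X), len(X))) / len(X)
        B = -0.5 * J @ D2 @ J
        dis_vals, dis_vecs = np.linalg.eigh(B)
        scores_dis = dis_vecs[:, ::-1][:, :2] * np.sqrt(np.maximum(dis_vals[::-1][:2], 0.0))

        print("covariance-based scores, sample 0:   ", np.round(scores_cov[0], 2))
        print("dissimilarity-based scores, sample 0:", np.round(scores_dis[0], 2))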

  13. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach.

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-05

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. How to statistically analyze nano exposure measurement results: using an ARIMA time series approach

    International Nuclear Information System (INIS)

    Klein Entink, Rinke H.; Fransman, Wouter; Brouwer, Derk H.

    2011-01-01

    Measurement strategies for exposure to nano-sized particles differ from traditional integrated sampling methods for exposure assessment by the use of real-time instruments. The resulting measurement series is a time series, where typically the sequential measurements are not independent from each other but show a pattern of autocorrelation. This article addresses the statistical difficulties when analyzing real-time measurements for exposure assessment of manufactured nano objects. To account for autocorrelation patterns, Autoregressive Integrated Moving Average (ARIMA) models are proposed. A simulation study shows the pitfalls of using a standard t-test, and the application of ARIMA models is illustrated with three real-data examples. Some practical suggestions for the data analysis of real-time exposure measurements conclude this article.
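
    The sketch below illustrates the kind of analysis the article advocates, under assumed inputs: a simulated real-time particle-number series with AR(1) background noise and a step increase during a handling task is fitted with an ARIMA(1,0,0) model that includes the activity indicator as an exogenous regressor, so the activity effect is estimated while respecting the autocorrelation. The model order and all numbers are illustrative choices, not the authors'.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(42)
        n = 240                                        # e.g. one reading per 30 s over two hours
        activity = np.zeros(n)
        activity[80:160] = 1.0                         # indicator of a handling task in the middle

        # Simulated real-time series: AR(1) background noise plus a step increase during the task
        y = np.empty(n)
        y[0] = 0.0
        for i in range(1, n):
            y[i] = 0.7 * y[i - 1] + rng.normal(0.0, 1.0)
        y = 1000.0 + 50.0 * y + 400.0 * activity       # particle number concentration, arbitrary scale

        res = ARIMA(y, exog=activity.reshape(-1, 1), order=(1, 0, 0)).fit()
        print(res.summary())                           # the "x1" coefficient estimates the activity effect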

  15. Belief disconfirmation versus habituation approaches to situational exposure in panic disorder with agoraphobia: a pilot study.

    Science.gov (United States)

    Salkovskis, Paul M; Hackmann, Ann; Wells, Adrian; Gelder, Michael G; Clark, David M

    2007-05-01

    Exposure therapy and cognitive behaviour therapy (CBT) are both effective in the treatment of panic disorder with agoraphobia. Cognitive theories suggest that the way in which exposure to avoided situations is implemented in either treatment may be crucial. In particular, it is suggested that clinical improvement will be greatest if opportunities for disconfirmation of feared catastrophes are maximized. In a small pilot study, 16 patients with panic disorder and (moderate or severe) agoraphobia were randomly allocated to either habituation-based exposure therapy (HBET) or exposure planned as a belief disconfirmation strategy and accompanied by the dropping of safety-seeking behaviours. Both treatments were brief (a total of 3.25 h of exposure) and were similar in terms of expectancy of change. Patients in the CBT condition showed significantly greater improvements in self-report measures of anxiety, panic and situational avoidance. They also completed significantly more steps in a standardized behavioural walk, during which they experienced significantly less anxiety. The controlled effect sizes for CBT were substantial (range 1.7-2.7), which suggests it may be a particularly efficient way of managing therapeutic exposure to feared situations in panic disorder with agoraphobia. Further research is needed to clarify the mechanism of change involved.

  16. EXPOSURE TO MASS MEDIA AS A DOMINANT FACTOR INFLUENCING PUBLIC STIGMA TOWARD MENTAL ILLNESS BASED ON SUNRISE MODEL APPROACH

    Directory of Open Access Journals (Sweden)

    Ni Made Sintha Pratiwi

    2018-05-01

    Full Text Available Background: A person suffering from a mental disorder is burdened not only by the condition itself but also by the associated stigma. The impact of stigma on society is so strong that it is considered an obstacle to the treatment of mental disorders. Stigma, as society's adverse view of severe mental disorders, is related to cultural aspects. The interaction among the components of the sunrise model, a nursing model developed by Madeleine Leininger, is connected with wider societal views about severe mental disorders. Objective: The aim of this study was to analyze the factors related to public stigma and to identify the dominant factors related to public stigma about severe mental illness through a sunrise model approach in Sukonolo Village, Malang Regency. Methods: This study used an observational analytical design with a cross-sectional approach. A total of 150 respondents contributed to this study; respondents were obtained using a purposive sampling technique. Results: The results showed a significant relationship between mass media exposure, spiritual well-being, interpersonal contact, attitude, and knowledge and public stigma about mental illness. Multiple logistic regression showed that low exposure to mass media had the highest OR value, at 26.744. Conclusion: There were significant correlations between mass media exposure, spiritual well-being, interpersonal contact, attitude, and knowledge and public stigma toward mental illness. Mass media exposure was the dominant factor influencing public stigma toward mental illness.

  17. NMR analysis of male fathead minnow urinary metabolites: A potential approach for studying impacts of chemical exposures

    Energy Technology Data Exchange (ETDEWEB)

    Ekman, D.R. [Ecosystems Research Division, U.S. EPA, 960 College Station Road, Athens, GA 30605 (United States)], E-mail: ekman.drew@epa.gov; Teng, Q. [Ecosystems Research Division, U.S. EPA, 960 College Station Road, Athens, GA 30605 (United States); Jensen, K.M.; Martinovic, D.; Villeneuve, D.L.; Ankley, G.T. [Mid-Continent Ecology Division, U.S. EPA, 6201 Congdon Boulevard, Duluth, MN 55804 (United States); Collette, T.W. [Ecosystems Research Division, U.S. EPA, 960 College Station Road, Athens, GA 30605 (United States)

    2007-11-30

    The potential for profiling metabolites in urine from male fathead minnows (Pimephales promelas) to assess chemical exposures was explored using nuclear magnetic resonance (NMR) spectroscopy. Both one-dimensional (1D) and two-dimensional (2D) NMR spectroscopy was used for the assignment of metabolites in urine from unexposed fish. Because fathead minnow urine is dilute, we lyophilized these samples prior to analysis. Furthermore, 1D 1H NMR spectra of unlyophilized urine from unexposed male fathead minnow and Sprague-Dawley rat were acquired to qualitatively compare rat and fish metabolite profiles and to provide an estimate of the total urinary metabolite pool concentration difference. As a small proof-of-concept study, lyophilized urine samples from male fathead minnows exposed to three different concentrations of the antiandrogen vinclozolin were analyzed by 1D 1H NMR to assess exposure-induced changes. Through a combination of principal components analysis (PCA) and measurements of 1H NMR peak intensities, several metabolites were identified as changing with statistical significance in response to exposure. Among those changes occurring in response to exposure to the highest concentration (450 μg/L) of vinclozolin were large increases in taurine, lactate, acetate, and formate. These increases coincided with a marked decrease in hippurate, a combination potentially indicative of hepatotoxicity. The results of these investigations clearly demonstrate the potential utility of an NMR-based approach for assessing chemical exposures in male fathead minnow, using urine collected from individual fish.

  18. NMR analysis of male fathead minnow urinary metabolites: A potential approach for studying impacts of chemical exposures

    International Nuclear Information System (INIS)

    Ekman, D.R.; Teng, Q.; Jensen, K.M.; Martinovic, D.; Villeneuve, D.L.; Ankley, G.T.; Collette, T.W.

    2007-01-01

    The potential for profiling metabolites in urine from male fathead minnows (Pimephales promelas) to assess chemical exposures was explored using nuclear magnetic resonance (NMR) spectroscopy. Both one-dimensional (1D) and two-dimensional (2D) NMR spectroscopy was used for the assignment of metabolites in urine from unexposed fish. Because fathead minnow urine is dilute, we lyophilized these samples prior to analysis. Furthermore, 1D 1H NMR spectra of unlyophilized urine from unexposed male fathead minnow and Sprague-Dawley rat were acquired to qualitatively compare rat and fish metabolite profiles and to provide an estimate of the total urinary metabolite pool concentration difference. As a small proof-of-concept study, lyophilized urine samples from male fathead minnows exposed to three different concentrations of the antiandrogen vinclozolin were analyzed by 1D 1H NMR to assess exposure-induced changes. Through a combination of principal components analysis (PCA) and measurements of 1H NMR peak intensities, several metabolites were identified as changing with statistical significance in response to exposure. Among those changes occurring in response to exposure to the highest concentration (450 μg/L) of vinclozolin were large increases in taurine, lactate, acetate, and formate. These increases coincided with a marked decrease in hippurate, a combination potentially indicative of hepatotoxicity. The results of these investigations clearly demonstrate the potential utility of an NMR-based approach for assessing chemical exposures in male fathead minnow, using urine collected from individual fish

  19. Calculating systems-scale energy efficiency and net energy returns: A bottom-up matrix-based approach

    International Nuclear Information System (INIS)

    Brandt, Adam R.; Dale, Michael; Barnhart, Charles J.

    2013-01-01

    In this paper we expand the work of Brandt and Dale (2011) on ERRs (energy return ratios) such as EROI (energy return on investment). This paper describes a “bottom-up” mathematical formulation which uses matrix-based computations adapted from the LCA (life cycle assessment) literature. The framework allows multiple energy pathways and flexible inclusion of non-energy sectors. This framework is then used to define a variety of ERRs that measure the amount of energy supplied by an energy extraction and processing pathway compared to the amount of energy consumed in producing the energy. ERRs that were previously defined in the literature are cast in our framework for calculation and comparison. For illustration, our framework is applied to include oil production and processing and generation of electricity from PV (photovoltaic) systems. Results show that ERR values will decline as system boundaries expand to include more processes. NERs (net energy return ratios) tend to be lower than GERs (gross energy return ratios). External energy return ratios (such as net external energy return, or NEER (net external energy ratio)) tend to be higher than their equivalent total energy return ratios. - Highlights: • An improved bottom-up mathematical method for computing net energy return metrics is developed. • Our methodology allows arbitrary numbers of interacting processes acting as an energy system. • Our methodology allows much more specific and rigorous definition of energy return ratios such as EROI or NER
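
    A toy version of such a matrix-based calculation is sketched below, under assumptions that are not the paper's: a small direct-requirements matrix describes how much output of each process is consumed per unit output of every other process, a Leontief-type inverse gives the gross outputs needed to deliver one unit of final energy, and an illustrative energy return ratio is formed from delivered versus internally consumed energy.

        import numpy as np

        processes = ["crude extraction", "refining", "PV electricity"]

        # A[i, j] = energy output of process i consumed per unit of energy output of process j
        A = np.array([
            [0.00, 0.05, 0.00],   # crude oil used by refining
            [0.03, 0.00, 0.02],   # refined fuel used by extraction and by PV manufacture/operation
            [0.01, 0.02, 0.00],   # electricity used by extraction and refining
        ])

        demand = np.array([0.0, 0.0, 1.0])          # deliver 1 unit of PV electricity to final users
        x = np.linalg.solve(np.eye(3) - A, demand)  # gross output required from every process

        delivered = demand.sum()                    # final energy delivered
        invested = (A @ x).sum()                    # energy consumed internally along the pathway
        print("gross outputs:", dict(zip(processes, np.round(x, 3))))
        print(f"illustrative energy return ratio = delivered / invested = {delivered / invested:.1f}")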

  20. Transfer matrix approach to electron transport in monolayer MoS2/MoOx heterostructures

    Science.gov (United States)

    Li, Gen

    2018-05-01

    Oxygen plasma treatment can introduce oxidation into monolayer MoS2 to transform MoS2 into MoOx, causing the formation of MoS2/MoOx heterostructures. We find that the MoS2/MoOx heterostructures have a geometry similar to that of a GaAs/Ga1-xAlxAs semiconductor superlattice. Thus, we employ the established transfer matrix method to analyse the electron transport in MoS2/MoOx heterostructures with double-well and step-well geometries. We also consider the coupling between transverse and longitudinal kinetic energy, because the electron effective mass changes spatially in the MoS2/MoOx heterostructures. We find the resonant peaks show a red shift with increasing transverse momentum, which is similar to previous work studying the transverse-momentum-dependent transmission in a GaAs/Ga1-xAlxAs double-barrier structure. We find an electric field can enhance the magnitude of the peaks and intensify the coupling between longitudinal and transverse momenta. Moreover, a higher bias is applied to optimize the resonant tunnelling condition, showing that a negative differential effect can be observed in the MoS2/MoOx system.

  1. Quantum information aspects on bulk and nano interacting Fermi system: A spin-space density matrix approach

    Energy Technology Data Exchange (ETDEWEB)

    Afzali, R., E-mail: afzali@kntu.ac.ir [Department of Physics, K. N. Toosi University of Technology, Tehran, 15418 (Iran, Islamic Republic of); Ebrahimian, N., E-mail: n.ebrahimian@shahed.ac.ir [Department of Physics, Faculty of Basic Sciences, Shahed University, Tehran, 18155-159 (Iran, Islamic Republic of); Eghbalifar, B., E-mail: b.eghbali2011@yahoo.com [Department of Agricultural Management, Marvdasht Branch, Azad University, Marvdasht (Iran, Islamic Republic of)

    2016-10-07

    Highlights: • In contrast to an s-wave superconductor, the quantum correlation of the d-wave superconductor is sensitive to changes in the gap magnitude. • Quantum discord of the d-wave superconductor oscillates. • Quantum discord becomes zero at a characteristic length of the d-wave superconductor. • Quantum correlation strongly depends on the length of the grain: the lower the length of the superconductor, the higher the quantum correlation length. • Quantum tripartite entanglement for a nano-scale d-wave superconductor is better than for a bulk d-wave superconductor. - Abstract: By approximating the energy gap, entering the nano-size effect via gap fluctuation, and calculating the Green's functions and the space-spin density matrix, the dependence of quantum correlation (entanglement, discord and tripartite entanglement) on the relative distance of the two electron spins forming Cooper pairs, the energy gap, and the length of a bulk and nano interacting Fermi system (a nodal d-wave superconductor) is determined. In contrast to an s-wave superconductor, the quantum correlation of the system is sensitive to the change of the gap magnitude and strongly depends on the length of the grain. Also, quantum discord oscillates. Furthermore, the entanglement length and the correlation length are investigated. Discord becomes zero at a characteristic length of the d-wave superconductor.

  2. Internal service quality by integrated approach Performance Control Matrix (PCM) & Importance-Satisfaction Model (Studied in Yazd Regional Power Company)

    Directory of Open Access Journals (Sweden)

    Saeid Peirow

    2016-02-01

    Full Text Available Today, internal service quality is considered one of the most important factors affecting the recruitment and retention of staff. The present study sought to examine the internal service quality of Yazd Regional Electric Company and, finally, to select appropriate strategies to improve the quality of internal services in the organization. This is an applied study based on a survey method. Data were collected using a questionnaire designed to evaluate 26 components of the internal service quality of Yazd Regional Electric Company. The research population consists of the staff of the organization, and the sample size for distributing the initial questionnaire was calculated according to Cochran's formula. To analyze the research data, the importance-satisfaction model and the performance control matrix were used to identify those components that need to be improved. In addition, the employee satisfaction index (ESI) was used to prioritize measures for improvement. Data analysis using these tools shows that 8 criteria fall in the improvement area; these criteria were then prioritized with the ESI.

  3. Theory of open quantum systems with bath of electrons and phonons and spins: many-dissipaton density matrixes approach.

    Science.gov (United States)

    Yan, YiJing

    2014-02-07

    This work establishes a strongly correlated system-and-bath dynamics theory, the many-dissipaton density operators formalism. It puts forward a quasi-particle picture for environmental influences. This picture unifies the physical descriptions and algebraic treatments of three distinct classes of quantum environments, electron bath, phonon bath, and two-level spin or exciton bath, as they participate in quantum dissipation processes. The dynamical variables for theoretical description are no longer just the reduced density matrix of the system, but remarkably also those for the quasi-particles of the bath. The present theoretical formalism offers efficient and accurate means for the study of steady-state (nonequilibrium and equilibrium) and real-time dynamical properties of both systems and hybridizing environments. It further provides universal evaluations, exact in principle, of various correlation functions, including even those of environmental degrees of freedom in coupling with systems. Induced environmental dynamics could be reflected directly in experimentally measurable quantities, such as Fano resonances and quantum transport current shot noise statistics.

  4. An Innovative Electrolysis Approach for the Synthesis of Metal Matrix Bulk Nanocomposites: A Case Study on Copper-Niobium System

    Science.gov (United States)

    Shokrvash, Hussein; Rad, Rahim Yazdani; Massoudi, Abouzar

    2018-04-01

    Design and synthesis of a prototype Cu-Nb nanocomposite are presented. Oxygen-free Cu-Nb nanocomposites were prepared using an electrolysis facility with special emphasis on the cathodic deoxidation of Cu and nanometric Nb2O5 blends in a molten NaCl-CaCl2 electrolyte. The as-prepared nanocomposites were characterized by X-ray diffraction and energy-dispersive X-ray spectroscopy. The elemental analysis of the Cu matrix and Nb phase revealed the high solubility of Nb in the Cu structure (0.85 at. pct) and Cu in the Nb structure (10.59 at. pct) over short synthesis times (4-5 hours). Furthermore, precise analysis using field emission scanning electron microscopy and transmission electron microscopy confirmed the unique structure and nanocomposite morphology of the Cu-Nb nanocomposite. The successful synthesis of Cu-Nb nanocomposites offers a new conceptual and empirical outlook on the generation of bulk nanostructures of immiscible bimetals using electro-synthesis.

  5. Elementary matrix theory

    CERN Document Server

    Eves, Howard

    1980-01-01

    The usefulness of matrix theory as a tool in disciplines ranging from quantum mechanics to psychometrics is widely recognized, and courses in matrix theory are increasingly a standard part of the undergraduate curriculum. This outstanding text offers an unusual introduction to matrix theory at the undergraduate level. Unlike most texts dealing with the topic, which tend to remain on an abstract level, Dr. Eves' book employs a concrete elementary approach, avoiding abstraction until the final chapter. This practical method renders the text especially accessible to students of physics, engineering, and related disciplines.

  6. [The "window" surgical exposure strategy of the upper anterior cervical retropharyngeal approach for anterior decompression at upper cervical spine].

    Science.gov (United States)

    Wu, Xiang-Yang; Zhang, Zhe; Wu, Jian; Lü, Jun; Gu, Xiao-Hui

    2009-11-01

    To investigate the "window" surgical exposure strategy of the upper anterior cervical retropharyngeal approach for exposure, decompression and instrumentation of the upper cervical spine. From January 2000 to July 2008, 5 patients with upper cervical spinal lesions were treated surgically, including 4 males and 1 female with an average age of 35 years (range, 16 to 68 years). There were 2 cases of Hangman's fracture (type II), 2 of C2,3 intervertebral disc displacement and 1 of C2 vertebral body tuberculosis. All patients underwent the upper cervical anterior retropharyngeal approach through the "window" bounded by the hypoglossal nerve, the superior laryngeal nerve, the pharynx and the carotid artery. The two patients with Hangman's fractures underwent C2,3 discectomy, bone graft fusion and internal fixation. The two patients with C2,3 intervertebral disc displacement underwent C2,3 discectomy, decompression, bone graft fusion and internal fixation. In the patient with C2 vertebral body tuberculosis, the focus was dissected and resected and the cavity was filled with autogenous bone graft. The C1 anterior arch to the C3 anterior vertebral body was successfully exposed, and lesion resection or decompression and fusion were achieved in all patients. All patients were followed up for 5 to 26 months (mean, 13.5 months). There were no major vascular or nerve injuries and no wound infections. Neurological symptoms improved and all patients achieved successful fusion. The "window" exposure technique of the upper cervical anterior retropharyngeal approach is a favorable strategy: it provides full exposure of the C1-C3 anterior anatomical structures and achieves minimally invasive results with few wound complications, and it is safe when adequate surgical experience has been acquired.

  7. Approaches for the development of occupational exposure limits for man-made mineral fibres (MMMFs)

    International Nuclear Information System (INIS)

    Ziegler-Skylakakis, Kyriakoula

    2004-01-01

    Occupational exposure limits (OELs) are an essential tool in the control of exposure to hazardous chemical agents, and serve to minimise the occurrence of occupational diseases associated with such exposure. The setting of OELs, together with other associated measures, forms an essential part of the European Community's strategy on health and safety at work, upon which the legislative framework for the protection of workers from risks related to chemical agents is based. The European Commission is assisted by the Scientific Committee on Occupational Exposure Limits (SCOEL) in its work of setting OELs for hazardous chemical agents. The procedure for setting OELs requires information on the toxic mechanisms of an agent that should allow differentiation between thresholded and non-thresholded mechanisms. In the first case, a no-observed-adverse-effect level (NOAEL) can be defined, which can be the basis for the derivation of an OEL. In the latter case, any exposure is correlated with a certain risk; if adequate scientific data are available, SCOEL estimates the risk associated with a series of exposure levels, which can then be used for guidance when setting OELs at the European level. Man-made mineral fibres (MMMFs) are widely used at different worksites. MMMF products can release airborne respirable fibres during their production, use and removal. According to the classification of the EU system, all MMMF fibres are considered to be irritants and are classified for carcinogenicity. EU legislation foresees the use of limit values as one of the provisions for the protection of workers from the risks related to exposure to carcinogens. In the following paper, the research requirements identified by SCOEL for the development of OELs for MMMFs are presented.
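    The distinction drawn above between thresholded and non-thresholded mechanisms can be illustrated with a generic numerical sketch; the NOAEL, assessment factors and unit risk below are invented placeholders and do not represent SCOEL's actual derivation procedure.

```python
def oel_from_noael(noael_mg_m3, interspecies=1.0, intraspecies=5.0, other=1.0):
    """Thresholded agent: derive a health-based limit by dividing the NOAEL
    by assessment (uncertainty) factors. Factor values here are illustrative."""
    return noael_mg_m3 / (interspecies * intraspecies * other)

def excess_risk(exposure_mg_m3, unit_risk_per_mg_m3):
    """Non-thresholded agent: linear estimate of excess lifetime risk for a
    given exposure level (a series of such levels can serve as guidance)."""
    return exposure_mg_m3 * unit_risk_per_mg_m3

print(oel_from_noael(10.0))       # 2.0 mg/m3 with the placeholder factors
print(excess_risk(0.1, 1e-3))     # 1e-4 excess risk at 0.1 mg/m3
```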

  8. A quantitative screening-level approach to incorporate chemical exposure and risk into alternative assessment evaluations.

    Science.gov (United States)

    Arnold, Scott M; Greggs, Bill; Goyak, Katy O; Landenberger, Bryce D; Mason, Ann M; Howard, Brett; Zaleski, Rosemary T

    2017-11-01

    As the general public and retailers ask for disclosure of chemical ingredients in the marketplace, a number of hazard screening tools were developed to evaluate the so-called "greenness" of individual chemical ingredients and/or formulations. The majority of these tools focus only on hazard, often using chemical lists, ignoring the other part of the risk equation: exposure. Using a hazard-only focus can result in regrettable substitutions, changing 1 chemical ingredient for another that turns out to be more hazardous or shifts the toxicity burden to others. To minimize the incidents of regrettable substitutions, BizNGO describes "Common Principles" to frame a process for informed substitution. Two of these 6 principles are: "reduce hazard" and "minimize exposure." A number of frameworks have emerged to evaluate and assess alternatives. One framework developed by leading experts under the auspices of the US National Academy of Sciences recommended that hazard and exposure be specifically addressed in the same step when assessing candidate alternatives. For the alternative assessment community, this article serves as an informational resource for considering exposure in an alternatives assessment using elements of problem formulation; product identity, use, and composition; hazard analysis; exposure analysis; and risk characterization. These conceptual elements build on practices from government, academia, and industry and are exemplified through 2 hypothetical case studies demonstrating the questions asked and decisions faced in new product development. These 2 case studies-inhalation exposure to a generic paint product and environmental exposure to a shampoo rinsed down the drain-demonstrate the criteria, considerations, and methods required to combine exposure models addressing human health and environmental impacts to provide a screening level hazard and exposure (risk) analysis. This article informs practices for these elements within a comparative risk context

  9. Spatial Polygamy and Contextual Exposures (SPACEs): Promoting Activity Space Approaches in Research on Place and Health

    Science.gov (United States)

    Matthews, Stephen A.; Yang, Tse-Chuan

    2014-01-01

    Exposure science has developed rapidly and there is an increasing call for greater precision in the measurement of individual exposures across space and time. Social science interest in an individual’s environmental exposure, broadly conceived, has arguably been quite limited conceptually and methodologically. Indeed, we appear to lag behind our exposure science colleagues in our theories, data, and methods. In this paper we discuss a framework based on the concept of spatial polygamy to demonstrate the need to collect new forms of data on human spatial behavior and contextual exposures across time and space. Adopting new data and methods will be essential if we want to better understand social inequality in terms of exposure to health risks and access to health resources. We discuss the opportunities and challenges focusing on the potential seemingly offered by focusing on human mobility, and specifically the utilization of activity space concepts and data. A goal of the paper is to spatialize social and health science concepts and research practice vis-a-vis the complexity of exposure. The paper concludes with some recommendations for future research focusing on theoretical and conceptual development, promoting research on new types of places and human movement, the dynamic nature of contexts, and on training. “When we elect wittingly or unwittingly, to work within a level … we tend to discern or construct – whichever emphasis you prefer – only those kinds of systems whose elements are confined to that level.”Otis Dudley Duncan (1961, p. 141). “…despite the new ranges created by improved transportation, local government units have tended to remain medieval in size.”Torsten Hägerstrand (1970, p.18) “A detective investigating a crime needs both tools and understanding. If he has no fingerprint powder, he will fail to find fingerprints on most surfaces. If he does not understand where the criminal is likely to have put his fingers, he will not

  10. Occupational exposures during abdominal fluoroscopically guided interventional procedures for different patient sizes - A Monte Carlo approach.

    Science.gov (United States)

    Santos, William S; Belinato, Walmir; Perini, Ana P; Caldas, Linda V E; Galeano, Diego C; Santos, Carla J; Neves, Lucio P

    2018-01-01

    In this study we evaluated the occupational exposures during an abdominal fluoroscopically guided interventional radiology procedure. We investigated the relation between the Body Mass Index (BMI) of the patient and the conversion coefficient (CC) values for a set of dosimetric quantities used to assess the exposure risks of medical radiation workers. The study was performed using a set of male and female virtual anthropomorphic phantoms of different body weights and sizes. In addition to these phantoms, a female and a male phantom, named FASH3 and MASH3 (reference virtual anthropomorphic phantoms), were also used to represent the medical radiation workers. The CC values, obtained as a function of the dose area product, were calculated for 87 exposure scenarios. In each exposure scenario, three phantoms, implemented in the MCNPX 2.7.0 code, were used simultaneously to represent a patient and the medical radiation workers. The results showed that, as the BMI of the patient increases (with the protocol adjusted for each patient), the CC values for medical radiation workers decrease. It is important to note that these results were obtained with fixed exposure parameters. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
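    A conversion coefficient normalised to dose-area product, as used above, is simply the simulated worker dose divided by the simulated DAP; the sketch below illustrates this with invented tally values, not results from the MCNPX scenarios.

```python
def conversion_coefficient(worker_dose_per_particle, dap_per_particle):
    """CC normalised to dose-area product (e.g. Sv per Gy.cm2).

    Both tallies are expressed per simulated source particle, so the
    number of histories cancels out of the ratio.
    """
    return worker_dose_per_particle / dap_per_particle

# Hypothetical Monte Carlo tallies:
worker_effective_dose = 3.2e-17   # Sv per source particle
dap = 5.0e-14                     # Gy.cm2 per source particle
print(conversion_coefficient(worker_effective_dose, dap))  # Sv per Gy.cm2
```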

  11. Effect of Prior Exposure at Elevated Temperatures on Tensile Properties and Stress-Strain Behavior of Three Oxide/Oxide Ceramic Matrix Composites

    Science.gov (United States)

    2015-03-26

    Thesis by Christopher J. Hull, Captain, USAF (AFIT-ENY-MS-15-M-228), Department of the Air Force, on the effect of prior exposure at elevated temperatures on the tensile properties and stress-strain behavior of three oxide/oxide ceramic matrix composites, with fracture surfaces examined by optical microscopy and SEM.

  12. A dynamic activity-based population modelling approach to evaluate exposure to air pollution: Methods and application to a Dutch urban area

    International Nuclear Information System (INIS)

    Beckx, Carolien; Int Panis, Luc; Arentze, Theo; Janssens, Davy; Torfs, Rudi; Broekx, Steven; Wets, Geert

    2009-01-01

    Recent air quality studies have highlighted that important differences in pollutant concentrations can occur over the day and between different locations. Traditional exposure analyses, however, assume that people are only exposed to pollution at their place of residence. Activity-based models, which recently have emerged from the field of transportation research, offer a technique to micro-simulate activity patterns of a population with a high resolution in space and time. Due to their characteristics, these models can be applied to establish a dynamic exposure assessment for air pollution. This paper presents a new exposure methodology, using a micro-simulator of activity-travel behaviour, to develop a dynamic exposure assessment. The methodology is applied to a Dutch urban area to demonstrate the advantages of the approach for exposure analysis. The results for the exposure to PM10 and PM2.5, air pollutants considered hazardous for human health, reveal large differences between the static and the dynamic approach, mainly due to an underestimation of the number of hours spent in the urban region by the static method. We can conclude that this dynamic population modelling approach is an important improvement over traditional methods and offers a new and more sensitive way of estimating population exposure to air pollution. In the light of the new European directive, aimed at reducing the exposure of the population to PM2.5, this new approach contributes to a much more accurate exposure assessment that helps evaluate policies to reduce public exposure to air pollution.
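    The contrast between a static (residence-only) and a dynamic (activity-based) assessment can be made concrete with a time-weighted average over microenvironments; the concentrations and time budgets below are hypothetical.

```python
def time_weighted_exposure(schedule):
    """Average exposure concentration over a day.

    schedule -- list of (hours_spent, concentration) tuples covering the day.
    """
    total_hours = sum(h for h, _ in schedule)
    return sum(h * c for h, c in schedule) / total_hours

# Hypothetical PM10 concentrations (ug/m3) in three microenvironments.
static = time_weighted_exposure([(24, 22.0)])                 # residence only
dynamic = time_weighted_exposure([(14, 22.0),                 # home
                                  (8, 35.0),                  # workplace in the urban area
                                  (2, 60.0)])                 # commuting in traffic
print(static, dynamic)   # the dynamic estimate is higher here because of time spent away from home
```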

  13. Inverse modeling of rainfall infiltration with a dual permeability approach using different matrix-fracture coupling variants.

    Science.gov (United States)

    Blöcher, Johanna; Kuraz, Michal

    2017-04-01

    In this contribution we propose implementations of the dual permeability model with different inter-domain exchange descriptions and metaheuristic optimization algorithms for parameter identification and mesh optimization. We compare variants of the coupling term with different numbers of parameters to test if a reduction of parameters is feasible. This can reduce parameter uncertainty in inverse modeling, but also allow for different conceptual models of the domain and matrix coupling. The different variants of the dual permeability model are implemented in the open-source objective library DRUtES written in FORTRAN 2003/2008 in 1D and 2D. For parameter identification we use adaptations of the particle swarm optimization (PSO) and Teaching-learning-based optimization (TLBO), which are population-based metaheuristics with different learning strategies. These are high-level stochastic-based search algorithms that don't require gradient information or a convex search space. Despite increasing computing power and parallel processing, an overly fine mesh is not feasible for parameter identification. This creates the need to find a mesh that optimizes both accuracy and simulation time. We use a bi-objective PSO algorithm to generate a Pareto front of optimal meshes to account for both objectives. The dual permeability model and the optimization algorithms were tested on virtual data and field TDR sensor readings. The TDR sensor readings showed a very steep increase during rapid rainfall events and a subsequent steep decrease. This was theorized to be an effect of artificial macroporous envelopes surrounding TDR sensors creating an anomalous region with distinct local soil hydraulic properties. One of our objectives is to test how well the dual permeability model can describe this infiltration behavior and what coupling term would be most suitable.
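    A minimal particle swarm optimizer of the gradient-free, population-based kind mentioned above might look as follows; the objective function, bounds and hyper-parameters are placeholders and are not the ones used with DRUtES.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization: minimise `objective` within `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))    # positions
    v = np.zeros_like(x)                                     # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Hypothetical misfit between observed and simulated readings (placeholder target values).
misfit = lambda p: np.sum((p - np.array([0.1, 2.0, 0.5])) ** 2)
best, best_f = pso(misfit, bounds=[(0, 1), (0, 5), (0, 2)])
print(best, best_f)
```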

  14. The theoretical study of passive and active optical devices via planewave based transfer (scattering) matrix method and other approaches

    Energy Technology Data Exchange (ETDEWEB)

    Zhuo, Ye [Iowa State Univ., Ames, IA (United States)

    2011-01-01

    In this thesis, we theoretically study electromagnetic wave propagation in several passive and active optical components and devices, including 2-D photonic crystals, straight and curved waveguides, organic light emitting diodes (OLEDs), etc. Several optical designs are also presented, such as organic photovoltaic (OPV) cells and solar concentrators. The first part of the thesis focuses on theoretical investigation. First, the plane-wave-based transfer (scattering) matrix method (TMM) is briefly described, with a short review of photonic crystals and other numerical methods used to study them (chapters 1 and 2). Next, the TMM itself is investigated in detail and developed further to deal with more complex optical systems. In chapter 3, TMM is extended to curvilinear coordinates to study curved nanoribbon waveguides: the problem of a curved structure is transformed into an equivalent one of a straight structure with spatially dependent tensors of dielectric constant and magnetic permeability. In chapter 4, a new set of localized basis orbitals is introduced to locally represent the electromagnetic field in photonic crystals as an alternative to the plane-wave basis. The second part of the thesis focuses on the design of optical devices. First, two examples of TMM applications are given: the design of metal grating structures as replacements for ITO to enhance the optical absorption in OPV cells (chapter 6), and the design of the same structure to enhance the light extraction of OLEDs (chapter 7). Next, two design examples by the ray tracing method are given, applying a microlens array to enhance the light extraction of OLEDs (chapter 5) and an all-angle, wide-wavelength design of a solar concentrator (chapter 8). In summary, this dissertation extends the TMM so that it is capable of treating complex optical systems; several optical designs by TMM and the ray tracing method are also given as a full complement of the thesis.
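    The core idea of a transfer matrix calculation can be illustrated in one dimension at normal incidence; this toy stack calculation is only a sketch under simplifying assumptions (lossless, non-magnetic layers) and is not the plane-wave-based TMM developed in the thesis. The layer indices and thicknesses are hypothetical.

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of a homogeneous layer (normal incidence, non-magnetic)."""
    k = 2 * np.pi * n * d / wavelength                     # phase thickness
    return np.array([[np.cos(k), 1j * np.sin(k) / n],
                     [1j * n * np.sin(k), np.cos(k)]])

def transmittance(layers, wavelength, n_in=1.0, n_out=1.0):
    """Power transmittance of a stack of (refractive_index, thickness) layers."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, wavelength)
    (m11, m12), (m21, m22) = M
    t = 2 * n_in / (n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22)
    return (n_out / n_in) * abs(t) ** 2

# Hypothetical quarter-wave Bragg pair repeated 5 times, probed at 600 nm.
stack = [(2.3, 600 / (4 * 2.3)), (1.45, 600 / (4 * 1.45))] * 5
print(transmittance(stack, wavelength=600.0))              # low transmission inside the stop band
```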

  15. Quantification of the lung cancer risk from radon daughter exposure in dwellings - an epidemiological approach

    International Nuclear Information System (INIS)

    Edling, C.; Wingren, G.; Axelson, O.

    1986-01-01

    Some epidemiological studies have suggested a relationship between the concentration of decay products from radon, i.e., radon daughter exposure, in dwellings and lung cancer. Further experience from radon measurements has indicated that both the building material and, particularly, the radioactivity in the ground are important for the leakage of radon into houses. In Sweden, a survey is now ongoing in 15 municipalities with alum shale deposits, and in one area a case-referent evaluation has been made, considering building materials, ground conditions and smoking habits. The size of the study is small, but the results suggest that a risk exists and that there is a multiplicative effect of smoking and radon daughter exposure. About 30% of the lung cancers in the studied population might be attributable to elevated and potentially avoidable exposure to radon and radon daughters. (author)
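    The attributable-fraction statement above follows the standard population attributable fraction (Levin's formula); the exposure prevalence and relative risk in the sketch are placeholders, not the study's estimates.

```python
def population_attributable_fraction(prevalence, relative_risk):
    """Levin's formula: fraction of cases attributable to the exposure."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical: 20% of homes with elevated radon daughter levels and a relative risk of 2.9.
print(population_attributable_fraction(0.20, 2.9))   # about 0.28, i.e. roughly 30% of cases
```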

  16. Secondhand smoke exposure and other correlates of susceptibility to smoking: a propensity score matching approach.

    Science.gov (United States)

    McIntire, Russell K; Nelson, Ashlyn A; Macy, Jonathan T; Seo, Dong-Chul; Kolbe, Lloyd J

    2015-09-01

    Secondhand smoke (SHS) exposure is responsible for numerous diseases of the lungs and other bodily systems among children. In addition to the adverse health effects of SHS exposure, studies show that children exposed to SHS are more likely to smoke in adolescence. Susceptibility to smoking is a measure used to identify adolescent never-smokers who are at risk for smoking. Limited research has been conducted on the influence of SHS on susceptibility to smoking. The purpose of this study was to determine a robust measure of the strength of correlation between SHS exposure and susceptibility to smoking among never-smoking U.S. adolescents. This study used data from the 2009 National Youth Tobacco Survey to identify predictors of susceptibility to smoking in the full (pre-match) sample of adolescents and a smaller (post-match) sample created by propensity score matching. Results showed a significant association between SHS exposure and susceptibility to smoking among never-smoking adolescents in the pre-match (OR=1.47) and post-match (OR=1.52) samples. The odds ratio increase after matching suggests that the strength of the relationship was underestimated in the pre-match sample. Other significant correlates of susceptibility to smoking identified include: gender, race/ethnicity, personal income, smoke-free home rules, number of smoking friends, perception of SHS harm, perceived benefits of smoking, and exposure to pro-tobacco media messages. The use of propensity score matching procedures reduced bias in the post-match sample, and provided a more robust estimate of the influence of SHS exposure on susceptibility to smoking, compared to the pre-match sample estimates. Copyright © 2015 Elsevier Ltd. All rights reserved.
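    A minimal propensity-score-matching workflow of the kind described above (propensity model, 1:1 nearest-neighbour matching, then comparison in the matched sample) might be sketched as follows; the variable names and simulated data are hypothetical, and the study itself reports odds ratios rather than the crude risk difference computed here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 2000
covariates = rng.normal(size=(n, 4))                                 # e.g. age, income, peers, media
shs_exposed = rng.binomial(1, 1 / (1 + np.exp(-covariates[:, 0])))   # exposure depends on covariates
susceptible = rng.binomial(1, 1 / (1 + np.exp(-(0.4 * shs_exposed + covariates[:, 1]))))

# 1. Propensity model: probability of SHS exposure given covariates.
ps = LogisticRegression(max_iter=1000).fit(covariates, shs_exposed).predict_proba(covariates)[:, 1]

# 2. 1:1 nearest-neighbour matching on the propensity score.
treated, control = np.where(shs_exposed == 1)[0], np.where(shs_exposed == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
matched_control = control[nn.kneighbors(ps[treated].reshape(-1, 1))[1].ravel()]

# 3. Compare susceptibility between exposed children and their matched controls.
diff = susceptible[treated].mean() - susceptible[matched_control].mean()
print(f"matched risk difference: {diff:.3f}")
```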

  17. Matrix calculus

    CERN Document Server

    Bodewig, E

    1959-01-01

    Matrix Calculus, Second Revised and Enlarged Edition focuses on systematic calculation with the building blocks of a matrix and rows and columns, shunning the use of individual elements. The publication first offers information on vectors, matrices, further applications, measures of the magnitude of a matrix, and forms. The text then examines eigenvalues and exact solutions, including the characteristic equation, eigenrows, extremum properties of the eigenvalues, bounds for the eigenvalues, elementary divisors, and bounds for the determinant. The text ponders on approximate solutions, as well

  18. Understanding the drug release mechanism from a montmorillonite matrix and its binary mixture with a hydrophilic polymer using a compartmental modelling approach

    Science.gov (United States)

    Choiri, S.; Ainurofiq, A.

    2018-03-01

    Drug release from a montmorillonite (MMT) matrix is a complex process controlled by the swelling of MMT and the interaction between the drug and MMT. The aim of this research was to establish a suitable model of the drug release mechanism from MMT and from its binary mixture with a hydrophilic polymer in a controlled-release formulation, based on a compartmental modelling approach. Theophylline was used as a model drug and incorporated into MMT, and into a binary mixture with hydroxypropyl methylcellulose (HPMC) as the hydrophilic polymer, by a kneading method. A dissolution test was performed and the drug release was modelled with the aid of the WinSAAM software. A two-compartment model was proposed, based on compartments representing the swelling capability and the basal spacing of MMT. The model was evaluated against goodness-of-fit and statistical parameters and validated by a cross-validation technique. Drug release from the MMT matrix was governed by a burst release of unloaded drug, the swelling ability, the basal-spacing compartment of MMT, and the equilibrium between the basal-spacing and swelling compartments. Furthermore, the addition of HPMC to the MMT system altered the contribution of the swelling compartment and the equilibrium between the swelling and basal-spacing compartments. In addition, the hydrophilic polymer reduced the burst release of unloaded drug.
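    One way to picture the compartmental idea (a burst fraction plus exchange between a swelling compartment and a basal-spacing compartment, both releasing into the medium) is a small linear ODE system; the structure and rate constants below are a generic illustration, not the fitted WinSAAM model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def release_model(t, y, k_burst, k_sw, k_bs, k_eq):
    """y = [unloaded_surface_drug, swelling_compartment, basal_spacing_compartment, released]"""
    surface, swell, basal, released = y
    d_surface = -k_burst * surface                        # burst release of unloaded drug
    d_swell = -k_sw * swell + k_eq * (basal - swell)      # swelling-controlled release + exchange
    d_basal = -k_bs * basal - k_eq * (basal - swell)      # release from the interlayer (basal spacing)
    d_released = k_burst * surface + k_sw * swell + k_bs * basal
    return [d_surface, d_swell, d_basal, d_released]

y0 = [0.2, 0.5, 0.3, 0.0]                # hypothetical initial drug distribution (fractions)
params = (1.5, 0.10, 0.02, 0.05)         # hypothetical rate constants (1/h)
sol = solve_ivp(release_model, (0, 24), y0, args=params, t_eval=np.linspace(0, 24, 25))
print(sol.y[3])                          # cumulative fraction released over 24 h
```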

  19. An Integrated Approach to Assess the Role of Chemical Exposure in Obesity

    NARCIS (Netherlands)

    Legler, J.

    2013-01-01

    The evidence that developmental exposure of humans to chemicals plays a role in onset of obesity is convincing, yet controversial as it challenges traditional views on the etiology of obesity. OBELIX, one of the largest pan-European studies researching the obesogen hypothesis, is accruing

  20. Bio-monitoring of mycotoxin exposure in Cameroon using a urinary multi-biomarker approach.

    Science.gov (United States)

    Abia, Wilfred A; Warth, Benedikt; Sulyok, Michael; Krska, Rudolf; Tchana, Angele; Njobeh, Patrick B; Turner, Paul C; Kouanfack, Charles; Eyongetah, Mbu; Dutton, Mike; Moundipa, Paul F

    2013-12-01

    Bio-monitoring of human exposure to mycotoxin has mostly been limited to a few individually measured mycotoxin biomarkers. This study aimed to determine the frequency and level of exposure to multiple mycotoxins in human urine from Cameroonian adults. 175 Urine samples (83% from HIV-positive individuals) and food frequency questionnaire responses were collected from consenting Cameroonians, and analyzed for 15 mycotoxins and relevant metabolites using LC-ESI-MS/MS. In total, eleven analytes were detected individually or in combinations in 110/175 (63%) samples including the biomarkers aflatoxin M1, fumonisin B1, ochratoxin A and total deoxynivalenol. Additionally, important mycotoxins and metabolites thereof, such as fumonisin B2, nivalenol and zearalenone, were determined, some for the first time in urine following dietary exposures. Multi-mycotoxin contamination was common with one HIV-positive individual exposed to five mycotoxins, a severe case of co-exposure that has never been reported in adults before. For the first time in Africa or elsewhere, this study quantified eleven mycotoxin biomarkers and bio-measures in urine from adults. For several mycotoxins estimates indicate that the tolerable daily intake is being exceeded in this study population. Given that many mycotoxins adversely affect the immune system, future studies will examine whether combinations of mycotoxins negatively impact Cameroonian population particularly immune-suppressed individuals. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. A Geographic Approach to Modelling Human Exposure to Traffic Air Pollution using GIS

    DEFF Research Database (Denmark)

    Jensen, S. S.

    on gender and age from the Central Population Register (CPR); the number of employees from the Central Business Register (CER); standardised time-activity profiles for the different age groups in the residence and workplace microenvironments; and meteorological parameters (hourly). The exposure model...

  2. Enhancing Exposure and Response Prevention for OCD: A Couple-Based Approach

    Science.gov (United States)

    Abramowitz, Jonathan S.; Baucom, Donald H.; Wheaton, Michael G.; Boeding, Sara; Fabricant, Laura E.; Paprocki, Christine; Fischer, Melanie S.

    2013-01-01

    The effectiveness of individual therapy by exposure and response prevention (ERP) for obsessive-compulsive disorder (OCD) is well established, yet not all patients respond well, and some show relapse on discontinuation. This article begins by providing an overview of the personal and interpersonal experiences of OCD, focusing on interpersonal…

  3. Understanding the Impact of Trauma Exposure on Posttraumatic Stress Symptomatology: A Structural Equation Modeling Approach

    Science.gov (United States)

    Chen, Wei; Wang, Long; Zhang, Xing-Li; Shi, Jian-Nong

    2012-01-01

    The objective of this study was to investigate the impact of trauma exposure on the posttraumatic stress symptomatology (PTSS) of children who resided near the epicenter of the 2008 Wenchuan earthquake. The mechanisms of this impact were explored via structural equation models with self-esteem and coping strategies included as mediators. The…

  4. Nursing research in community-based approaches to reduce exposure to secondhand smoke.

    Science.gov (United States)

    Hahn, Ellen J; Ashford, Kristin B; Okoli, Chizimuzo T C; Rayens, Mary Kay; Ridner, S Lee; York, Nancy L

    2009-01-01

    Secondhand smoke (SHS) is the third leading cause of preventable death in the United States and a major source of indoor air pollution, accounting for an estimated 53,000 deaths per year among nonsmokers. Secondhand smoke exposure varies by gender, race/ethnicity, and socioeconomic status. The most effective public health intervention to reduce SHS exposure is to implement and enforce smoke-free workplace policies that protect entire populations including all workers regardless of occupation, race/ethnicity, gender, age, and socioeconomic status. This chapter summarizes community and population-based nursing research to reduce SHS exposure. Most of the nursing research in this area has been policy outcome studies, documenting improvement in indoor air quality, worker's health, public opinion, and reduction in Emergency Department visits for asthma, acute myocardial infarction among women, and adult smoking prevalence. These findings suggest a differential health effect by strength of law. Further, smoke-free laws do not harm business or employee turnover, nor are revenues from charitable gaming affected. Additionally, smoke-free laws may eventually have a positive effect on cessation among adults. There is emerging nursing science exploring the link between SHS exposure to nicotine and tobacco dependence, suggesting one reason that SHS reduction is a quit smoking strategy. Other nursing research studies address community readiness for smoke-free policy, and examine factors that build capacity for smoke-free policy. Emerging trends in the field include tobacco free health care and college campuses. A growing body of nursing research provides an excellent opportunity to conduct and participate in community and population-based research to reduce SHS exposure for both vulnerable populations and society at large.

  5. Identification of Proteins with Potential Osteogenic Activity Present in the Water-Soluble Matrix Proteins from Crassostrea gigas Nacre Using a Proteomic Approach

    Directory of Open Access Journals (Sweden)

    Daniel V. Oliveira

    2012-01-01

    Full Text Available Nacre, when implanted in vivo in bones of dogs, sheep, mice, and humans, induces a biological response that includes integration and osteogenic activity in the host tissue, which seems to be activated by a set of proteins present in the nacre water-soluble matrix (WSM). We describe here an experimental approach that can accurately identify the proteins present in the WSM of shell mollusk nacre. Four proteins (three gigasin-2 isoforms and a cystatin A2) were identified for the first time in the WSM of Crassostrea gigas nacre, using 2DE and LC-MS/MS for protein identification. These proteins are thought to be involved in bone remodeling processes and could be responsible for the biocompatibility shown between bone and nacre grafts. These results represent a contribution to the study of the shell biomineralization process and open new perspectives for the development of new nacre biomaterials for orthopedic applications.

  6. A combined approach of enamel matrix derivative gel and autogenous bone grafts in treatment of intrabony periodontal defects. A case report.

    Science.gov (United States)

    Leung, George; Jin, Lijian

    2003-04-01

    Enamel matrix derivative (EMD) has recently been introduced as a new modality in regenerative periodontal therapy. This case report demonstrates a combined approach in topical application of EMD gel (Emdogain) and autogenous bone grafts for treatment of intrabony defects and furcation involvement defects in a patient with chronic periodontitis. The seven-month post-surgery clinical and radiographic results were presented. The combined application of EMD gel with autogenous bone grafts in intrabony osseous defects resulted in clinically significant gain of attachment on diseased root surfaces and bone fill on radiographs. Further controlled clinical studies are required to confirm the long-term effectiveness of the combination of EMD gel and autogenous bone grafts in treatment of various osseous defects in subjects with chronic periodontitis.

  7. Risks for the development of outcomes related to occupational allergies: an application of the asthma-specific job exposure matrix compared with self-reports and investigator scores on job-training-related exposure.

    OpenAIRE

    Suarthana, E.; Heederik, D.J.J.; Ghezzo, H.; Malo, J.L.; Kennedy, S.M.; Gautrin, D.

    2009-01-01

    BACKGROUND AND AIM: Risks for development of occupational sensitisation, bronchial hyper-responsiveness, rhinoconjunctival and chest symptoms at work associated with continued exposure to high molecular weight (HMW) allergens were estimated with three exposure assessment methods. METHODS: A Cox regression analysis with adjustment for atopy and smoking habit was carried out in 408 apprentices in animal health technology, pastry making, and dental hygiene technology with an 8-year follow-up aft...

  8. Effect of Prior Exposure at Elevated Temperatures on Tensile Properties and Stress-Strain Behavior of Four Non-Oxide Ceramic Matrix Composites

    Science.gov (United States)

    2015-06-18

    Thesis on the effect of prior exposure at elevated temperatures on the tensile properties and stress-strain behavior of four non-oxide ceramic matrix composites. COI Ceramics (San Diego, CA) manufactured the SiC/SiNC and C/SiC composites by polymer infiltration and pyrolysis (PIP); the SiC/SiNC material consists of Hi-Nicalon™ SiC fibers in a SiNC matrix derived by PIP. C/HYPR-SiC™ and SiC/HYPR-SiC™ composites are also examined.

  9. INERT-MATRIX FUEL: ACTINIDE "BURNING" AND DIRECT DISPOSAL

    International Nuclear Information System (INIS)

    Rodney C. Ewing; Lumin Wang

    2002-01-01

    Excess actinides result from the dismantlement of nuclear weapons (Pu) and the reprocessing of commercial spent nuclear fuel (mainly 241Am, 244Cm and 237Np). In Europe, Canada and Japan, studies have determined much improved efficiencies for burnup of actinides using inert-matrix fuels. This innovative approach also considers the properties of the inert-matrix fuel as a nuclear waste form for direct disposal after one cycle of burn-up. Direct disposal can considerably reduce cost, processing requirements, and radiation exposure to workers.

  10. Dietary Exposure Assessment of Danish Consumers to Dithiocarbamate Residues in Food: a Comparison of the Deterministic and Probabilistic Approach

    DEFF Research Database (Denmark)

    Jensen, Bodil Hamborg; Andersen, Jens Hinge; Petersen, Annette

    2008-01-01

    Probabilistic and deterministic estimates of the acute and chronic exposure of the Danish population to dithiocarbamate residues were performed. The Monte Carlo Risk Assessment programme (MCRA 4.0) was used for the probabilistic risk assessment. Food consumption data were obtained from the nationwide dietary survey conducted in 2000-02. Residue data for 5721 samples from the monitoring programme conducted in the period 1998-2003 were used for dithiocarbamates, which had been determined as carbon disulphide. Contributions from 26 commodities were included in the calculations. Using the probabilistic approach, the daily acute intakes at the 99.9th percentile for adults and children were 11.2 and 28.2 μg kg(-1) body weight day(-1), representing 5.6% and 14.1% of the ARfD for maneb, respectively. When comparing the point estimate approach with the probabilistic approach, the outcome
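    The probabilistic calculation can be sketched as a simple Monte Carlo over consumption, residue and body-weight distributions; the distributions, the single commodity and the acute reference dose below are placeholders rather than the MCRA inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 200_000

# Hypothetical inputs: daily consumption (g) of one commodity, residue level (mg/kg),
# and body weight (kg); the real assessment combines 26 commodities and survey data.
consumption_g = rng.lognormal(mean=np.log(80), sigma=0.6, size=n_sim)
residue_mg_per_kg = rng.lognormal(mean=np.log(0.05), sigma=1.0, size=n_sim)
body_weight_kg = rng.normal(70, 12, size=n_sim).clip(min=30)

# Acute intake in ug per kg body weight per day.
intake = consumption_g / 1000 * residue_mg_per_kg * 1000 / body_weight_kg

p999 = np.percentile(intake, 99.9)
arfd = 200.0   # placeholder acute reference dose, ug/kg bw/day
print(f"99.9th percentile intake: {p999:.2f} ug/kg bw/day ({100 * p999 / arfd:.1f}% of ARfD)")
```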

  11. Forecasting human exposure to atmospheric pollutants in Portugal - A modelling approach

    Science.gov (United States)

    Borrego, C.; Sá, E.; Monteiro, A.; Ferreira, J.; Miranda, A. I.

    2009-12-01

    Air pollution has become one main environmental concern because of its known impact on human health. Aiming to inform the population about the air they are breathing, several air quality modelling systems have been developed and tested allowing the assessment and forecast of air pollution ambient levels in many countries. However, every day, an individual is exposed to different concentrations of atmospheric pollutants as he/she moves from and to different outdoor and indoor places (the so-called microenvironments). Therefore, a more efficient way to prevent the population from the health risks caused by air pollution should be based on exposure rather than air concentrations estimations. The objective of the present study is to develop a methodology to forecast the human exposure of the Portuguese population based on the air quality forecasting system available and validated for Portugal since 2005. Besides that, a long-term evaluation of human exposure estimates aims to be obtained using one-year of this forecasting system application. Additionally, a hypothetical 50% emission reduction scenario has been designed and studied as a contribution to study emission reduction strategies impact on human exposure. To estimate the population exposure the forecasting results of the air quality modelling system MM5-CHIMERE have been combined with the population spatial distribution over Portugal and their time-activity patterns, i.e. the fraction of the day time spent in specific indoor and outdoor places. The population characterization concerning age, work, type of occupation and related time spent was obtained from national census and available enquiries performed by the National Institute of Statistics. A daily exposure estimation module has been developed gathering all these data and considering empirical indoor/outdoor relations from literature to calculate the indoor concentrations in each one of the microenvironments considered, namely home, office/school, and other

  12. MDI Biological Laboratory Arsenic Summit: Approaches to Limiting Human Exposure to Arsenic

    OpenAIRE

    Stanton, Bruce A.

    2015-01-01

    This report is the outcome of the meeting: “Environmental and Human Health Consequences of Arsenic”, held at the MDI Biological Laboratory in Salisbury Cove, Maine, August 13–15, 2014. Human exposure to arsenic represents a significant health problem worldwide that requires immediate attention according to the World Health Organization (WHO). One billion people are exposed to arsenic in food and more than 200 million people ingest arsenic via drinking water at concentrations greater than inte...

  13. Influence of exposure to pesticides on telomere length in tobacco farmers: A biology system approach

    Energy Technology Data Exchange (ETDEWEB)

    Kahl, Vivian Francília Silva [Laboratory of Genetic Toxicology, PPGBioSaúde and PPGGTA, Lutheran University of Brazil (ULBRA), Canoas, RS (Brazil); Silva, Juliana da, E-mail: juliana.silva@ulbra.br [Laboratory of Genetic Toxicology, PPGBioSaúde and PPGGTA, Lutheran University of Brazil (ULBRA), Canoas, RS (Brazil); Rabaioli da Silva, Fernanda, E-mail: fernanda.silva@unilasalle.edu.br [Master’s Degree in Environmental Impact Evaluation, Centro Universitário La Salle, Canoas, RS (Brazil)

    2016-09-15

    Highlights: • Exposure to pesticides in tobacco fields is related to shortened telomere length. • The molecular mechanism of pesticide action on telomere length is not fully understood. • Pesticides inhibit the ubiquitin proteasome system. • Nicotine activates the ubiquitin proteasome system. • Pesticides and nicotine regulate telomere length. - Abstract: Various pesticides in the form of mixtures must be used to keep tobacco crops pest-free. Recent studies have shown a link between occupational exposure to pesticides in tobacco crops and increased DNA damage and frequencies of mononuclei, nuclear buds and binucleated cells in buccal cells, as well as micronuclei in lymphocytes. Furthermore, pesticides used specifically for tobacco crops shorten telomere length (TL) significantly. However, the molecular mechanism of pesticide action on telomere length is not fully understood. Our study evaluated the interaction between a complex mixture of chemical compounds (tobacco cultivation pesticides plus nicotine) and proteins associated with maintaining TL, as well as the biological processes involved in this exposure, using Systems Biology tools, to provide insight regarding the influence of pesticide exposure on TL maintenance in tobacco farmers. Our analysis showed that one cluster was associated with TL proteins that act in bioprocesses such as (i) telomere maintenance via telomere lengthening; (ii) senescence; (iii) age-dependent telomere shortening; (iv) DNA repair; (v) cellular response to stress and (vi) regulation of the proteasome ubiquitin-dependent protein catabolic process. We also describe how pesticides and nicotine regulate telomere length. In addition, pesticides inhibit the ubiquitin proteasome system (UPS) and consequently increase proteins of the shelterin complex, preventing telomerase from accessing the telomere, whereas nicotine activates UPS mechanisms and promotes the degradation of human telomerase reverse transcriptase (hTERT), decreasing telomerase activity.

  14. Influence of exposure to pesticides on telomere length in tobacco farmers: A biology system approach

    International Nuclear Information System (INIS)

    Kahl, Vivian Francília Silva; Silva, Juliana da; Rabaioli da Silva, Fernanda

    2016-01-01

    Highlights: • Exposure to pesticides in tobacco fields is related to shortened telomere length. • The molecular mechanism of pesticide action on telomere length is not fully understood. • Pesticides inhibit the ubiquitin proteasome system. • Nicotine activates the ubiquitin proteasome system. • Pesticides and nicotine regulate telomere length. - Abstract: Various pesticides in the form of mixtures must be used to keep tobacco crops pest-free. Recent studies have shown a link between occupational exposure to pesticides in tobacco crops and increased DNA damage and frequencies of mononuclei, nuclear buds and binucleated cells in buccal cells, as well as micronuclei in lymphocytes. Furthermore, pesticides used specifically for tobacco crops shorten telomere length (TL) significantly. However, the molecular mechanism of pesticide action on telomere length is not fully understood. Our study evaluated the interaction between a complex mixture of chemical compounds (tobacco cultivation pesticides plus nicotine) and proteins associated with maintaining TL, as well as the biological processes involved in this exposure, using Systems Biology tools, to provide insight regarding the influence of pesticide exposure on TL maintenance in tobacco farmers. Our analysis showed that one cluster was associated with TL proteins that act in bioprocesses such as (i) telomere maintenance via telomere lengthening; (ii) senescence; (iii) age-dependent telomere shortening; (iv) DNA repair; (v) cellular response to stress and (vi) regulation of the proteasome ubiquitin-dependent protein catabolic process. We also describe how pesticides and nicotine regulate telomere length. In addition, pesticides inhibit the ubiquitin proteasome system (UPS) and consequently increase proteins of the shelterin complex, preventing telomerase from accessing the telomere, whereas nicotine activates UPS mechanisms and promotes the degradation of human telomerase reverse transcriptase (hTERT), decreasing telomerase activity.

  15. Development of Urinary Biomarkers for Internal Exposure by Cesium-137 Using a Metabolomics Approach in Mice

    Science.gov (United States)

    Goudarzi, Maryam; Weber, Waylon; Mak, Tytus D.; Chung, Juijung; Doyle-Eisele, Melanie; Melo, Dunstana; Brenner, David J.; Guilmette, Raymond A.; Fornace, Albert J.

    2014-01-01

    Cesium-137 is a fission product of uranium and plutonium in nuclear reactors and is released in large quantities during nuclear explosions or detonation of an improvised device containing this isotope. This environmentally persistent radionuclide undergoes radioactive decay with the emission of beta particles as well as gamma radiation. Exposure to 137Cs at high doses can cause acute radiation sickness and increase risk for cancer and death. The serious health risks associated with 137Cs exposure makes it critical to understand how it affects human metabolism and whether minimally invasive and easily accessible samples such as urine and serum can be used to triage patients in case of a nuclear disaster or a radiologic event. In this study, we have focused on establishing a time-dependent metabolomic profile for urine collected from mice injected with 137CsCl. The samples were collected from control and exposed mice on days 2, 5, 20 and 30 after injection. The samples were then analyzed by ultra-performance liquid chromatography coupled to time-of-flight mass spectrometry (UPLC/TOFMS) and processed by an array of informatics and statistical tools. A total of 1,412 features were identified in ESI+ and ESI− modes from which 200 were determined to contribute significantly to the separation of metabolomic profiles of controls from those of the different treatment time points. The results of this study highlight the ease of use of the UPLC/TOFMS platform in finding urinary biomarkers for 137Cs exposure. Pathway analysis of the statistically significant metabolites suggests perturbations in several amino acid and fatty acid metabolism pathways. The results also indicate that 137Cs exposure causes: similar changes in the urinary excretion levels of taurine and citrate as seen with external-beam gamma radiation; causes no attenuation in the levels of hexanoylglycine and N-acetylspermidine; and has unique effects on the levels of isovalerylglycine and tiglylglycine. PMID

  16. Advances on a Decision Analytic Approach to Exposure-Based Chemical Prioritization.

    Science.gov (United States)

    Wood, Matthew D; Plourde, Kenton; Larkin, Sabrina; Egeghy, Peter P; Williams, Antony J; Zemba, Valerie; Linkov, Igor; Vallero, Daniel A

    2018-05-11

    The volume and variety of manufactured chemicals is increasing, although little is known about the risks associated with the frequency and extent of human exposure to most chemicals. The EPA and the recent signing of the Lautenberg Act have both signaled the need for high-throughput methods to characterize and screen chemicals based on exposure potential, such that more comprehensive toxicity research can be informed. Prior work of Mitchell et al. using multicriteria decision analysis tools to prioritize chemicals for further research is enhanced here, resulting in a high-level chemical prioritization tool for risk-based screening. Reliable exposure information is a key gap in currently available engineering analytics to support predictive environmental and health risk assessments. An elicitation with 32 experts informed relative prioritization of risks from chemical properties and human use factors, and the values for each chemical associated with each metric were approximated with data from EPA's CP_CAT database. Three different versions of the model were evaluated using distinct weight profiles, resulting in three different ranked chemical prioritizations with only a small degree of variation across weight profiles. Future work will aim to include greater input from human factors experts and better define qualitative metrics. © 2018 Society for Risk Analysis.
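    A screening-level prioritization of the weighted, multicriteria kind described above can be sketched as a normalised weighted sum; the metrics, weights and chemicals below are hypothetical and do not reproduce the elicited weight profiles or CP_CAT data.

```python
import numpy as np

# Hypothetical screening metrics per chemical (rows): higher = greater exposure concern.
chemicals = ["chem_A", "chem_B", "chem_C"]
metrics = np.array([
    # production_volume, persistence, consumer_use, detection_frequency
    [0.8, 0.3, 0.9, 0.6],
    [0.2, 0.9, 0.1, 0.4],
    [0.5, 0.5, 0.6, 0.9],
])
weights = np.array([0.3, 0.2, 0.3, 0.2])   # one placeholder weight profile

# Min-max normalise each metric, then take the weighted sum as the priority score.
norm = (metrics - metrics.min(axis=0)) / (metrics.max(axis=0) - metrics.min(axis=0))
scores = norm @ weights
for name, score in sorted(zip(chemicals, scores), key=lambda p: -p[1]):
    print(f"{name}: {score:.2f}")
```

    Different weight profiles simply re-rank the same normalised metrics, which mirrors how the three elicited profiles produced modestly different prioritizations.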

  17. Examining Patterns of Exposure to Family Violence in Preschool Children: A Latent Class Approach.

    Science.gov (United States)

    Grasso, Damion J; Petitclerc, Amélie; Henry, David B; McCarthy, Kimberly J; Wakschlag, Lauren S; Briggs-Gowan, Margaret J

    2016-12-01

    Young children can experience violence directly or indirectly in the home, with some children exposed to multiple forms of violence. These polyvictims often experience violence that is severe, chronic, and multifaceted. The current study used latent class analysis to identify and examine the pattern of profiles of exposure to family violence (i.e., violence directed towards the child and between caregivers) among a sample of 474 children ages 3-6 year who were drawn from the Multidimensional Assessment of Preschoolers Study (Wakschlag et al., 2014). The data yielded 3 classes: a polyvictimized class (n = 72; 15.2%) with high probability of exposure to all forms of violence, a harsh parenting class (n = 235; 49.5%), distinguished mainly by child-directed physical discipline in the absence of more severe forms of violence, and a low-exposure class (n = 167; 35.2%). Classes were differentiated by contextual factors, maternal characteristics, and mother-reported and observational indicators of parenting and child functioning with most effect sizes between medium and large. These findings add to emerging evidence linking polyvictimization to impaired caregiving and adverse psychological outcomes for children and offer important insight for prevention and intervention for this vulnerable population. Copyright © 2016 International Society for Traumatic Stress Studies.

  18. Influence of exposure to pesticides on telomere length in tobacco farmers: A biology system approach.

    Science.gov (United States)

    Kahl, Vivian Francília Silva; da Silva, Juliana; da Silva, Fernanda Rabaioli

    Various pesticides in the form of mixtures must be used to keep tobacco crops pest-free. Recent studies have shown a link between occupational exposure to pesticides in tobacco crops and increased DNA damage and frequencies of mononuclei, nuclear buds and binucleated cells in buccal cells, as well as micronuclei in lymphocytes. Furthermore, pesticides used specifically for tobacco crops shorten telomere length (TL) significantly. However, the molecular mechanism of pesticide action on telomere length is not fully understood. Our study evaluated the interaction between a complex mixture of chemical compounds (tobacco cultivation pesticides plus nicotine) and proteins associated with maintaining TL, as well as the biological processes involved in this exposure, using Systems Biology tools, to provide insight regarding the influence of pesticide exposure on TL maintenance in tobacco farmers. Our analysis showed that one cluster was associated with TL proteins that act in bioprocesses such as (i) telomere maintenance via telomere lengthening; (ii) senescence; (iii) age-dependent telomere shortening; (iv) DNA repair; (v) cellular response to stress and (vi) regulation of the proteasome ubiquitin-dependent protein catabolic process. We also describe how pesticides and nicotine regulate telomere length. In addition, pesticides inhibit the ubiquitin proteasome system (UPS) and consequently increase proteins of the shelterin complex, preventing telomerase from accessing the telomere, whereas nicotine activates UPS mechanisms and promotes the degradation of human telomerase reverse transcriptase (hTERT), decreasing telomerase activity. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. An international comparison of models and approaches for the estimation of the radiological exposure of non-human biota

    International Nuclear Information System (INIS)

    Beresford, Nicholas A.; Balonov, Mikhail; Beaugelin-Seiller, Karine; Brown, Justin; Copplestone, David; Hingston, Joanne L.; Horyna, Jan; Hosseini, Ali; Howard, Brenda J.; Kamboj, Sunita; Nedveckaite, Tatjana; Olyslaegers, Geert; Sazykina, Tatiana; Vives i Batlle, Jordi; Yankovich, Tamara L.; Yu, Charley

    2008-01-01

    Over the last decade a number of models and approaches have been developed for the estimation of the exposure of non-human biota to ionising radiations. In some countries these are now being used in regulatory assessments. However, to date there has been no attempt to compare the outputs of the different models used. This paper presents the work of the International Atomic Energy Agency's EMRAS Biota Working Group which compares the predictions of a number of such models in model-model and model-data inter-comparisons

  20. Application of the positive matrix factorization approach to identify heavy metal sources in sediments. A case study on the Mexican Pacific Coast.

    Science.gov (United States)

    González-Macías, C; Sánchez-Reyna, G; Salazar-Coria, L; Schifter, I

    2014-01-01

    During the last two decades, sediments collected from different water bodies of the Tehuantepec Basin, located in the southeast of the Mexican Pacific Coast, showed that concentrations of heavy metals may pose a risk to the environment and human health. The extractable organic matter, geoaccumulation index, and enrichment factors were quantified for arsenic, cadmium, copper, chromium, nickel, lead, vanadium, zinc, and the fine-grained sediment fraction. The non-parametric SiZer method was applied to assess the statistical significance of the reconstructed metal variation over time. This inference method appears to be particularly natural and well suited to temperature and other environmental reconstructions. In this approach, a collection of smooths of the reconstructed metal concentrations is considered simultaneously, and inferences about the significance of the metal trends can be made with respect to time. Hence, the database represents a consolidated set of available and validated water and sediment data for an urban industrialized area, which is very useful as a case study site. The positive matrix factorization approach was used in the identification and source apportionment of the anthropogenic heavy metals in the sediments. Regionally, metals and organic matter are depleted relative to crustal abundance in a range of 45-55%, while there is an inorganic enrichment from lithogenous/anthropogenic sources of around 40%. Only extractable organic matter, Pb, As, and Cd can be related to non-crustal sources, suggesting that the additional input cannot be explained by local runoff or erosion processes.
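    Positive matrix factorization decomposes a samples-by-species concentration matrix into non-negative source contributions and source profiles. As a rough, unweighted stand-in (true PMF additionally weights each entry by its measurement uncertainty), scikit-learn's NMF conveys the idea; the data below are synthetic.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic data: 60 sediment samples x 8 metals generated from 3 hidden "sources".
true_profiles = rng.random((3, 8))
true_contributions = rng.random((60, 3))
concentrations = true_contributions @ true_profiles + 0.01 * rng.random((60, 8))

# Factorise X ~ G F with G, F >= 0 (G: source contributions, F: source profiles).
model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
G = model.fit_transform(concentrations)
F = model.components_

print("reconstruction error:", model.reconstruction_err_)
print("estimated profile of source 1:", np.round(F[0], 2))
```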

  1. A genetic meta-algorithm-assisted inversion approach: hydrogeological study for the determination of volumetric rock properties and matrix and fluid parameters in unsaturated formations

    Science.gov (United States)

    Szabó, Norbert Péter

    2018-03-01

    An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.
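    A float-encoded genetic algorithm of the general kind mentioned above (real-valued chromosomes, tournament selection, blend crossover, Gaussian mutation, elitism) might be sketched as follows; the misfit function and parameter bounds are placeholders, not the engineering geophysical response equations.

```python
import numpy as np

def float_ga(misfit, bounds, pop_size=40, n_gen=150, cx_alpha=0.5, mut_sigma=0.05, seed=0):
    """Minimise `misfit` over real-valued chromosomes constrained to `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(n_gen):
        fit = np.apply_along_axis(misfit, 1, pop)
        # Tournament selection of parents.
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Blend (BLX-alpha style) crossover between consecutive parents.
        partners = np.roll(parents, 1, axis=0)
        gamma = rng.uniform(-cx_alpha, 1 + cx_alpha, size=parents.shape)
        children = gamma * parents + (1 - gamma) * partners
        # Gaussian mutation, bound clipping, and elitism (keep the current best).
        children += rng.normal(0, mut_sigma * (hi - lo), size=children.shape)
        children = np.clip(children, lo, hi)
        children[0] = pop[fit.argmin()]
        pop = children
    fit = np.apply_along_axis(misfit, 1, pop)
    return pop[fit.argmin()], fit.min()

# Hypothetical misfit for two zone parameters (placeholder target values).
target = np.array([0.35, 2.1])
best, best_fit = float_ga(lambda p: np.sum((p - target) ** 2), bounds=[(0, 1), (0, 5)])
print(best, best_fit)
```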

  2. Matrix thermalization

    International Nuclear Information System (INIS)

    Craps, Ben; Evnin, Oleg; Nguyen, Kévin

    2017-01-01

    Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.

  3. Matrix thermalization

    Science.gov (United States)

    Craps, Ben; Evnin, Oleg; Nguyen, Kévin

    2017-02-01

    Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.

  4. Matrix thermalization

    Energy Technology Data Exchange (ETDEWEB)

    Craps, Ben [Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB), and International Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium); Evnin, Oleg [Department of Physics, Faculty of Science, Chulalongkorn University, Thanon Phayathai, Pathumwan, Bangkok 10330 (Thailand); Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB), and International Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium); Nguyen, Kévin [Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB), and International Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium)

    2017-02-08

    Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.

  5. A novel approach reveals that zinc oxide nanoparticles are bioavailable and toxic after dietary exposures

    Science.gov (United States)

    Croteau, M.-N.; Dybowska, A.D.; Luoma, S.N.; Valsami-Jones, E.

    2011-01-01

    If engineered nanomaterials are released into the environment, some are likely to end up associated with the food of animals due to aggregation and sorption processes. However, few studies have considered dietary exposure of nanomaterials. Here we show that zinc (Zn) from isotopically modified 67ZnO particles is efficiently assimilated by freshwater snails when ingested with food. The 67Zn from nano-sized 67ZnO appears as bioavailable as 67Zn internalized by diatoms. Apparent agglomeration of the zinc oxide (ZnO) particles did not reduce bioavailability, nor preclude toxicity. In the diet, ZnO nanoparticles damage digestion: snails ate less, defecated less and inefficiently processed the ingested food when exposed to high concentrations of ZnO. It was not clear whether the toxicity was due to the high Zn dose achieved with nanoparticles or to the ZnO nanoparticles themselves. Further study of exposure from nanoparticles in food would greatly benefit assessment of ecological and human health risks. © 2011 Informa UK, Ltd.

  6. A systematic approach to community resilience that reduces the federal fiscal exposure to climate change

    Science.gov (United States)

    Stwertka, C.; Albert, M. R.; White, K. D.

    2016-12-01

    Despite widely available information about the adverse impacts of climate change to the public, including both private sector and federal fiscal exposure, there remain opportunities to effectively translate this knowledge into action. Further delay of climate preparedness and resilience actions imposes a growing toll on American communities and the United States fiscal budget. We hypothesize that a set of four criteria must be met before a community can translate climate disturbances into preparedness action. We examine four case studies to review these proposed criteria, we discuss the critical success factors that can build community resilience, and we define an operational strategy that could support community resilience while reducing the federal fiscal exposure to climate change. This operational strategy defines a community response system that integrates social science research, builds on the strengths of different sectors, values existing resources, and reduces the planning-to-action time. Our next steps are to apply this solution in the field, and to study the dynamics of community engagement and the circular economy.

  7. Assessing exposures and risks in heterogeneously contaminated areas: A simulation approach

    International Nuclear Information System (INIS)

    Fingleton, D.J.; MacDonell, M.M.; Haroun, L.A.; Oezkaynak, H.; Butler, D.A.; Jianping Xue

    1991-01-01

    The US Department of Energy (DOE) is responsible for cleanup activities at a number of facilities under its Environmental Restoration and Waste Management Program. The major goals of this program are to eliminate potential hazards to human health and the environment that are associated with contamination of these sites and, to the extent possible, make surplus real property available for other uses. The assessment of potential baseline health risks and ecological impacts associated with a contaminated site is an important component of the remedial investigation/feasibility study (RI/FS) process required at all Superfund sites. The purpose of this paper is to describe one phase of the baseline assessment, i.e., the characterization of human health risks associated with exposure to chemical contaminants in air and on interior building surfaces at a contaminated site. The model combines data on human activity patterns in a particular microenvironment within a building with contaminant concentrations in that microenvironment to calculate personal exposure profiles and risks within the building. The results of the building assessment are presented as probability distribution functions and cumulative distribution functions, which show the variability and uncertainty in the risk estimates. 23 refs., 2 figs., 1 tab

  8. Assessing exposures and risks in heterogeneously contaminated areas: A simulation approach

    International Nuclear Information System (INIS)

    Fingleton, D.J.; MacDonell, M.M.; Haroun, L.A.; Oezkaynak, H.; Butler, D.A.; Xue, J.

    1991-01-01

    The US Department of Energy (DOE) is responsible for cleanup activities at a number of facilities under its Environmental Restoration and Waste Management Program. The major goals of this program are to eliminate potential hazards to human health and the environment that are associated with contamination of these sites and, to the extent possible, make surplus real property available for other uses. The assessment of potential baseline health risks and ecological impacts associated with a contaminated site is an important component of the remedial investigation/feasibility study (RI/FS) process required at all Superfund sites. The purpose of this paper is to describe one phase of the baseline assessment, i.e., the characterization of human health risks associated with exposure to chemical contaminants in air and on interior building surfaces at a contaminated site. The model combines data on human activity patterns in a particular microenvironment within a building with contaminant concentrations in that microenvironment to calculate personal exposure profiles and risks within the building. The results of the building assessment are presented as probability distribution functions and cumulative distribution functions, which show the variability and uncertainty in the risk estimates
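    The simulation approach described in these two records combines time spent in building microenvironments with contaminant concentrations to produce a distribution of personal exposures. The sketch below shows that idea with a simple Monte Carlo; the breathing rate, microenvironments, time budgets, and concentrations are illustrative assumptions, not values from the DOE assessment.

```python
# Sketch: Monte Carlo combination of activity patterns (time per microenvironment)
# with contaminant air concentrations to produce a personal exposure distribution.
# All values are illustrative, not from the DOE building assessment.
import numpy as np

rng = np.random.default_rng(42)
n_people = 10_000
breathing_rate = 0.83                      # m3/h, nominal adult value (assumption)

# hours/day in two microenvironments (lognormal, illustrative)
hours_office = rng.lognormal(mean=np.log(6), sigma=0.3, size=n_people)
hours_corridor = rng.lognormal(mean=np.log(1), sigma=0.5, size=n_people)

# contaminant concentration in each microenvironment (ug/m3, illustrative)
conc_office = rng.lognormal(mean=np.log(5), sigma=0.6, size=n_people)
conc_corridor = rng.lognormal(mean=np.log(12), sigma=0.6, size=n_people)

daily_intake = breathing_rate * (hours_office * conc_office +
                                 hours_corridor * conc_corridor)  # ug/day

for q in (50, 90, 95, 99):
    print(f"{q}th percentile intake: {np.percentile(daily_intake, q):.1f} ug/day")
```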

  9. Health risk evaluation associated to Planktothrix rubescens: An integrated approach to design tailored monitoring programs for human exposure to cyanotoxins.

    Science.gov (United States)

    Manganelli, Maura; Scardala, Simona; Stefanelli, Mara; Vichi, Susanna; Mattei, Daniela; Bogialli, Sara; Ceccarelli, Piegiorgio; Corradetti, Ernesto; Petrucci, Ines; Gemma, Simonetta; Testai, Emanuela; Funari, Enzo

    2010-03-01

    Increasing concern for human health related to cyanotoxin exposure requires identifying the pattern and level of exposure; however, current monitoring programs, based on cyanobacteria cell counts, could be inadequate. An integrated approach has been applied to a small lake in Italy, affected by Planktothrix rubescens blooms, to provide a scientific basis for appropriate monitoring program design. The dynamics of the cyanobacterium, the lake's physicochemical and trophic status (expressed as nutrient concentrations and recycling rates due to bacterial activity), and the identification/quantification of toxic genotypes and cyanotoxin concentrations have been studied. Our results indicate that low levels of nutrients are not a marker for low risk of P. rubescens proliferation and confirm that cyanobacterial density alone is not a reliable parameter to assess human exposure. The ratio between toxic and non-toxic cells, and toxin concentrations, which can be better explained by toxic population dynamics, are much more diagnostic, although varying with time and environmental conditions. The toxic fraction within the P. rubescens population is generally high (30-100%) and increases with water depth. The ratio of toxic to non-toxic cells is lowest during the bloom, suggesting a competitive advantage for non-toxic cells. Therefore, when P. rubescens is the dominant species, it is important to analyze samples below the thermocline, and quantitatively estimate toxic genotype abundance. In addition, the identification of cyanotoxin content and congener profiles, with different toxic potential, are crucial for risk assessment. Copyright 2009 Elsevier Ltd. All rights reserved.

  10. Establishing an air pollution monitoring network for intra-urban population exposure assessment : a location-allocation approach

    Energy Technology Data Exchange (ETDEWEB)

    Kanaroglou, P.S. [McMaster Univ., Hamilton, ON (Canada). School of Geography and Geology; Jerrett, M.; Beckerman, B.; Arain, M.A. [McMaster Univ., Hamilton, ON (Canada). School of Geography and Geology]|[McMaster Univ., Hamilton, ON (Canada). McMaster Inst. of Environment and Health; Morrison, J. [Carleton Univ., Ottawa, ON (Canada). School of Computer Science; Gilbert, N.L. [Health Canada, Ottawa, ON (Canada). Air Health Effects Div; Brook, J.R. [Meteorological Service of Canada, Toronto, ON (Canada)

    2004-10-01

    A study was conducted to assess the relation between traffic-generated air pollution and health reactions ranging from childhood asthma to mortality from lung cancer. In particular, it developed a formal method of optimally locating a dense network of air pollution monitoring stations in order to derive an exposure assessment model based on the data obtained from the monitoring stations and related land use, population and biophysical information. The method for determining the locations of 100 nitrogen dioxide monitors in Toronto, Ontario focused on land use, transportation infrastructure and the distribution of at-risk populations. The exposure assessment produced reasonable estimates at the intra-urban scale. This method for locating air pollution monitors effectively maximizes sampling coverage in relation to important socio-demographic characteristics and likely pollution variability. The location-allocation approach integrates many variables into the demand surface to reconfigure a monitoring network and is especially useful for measuring traffic pollutants with fine-scale spatial variability. The method also shows great promise for improving the assessment of exposure to ambient air pollution in epidemiologic studies. 19 refs., 3 tabs., 4 figs.
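    The location-allocation idea in this record can be approximated by a greedy maximal-coverage heuristic: repeatedly place a monitor where it covers the most remaining "demand". The sketch below is a simplification under that assumption; the grid, demand surface, coverage radius, and candidate set are all illustrative, not the study's actual formulation.

```python
# Sketch: greedy maximal-coverage siting of air-quality monitors over a gridded
# "demand" surface (e.g., population weighted by expected pollution variability).
# This is a simplification of formal location-allocation models.
import numpy as np

rng = np.random.default_rng(7)
grid = rng.random((50, 50))                               # illustrative demand surface
cells = np.array([(i, j) for i in range(50) for j in range(50)])
candidates = cells[rng.choice(len(cells), size=150, replace=False)]  # candidate sites
n_monitors, radius = 10, 5.0

covered = np.zeros(len(cells), dtype=bool)
chosen = []
for _ in range(n_monitors):
    best_gain, best_site = -1.0, None
    for site in candidates:
        d = np.linalg.norm(cells - site, axis=1)
        gain = grid.ravel()[(d <= radius) & ~covered].sum()   # newly covered demand
        if gain > best_gain:
            best_gain, best_site = gain, site
    chosen.append(tuple(best_site))
    covered |= np.linalg.norm(cells - best_site, axis=1) <= radius

print("selected monitor cells:", chosen)
```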

  11. Visualizing Matrix Multiplication

    Science.gov (United States)

    Daugulis, Peteris; Sondore, Anita

    2018-01-01

    Efficient visualizations of computational algorithms are important tools for students, educators, and researchers. In this article, we point out an innovative visualization technique for matrix multiplication. This method differs from the standard, formal approach by using block matrices to make computations more visual. We find this method a…
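    The block-matrix view that this record builds on is easy to verify numerically: each block of the product is a sum of products of corresponding blocks of the factors. The partition sizes below are arbitrary and the example is only a minimal NumPy illustration of that identity.

```python
# Sketch: block view of matrix multiplication -- each block of the product is a
# sum of products of corresponding blocks of the factors.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 6))
B = rng.random((6, 2))

# partition A into 2x2 blocks of shape (2, 3), and B into 2x1 blocks of shape (3, 2)
A11, A12 = A[:2, :3], A[:2, 3:]
A21, A22 = A[2:, :3], A[2:, 3:]
B1, B2 = B[:3, :], B[3:, :]

C_block = np.block([[A11 @ B1 + A12 @ B2],
                    [A21 @ B1 + A22 @ B2]])
assert np.allclose(C_block, A @ B)
print("block-wise product matches the ordinary product")
```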

  12. The Exopolysaccharide Matrix

    Science.gov (United States)

    Koo, H.; Falsetta, M.L.; Klein, M.I.

    2013-01-01

    Many infectious diseases in humans are caused or exacerbated by biofilms. Dental caries is a prime example of a biofilm-dependent disease, resulting from interactions of microorganisms, host factors, and diet (sugars), which modulate the dynamic formation of biofilms on tooth surfaces. All biofilms have a microbial-derived extracellular matrix as an essential constituent. The exopolysaccharides formed through interactions between sucrose- (and starch-) and Streptococcus mutans-derived exoenzymes present in the pellicle and on microbial surfaces (including non-mutans) provide binding sites for cariogenic and other organisms. The polymers formed in situ enmesh the microorganisms while forming a matrix facilitating the assembly of three-dimensional (3D) multicellular structures that encompass a series of microenvironments and are firmly attached to teeth. The metabolic activity of microbes embedded in this exopolysaccharide-rich and diffusion-limiting matrix leads to acidification of the milieu and, eventually, acid-dissolution of enamel. Here, we discuss recent advances concerning spatio-temporal development of the exopolysaccharide matrix and its essential role in the pathogenesis of dental caries. We focus on how the matrix serves as a 3D scaffold for biofilm assembly while creating spatial heterogeneities and low-pH microenvironments/niches. Further understanding on how the matrix modulates microbial activity and virulence expression could lead to new approaches to control cariogenic biofilms. PMID:24045647

  13. The basic approaches to evaluation of effects of the long-term radiation exposure in a range of 'low' doses

    International Nuclear Information System (INIS)

    Takhauov, R.M.; Karpov, A.B.; Litvyakov, N.V.

    2010-01-01

    for evaluating the genetic effects of radiation exposure. The DNA bank donors are workers of the Siberian Group of Chemical Enterprises (SGCE), their descendants, and residents of the nearby territories. Given the value of the accumulated material, it should be noted that this DNA bank is one of the world's largest collections of biological material obtained from people exposed to long-term radiation in the range of 'low' doses. The approaches presented here for evaluating both established and proposed stochastic effects of long-term radiation exposure at 'low' doses can yield objective information of a fundamental character. On the basis of these data it may become possible to refine radiation safety postulates and to develop a modern prophylactic strategy against the most important diseases for populations exposed to radiation.

  14. Pharmacokinetics in Drug Discovery: An Exposure-Centred Approach to Optimising and Predicting Drug Efficacy and Safety.

    Science.gov (United States)

    Reichel, Andreas; Lienau, Philip

    2016-01-01

    The role of pharmacokinetics (PK) in drug discovery is to support the optimisation of the absorption, distribution, metabolism and excretion (ADME) properties of lead compounds with the ultimate goal of attaining a clinical candidate which achieves a concentration-time profile in the body that is adequate for the desired efficacy and safety profile. A thorough characterisation of the lead compounds aiming at the identification of the inherent PK liabilities also includes an early generation of PK/PD relationships linking in vitro potency and target exposure/engagement with expression of pharmacological activity (mode-of-action) and efficacy in animal studies. The chapter describes an exposure-centred approach to lead generation, lead optimisation and candidate selection and profiling that focuses on a stepwise generation of an understanding of PK/exposure and PD/efficacy relationships by capturing target exposure or surrogates thereof and cellular mode-of-action readouts in vivo. Once a robust PK/PD relationship in animal PD models has been constructed, it is translated to anticipate the pharmacologically active plasma concentrations in patients and the human therapeutic dose and dosing schedule, which is also based on the prediction of the PK behaviour in humans as described herein. The chapter outlines how the level of confidence in the predictions increases with the level of understanding of both the PK and the PK/PD of the new chemical entities (NCE) in relation to the disease hypothesis and the ability to propose safe and efficacious doses and dosing schedules in responsive patient populations. A sound identification of potential drug metabolism and pharmacokinetics (DMPK)-related development risks allows an effective de-risking strategy to be proposed for the progression of the project that is able to reduce uncertainties and to increase the probability of success during preclinical and clinical development.
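    The concentration-time profile at the core of this exposure-centred reasoning can be illustrated with the simplest PK building block: a one-compartment model with first-order absorption and elimination (the Bateman function). The parameter values below are illustrative assumptions, not taken from the chapter.

```python
# Sketch: concentration-time profile from a one-compartment PK model with
# first-order absorption and elimination (Bateman function). Parameters are
# illustrative only.
import numpy as np

def concentration(t, dose=100.0, F=0.8, V=50.0, ka=1.0, ke=0.1):
    """Plasma concentration (mg/L) at time t (h) after an oral dose (mg)."""
    return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 24, 9)
for ti, ci in zip(t, concentration(t)):
    print(f"t = {ti:4.1f} h   C = {ci:5.2f} mg/L")
```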

  15. A user exposure based approach for non-structural road network vulnerability analysis.

    Directory of Open Access Journals (Sweden)

    Lei Jin

    Full Text Available Aiming at the vulnerability of dense urban road networks without structural negative consequences, this paper proposes a novel non-structural road network vulnerability analysis framework. Three aspects of the framework are mainly described: (i) the rationality of non-structural road network vulnerability, (ii) the metrics for negative consequences accounting for variant road conditions, and (iii) the introduction of a new vulnerability index based on user exposure. Based on the proposed methodology, a case study in the Sioux Falls network, which was usually threatened by regular heavy snow during wintertime, is discussed in detail. The vulnerability ranking of links of the Sioux Falls network with respect to the heavy snow scenario is identified. As a result of non-structural consequences accompanied by conceivable degeneration of the network, there are significant increases in generalized travel time costs, which are measurements for the "emotionally hurt" of the topological road network.

  16. Methylmercury exposure in a subsistence fishing community in Lake Chapala, Mexico: an ecological approach

    Directory of Open Access Journals (Sweden)

    Abercrombie Mary I

    2010-01-01

    Full Text Available Abstract Background Elevated concentrations of mercury have been documented in fish in Lake Chapala in central Mexico, an area that is home to a large subsistence fishing community. However, neither the extent of human mercury exposure nor its sources and routes have been elucidated. Methods Total mercury concentrations were measured in samples of fish from Lake Chapala; in sections of sediment cores from the delta of Rio Lerma, the major tributary to the lake; and in a series of suspended-particle samples collected at sites from the mouth of the Lerma to mid-Lake. A cross-sectional survey of 92 women ranging in age from 18-45 years was conducted in three communities along the Lake to investigate the relationship between fish consumption and hair mercury concentrations among women of child-bearing age. Results Highest concentrations of mercury in fish samples were found in carp (mean 0.87 ppm). Sediment data suggest a pattern of moderate ongoing contamination. Analyses of particles filtered from the water column showed highest concentrations of mercury near the mouth of the Lerma. In the human study, 27.2% of women had >1 ppm hair mercury. On multivariable analysis, carp consumption and consumption of fish purchased or captured from Lake Chapala were both associated with significantly higher mean hair mercury concentrations. Conclusions Our preliminary data indicate that, despite a moderate level of contamination in recent sediments and suspended particulate matter, carp in Lake Chapala contain mercury concentrations of concern for local fish consumers. Consumption of carp appears to contribute significantly to body burden in this population. Further studies of the consequences of prenatal exposure for child neurodevelopment are being initiated.

  17. An exposure-effect approach for evaluating ecosystem-wide risks from human activities

    NARCIS (Netherlands)

    Knights, A.M.; Piet, G.J.; Jongbloed, R.H.; Tamis, J.E.; Robinson, L.A.

    2015-01-01

    Ecosystem-based management (EBM) is promoted as the solution for sustainable use. An ecosystem-wide assessment methodology is therefore required. In this paper, we present an approach to assess the risk to ecosystem components from human activities common to marine and coastal ecosystems. We build

  18. Statistical modelling approach to derive quantitative nanowastes classification index; estimation of nanomaterials exposure

    CSIR Research Space (South Africa)

    Ntaka, L

    2013-08-01

    Full Text Available In this work, a statistical inference approach, specifically non-parametric bootstrapping and a linear model, was applied. Data used to develop the model were sourced from the literature. 104 data points with information on aggregation, natural organic matter...
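    The non-parametric bootstrap named in this record can be shown in a few lines: resample the dataset with replacement, recompute the statistic of interest, and read a confidence interval off the resampling distribution. The synthetic data below merely stand in for the 104 literature points.

```python
# Sketch: non-parametric bootstrap confidence interval for the mean of a small
# exposure-related dataset (synthetic values standing in for the literature data).
import numpy as np

rng = np.random.default_rng(3)
data = rng.lognormal(mean=0.0, sigma=0.8, size=104)

boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(5000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean = {data.mean():.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```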

  19. A multi-nuclide approach to quantify long-term erosion rates and exposure history through multiple glacial-interglacial cycles

    DEFF Research Database (Denmark)

    Strunk, Astrid; Larsen, Nicolaj Krog; Knudsen, Mads Faurschou

    Cosmogenic nuclides are traditionally used to either determine the glaciation history or the denudation history of the most recent exposure period. A few studies use the cosmogenic nuclides to determine the cumulative exposure and burial durations of a sample. However, until now it has not been possible to resolve the complex pattern of exposure history under a fluctuating ice sheet. In this study, we quantify long-term erosion rates along with durations of multiple exposure periods in West Greenland by applying a novel Markov Chain Monte Carlo (MCMC) inversion approach to existing 10Be and 26Al data. The new MCMC approach allows us to constrain the most likely landscape history based on comparisons between simulated and measured cosmogenic nuclide concentrations. It is a fundamental assumption of the model that the exposure history at the site/location can be divided into two distinct regimes: i...
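    The MCMC inversion idea, comparing simulated and measured nuclide concentrations, can be sketched with a basic Metropolis-Hastings sampler. The forward model below is a toy steady-exposure expression with erosion, not the study's two-regime glacial/interglacial production-and-burial model, and the production, decay, and measurement values are only order-of-magnitude placeholders.

```python
# Sketch: Metropolis-Hastings inversion of an erosion rate and exposure duration
# from a measured cosmogenic-nuclide concentration. Toy forward model only.
import numpy as np

rng = np.random.default_rng(11)
P, lam, rho, Lam = 4.0, 4.99e-7, 2.65, 160.0     # production (at/g/yr), decay (1/yr), density, attenuation

def forward(erosion_cm_per_yr, t_exposure_yr):
    mu = lam + rho * erosion_cm_per_yr / Lam
    return (P / mu) * (1.0 - np.exp(-mu * t_exposure_yr))

measured, sigma = 5.0e4, 5.0e3                    # atoms/g and 1-sigma uncertainty (illustrative)

def log_likelihood(theta):
    eps, t = theta
    if eps <= 0 or t <= 0:
        return -np.inf
    return -0.5 * ((forward(eps, t) - measured) / sigma) ** 2

theta = np.array([1e-4, 5e4])                     # initial guess: cm/yr, yr
step = np.array([2e-5, 5e3])                      # symmetric random-walk step sizes
samples = []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, step)
    if np.log(rng.random()) < log_likelihood(proposal) - log_likelihood(theta):
        theta = proposal
    samples.append(theta.copy())

posterior = np.array(samples[5000:])              # discard burn-in
print("posterior medians (erosion cm/yr, exposure yr):", np.median(posterior, axis=0))
```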

  20. Environmental transport and human exposure: A multimedia approach in health-risk policy

    Energy Technology Data Exchange (ETDEWEB)

    McKone, T.E.

    1992-05-01

    In his treatise Air, Water, and Places, the ancient Greek physician Hippocrates demonstrated that the appearance of disease in human populations is influenced by the quality of air, water, and food; the topography of the land; and general living habits. This approach is still relevant and, indeed, the cornerstone of modern efforts to relate public health to environmental factors. What has changed is the precision with which we can measure and model these long-held relationships. Environmental scientists recognize that plants, animals, and humans encounter environmental contaminants via complex transfers through air, water, and food and use multimedia models to evaluate these transfers. In this report, I explore the use of multimedia models both to examine pollution trends and as a basis for characterizing human health risks and ecological risks. The strengths and weaknesses of the approach are discussed.

  1. A Population Approach to Transportation Planning: Reducing Exposure to Motor-Vehicles

    Directory of Open Access Journals (Sweden)

    Daniel Fuller

    2013-01-01

    Full Text Available Transportation planning and public health have important historical roots. To address common challenges, including road traffic fatalities, integration of theories and methods from both disciplines is required. This paper presents an overview of Geoffrey Rose's strategy of preventive medicine applied to road traffic fatalities. One of the basic principles of Rose's strategy is that a large number of people exposed to a small risk can generate more cases than a small number exposed to a high risk. Thus, interventions should address the large number of people exposed to the fundamental causes of diseases. Exposure to moving vehicles could be considered a fundamental cause of road traffic deaths and injuries. A global reduction in the amount of kilometers driven would result in a reduction of the likelihood of collisions for all road users. Public health and transportation research must critically appraise their practice and engage in informed dialogue with the objective of improving mobility and productivity while simultaneously reducing the public health burden of road deaths and injuries.

  2. A comparison of mindfulness, nonjudgmental, and cognitive dissonance-based approaches to mirror exposure.

    Science.gov (United States)

    Luethcke, Cynthia A; McDaniel, Leda; Becker, Carolyn Black

    2011-06-01

    This study compares different versions of mirror exposure (ME), a body image intervention with research support. ME protocols were adapted to maximize control and comparability, and scripted for delivery by research assistants. Female undergraduates (N=168) were randomly assigned to receive mindfulness-based (MB; n=58), nonjudgmental (NJ; n=55), or cognitive dissonance-based (CD, n=55) ME. Participants completed the Body Image Avoidance Questionnaire (BIAQ), Body Checking Questionnaire (BCQ), Satisfaction with Body Parts Scale (SBPS), Beck Depression Inventory-II (BDI-II), and Eating Disorders Examination Questionnaire (EDE-Q) at pre-treatment, post-treatment, and 1-month follow-up. Mixed models ANOVAs revealed a significant main effect of time on all measures, and no significant time by condition interaction for any measures except the SBPS. Post-hoc analysis revealed that only CD ME significantly improved SBPS outcome. Results suggest that all versions of ME reduce eating disorder risk factors, but only CD ME improves body satisfaction. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. A simple and sensitive approach to quantify methyl farnesoate in whole arthropods by matrix-solid phase dispersion and gas chromatography-mass spectrometry.

    Science.gov (United States)

    Montes, Rosa; Rodil, Rosario; Neuparth, Teresa; Santos, Miguel M; Cela, Rafael; Quintana, José Benito

    2017-07-28

    Methyl farnesoate (MF) is an arthropod hormone that plays a key role in the physiology of several arthropod classes, being implicated in biological processes such as molting and reproduction. The development of an analytical technique to quantify the levels of this compound in biological tissues can be of major importance for the field of aquaculture/apiculture conservation and in endocrine disruption studies. Therefore, the aim of this study was to develop a simple and sensitive method to measure native levels of MF in the tissue of three representative species from different arthropod classes with environmental and/or economic importance. Thus, a new approach using whole organisms and the combination of matrix solid-phase dispersion with gas chromatography coupled to mass spectrometry was developed. This method allows quantifying endogenous MF at low levels (LOQs in the 1.2-3.1 ng/g range) in three arthropod species, and could be expanded to additional arthropod classes. The found levels ranged between 2 and 12 ng/g depending on the studied species and gender. The overall recovery of the method was evaluated and ranged between 69 and 96%. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Direct identification of bacteria in blood culture by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry: a new methodological approach.

    Science.gov (United States)

    Kroumova, Vesselina; Gobbato, Elisa; Basso, Elisa; Mucedola, Luca; Giani, Tommaso; Fortina, Giacomo

    2011-08-15

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) has recently been demonstrated to be a powerful tool for the rapid identification of bacteria from growing colonies. In order to speed up the identification of bacteria, several authors have evaluated the usefulness of this MALDI-TOF MS technology for the direct and quick identification of bacteria from positive blood cultures. The results obtained so far have been encouraging but have also shown some limitations, mainly related to the bacterial growth and to the presence of interfering substances from the blood cultures. In this paper, we present a new methodological approach that we have developed to overcome these limitations, based mainly on an enrichment of the sample in a growth medium before the extraction process, prior to mass spectrometric analysis. The proposed method shows important advantages for the identification of bacterial strains, yielding an increased identification score, which gives higher confidence in the results. Copyright © 2011 John Wiley & Sons, Ltd.

  5. Matrix inequalities

    CERN Document Server

    Zhan, Xingzhi

    2002-01-01

    The main purpose of this monograph is to report on recent developments in the field of matrix inequalities, with emphasis on useful techniques and ingenious ideas. Among other results this book contains the affirmative solutions of eight conjectures. Many theorems unify or sharpen previous inequalities. The author's aim is to streamline the ideas in the literature. The book can be read by research workers, graduate students and advanced undergraduates.

  6. MATLAB matrix algebra

    CERN Document Server

    Pérez López, César

    2014-01-01

    MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. MATLAB Matrix Algebra introduces you to the MATLAB language with practical hands-on instructions and results, allowing you to quickly achieve your goals. Starting with a look at symbolic and numeric variables, with an emphasis on vector and matrix variables, you will go on to examine functions and operations that support vectors and matrices as arguments, including those based on analytic parent functions. Computational methods for finding eigenvalues and eigenvectors of matrices are detailed, leading to various matrix decompositions. Applications such as change of bases, the classification of quadratic forms and ...

  7. Hyperon-nucleon interaction and the 2.13 GeV strange dibaryonic system in the P-matrix approach

    International Nuclear Information System (INIS)

    Kerbikov, B.O.; Bakker, B.L.G.; Daling, R.

    1988-01-01

    A description is presented of the low-energy YN (Y = Λ, Σ) interactions within the Jaffe-Low P-matrix formalism. Analysing the enhancement of the Λp invariant mass near the Σ + n threshold we conclude that it should be identified as a P-matrix partner of the deuteron and not as a six-quark dibaryon resonance. (orig.)

  8. MOF-mixed matrix membranes : precise dispersion of MOF particles with better compatibility via a particle fusion approach for enhanced gas separation properties

    NARCIS (Netherlands)

    Shahid, Salman; Nijmeijer, Kitty; Nehache, Sabrina; Vankelecom, Ivo; Deratani, Andre; Quemener, Damien

    2015-01-01

    Mixed matrix membranes (MMMs) incorporating conventional fillers frequently suffer from insufficient adhesion between the polymer matrix and the fillers. This often results in the formation of non-selective voids at the filler/polymer interface, which decreases the performance of the membrane. A

  9. MOF-mixed matrix membranes: Precise dispersion of MOF particles with better compatibility via a particle fusion approach for enhanced gas separation properties

    NARCIS (Netherlands)

    Shahid, S.; Nijmeijer, Dorothea C.; Nehache, Sabrina; Vankelecom, Ivo; Deratani, Andre; Quemener, Damien

    2015-01-01

    Mixed matrix membranes (MMMs) incorporating conventional fillers frequently suffer from insufficient adhesion between the polymer matrix and the fillers. This often results in the formation of non-selective voids at the filler/polymer interface, which decreases the performance of the membrane. A

  10. Improvements in Modelling Bystander and Resident Exposure to Pesticide Spray Drift: Investigations into New Approaches for Characterizing the 'Collection Efficiency' of the Human Body.

    Science.gov (United States)

    Butler Ellis, M Clare; Kennedy, Marc C; Kuster, Christian J; Alanis, Rafael; Tuck, Clive R

    2018-03-17

    The BREAM (Bystander and Resident Exposure Assessment Model) (Kennedy et al. in BREAM: A probabilistic bystander and resident exposure assessment model of spray drift from an agricultural boom sprayer. Comput Electron Agric 2012;88:63-71) for bystander and resident exposure to spray drift from boom sprayers has recently been incorporated into the European Food Safety Authority (EFSA) guidance for determining non-dietary exposures of humans to plant protection products. The component of BREAM which relates airborne spray concentrations to bystander and resident dermal exposure has been reviewed to identify whether this component, and its description of the variability captured in the model, can be improved. Two approaches have been explored: a more rigorous statistical analysis of the empirical data and a semi-mechanistic model based on established studies combined with new data obtained in a wind tunnel. A statistical comparison between field data and model outputs was used to determine which approach gave the better prediction of exposures. The semi-mechanistic approach gave the better prediction of experimental data and resulted in a reduction in the proposed regulatory values for the 75th and 95th percentiles of the exposure distribution.

  11. Multipathway Quantitative Assessment of Exposure to Fecal Contamination for Young Children in Low-Income Urban Environments in Accra, Ghana: The SaniPath Analytical Approach.

    Science.gov (United States)

    Wang, Yuke; Moe, Christine L; Null, Clair; Raj, Suraja J; Baker, Kelly K; Robb, Katharine A; Yakubu, Habib; Ampofo, Joseph A; Wellington, Nii; Freeman, Matthew C; Armah, George; Reese, Heather E; Peprah, Dorothy; Teunis, Peter F M

    2017-10-01

    Lack of adequate sanitation results in fecal contamination of the environment and poses a risk of disease transmission via multiple exposure pathways. To better understand how eight different sources contribute to overall exposure to fecal contamination, we quantified exposure through multiple pathways for children under 5 years old in four high-density, low-income, urban neighborhoods in Accra, Ghana. We collected more than 500 hours of structured observation of behaviors of 156 children, 800 household surveys, and 1,855 environmental samples. Data were analyzed using Bayesian models, estimating the environmental and behavioral factors associated with exposure to fecal contamination. These estimates were applied in exposure models simulating sequences of behaviors and transfers of fecal indicators. This approach allows us to identify the contribution of any sources of fecal contamination in the environment to child exposure and use dynamic fecal microbe transfer networks to track fecal indicators from the environment to oral ingestion. The contributions of different sources to exposure were categorized into four types (high/low by dose and frequency), as a basis for ranking pathways by the potential to reduce exposure. Although we observed variation in estimated exposure (10^8-10^16 CFU/day for Escherichia coli) between different age groups and neighborhoods, the greatest contribution was consistently from food (contributing > 99.9% to total exposure). Hands played a pivotal role in fecal microbe transfer, linking environmental sources to oral ingestion. The fecal microbe transfer network constructed here provides a systematic approach to study the complex interaction between contaminated environment and human behavior on exposure to fecal contamination.

  12. The metabolomic approach identifies a biological signature of low-dose chronic exposure to Cesium 137

    International Nuclear Information System (INIS)

    Grison, S.; Grandcolas, L.; Martin, J.C.

    2012-01-01

    Reports have described apparent biological effects of 137Cs (the most persistent dispersed radionuclide) irradiation in people living in Chernobyl-contaminated territory. The sensitive analytical technology described here should now help assess the relation of this contamination to the observed effects. A rat model chronically exposed to 137Cs through drinking water was developed to identify biomarkers of radiation-induced metabolic disorders, and the biological impact was evaluated by a metabolomic approach that allowed us to detect several hundred metabolites in biofluids and assess their association with disease states. After collection of plasma and urine from contaminated and non-contaminated rats at the end of the 9-month contamination period, analysis with a liquid chromatography coupled to mass spectrometry (LC-MS) system detected 742 features in urine and 1309 in plasma. Biostatistical discriminant analysis extracted a subset of 26 metabolite signals (2 urinary, 4 plasma non-polar, and 19 plasma polar metabolites) that in combination were able to predict from 68 up to 94% of the contaminated rats, depending on the prediction method used, with a misclassification rate as low as 5.3%. The difference in this metabolic score between the contaminated and non-contaminated rats was highly significant (P=0.019 after ANOVA cross-validation). In conclusion, our proof-of-principle study demonstrated for the first time the usefulness of a metabolomic approach for addressing biological effects of chronic low-dose contamination. We can conclude that a metabolomic signature discriminated 137Cs-contaminated from control animals in our model. Further validation is nevertheless required together with full annotation of the metabolic indicators. (author)

  13. Plane-wave spectrum approach for the calculation of electromagnetic absorption under near-field exposure conditions

    International Nuclear Information System (INIS)

    Chatterjee, I.; Gandhi, O.P.; Hagmann, M.J.; Riazi, A.

    1980-01-01

    The exposure of humans to electromagnetic near fields has not been sufficiently emphasized by researchers. We have used the plane-wave-spectrum approach to evaluate the electromagnetic field and determine the energy deposited in a lossy, homogeneous, semi-infinite slab placed in the near field of a source leaking radiation. Values of the fields and absorbed energy in the target are obtained by vector summation of the contributions of all the plane waves into which the prescribed field is decomposed. Use of a fast Fourier transform algorithm contributes to the high efficiency of the computations. The numerical results show that, for field distributions that are nearly constant over a physical extent of at least a free-space wavelength, the energy coupled into the target is approximately equal to that resulting from plane-wave exposure.
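    The plane-wave-spectrum (angular spectrum) decomposition mentioned here can be illustrated in one transverse dimension: decompose an aperture field with an FFT, attach each plane wave's propagation phase, and recombine with an inverse FFT. This is purely illustrative free-space propagation, not the paper's summation inside a lossy slab; the wavelength, aperture, and grid are assumptions.

```python
# Sketch: plane-wave (angular) spectrum propagation of a 1-D aperture field.
# The field is decomposed into plane waves via FFT, each wave is advanced by its
# propagation phase exp(i*kz*z), and the field is re-synthesized by inverse FFT.
import numpy as np

wavelength = 0.1            # metres (about 3 GHz, illustrative)
k0 = 2 * np.pi / wavelength
N, dx = 1024, 0.01          # samples and spacing across the aperture plane
x = (np.arange(N) - N // 2) * dx
field0 = np.exp(-(x / 0.15) ** 2)          # Gaussian aperture field

kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # transverse wavenumbers of the spectrum
kz = np.sqrt(k0**2 - kx**2 + 0j)           # real for propagating waves, imaginary (decaying) for evanescent

z = 0.5                                    # propagation distance in metres
spectrum = np.fft.fft(field0)
field_z = np.fft.ifft(spectrum * np.exp(1j * kz * z))
print("peak |E| at z = 0.5 m:", np.abs(field_z).max())
```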

  14. Practice-Informed Approaches to Addressing Substance Abuse and Trauma Exposure in Urban Native Families Involved with Child Welfare.

    Science.gov (United States)

    Lucero, Nancy M; Bussey, Marian

    2015-01-01

    Similar to families from other groups, urban-based American Indian and Alaska Native ("Native") family members involved with the child welfare system due to substance abuse issues are also often challenged by untreated trauma exposure. The link between these conditions and the history of genocidal policies aimed at destroying Native family ties, as well as experiences of ongoing discrimination, bring added dimensions for consideration when providing services to these families. Practice-based evidence indicates that the trauma-informed and culturally responsive model developed by the Denver Indian Family Resource Center (DIFRC) shows promise in reducing out-of-home placements and re-referrals in urban Native families with substance abuse and child welfare concerns, while also increasing caregiver capabilities, family safety, and child well-being. This article provides strategies from the DIFRC approach that non-Native caseworkers and supervisors can utilize to create an environment in their own agencies that supports culturally based practice with Native families while incorporating a trauma-informed understanding of service needs of these families. Casework consistent with this approach demonstrates actions that meet the Active Efforts requirement of the Indian Child Welfare Act (ICWA) as well as sound clinical practice. Intensive and proactive case management designed specifically for families with high levels of service needs is a key strategy when combined with utilizing a caseworker brief screening tool for trauma exposure; training caseworkers to recognize trauma symptoms, making timely referrals to trauma treatment by behavioral health specialists experienced in working with Native clients, and providing a consistent service environment that focuses on client safety and worker trustworthiness. Finally, suggestions are put forth for agencies seeking to enhance their cultural responsiveness and include increasing workers' understanding of cultural values

  15. Relative performance of different exposure modeling approaches for sulfur dioxide concentrations in the air in rural western Canada

    Directory of Open Access Journals (Sweden)

    Kim Hyang-Mi

    2008-07-01

    Full Text Available Abstract Background The main objective of this paper is to compare different methods for predicting the levels of SO2 air pollution in an oil and gas producing area of rural western Canada. Month-long average air quality measurements were collected over a two-year period (2001–2002) at multiple locations, with some side-by-side measurements, and repeated time-series at selected locations. Methods We explored how accurately location-specific mean concentrations of SO2 can be predicted for 2002 at 666 locations with multiple measurements. Means of repeated measurements on the 666 locations in 2002 were used as the alloyed gold standard (AGS). First, we considered two approaches: one that uses one measurement from each location of interest; and the other that uses context data on proximity of monitoring sites to putative sources of emission in 2002. Second, we imagined that all of the previous year's (2001's) data were also available to exposure assessors: 9,464 measurements and their context (month, proximity to sources). Exposure prediction approaches we explored with the 2001 data included regression modeling using either mixed or fixed effects models. Third, we used Bayesian methods to combine single measurements from locations in 2002 (not used to calculate the AGS) with different priors. Results The regression method that included both fixed and random effects for prediction (Best Linear Unbiased Predictor) had the best agreement with the AGS (Pearson correlation 0.77) and the smallest mean squared error (MSE: 0.03). The second best method in terms of correlation with AGS (0.74) and MSE (0.09) was the Bayesian method that uses a normal mixture prior derived from predictions of the 2001 mixed effects model applied in the 2002 context. Conclusion It is likely that either collecting some measurements from the desired locations and time periods or predictions of a reasonable empirical mixed effects model perhaps is sufficient in most epidemiological applications. The
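    The mixed-effects prediction described in this record (fixed covariate effects plus location-specific random effects, combined into a BLUP-style prediction) can be sketched with statsmodels. The covariate, the synthetic data, and the random-intercept-only structure are assumptions standing in for the SO2 monitoring data and the study's actual model.

```python
# Sketch: mixed-effects model with a random intercept per monitoring location.
# The fitted values combine the fixed-effects part with the predicted (shrunken)
# site effects, playing the role of the BLUP predictor discussed above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_sites, n_months = 50, 12
site = np.repeat(np.arange(n_sites), n_months)
proximity = rng.random(n_sites)[site]                 # e.g., a proximity-to-sources covariate
site_effect = rng.normal(0, 0.5, n_sites)[site]       # unobserved location effect
log_so2 = 0.2 + 1.5 * proximity + site_effect + rng.normal(0, 0.3, site.size)

df = pd.DataFrame({"log_so2": log_so2, "proximity": proximity, "site": site})

result = smf.mixedlm("log_so2 ~ proximity", df, groups=df["site"]).fit()
df["prediction"] = result.fittedvalues                # fixed effects + predicted random effects
print(result.summary())
print(df.groupby("site")["prediction"].mean().head())
```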

  16. Prenatal exposure to maternal smoking and childhood behavioural problems: a quasi-experimental approach.

    Science.gov (United States)

    McCrory, Cathal; Layte, Richard

    2012-11-01

    This retrospective cross-sectional paper examines the relationship between maternal smoking during pregnancy and children's behavioural problems at 9 years of age independent of a wide range of possible confounders. The final sample comprised 7,505 nine-year-old school children participating in the first wave of the Growing Up in Ireland study. The children were selected through the Irish national school system using a 2-stage sampling method and were representative of the nine-year population. Information on maternal smoking during pregnancy was obtained retrospectively at 9 years of age via parental recall and children's behavioural problems were assessed using the Strengths and Difficulties Questionnaire across separate parent and teacher-report instruments. A quasi-experimental approach using propensity score matching was used to create treatment (smoking) and control (non-smoking) groups which did not differ significantly in their propensity to smoke in terms of 16 observed characteristics. After matching on the propensity score, children whose mothers smoked during pregnancy were 3.5 % (p parent and teacher-report respectively. Maternal smoking during pregnancy was more strongly associated with externalising than internalising behavioural problems. Analysis of the dose-response relationship showed that the differential between matched treatment and control groups increased with level of maternal smoking. Given that smoking is a modifiable risk factor, the promotion of successful cessation in pregnancy may prevent potentially adverse long-term consequences.
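    The propensity-score matching step used in this record can be sketched as follows: model the probability of the exposure (maternal smoking) from observed covariates with a logistic regression, then pair each exposed unit with the unexposed unit closest in propensity. The data, covariates, and effect size below are synthetic, not the Growing Up in Ireland records.

```python
# Sketch: 1-to-1 nearest-neighbour propensity score matching (with replacement).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 2000
covariates = rng.normal(size=(n, 4))               # e.g., maternal age, education, income, parity
true_logit = 0.6 * covariates[:, 0] - 0.4 * covariates[:, 1]
treated = rng.random(n) < 1 / (1 + np.exp(-true_logit))
outcome = 0.3 * treated + 0.5 * covariates[:, 2] + rng.normal(size=n)

# propensity of treatment given covariates
propensity = LogisticRegression(max_iter=1000).fit(covariates, treated).predict_proba(covariates)[:, 1]

treated_idx = np.where(treated)[0]
control_idx = np.where(~treated)[0]
matches = control_idx[np.argmin(
    np.abs(propensity[treated_idx][:, None] - propensity[control_idx][None, :]), axis=1)]

att = outcome[treated_idx].mean() - outcome[matches].mean()
print(f"matched-pair estimate of the treatment effect: {att:.3f}")
```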

  17. EISPACK, Subroutines for Eigenvalues, Eigenvectors, Matrix Operations

    International Nuclear Information System (INIS)

    Garbow, Burton S.; Cline, A.K.; Meyering, J.

    1993-01-01

    : Driver subroutine for a nonsym. tridiag. matrix; SVD: Singular value decomposition of rectangular matrix; TINVIT: Find some vectors of sym. tridiag. matrix; TQLRAT: Find all values of sym. tridiag. matrix; TQL1: Find all values of sym. tridiag. matrix; TQL2: Find all values/vectors of sym. tridiag. matrix; TRBAK1: Back transform vectors of matrix formed by TRED1; TRBAK3: Back transform vectors of matrix formed by TRED3; TRED1: Reduce sym. matrix to sym. tridiag. matrix; TRED2: Reduce sym. matrix to sym. tridiag. matrix; TRED3: Reduce sym. packed matrix to sym. tridiag. matrix; TRIDIB: Find some values of sym. tridiag. matrix; TSTURM: Find some values/vectors of sym. tridiag. matrix. 2 - Method of solution: Almost all the algorithms used in EISPACK are based on similarity transformations. Similarity transformations based on orthogonal and unitary matrices are particularly attractive from a numerical point of view because they do not magnify any errors present in the input data or introduced during the computation. Most of the techniques employed are constructive realizations of variants of Schur's theorem, 'Any matrix can be triangularized by a unitary similarity transformation'. It is usually not possible to compute Schur's transformation with a finite number of rational arithmetic operations. Instead, the algorithms employ a potentially infinite sequence of similarity transformations in which the resultant matrix approaches an upper triangular matrix. The sequence is terminated when all of the sub-diagonal elements of the resulting matrix are less than the roundoff errors involved in the computation. The diagonal elements are then the desired approximations to the eigenvalues of the original matrix and the corresponding eigenvectors can be calculated. Special algorithms deal with symmetric matrices. QR, LR, QL, rational QR, bisection QZ, and inverse iteration methods are used
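    The similarity-transformation idea described in this record can be illustrated with an unshifted QR iteration: each step Q·R → R·Q is a similarity transformation, so eigenvalues are preserved while the off-diagonal entries shrink. This is a textbook sketch, far simpler (and slower to converge) than EISPACK's shifted, tridiagonal-form routines such as TQL1/TQL2.

```python
# Sketch: unshifted QR iteration on a symmetric matrix; the diagonal of the
# iterate converges toward the eigenvalues. Real EISPACK routines use shifts and
# tridiagonal reduction for speed and robustness.
import numpy as np

rng = np.random.default_rng(2)
M = rng.random((5, 5))
A = (M + M.T) / 2                        # symmetric test matrix

Ak = A.copy()
for _ in range(500):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q                           # similar to Ak, hence same eigenvalues
    if np.max(np.abs(Ak - np.diag(np.diag(Ak)))) < 1e-10:
        break

print("QR-iteration eigenvalues:", np.sort(np.diag(Ak)))
print("numpy.linalg.eigvalsh   :", np.sort(np.linalg.eigvalsh(A)))
```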

  18. Matrix analysis

    CERN Document Server

    Bhatia, Rajendra

    1997-01-01

    A good part of matrix theory is functional analytic in spirit. This statement can be turned around. There are many problems in operator theory, where most of the complexities and subtleties are present in the finite-dimensional case. My purpose in writing this book is to present a systematic treatment of methods that are useful in the study of such problems. This book is intended for use as a text for upper division and graduate courses. Courses based on parts of the material have been given by me at the Indian Statistical Institute and at the University of Toronto (in collaboration with Chandler Davis). The book should also be useful as a reference for research workers in linear algebra, operator theory, mathematical physics and numerical analysis. A possible subtitle of this book could be Matrix Inequalities. A reader who works through the book should expect to become proficient in the art of deriving such inequalities. Other authors have compared this art to that of cutting diamonds. One first has to...

  19. Prediction of paraquat exposure and toxicity in clinically ill poisoned patients: a model based approach.

    Science.gov (United States)

    Wunnapuk, Klintean; Mohammed, Fahim; Gawarammana, Indika; Liu, Xin; Verbeeck, Roger K; Buckley, Nicholas A; Roberts, Michael S; Musuamba, Flora T

    2014-10-01

    Paraquat poisoning is a medical problem in many parts of Asia and the Pacific. The mortality rate is extremely high as there is no effective treatment. We analyzed data collected during an ongoing cohort study on self-poisoning and from a randomized controlled trial assessing the efficacy of immunosuppressive therapy in hospitalized paraquat-intoxicated patients. The aim of this analysis was to characterize the toxicokinetics and toxicodynamics of paraquat in this population. A non-linear mixed effects approach was used to perform a toxicokinetic/toxicodynamic population analysis in a cohort of 78 patients. The paraquat plasma concentrations were best fitted by a two compartment toxicokinetic structural model with first order absorption and first order elimination. Changes in renal function were used for the assessment of paraquat toxicodynamics. The estimates of toxicokinetic parameters for the apparent clearance, the apparent volume of distribution and elimination half-life were 1.17 l h⁻¹, 2.4 l kg⁻¹ and 87 h, respectively. Renal function, namely creatinine clearance, was the most significant covariate to explain between patient variability in paraquat clearance. This model suggested that a reduction in paraquat clearance occurred within 24 to 48 h after poison ingestion, and afterwards the clearance was constant over time. The model estimated that a paraquat concentration of 429 μg l⁻¹ caused 50% of maximum renal toxicity. The immunosuppressive therapy tested during this study was associated with only 8% improvement of renal function. The developed models may be useful as prognostic tools to predict patient outcome based on patient characteristics on admission and to assess drug effectiveness during antidote drug development. © 2014 The British Pharmacological Society.
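    The structure of the model in this record, a kinetic concentration profile feeding an Emax-type toxicodynamic response, can be sketched in simplified form. Only the 429 μg/L half-maximal concentration is taken from the abstract; the dose, volume, and rate constants are illustrative, and the profile is reduced to one compartment rather than the paper's two-compartment model.

```python
# Sketch: paraquat plasma concentration from a one-compartment model with
# first-order absorption and elimination, fed into an Emax toxicodynamic curve.
# All parameters except the 429 ug/L value are illustrative assumptions.
import numpy as np

dose_ug = 3.0e6            # illustrative absorbed dose (ug)
V_L = 150.0                # illustrative apparent volume of distribution (L)
ka, ke = 1.0, 0.008        # 1/h, illustrative absorption and elimination rates
EC50 = 429.0               # ug/L, concentration giving 50% of maximal renal toxicity (from the abstract)

def conc(t_h):
    return (dose_ug * ka) / (V_L * (ka - ke)) * (np.exp(-ke * t_h) - np.exp(-ka * t_h))

def renal_toxicity_fraction(c):
    return c / (EC50 + c)            # Emax model with Emax = 1

for t in (6, 24, 72, 168):
    c = conc(t)
    print(f"t = {t:4d} h   C = {c:9.1f} ug/L   toxicity fraction = {renal_toxicity_fraction(c):.2f}")
```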

  20. Association of Protein Translation and Extracellular Matrix Gene Sets with Breast Cancer Metastasis: Findings Uncovered on Analysis of Multiple Publicly Available Datasets Using Individual Patient Data Approach.

    Directory of Open Access Journals (Sweden)

    Nilotpal Chowdhury

    Full Text Available Microarray analysis has revolutionized the role of genomic prognostication in breast cancer. However, most studies are single series studies, and suffer from methodological problems. We sought to use a meta-analytic approach in combining multiple publicly available datasets, while correcting for batch effects, to reach a more robust oncogenomic analysis. The aim of the present study was to find gene sets associated with distant metastasis free survival (DMFS) in systemically untreated, node-negative breast cancer patients, from publicly available genomic microarray datasets. Four microarray series (having 742 patients) were selected after a systematic search and combined. Cox regression for each gene was done for the combined dataset (univariate, as well as multivariate - adjusted for expression of Cell cycle related genes) and for the 4 major molecular subtypes. The centre and microarray batch effects were adjusted by including them as random effects variables. The Cox regression coefficients for each analysis were then ranked and subjected to a Gene Set Enrichment Analysis (GSEA). Gene sets representing protein translation were independently negatively associated with metastasis in the Luminal A and Luminal B subtypes, but positively associated with metastasis in Basal tumors. Proteinaceous extracellular matrix (ECM) gene set expression was positively associated with metastasis, after adjustment for expression of cell cycle related genes on the combined dataset. Finally, the positive association of the proliferation-related genes with metastases was confirmed. To the best of our knowledge, the results depicting mixed prognostic significance of protein translation in breast cancer subtypes are being reported for the first time. We attribute this to our study combining multiple series and performing a more robust meta-analytic Cox regression modeling on the combined dataset, thus discovering 'hidden' associations. This methodology seems to yield new and

  1. Association of Protein Translation and Extracellular Matrix Gene Sets with Breast Cancer Metastasis: Findings Uncovered on Analysis of Multiple Publicly Available Datasets Using Individual Patient Data Approach.

    Science.gov (United States)

    Chowdhury, Nilotpal; Sapru, Shantanu

    2015-01-01

    Microarray analysis has revolutionized the role of genomic prognostication in breast cancer. However, most studies are single series studies, and suffer from methodological problems. We sought to use a meta-analytic approach in combining multiple publicly available datasets, while correcting for batch effects, to reach a more robust oncogenomic analysis. The aim of the present study was to find gene sets associated with distant metastasis free survival (DMFS) in systemically untreated, node-negative breast cancer patients, from publicly available genomic microarray datasets. Four microarray series (having 742 patients) were selected after a systematic search and combined. Cox regression for each gene was done for the combined dataset (univariate, as well as multivariate - adjusted for expression of Cell cycle related genes) and for the 4 major molecular subtypes. The centre and microarray batch effects were adjusted by including them as random effects variables. The Cox regression coefficients for each analysis were then ranked and subjected to a Gene Set Enrichment Analysis (GSEA). Gene sets representing protein translation were independently negatively associated with metastasis in the Luminal A and Luminal B subtypes, but positively associated with metastasis in Basal tumors. Proteinaceous extracellular matrix (ECM) gene set expression was positively associated with metastasis, after adjustment for expression of cell cycle related genes on the combined dataset. Finally, the positive association of the proliferation-related genes with metastases was confirmed. To the best of our knowledge, the results depicting mixed prognostic significance of protein translation in breast cancer subtypes are being reported for the first time. We attribute this to our study combining multiple series and performing a more robust meta-analytic Cox regression modeling on the combined dataset, thus discovering 'hidden' associations. This methodology seems to yield new and interesting

  2. Matrix solid-phase dispersion on column clean-up/pre-concentration as a novel approach for fast isolation of abuse drugs from human hair.

    Science.gov (United States)

    Míguez-Framil, Martha; Moreda-Piñeiro, Antonio; Bermejo-Barrera, Pilar; Alvarez-Freire, Iván; Tabernero, María Jesús; Bermejo, Ana María

    2010-10-08

    A simple and fast sample pre-treatment method based on matrix solid-phase dispersion (MSPD) for isolating cocaine, benzoylecgonine (BZE), codeine, morphine and 6-monoacetylmorphine (6-MAM) from human hair has been developed. The MSPD approach consisted of using alumina (1.80 g) as a dispersing agent and 0.6M hydrochloric acid (4 mL) as an extracting solvent. For a fixed hair sample mass of 0.050 g, the alumina mass to sample mass ratio obtained was 36. A previously conditioned Oasis HLB cartridge (2 mL methanol, plus 2 mL ultrapure water, plus 1 mL of 0.2M/0.2M sodium hydroxide/boric acid buffer solution at pH 9.2) was attached to the end of the MSPD syringe for on column clean-up of the hydrochloric acid extract and for transferring the target compounds to a suitable solvent for gas chromatography (GC) analysis. Therefore, the adsorbed analytes were directly eluted from the Oasis HLB cartridges with 2 mL of 2% acetic acid in methanol before concentration by N(2) stream evaporation and dry extract derivatization with N-methyl-tert-butylsilyltrifluoroacetamide (BSTFA) and chlorotrimethylsilane (TMCS). The optimization/evaluation of all the factors affecting the MSPD and on column clean-up procedures has led to a fast sample treatment, and analyte extraction and pre-concentration can be finished in approximately 30 min. The developed method has been applied to eight hair samples from poly-drug abusers, and the measured analyte concentrations have been found to be statistically similar (95% confidence interval) to those obtained after a conventional enzymatic hydrolysis method (Pronase E). Copyright © 2010. Published by Elsevier B.V.

  3. Identification of urinary biomarkers of exposure to di-(2-propylheptyl) phthalate using high-resolution mass spectrometry and two data-screening approaches.

    Science.gov (United States)

    Shih, Chia-Lung; Liao, Pao-Mei; Hsu, Jen-Yi; Chung, Yi-Ning; Zgoda, Victor G; Liao, Pao-Chi

    2018-02-01

    Di-(2-propylheptyl) phthalate (DPHP) is a plasticizer used in polyvinyl chloride and vinyl chloride copolymer that has been suggested to be a toxicant in rats and may affect human health. Because the use of DPHP is increasing, the general German population is being exposed to DPHP. Toxicant metabolism is important for human toxicant exposure assessments. To date, the knowledge regarding DPHP metabolism has been limited, and only four metabolites have been identified in human urine. Ultra-performance liquid chromatography was coupled with Orbitrap high-resolution mass spectrometry (MS) and two data-screening approaches-the signal mining algorithm with isotope tracing (SMAIT) and the mass defect filter (MDF)-for DPHP metabolite candidate discovery. In total, 13 and 104 metabolite candidates were identified by the two approaches, respectively, in in vitro DPHP incubation samples. Of these candidates, 17 were validated as tentative exposure biomarkers using a rat model, 13 of which have not been reported in the literature. The two approaches generated rather different tentative DPHP exposure biomarkers, indicating that these approaches are complementary for discovering exposure biomarkers. Compared with the four previously reported DPHP metabolites, the three tentative novel biomarkers had higher peak intensity ratios, and two were confirmed as DPHP hydroxyl metabolites based on their MS/MS product ion profiles. These three tentative novel biomarkers should be further investigated for potential application in human exposure assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
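
    The abstract names the mass defect filter (MDF) as one of the two data-screening approaches. The sketch below shows the general MDF idea only: keep LC-MS features whose mass defect lies in a window around the parent compound's mass defect. The window width (±50 mDa), the parent monoisotopic mass (~446.34 Da for DPHP) and the use of simple rounding for the nominal mass are illustrative assumptions, not the published parameters.

```python
# Minimal mass defect filter (MDF) sketch: retain LC-MS features whose mass
# defect falls within a hypothetical +/-50 mDa window around the mass defect
# of the parent compound. Parent mass and window width are illustrative only.
DPHP_MASS = 446.3396  # approximate monoisotopic mass of DPHP (C28H46O4)

def mass_defect(mz: float) -> float:
    """Mass defect = exact mass minus nominal (integer) mass (rounding is a simplification)."""
    return mz - round(mz)

def mdf_filter(feature_mzs, parent_mass=DPHP_MASS, window_mda=50.0):
    parent_md = mass_defect(parent_mass)
    window = window_mda / 1000.0  # mDa -> Da
    return [mz for mz in feature_mzs if abs(mass_defect(mz) - parent_md) <= window]

# Example: screen a small, made-up feature list for metabolite candidates.
print(mdf_filter([462.3345, 478.3295, 391.2843, 255.2330]))
```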

  4. Matrix pentagons

    Science.gov (United States)

    Belitsky, A. V.

    2017-10-01

    The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang-Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.

  5. Matrix pentagons

    Directory of Open Access Journals (Sweden)

    A.V. Belitsky

    2017-10-01

    Full Text Available The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang–Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.

  6. A multi-platform metabolomics approach demonstrates changes in energy metabolism and the transsulfuration pathway in Chironomus tepperi following exposure to zinc

    International Nuclear Information System (INIS)

    Long, Sara M.; Tull, Dedreia L.; Jeppe, Katherine J.; De Souza, David P.; Dayalan, Saravanan; Pettigrove, Vincent J.; McConville, Malcolm J.; Hoffmann, Ary A.

    2015-01-01

    Highlights: • An integrated metabolomics approach was applied to examine zinc exposure in midges. • Changes in carbohydrate and energy metabolism were observed using GC–MS. • Transsulfuration pathway is affected by zinc exposure. • Heavy metals other than zinc affect the transsulfuration pathways differently. - Abstract: Measuring biological responses in resident biota is a commonly used approach to monitoring polluted habitats. The challenge is to choose sensitive and, ideally, stressor-specific endpoints that reflect the responses of the ecosystem. Metabolomics is a potentially useful approach for identifying sensitive and consistent responses since it provides a holistic view to understanding the effects of exposure to chemicals upon the physiological functioning of organisms. In this study, we exposed the aquatic non-biting midge, Chironomus tepperi, to two concentrations of zinc chloride and measured global changes in polar metabolite levels using an untargeted gas chromatography–mass spectrometry (GC–MS) analysis and a targeted liquid chromatography–mass spectrometry (LC–MS) analysis of amine-containing metabolites. These data were correlated with changes in the expression of a number of target genes. Zinc exposure resulted in a reduction in levels of intermediates in carbohydrate metabolism (i.e., glucose 6-phosphate, fructose 6-phosphate and disaccharides) and an increase in a number of TCA cycle intermediates. Zinc exposure also resulted in decreases in concentrations of the amine containing metabolites, lanthionine, methionine and cystathionine, and an increase in metallothionein gene expression. Methionine and cystathionine are intermediates in the transsulfuration pathway which is involved in the conversion of methionine to cysteine. These responses provide an understanding of the pathways affected by zinc toxicity, and how these effects are different to other heavy metals such as cadmium and copper. The use of complementary

  7. Existing Regulatory Approaches to Reducing Exposures to Chemical- and Product-Based Risk and Their Applicability to Diet-Related Chronic Disease.

    Science.gov (United States)

    Cohen, Deborah A; Knopman, Debra S

    2018-04-17

    We aimed to identify and categorize the types of policies that have been adopted to protect Americans from harmful exposures that could also be relevant for addressing diet-related chronic diseases. This article examines and categorizes the rationales behind government regulation. Our interest in the historical analysis is to inform judgments about how best to address newly emergent risks involving diet-related chronic disease within existing regulatory and information-based frameworks. We assessed exemplars of regulation with respect to harmful exposures from air, water, and food, as well as regulations that are intended to modify voluntary behaviors. Following the comparative analysis, we explored how exposures that lead to diet-related chronic diseases among the general population fit within models of regulation adopted for other comparable risks. We identified five rationales and five approaches that protect people from harmful exposures. Reasons for regulation include: protection from involuntary exposure to risk, high risk of death or chronic illness, ubiquity of risk, counteraction to limit compulsive behaviors, and promotion of population health. Regulatory approaches include: mandatory limits on use, mandatory limits on exposure, mandatory controls on quality, mandatory labeling, and voluntary guidance. In contrast to the use of mandates, the prevention of diet-related chronic diseases thus far has largely relied on information-only approaches and voluntary adoption of guidelines. There is ample precedent for mandatory regulatory approaches that could address harms related to exposure to unhealthy diets, but several barriers to action would need to be overcome. © 2018 Society for Risk Analysis.

  8. A multi-platform metabolomics approach demonstrates changes in energy metabolism and the transsulfuration pathway in Chironomus tepperi following exposure to zinc

    Energy Technology Data Exchange (ETDEWEB)

    Long, Sara M., E-mail: hoskins@unimelb.edu.au [Centre for Aquatic Pollution, Identification and Management (CAPIM), School of BioSciences, Bio21 Molecular Science and Biotechnology Institute, The University of Melbourne, 30 Flemington Road, Parkville, 3052 (Australia); Tull, Dedreia L., E-mail: dedreia@unimelb.edu.au [Metabolomics Australia, Bio21 Molecular Science and Biotechnology Institute, 30 Flemington Road, Parkville, 3052 (Australia); Jeppe, Katherine J., E-mail: k.jeppe@unimelb.edu.au [Centre for Aquatic Pollution, Identification and Management (CAPIM), School of BioSciences, Bio21 Molecular Science and Biotechnology Institute, The University of Melbourne, 30 Flemington Road, Parkville, 3052 (Australia); Centre for Aquatic Pollution, Identification and Management (CAPIM), School of BioSciences, The University of Melbourne, 3010 (Australia); De Souza, David P., E-mail: desouzad@unimelb.edu.au [Metabolomics Australia, Bio21 Molecular Science and Biotechnology Institute, 30 Flemington Road, Parkville, 3052 (Australia); Dayalan, Saravanan, E-mail: sdayalan@unimelb.edu.au [Metabolomics Australia, Bio21 Molecular Science and Biotechnology Institute, 30 Flemington Road, Parkville, 3052 (Australia); Pettigrove, Vincent J., E-mail: vpet@unimelb.edu.au [Centre for Aquatic Pollution, Identification and Management (CAPIM), School of BioSciences, The University of Melbourne, 3010 (Australia); McConville, Malcolm J., E-mail: malcolmm@unimelb.edu.au [Metabolomics Australia, Bio21 Molecular Science and Biotechnology Institute, 30 Flemington Road, Parkville, 3052 (Australia); Hoffmann, Ary A., E-mail: ary@unimelb.edu.au [Centre for Aquatic Pollution, Identification and Management (CAPIM), School of BioSciences, Bio21 Molecular Science and Biotechnology Institute, The University of Melbourne, 30 Flemington Road, Parkville, 3052 (Australia); School of BioSciences, Bio21 Molecular Science and Biotechnology Institute, The University of Melbourne, 30 Flemington Road, Parkville, 3052 (Australia)

    2015-05-15

    Highlights: • An integrated metabolomics approach was applied to examine zinc exposure in midges. • Changes in carbohydrate and energy metabolism were observed using GC–MS. • Transsulfuration pathway is affected by zinc exposure. • Heavy metals other than zinc affect the transsulfuration pathways differently. - Abstract: Measuring biological responses in resident biota is a commonly used approach to monitoring polluted habitats. The challenge is to choose sensitive and, ideally, stressor-specific endpoints that reflect the responses of the ecosystem. Metabolomics is a potentially useful approach for identifying sensitive and consistent responses since it provides a holistic view to understanding the effects of exposure to chemicals upon the physiological functioning of organisms. In this study, we exposed the aquatic non-biting midge, Chironomus tepperi, to two concentrations of zinc chloride and measured global changes in polar metabolite levels using an untargeted gas chromatography–mass spectrometry (GC–MS) analysis and a targeted liquid chromatography–mass spectrometry (LC–MS) analysis of amine-containing metabolites. These data were correlated with changes in the expression of a number of target genes. Zinc exposure resulted in a reduction in levels of intermediates in carbohydrate metabolism (i.e., glucose 6-phosphate, fructose 6-phosphate and disaccharides) and an increase in a number of TCA cycle intermediates. Zinc exposure also resulted in decreases in concentrations of the amine containing metabolites, lanthionine, methionine and cystathionine, and an increase in metallothionein gene expression. Methionine and cystathionine are intermediates in the transsulfuration pathway which is involved in the conversion of methionine to cysteine. These responses provide an understanding of the pathways affected by zinc toxicity, and how these effects are different to other heavy metals such as cadmium and copper. The use of complementary

  9. Children's exposure to harmful elements in toys and low-cost jewelry: Characterizing risks and developing a comprehensive approach

    Energy Technology Data Exchange (ETDEWEB)

    Guney, Mert; Zagury, Gerald J., E-mail: gerald.zagury@polymtl.ca

    2014-04-01

    Highlights: • Risk for children up to 3 years old was characterized considering oral exposure. • Saliva mobilization, ingestion of parts and ingestion of scraped-off material were considered. • Ingestion of parts caused hazard index (HI) values >>1 for Cd, Ni, and Pb exposure. • HI were lower (but >1) for saliva mobilization and <1 for scraped material ingestion. • A comprehensive approach aims to deal with drawbacks of current toy safety approaches. - Abstract: The contamination problem in jewelry and toys and the possibility of children's exposure have been demonstrated previously. For this study, risk from oral exposure was characterized for highly contaminated metallic toys and jewelry ((MJ), n = 16) considering three scenarios. Total and bioaccessible concentrations of Cd, Cu, Ni, and Pb were high in the selected MJ. The first scenario (ingestion of parts or pieces) caused unacceptable risk for eight items for Cd, Ni, and/or Pb (hazard index (HI) > 1, up to 75, 5.8, and 43, respectively). HI for the ingestion of scraped-off material scenario was always <1. Finally, the saliva mobilization scenario caused HI > 1 in three samples (two for Cd, one for Ni). Risk characterization identified different potentially hazardous items compared to the United States, Canadian, and European Union approaches. A comprehensive approach was also developed to deal with the complexity and drawbacks caused by the various toy/jewelry definitions, test methods, exposure scenarios, and elements considered in different regulatory approaches. It includes bioaccessible limits for eight priority elements (As, Cd, Cr, Cu, Hg, Ni, Pb, and Sb). Research is recommended on metal bioaccessibility determination in toys/jewelry, in vitro bioaccessibility test development, estimation of material ingestion rates and frequency, presence of hexavalent Cr and organic Sn, and assessment of prolonged exposure to MJ.
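
    As a rough illustration of the hazard-index arithmetic used in the ingestion-of-parts scenario, the sketch below sums element-wise hazard quotients (dose divided by a tolerable intake); HI above 1 flags a potential concern. The body weight, ingested mass, bioaccessible concentrations and tolerable intakes are placeholders, not the study's data.

```python
# Illustrative hazard index (HI) calculation for an "ingestion of a part"
# scenario. All numbers are hypothetical placeholders, not the study's data.
BODY_WEIGHT_KG = 12.0       # young child
INGESTED_MASS_G = 0.5       # assumed mass of the swallowed part

# Bioaccessible concentration in the part (mg/g) and tolerable daily intake
# (mg/kg bw/day) for each element -- placeholder values.
bioaccessible_mg_per_g = {"Cd": 0.8, "Ni": 1.2, "Pb": 2.5}
tolerable_intake = {"Cd": 0.001, "Ni": 0.012, "Pb": 0.0036}

def hazard_index(conc, tdi, ingested_g=INGESTED_MASS_G, bw=BODY_WEIGHT_KG):
    """HI = sum over elements of (dose / tolerable intake)."""
    hi = 0.0
    for element, c in conc.items():
        dose = c * ingested_g / bw            # mg/kg bw for the ingestion event
        hi += dose / tdi[element]
        print(f"{element}: dose = {dose:.4f} mg/kg bw, HQ = {dose / tdi[element]:.1f}")
    return hi

print("HI =", round(hazard_index(bioaccessible_mg_per_g, tolerable_intake), 1))
```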

  10. Integration of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry in blood culture diagnostics: a fast and effective approach.

    Science.gov (United States)

    Klein, Sabrina; Zimmermann, Stefan; Köhler, Christine; Mischnik, Alexander; Alle, Werner; Bode, Konrad A

    2012-03-01

    Sepsis is a major cause of mortality in hospitalized patients worldwide, with lethality rates ranging from 30 to 70 %. Sepsis is caused by a variety of different pathogens, and rapid diagnosis is of outstanding importance, as early and adequate antimicrobial therapy correlates with positive clinical outcome. In recent years, matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry (MS) fingerprinting has become a powerful tool in microbiological diagnostics. The direct identification of micro-organisms in a positive blood culture by MALDI-TOF MS can shorten the diagnostic procedure significantly. Therefore, the aim of the present study was to evaluate whether identification rates could be improved by using the new Sepsityper kit from Bruker Daltonics for direct isolation and identification of bacteria from positive blood cultures by MALDI-TOF MS compared with the use of conventional separator gel columns, and to integrate the MALDI-TOF MS-based identification method into the routine course of blood culture diagnostics in the setting of a microbiological laboratory at a university hospital in Germany. The identification of Gram-negative bacteria by MALDI-TOF MS was significantly better using the Sepsityper kit compared with a separator gel tube-based method (99 and 68 % correct identification, respectively). For Gram-positive bacteria, only 73 % were correctly identified by MALDI-TOF with the Sepsityper kit and 59 % with the separator gel tube assay. A major problem of both methods was the poor identification of Gram-positive grape-like clustered cocci. As differentiation of Staphylococcus aureus from coagulase-negative staphylococci is of clinical importance, a PCR was additionally established that was capable of identifying S. aureus directly from positive blood cultures, thus closing this diagnostic gap. Another benefit of the PCR approach is the possibility of directly detecting the genes responsible for meticillin

  11. An extensive cocktail approach for rapid risk assessment of in vitro CYP450 direct reversible inhibition by xenobiotic exposure

    Energy Technology Data Exchange (ETDEWEB)

    Spaggiari, Dany, E-mail: dany.spaggiari@unige.ch [School of Pharmaceutical Sciences, University of Geneva, University of Lausanne, Boulevard d' Yvoy 20, 1211 Geneva 4 (Switzerland); Daali, Youssef, E-mail: youssef.daali@hcuge.ch [Clinical Pharmacology and Toxicology Service, Geneva University Hospitals, Rue Gabrielle Perret-Gentil, 1211 Genève 14 (Switzerland); Rudaz, Serge, E-mail: serge.rudaz@unige.ch [School of Pharmaceutical Sciences, University of Geneva, University of Lausanne, Boulevard d' Yvoy 20, 1211 Geneva 4 (Switzerland); Swiss Centre for Applied Human Toxicology, University of Geneva, Boulevard d' Yvoy 20, 1211 Geneva 4 (Switzerland)

    2016-07-01

    Acute exposure to environmental factors strongly affects the metabolic activity of cytochrome P450 (P450). As a consequence, the risk of interaction could be increased, modifying the clinical outcomes of a medication. Because toxic agents cannot be administered to humans for ethical reasons, in vitro approaches are therefore essential to evaluate their impact on P450 activities. In this work, an extensive cocktail mixture was developed and validated for in vitro P450 inhibition studies using human liver microsomes (HLM). The cocktail comprised eleven P450-specific probe substrates to simultaneously assess the activities of the following isoforms: 1A2, 2A6, 2B6, 2C8, 2C9, 2C19, 2D6, 2E1, 2J2 and subfamily 3A. The high selectivity and sensitivity of the developed UHPLC-MS/MS method were critical for the success of this methodology, whose main advantages are: (i) the use of eleven probe substrates with minimized interactions, (ii) a low HLM concentration, (iii) fast incubation (5 min) and (iv) the use of metabolic ratios as microsomal P450 activities markers. This cocktail approach was successfully validated by comparing the obtained IC50 values for model inhibitors with those generated with the conventional single probe methods. Accordingly, reliable inhibition values could be generated 10-fold faster using a 10-fold smaller amount of HLM compared to individual assays. This approach was applied to assess the P450 inhibition potential of widespread insecticides, namely, chlorpyrifos, fenitrothion, methylparathion and profenofos. In all cases, P450 2B6 was the most affected with IC50 values in the nanomolar range. For the first time, mixtures of these four insecticides incubated at low concentrations showed a cumulative inhibitory in vitro effect on P450 2B6. - Highlights: • Ten P450 isoforms activities assessed simultaneously with only one incubation. • P450 activity levels measured using the metabolic ratio approach. • IC50 values generated 10
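
    The abstract describes deriving IC50 values from residual P450 activity (metabolic ratios) at several inhibitor concentrations. The sketch below fits a generic one-site direct-inhibition curve with scipy; the data points and the specific curve form are illustrative and are not the authors' data-processing pipeline.

```python
# Generic IC50 estimation from residual P450 activity (% of an uninhibited
# control, e.g. from metabolic ratios) versus inhibitor concentration.
# Data points are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def inhibition_curve(conc, ic50):
    """One-site direct inhibition: % remaining activity = 100 / (1 + [I]/IC50)."""
    return 100.0 / (1.0 + conc / ic50)

inhibitor_uM = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
pct_activity = np.array([97.0, 92.0, 78.0, 55.0, 32.0, 14.0, 5.0])

popt, _ = curve_fit(inhibition_curve, inhibitor_uM, pct_activity, p0=[0.3])
print(f"Estimated IC50 = {popt[0]:.2f} uM")
```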

  12. An extensive cocktail approach for rapid risk assessment of in vitro CYP450 direct reversible inhibition by xenobiotic exposure

    International Nuclear Information System (INIS)

    Spaggiari, Dany; Daali, Youssef; Rudaz, Serge

    2016-01-01

    Acute exposure to environmental factors strongly affects the metabolic activity of cytochrome P450 (P450). As a consequence, the risk of interaction could be increased, modifying the clinical outcomes of a medication. Because toxic agents cannot be administered to humans for ethical reasons, in vitro approaches are therefore essential to evaluate their impact on P450 activities. In this work, an extensive cocktail mixture was developed and validated for in vitro P450 inhibition studies using human liver microsomes (HLM). The cocktail comprised eleven P450-specific probe substrates to simultaneously assess the activities of the following isoforms: 1A2, 2A6, 2B6, 2C8, 2C9, 2C19, 2D6, 2E1, 2J2 and subfamily 3A. The high selectivity and sensitivity of the developed UHPLC-MS/MS method were critical for the success of this methodology, whose main advantages are: (i) the use of eleven probe substrates with minimized interactions, (ii) a low HLM concentration, (iii) fast incubation (5 min) and (iv) the use of metabolic ratios as microsomal P450 activities markers. This cocktail approach was successfully validated by comparing the obtained IC50 values for model inhibitors with those generated with the conventional single probe methods. Accordingly, reliable inhibition values could be generated 10-fold faster using a 10-fold smaller amount of HLM compared to individual assays. This approach was applied to assess the P450 inhibition potential of widespread insecticides, namely, chlorpyrifos, fenitrothion, methylparathion and profenofos. In all cases, P450 2B6 was the most affected with IC50 values in the nanomolar range. For the first time, mixtures of these four insecticides incubated at low concentrations showed a cumulative inhibitory in vitro effect on P450 2B6. - Highlights: • Ten P450 isoforms activities assessed simultaneously with only one incubation. • P450 activity levels measured using the metabolic ratio approach. • IC50 values generated 10-fold faster

  13. A Review of the Mechanism of Injury and Treatment Approaches for Illness Resulting from Exposure to Water-Damaged Buildings, Mold, and Mycotoxins

    Directory of Open Access Journals (Sweden)

    Janette Hope

    2013-01-01

    Full Text Available Physicians are increasingly being asked to diagnose and treat people made ill by exposure to water-damaged environments, mold, and mycotoxins. In addition to avoidance of further exposure to these environments and to items contaminated by these environments, a number of approaches have been used to help persons affected by exposure to restore their health. Illness results from a combination of factors present in water-damaged indoor environments including, mold spores and hyphal fragments, mycotoxins, bacteria, bacterial endotoxins, and cell wall components as well as other factors. Mechanisms of illness include inflammation, oxidative stress, toxicity, infection, allergy, and irritant effects of exposure. This paper reviews the scientific literature as it relates to commonly used treatments such as glutathione, antioxidants, antifungals, and sequestering agents such as Cholestyramine, charcoal, clay and chlorella, antioxidants, probiotics, and induced sweating.

  14. Dietary exposure to aflatoxin B-1, ochratoxin A and fumonisins of adults in Lao Cai province, Viet Nam: A total dietary study approach

    DEFF Research Database (Denmark)

    Bui, Huong Mai; Le Danh Tuyen; Do Huu Tuan

    2016-01-01

    Aflatoxins, fumonisins and ochratoxin A that contaminate various agricultural commodities are considered of significant toxicity and potent human carcinogens. This study took a total dietary study approach and estimated the dietary exposure of these mycotoxins for adults living in Lao Cai province...... higher than recommended provisional tolerable daily intake (PTDI) values mainly due to contaminated cereals and meat. The exposure to total fumonisins (1400 ng/kg bw/day) was typically lower than the PTDI value (2000 ng/kg bw/day). The estimated risk of liver cancer associated with exposure to aflatoxin...... B1 was 2.7 cases/100,000 person/year. Margin of exposure (MOE) of renal cancer linked to ochratoxin A and liver cancer associated with fumonisins were 1124 and 1954, respectively indicating risk levels of public health concern. Further studies are needed to evaluate the efficiency of technical...

  15. Level of Alkenylbenzenes in Parsley and Dill Based Teas and Associated Risk Assessment Using the Margin of Exposure Approach.

    Science.gov (United States)

    Alajlouni, Abdalmajeed M; Al-Malahmeh, Amer J; Isnaeni, Farida Nur; Wesseling, Sebastiaan; Vervoort, Jacques; Rietjens, Ivonne M C M

    2016-11-16

    Risk assessment of parsley and dill based teas that contain alkenylbenzenes was performed. To this end the estimated daily intake (EDI) of alkenylbenzenes resulting from use of the teas was quantified. Since most teas appeared to contain more than one alkenylbenzene, a combined risk assessment was performed based on equal potency of all alkenylbenzenes or using a so-called toxic equivalency (TEQ) approach through defining toxic equivalency factors (TEFs) for the different alkenylbenzenes. The EDI values resulting from consuming one cup of tea a day were 0.2-10.1 μg/kg bw for the individual alkenylbenzenes, 0.6-13.1 μg/kg bw for the sum of the alkenylbenzenes, and 0.3-10.7 μg safrole equiv/kg bw for the sum of alkenylbenzenes when expressed in safrole equivalents. The margin of exposure (MOE) values obtained were generally <10000, indicating a concern if the teas would be consumed on a daily basis over longer periods of time.
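
    As a rough illustration of the combined-exposure logic described above (safrole toxic equivalents plus a margin of exposure), the sketch below shows the arithmetic only. The per-compound EDI values, the toxic equivalency factors and the BMDL10 are placeholders: the abstract reports EDI ranges, not compound-by-compound values.

```python
# Illustrative combined risk assessment for alkenylbenzenes in one cup of tea
# per day. EDI, TEF and BMDL10 values below are placeholders used to show the
# arithmetic, not the study's measured numbers.
edi_ug_per_kg_bw = {"myristicin": 3.0, "apiole": 5.0, "safrole": 0.4}  # per day
tef_vs_safrole = {"myristicin": 0.2, "apiole": 0.1, "safrole": 1.0}    # hypothetical
BMDL10_SAFROLE = 1930.0  # ug/kg bw/day, placeholder benchmark dose lower bound

# Express the mixture in safrole equivalents, then compute the margin of exposure.
edi_safrole_eq = sum(edi_ug_per_kg_bw[c] * tef_vs_safrole[c] for c in edi_ug_per_kg_bw)
moe = BMDL10_SAFROLE / edi_safrole_eq

print(f"EDI (safrole equivalents) = {edi_safrole_eq:.2f} ug/kg bw/day")
print(f"MOE = {moe:.0f}  ->  concern if below 10000")
```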

  16. A new approach towards biomarker selection in estimation of human exposure to chiral chemicals: a case study of mephedrone.

    Science.gov (United States)

    Castrignanò, Erika; Mardal, Marie; Rydevik, Axel; Miserez, Bram; Ramsey, John; Shine, Trevor; Pantoș, G Dan; Meyer, Markus R; Kasprzyk-Hordern, Barbara

    2017-11-02

    Wastewater-based epidemiology is an innovative approach to estimate public health status using biomarker analysis in wastewater. A new compound detected in wastewater can be a potential biomarker of an emerging trend in public health. However, it is currently difficult to select new biomarkers mainly due to limited human metabolism data. This manuscript presents a new framework, which enables the identification and selection of new biomarkers of human exposure to drugs with scarce or unknown human metabolism data. Mephedrone was targeted to elucidate the assessment of biomarkers for emerging drugs of abuse using a four-step analytical procedure. This framework consists of: (i) identification of possible metabolic biomarkers present in wastewater using an in-vivo study; (ii) verification of chiral signature of the target compound; (iii) confirmation of human metabolic residues in in-vivo/vitro studies and (iv) verification of stability of biomarkers in wastewater. Mephedrone was selected as a suitable biomarker due to its high stability profile in wastewater. Its enantiomeric profiling was studied for the first time in biological and environmental matrices, showing stereoselective metabolism of mephedrone in humans. Further biomarker candidates were also proposed for future investigation: 4'-carboxy-mephedrone, 4'-carboxy-normephedrone, 1-dihydro-mephedrone, 1-dihydro-normephedrone and 4'-hydroxy-normephedrone.

  17. The mediation proportion: a structural equation approach for estimating the proportion of exposure effect on outcome explained by an intermediate variable

    DEFF Research Database (Denmark)

    Ditlevsen, Susanne; Christensen, Ulla; Lynch, John

    2005-01-01

    It is often of interest to assess how much of the effect of an exposure on a response is mediated through an intermediate variable. However, systematic approaches are lacking, other than assessment of a surrogate marker for the endpoint of a clinical trial. We review a measure of "proportion...... of several intermediate variables. Binary or categorical variables can be included directly through threshold models. We call this measure the mediation proportion, that is, the part of an exposure effect on outcome explained by a third, intermediate variable. Two examples illustrate the approach. The first...... example is a randomized clinical trial of the effects of interferon-alpha on visual acuity in patients with age-related macular degeneration. In this example, the exposure, mediator and response are all binary. The second example is a common problem in social epidemiology-to find the proportion...
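
    As a minimal numerical illustration of the mediation-proportion idea, the sketch below computes the share of a total exposure effect explained by an intermediate variable from the coefficients of a "total effect" model and a "direct effect" model. This two-regression shortcut is a simplification of the structural-equation formulation in the abstract, and the coefficients are hypothetical.

```python
# Minimal sketch: mediation proportion from a "total effect" model
# (outcome ~ exposure) and a "direct effect" model (outcome ~ exposure + mediator).
# The structural-equation approach in the paper generalizes this to several
# mediators and to binary/categorical variables via threshold models.
def mediation_proportion(beta_total: float, beta_direct: float) -> float:
    """Proportion of the exposure effect on outcome explained by the mediator."""
    return (beta_total - beta_direct) / beta_total

# Hypothetical coefficients: the exposure effect shrinks from 0.80 to 0.50
# once the intermediate variable is added to the model.
print(f"Mediation proportion = {mediation_proportion(0.80, 0.50):.2f}")  # 0.38
```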

  18. Use of Threshold of Toxicological Concern (TTC) with High Throughput Exposure Predictions as a Risk-Based Screening Approach to Prioritize More Than Seven Thousand Chemicals (ASCCT)

    Science.gov (United States)

    Here, we present results of an approach for risk-based prioritization using the Threshold of Toxicological Concern (TTC) combined with high-throughput exposure (HTE) modelling. We started with 7968 chemicals with calculated population median oral daily intakes characterized by an...

  19. A tiered approach for integrating exposure and dosimetry with in vitro dose-response data in the modern risk assessment paradigm

    Science.gov (United States)

    High-throughput (HT) risk screening approaches apply in vitro dose-response data to estimate potential health risks that arise from exposure to chemicals. However, much uncertainty is inherent in relating bioactivities observed in an in vitro system to the perturbations of biolog...

  20. A margin of exposure approach to assessment of non-cancerous risk of diethyl phthalate based on human exposure from bottled water consumption.

    Science.gov (United States)

    Zare Jeddi, Maryam; Rastkari, Noushin; Ahmadkhaniha, Reza; Yunesian, Masud; Nabizadeh, Ramin; Daryabeygi, Reza

    2015-12-01

    Phthalates may be present in food due to their widespread occurrence as environmental contaminants or due to migration from food contact materials, and exposure to phthalates is considered potentially harmful to human health. Determining the main source of exposure is therefore an important issue. The purpose of this study was (1) to measure the release of diethyl phthalate (DEP) into bottled water under common storage conditions, especially low-temperature and freezing conditions; (2) to evaluate the intake of DEP from polyethylene terephthalate (PET) bottled water and to assess the associated health risk; and (3) to assess the contribution of bottled water to DEP intake against the tolerable daily intake (TDI) values. DEP migration was investigated in six brands of PET-bottled water under different storage conditions [room temperature, refrigerator temperature, freezing conditions (40 °C, 0 °C and -18 °C) and outdoor] at various time intervals by magnetic solid-phase extraction (MSPE) followed by gas chromatography-mass spectrometry (GC-MS). A health risk assessment was then conducted and the margin of exposure (MOE) was calculated. The results indicate that contact time with the packaging and storage temperature caused DEP to be released into water from PET bottles; however, when the measured DEP concentrations were compared with the initial levels, the release of phthalates was not substantial under any of the storage conditions, especially at low temperatures. Estimated intakes followed the order children > lactating women > teenagers > adults > pregnant women, but in all target groups the MOE was much higher than 1000, implying low risk. Consequently, PET-bottled water is not a major source of human exposure to DEP and from this perspective is safe for consumption.
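
    The margin-of-exposure calculation implied above is simple arithmetic; the sketch below shows it with placeholder numbers. The DEP concentration, water intake, body weight and the point of departure are illustrative assumptions, not values from the study.

```python
# Illustrative margin-of-exposure (MOE) estimate for DEP from PET-bottled water.
# Concentration, intake, body weight and the point of departure are placeholders
# chosen only to show the calculation.
DEP_CONC_UG_PER_L = 1.0      # measured DEP in bottled water (placeholder)
WATER_INTAKE_L_DAY = 2.0     # adult drinking-water consumption
BODY_WEIGHT_KG = 70.0
NOAEL_MG_PER_KG_DAY = 16.0   # placeholder point of departure for DEP

edi_mg_per_kg_day = DEP_CONC_UG_PER_L * WATER_INTAKE_L_DAY / BODY_WEIGHT_KG / 1000.0
moe = NOAEL_MG_PER_KG_DAY / edi_mg_per_kg_day

print(f"EDI = {edi_mg_per_kg_day:.6f} mg/kg bw/day")
print(f"MOE = {moe:.0f}  ->  low concern if well above 1000")
```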

  1. Fixed, low radiant exposure vs. incremental radiant exposure approach for diode laser hair reduction: a randomized, split axilla, comparative single-blinded trial.

    Science.gov (United States)

    Pavlović, M D; Adamič, M; Nenadić, D

    2015-12-01

    Diode lasers are the most commonly used treatment modalities for unwanted hair reduction. Only a few controlled clinical trials, but not a single randomized controlled trial (RCT), compared the impact of various laser parameters, especially radiant exposure, on the efficacy, tolerability and safety of laser hair reduction. To compare the safety, tolerability and mid-term efficacy of fixed, low and incremental radiant exposures of diode lasers (800 nm) for axillary hair removal, we conducted an intrapatient, left-to-right, patient- and assessor-blinded and controlled trial. Diode laser (800 nm) treatments were evaluated in 39 study participants (skin type II-III) with unwanted axillary hairs. Randomization and allocation to split axilla treatments were carried out by a web-based randomization tool. Six treatments were performed at 4- to 6-week intervals with study subjects blinded to the type of treatment. Final assessment of hair reduction was conducted 6 months after the last treatment by means of a blinded 4-point clinical scale using photographs. The primary endpoint was reduction in hair growth, and secondary endpoints were patient-rated tolerability and satisfaction with the treatment, treatment-related pain and adverse effects. Excellent reduction in axillary hairs (≥ 76%) at the 6-month follow-up visit after receiving fixed, low and incremental radiant exposure diode laser treatments was obtained in 59% and 67% of study participants respectively (Z value: 1.342, P = 0.180). Patients reported a lower visual analogue scale (VAS) pain score on the fixed (4.26) than on the incremental radiant exposure side (5.64). Fixed, low radiant exposure diode laser treatments were thus less painful and better tolerated. © 2015 European Academy of Dermatology and Venereology.

  2. Changes in the Relative Balance of Approach and Avoidance Inclinations to Use Alcohol Following Cue Exposure Vary in Low and High Risk Drinkers

    Directory of Open Access Journals (Sweden)

    Ross C. Hollett

    2017-05-01

    Full Text Available According to the ambivalence model of craving, alcohol craving involves the dynamic interplay of separate approach and avoidance inclinations. Cue-elicited increases in approach inclinations are posited to be more likely to result in alcohol consumption and risky drinking behaviors only if unimpeded by restraint inclinations. The current study aims were (1) to test if changes in the net balance between approach and avoidance inclinations following alcohol cue exposure differentiate between low and high risk drinkers, and (2) if this balance is associated with alcohol consumption on a subsequent taste test. In two experiments (N = 60; N = 79), low and high risk social drinkers were exposed to alcohol cues, and approach and avoidance inclinations were measured pre- and post-exposure. An ad libitum alcohol consumption paradigm and a non-alcohol exposure condition were also included in Study 2. Cue-elicited craving was characterized by a predominant approach inclination only in the high risk drinkers. Conversely, approach inclinations were adaptively balanced by equally strong avoidance inclinations when cue-elicited craving was induced in low risk drinkers. For these low risk drinkers with the balanced craving profile, neither approach nor avoidance inclinations predicted subsequent alcohol consumption levels during the taste test. Conversely, for high risk drinkers, where the approach inclination predominated, each inclination synergistically predicted subsequent drinking levels during the taste test. In conclusion, results support the importance of assessing both approach and avoidance inclinations, and their relative balance following alcohol cue exposure. Specifically, this more comprehensive assessment reveals changes in craving profiles that are not apparent from examining changes in approach inclinations alone, and it is this shift in the net balance that distinguishes high from low risk drinkers.

  3. A new and efficient Solid Phase Microextraction approach for analysis of high fat content food samples using a matrix-compatible coating.

    Science.gov (United States)

    De Grazia, Selenia; Gionfriddo, Emanuela; Pawliszyn, Janusz

    2017-05-15

    The current work presents the optimization of a protocol enabling direct extraction of avocado samples by a new Solid Phase Microextraction matrix-compatible coating. In order to further extend the coating lifetime, pre-desorption and post-desorption washing steps were optimized for solvent type, time, and degree of agitation employed. Using optimized conditions, lifetime profiles of the coating related to extraction of a group of analytes bearing different physicochemical properties were obtained. Over 80 successive extractions were carried out to establish coating efficiency using a commercial 65 µm PDMS/DVB coating in comparison with the PDMS/DVB/PDMS coating. The PDMS/DVB coating was more prone to irreversible matrix attachment on its surface, with a consequent reduction of its extractive performance after 80 consecutive extractions. Conversely, the PDMS/DVB/PDMS coating showed enhanced inertness towards matrix fouling due to its outer smooth PDMS layer. This work represents the first step towards the development of robust SPME methods for quantification of contaminants in avocado as well as other fatty matrices, with minimal sample pre-treatment prior to extraction. In addition, an evaluation of matrix component attachment on the coating surface, and of related artifacts created by desorption of the coating at high temperatures in the GC injector port, has been performed by GC×GC-ToF/MS. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Lung cancer risk assessment due to traffic-generated particles exposure in urban street canyons: A numerical modelling approach.

    Science.gov (United States)

    Scungio, M; Stabile, L; Rizza, V; Pacitto, A; Russi, A; Buonanno, G

    2018-08-01

    Combustion-generated nanoparticles are responsible for negative health effects due to their ability to penetrate in the lungs, carrying toxic compounds with them. In urban areas, the coexistence of nanoparticle sources and particular street-building configurations can lead to very high particle exposure levels. In the present paper, an innovative approach for the evaluation of lung cancer incidence in street canyons due to exposure to traffic-generated particles was proposed. To this end, the literature-available values of particulate matter, PAHs and heavy metals emitted from different kinds of vehicles were used to calculate the Excess Lifetime Cancer Risk (ELCR) at the tailpipe. The estimated ELCR was then used as input data in a numerical CFD (Computational Fluid Dynamics) model that solves the mass, momentum, turbulence and species transport equations, in order to evaluate the cancer risk at every point of interest inside the street canyon. Thus, the influence of wind speed and street canyon geometry (H/W, height of building, H, and width of the street, W) on the ELCR at street level was evaluated by means of a CFD simulation. It was found that the ELCR calculated on the leeward and windward sides of the street canyon at a breathable height of 1.5 m, for people exposed 15 min per day for 20 years, is equal to 1.5 × 10⁻⁵ and 4.8 × 10⁻⁶, respectively, for a wind speed of 1 m/s and H/W equal to 1. The ELCR at street level is higher on the leeward side for aspect ratios equal to 1 and 3, while for an aspect ratio equal to 2 it is higher on the windward side. In addition, the simulations showed that with increasing wind speed the ELCR becomes lower everywhere in the street canyon, due to the increased dispersion. Copyright © 2018 Elsevier B.V. All rights reserved.
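
    As a rough illustration of how an ELCR of the order reported above can be assembled, the sketch below combines a pollutant concentration, an inhalation unit risk and the stated exposure pattern (15 min per day over 20 years, averaged over a 70-year lifetime). The concentration and the unit risk are placeholders, not values from the paper.

```python
# Illustrative excess lifetime cancer risk (ELCR) for inhalation exposure in a
# street canyon: ELCR = C * IUR * time fraction. Concentration and inhalation
# unit risk (IUR) are placeholders; the exposure pattern follows the abstract.
CONC_UG_M3 = 5.0          # pollutant concentration at the receptor (placeholder)
IUR_PER_UG_M3 = 6.0e-5    # placeholder inhalation unit risk, (ug/m3)^-1

time_fraction = (15.0 / (24 * 60)) * (20.0 / 70.0)   # daily and lifetime averaging
elcr = CONC_UG_M3 * IUR_PER_UG_M3 * time_fraction

print(f"Time-averaged exposure fraction = {time_fraction:.5f}")
print(f"ELCR = {elcr:.2e}")
```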

  5. A novel approach to estimating potential maximum heavy metal exposure to ship recycling yard workers in Alang, India

    Energy Technology Data Exchange (ETDEWEB)

    Deshpande, Paritosh C.; Tilwankar, Atit K.; Asolekar, Shyam R., E-mail: asolekar@iitb.ac.in

    2012-11-01

    yards in India. -- Highlights: ► Conceptual framework to apportion pollution loads from plate-cutting in ship recycling. ► Estimates upper bound (pollutants in air) and lower bound (intertidal sediments). ► Mathematical model using vector addition approach and based on Gaussian dispersion. ► Model predicted maximum emissions of heavy metals at different wind speeds. ► Exposure impacts on a worker's health and the intertidal sediments can be assessed.
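
    The highlights mention a Gaussian-dispersion model with a vector-addition treatment of wind. The sketch below shows only the standard ground-level Gaussian plume formula such a model builds on; the power-law dispersion coefficients, source strength and geometry are placeholder assumptions, not the authors' calibrated values.

```python
# Minimal ground-level Gaussian plume sketch for a continuous point source.
# The sigma_y/sigma_z power-law constants are placeholders, not calibrated values.
import math

def plume_concentration(q_g_s, u_m_s, x_m, y_m, release_height_m=2.0):
    """Ground-level concentration (g/m^3) at downwind distance x, crosswind offset y."""
    sigma_y = 0.08 * x_m ** 0.9          # placeholder lateral dispersion coefficient
    sigma_z = 0.06 * x_m ** 0.85         # placeholder vertical dispersion coefficient
    lateral = math.exp(-(y_m ** 2) / (2 * sigma_y ** 2))
    vertical = 2 * math.exp(-(release_height_m ** 2) / (2 * sigma_z ** 2))  # ground reflection
    return q_g_s / (2 * math.pi * u_m_s * sigma_y * sigma_z) * lateral * vertical

# Example: 0.1 g/s metal-bearing fume, 2 m/s wind, receptor 50 m downwind on the plume axis.
print(f"{plume_concentration(0.1, 2.0, 50.0, 0.0):.2e} g/m^3")
```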

  6. A novel approach to estimating potential maximum heavy metal exposure to ship recycling yard workers in Alang, India

    International Nuclear Information System (INIS)

    Deshpande, Paritosh C.; Tilwankar, Atit K.; Asolekar, Shyam R.

    2012-01-01

    : ► Conceptual framework to apportion pollution loads from plate-cutting in ship recycling. ► Estimates upper bound (pollutants in air) and lower bound (intertidal sediments). ► Mathematical model using vector addition approach and based on Gaussian dispersion. ► Model predicted maximum emissions of heavy metals at different wind speeds. ► Exposure impacts on a worker's health and the intertidal sediments can be assessed.

  7. A decision tree approach to screen drinking water contaminants for multiroute exposure potential in developing guideline values.

    Science.gov (United States)

    Krishnan, Kannan; Carrier, Richard

    2017-07-03

    The consideration of inhalation and dermal routes of exposures in developing guideline values for drinking water contaminants is important. However, there is no guidance for determining the eligibility of a drinking water contaminant for its multiroute exposure potential. The objective of the present study was to develop a 4-step framework to screen chemicals for their dermal and inhalation exposure potential in the process of developing guideline values. The proposed framework emphasizes the importance of considering basic physicochemical properties prior to detailed assessment of dermal and inhalation routes of exposure to drinking water contaminants in setting guideline values.
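
    The abstract notes that basic physicochemical properties gate whether dermal and inhalation routes need detailed assessment, but does not give numerical criteria. The sketch below is therefore a purely hypothetical screening function: the Henry's law constant and log Kow thresholds are invented for illustration and are not the criteria of the published 4-step framework.

```python
# Purely hypothetical screening sketch: decide whether inhalation and dermal
# routes deserve further assessment for a drinking-water contaminant, based on
# volatility and lipophilicity. Thresholds are invented for illustration.
def multiroute_screen(henry_const_atm_m3_mol: float, log_kow: float) -> dict:
    return {
        "inhalation": henry_const_atm_m3_mol > 1e-5,   # volatile enough to partition into indoor air
        "dermal": 0 < log_kow < 4,                     # range assumed to favour skin permeation
    }

# Example: a volatile, moderately lipophilic contaminant flags both routes.
print(multiroute_screen(henry_const_atm_m3_mol=5e-4, log_kow=2.5))
```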

  8. Dietary exposure to aflatoxin B1, ochratoxin A and fumonisins of adults in Lao Cai province, Viet Nam: A total dietary study approach.

    Science.gov (United States)

    Huong, Bui Thi Mai; Tuyen, Le Danh; Tuan, Do Huu; Brimer, Leon; Dalsgaard, Anders

    2016-12-01

    Aflatoxins, fumonisins and ochratoxin A that contaminate various agricultural commodities are considered of significant toxicity and potent human carcinogens. This study took a total dietary study approach and estimated the dietary exposure of these mycotoxins for adults living in Lao Cai province, Vietnam. A total of 42 composite food samples representing 1134 individual food samples were prepared according to normal household practices and analysed for the three mycotoxins. Results showed that the dietary exposures to aflatoxin B1 (39.4 ng/kg bw/day) and ochratoxin A (18.7 ng/kg bw/day) were much higher than recommended provisional tolerable daily intake (PTDI) values, mainly due to contaminated cereals and meat. The exposure to total fumonisins (1400 ng/kg bw/day) was typically lower than the PTDI value (2000 ng/kg bw/day). The estimated risk of liver cancer associated with exposure to aflatoxin B1 was 2.7 cases/100,000 person/year. Margins of exposure (MOE) for renal cancer linked to ochratoxin A and liver cancer associated with fumonisins were 1124 and 1954, respectively, indicating risk levels of public health concern. Further studies are needed to evaluate the efficiency of technical solutions which could reduce mycotoxin contamination as well as to determine the health effects of the co-exposure to different types of mycotoxins. Copyright © 2016 Elsevier Ltd. All rights reserved.
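
    As a rough illustration of the total-diet-study arithmetic behind the aflatoxin B1 exposure and liver-cancer estimates above, the sketch below sums exposure over composite food groups and applies a cancer potency factor. The concentrations, consumption amounts and the potency value are placeholders (actual potencies depend strongly on hepatitis B prevalence), not the study's data.

```python
# Illustrative total-diet-study arithmetic for aflatoxin B1: exposure summed over
# composite food groups, then converted to a population cancer risk with a
# potency factor. All concentrations, intakes and the potency are placeholders.
BODY_WEIGHT_KG = 55.0

# (mean concentration ng/kg food, daily consumption kg/day) per composite group
food_groups = {
    "cereals": (1500.0, 0.40),
    "meat":    (600.0,  0.15),
    "nuts":    (2500.0, 0.02),
}

edi_ng_per_kg_bw = sum(conc * cons for conc, cons in food_groups.values()) / BODY_WEIGHT_KG

# Placeholder potency: extra liver cancer cases per year per 100,000 people per
# ng aflatoxin B1 / kg bw / day (depends strongly on hepatitis B status).
POTENCY_PER_100000 = 0.08
risk_per_100000_per_year = edi_ng_per_kg_bw * POTENCY_PER_100000

print(f"EDI = {edi_ng_per_kg_bw:.1f} ng/kg bw/day")
print(f"Estimated liver cancer risk = {risk_per_100000_per_year:.2f} cases/100,000/year")
```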

  9. Hybrid Air Quality Modeling Approach For Use in the Near-Road Exposures to Urban Air Pollutant Study (NEXUS)

    Science.gov (United States)

    The Near-road EXposures to Urban air pollutant Study (NEXUS) investigated whether children with asthma living in close proximity to major roadways in Detroit, MI, (particularly near roadways with high diesel traffic) have greater health impacts associated with exposure to air pol...

  10. The radon issue: Considerations on regulatory approaches and exposure evaluations on the basis of recent epidemiological results

    International Nuclear Information System (INIS)

    Bochicchio, Francesco

    2008-01-01

    Recent epidemiological results have shown consistent statistically significant increases of lung cancer risk due to exposure to radon in dwellings at moderate levels of exposure, and a strong synergism with cigarette smoking. These results are summarized and discussed in relation to their possible implications for the regulatory control of radon and for future policies for the control of radon risk

  11. An exposure-based framework for grouping pollutants for a cumulative risk assessment approach: case study of indoor semi-volatile organic compounds.

    Science.gov (United States)

    Fournier, Kevin; Glorennec, Philippe; Bonvallot, Nathalie

    2014-04-01

    Humans are exposed to a large number of contaminants, many of which may have similar health effects. This paper presents a framework for identifying pollutants to be included in a cumulative risk assessment approach. To account for the possibility of simultaneous exposure to chemicals with common toxic modes of action, the first step of the traditional risk assessment process, i.e. hazard identification, is structured in three sub-steps: (1a) Identification of pollutants people are exposed to, (1b) identification of effects and mechanisms of action of these pollutants, (1c) grouping of pollutants according to similarity of their mechanism of action and health effects. Based on this exposure-based grouping we can derive "multi-pollutant" toxicity reference values, in the "dose-response assessment" step. The approach proposed in this work is original in that it is based on real exposures instead of a limited number of pollutants from a unique chemical family, as traditionally performed. This framework is illustrated by the case study of semi-volatile organic compounds in French dwellings, providing insights into practical considerations regarding the accuracy of the available toxicological information. This case study illustrates the value of the exposure-based approach as opposed to the traditional cumulative framework, in which chemicals with similar health effects were not always included in the same chemical class. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Issues and approaches for ensuring effective communication on acceptable daily exposure (ADE) values applied to pharmaceutical cleaning.

    Science.gov (United States)

    Olson, Michael J; Faria, Ellen C; Hayes, Eileen P; Jolly, Robert A; Barle, Ester Lovsin; Molnar, Lance R; Naumann, Bruce D; Pecquet, Alison M; Shipp, Bryan K; Sussman, Robert G; Weideman, Patricia A

    2016-08-01

    This manuscript centers on communication with key stakeholders of the concepts and program goals involved in the application of health-based pharmaceutical cleaning limits. Implementation of health-based cleaning limits, as distinct from other standards such as 1/1000th of the lowest clinical dose, is a concept recently introduced into regulatory domains. While there is a great deal of technical detail in the written framework underpinning the use of Acceptable Daily Exposures (ADEs) in cleaning (for example ISPE, 2010; Sargent et al., 2013), little is available to explain how to practically create a program which meets regulatory needs while also fulfilling good manufacturing practice (GMP) and other expectations. The lack of a harmonized approach for program implementation and communication across stakeholders can ultimately foster inappropriate application of these concepts. Thus, this period in time (2014-2017) could be considered transitional with respect to influencing best practice related to establishing health-based cleaning limits. Suggestions offered in this manuscript are intended to encourage full and accurate communication regarding both scientific and administrative elements of health-based ADE values used in pharmaceutical cleaning practice. This is a large and complex effort that requires: 1) clearly explaining key terms and definitions, 2) identification of stakeholders, 3) assessment of stakeholders' subject matter knowledge, 4) formulation of key messages fit to stakeholder needs, 5) identification of effective and timely means for communication, and 6) allocation of time, energy, and motivation for initiating and carrying through with communications. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Disruption of thyroid hormone functions by low dose exposure of tributyltin: an in vitro and in vivo approach.

    Science.gov (United States)

    Sharan, Shruti; Nikhil, Kumar; Roy, Partha

    2014-09-15

    Triorganotins, such as tributyltin chloride (TBTCl), are environmental contaminants that are commonly found in the antifouling paints used in ships and other vessels. The importance of TBTCl as an endocrine-disrupting chemical (EDC) in different animal models is well known; however, its adverse effects on the thyroid gland are less understood. Hence, in the present study, we aimed to evaluate the thyroid-disrupting effects of this chemical using both in vitro and in vivo approaches. We used HepG2 hepatocarcinoma cells for the in vitro studies, as they are a thyroid hormone receptor (TR)-positive and thyroid-responsive cell line. For the in vivo studies, Swiss albino male mice were exposed to three doses of TBTCl (0.5, 5 and 50 μg/kg/day) for 45 days. TBTCl showed a hypo-thyroidal effect in vivo. Low-dose TBTCl exposure markedly decreased the serum thyroid hormone levels via the down-regulation of the thyroid peroxidase (TPO) and thyroglobulin (Tg) genes by 40% and 25%, respectively, while augmenting the thyroid stimulating hormone (TSH) levels. Thyroid-stimulating hormone receptor (TSHR) expression was up-regulated in the thyroid glands of treated mice by 6.6-fold relative to vehicle-treated mice (p<0.05). In the transient transactivation assays, TBTCl suppressed T3-mediated transcriptional activity in a dose-dependent manner. In addition, TBTCl was found to decrease the expression of TR. The present study thus indicates that low concentrations of TBTCl suppress TR transcription by disrupting the physiological concentrations of T3/T4, followed by the recruitment of NCoR to TR, providing a novel insight into the thyroid hormone-disrupting effects of this chemical. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Identification of candidate biomarkers of the exposure to PCBs in contaminated cattle: A gene expression- and proteomic-based approach.

    Science.gov (United States)

    Girolami, F; Badino, P; Spalenza, V; Manzini, L; Renzone, G; Salzano, A M; Dal Piaz, F; Scaloni, A; Rychen, G; Nebbia, C

    2018-05-28

    Dioxins and polychlorinated biphenyls (PCBs) are widespread and persistent contaminants. Through a combined gene expression/proteomic-based approach, candidate biomarkers of the exposure to such environmental pollutants in cattle subjected to a real eco-contamination event were identified. Animals were removed from the polluted area and fed a standard ration for 6 months. The decontamination was monitored by evaluating dioxin and PCB levels in pericaudal fat two weeks after the removal from the contaminated area (day 0) and then bimonthly for six months (days 59, 125 and 188). Gene expression measurements demonstrated that CYP1B1 expression was significantly higher in blood lymphocytes collected in contaminated animals (day 0), and decreased over time during decontamination. mRNA levels of interleukin 2 showed an opposite quantitative trend. MALDI-TOF-MS polypeptide profiling of serum samples ascertained a progressive decrease (from day 0 to 188) of serum levels of fibrinogen β-chain and serpin A3-7-like fragments, apolipoprotein (APO) C-II and serum amyloid A-4 protein, along with an augmented representation of transthyretin isoforms, as well as APOC-III and APOA-II proteins during decontamination. When differentially represented species were combined with serum antioxidant, acute phase and proinflammatory protein levels already ascertained in the same animals (Cigliano et al., 2016), bioinformatics unveiled an interaction network linking together almost all components. This suggests the occurrence of a complex PCB-responsive mechanism associated with animal contamination/decontamination, including a cohort of protein/polypeptide species involved in blood redox homeostasis, inflammation and lipid transport. All together, these results suggest the use in combination of such biomarkers for identifying PCB-contaminated animals, and for monitoring the restoring of their healthy condition following a decontamination process. Copyright © 2018 Elsevier B.V. All

  15. An Integrated Modeling Framework Forecasting Ecosystem Exposure-- A Systems Approach to the Cumulative Impacts of Multiple Stressors

    Science.gov (United States)

    Johnston, J. M.

    2013-12-01

    Freshwater habitats provide fishable, swimmable and drinkable resources and are a nexus of geophysical and biological processes. These processes in turn influence the persistence and sustainability of populations, communities and ecosystems. Climate change and landuse change encompass numerous stressors of potential exposure, including the introduction of toxic contaminants, invasive species, and disease in addition to physical drivers such as temperature and hydrologic regime. A systems approach that includes the scientific and technologic basis of assessing the health of ecosystems is needed to effectively protect human health and the environment. The Integrated Environmental Modeling Framework 'iemWatersheds' has been developed as a consistent and coherent means of forecasting the cumulative impact of co-occurring stressors. The Framework consists of three facilitating technologies: Data for Environmental Modeling (D4EM) that automates the collection and standardization of input data; the Framework for Risk Assessment of Multimedia Environmental Systems (FRAMES) that manages the flow of information between linked models; and the Supercomputer for Model Uncertainty and Sensitivity Evaluation (SuperMUSE) that provides post-processing and analysis of model outputs, including uncertainty and sensitivity analysis. Five models are linked within the Framework to provide multimedia simulation capabilities for hydrology and water quality processes: the Soil Water Assessment Tool (SWAT) predicts surface water and sediment runoff and associated contaminants; the Watershed Mercury Model (WMM) predicts mercury runoff and loading to streams; the Water quality Analysis and Simulation Program (WASP) predicts water quality within the stream channel; the Habitat Suitability Index (HSI) model scores physicochemical habitat quality for individual fish species; and the Bioaccumulation and Aquatic System Simulator (BASS) predicts fish growth, population dynamics and bioaccumulation

  16. Unified continuum damage model for matrix cracking in composite rotor blades

    Energy Technology Data Exchange (ETDEWEB)

    Pollayi, Hemaraju; Harursampath, Dineshkumar [Nonlinear Multifunctional Composites - Analysis and Design Lab (NMCAD Lab) Department of Aerospace Engineering Indian Institute of Science Bangalore - 560012, Karnataka (India)

    2015-03-10

    This paper deals with modeling of the first damage mode, matrix micro-cracking, in helicopter rotor/wind turbine blades and how this affects the overall cross-sectional stiffness. The helicopter/wind turbine rotor system operates in a highly dynamic and unsteady environment leading to severe vibratory loads present in the system. Repeated exposure to this loading condition can induce damage in the composite rotor blades. These rotor/turbine blades are generally made of fiber-reinforced laminated composites and exhibit various competing modes of damage such as matrix micro-cracking, delamination, and fiber breakage. There is a need to study the behavior of the composite rotor system under various key damage modes in composite materials for developing a Structural Health Monitoring (SHM) system. Each blade is modeled as a beam based on geometrically non-linear 3-D elasticity theory. The analysis of each blade thus splits into 2-D analyses of cross-sections and non-linear 1-D analyses along the beam reference curves. Two different tools are used here for the complete 3-D analysis: VABS for the 2-D cross-sectional analysis and GEBT for the 1-D beam analysis. The physically-based failure models for matrix in compression and tension loading are used in the present work. Matrix cracking is detected using two failure criteria: Matrix Failure in Compression and Matrix Failure in Tension, which are based on the recovered field. A strain variable is set which drives the damage variable for matrix cracking, and this damage variable is used to estimate the reduced cross-sectional stiffness. The matrix micro-cracking analysis is performed using two different approaches: (i) element-wise, and (ii) node-wise. The procedure presented in this paper is implemented in VABS as a matrix micro-cracking modeling module. Three examples are presented to investigate the matrix failure model, which illustrate the effect of matrix cracking on cross-sectional stiffness by varying the applied cyclic load.
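
    As a rough illustration of the continuum-damage idea described above (a strain-driven damage variable that reduces cross-sectional stiffness), the sketch below applies a scalar damage variable to matrix-dominated stiffness terms. The stiffness values, the linear damage evolution law and the choice of degraded entries are illustrative assumptions, not VABS/GEBT output.

```python
# Minimal sketch of continuum-damage-style stiffness degradation: a scalar
# damage variable d (0 = intact, 1 = fully cracked matrix) scales the
# matrix-dominated entries of a beam cross-sectional stiffness matrix.
import numpy as np

K_intact = np.diag([1.2e8, 3.5e5, 6.0e5, 9.0e5])  # [EA, GJ, EI_flap, EI_lag], placeholder SI values

def damage_variable(eps_eq, eps_0=2.0e-3, eps_f=8.0e-3):
    """Linear damage evolution between a threshold strain and a failure strain (assumed law)."""
    return float(np.clip((eps_eq - eps_0) / (eps_f - eps_0), 0.0, 1.0))

def degraded_stiffness(K, d, matrix_dominated=(1, 2, 3)):
    """Scale matrix-dominated diagonal terms by (1 - d); the axial (fiber-dominated) term is kept."""
    Kd = K.copy()
    for i in matrix_dominated:
        Kd[i, i] *= (1.0 - d)
    return Kd

d = damage_variable(eps_eq=5.0e-3)
print(f"d = {d:.2f}")
print(np.diag(degraded_stiffness(K_intact, d)))
```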

  17. Unified continuum damage model for matrix cracking in composite rotor blades

    International Nuclear Information System (INIS)

    Pollayi, Hemaraju; Harursampath, Dineshkumar

    2015-01-01

    This paper deals with modeling of the first damage mode, matrix micro-cracking, in helicopter rotor/wind turbine blades and how it affects the overall cross-sectional stiffness. The helicopter/wind turbine rotor system operates in a highly dynamic and unsteady environment, leading to severe vibratory loads in the system. Repeated exposure to this loading condition can induce damage in the composite rotor blades. These rotor/turbine blades are generally made of fiber-reinforced laminated composites and exhibit various competing modes of damage such as matrix micro-cracking, delamination, and fiber breakage. There is a need to study the behavior of the composite rotor system under the key damage modes in composite materials in order to develop a Structural Health Monitoring (SHM) system. Each blade is modeled as a beam based on geometrically non-linear 3-D elasticity theory. Each blade thus splits into a 2-D analysis of its cross-sections and a non-linear 1-D analysis along the beam reference curve. Two different tools are used for the complete 3-D analysis: VABS for the 2-D cross-sectional analysis and GEBT for the 1-D beam analysis. Physically based failure models for the matrix under compression and tension loading are used in the present work. Matrix cracking is detected using two failure criteria, Matrix Failure in Compression and Matrix Failure in Tension, which are based on the recovered field. A strain variable is defined which drives the damage variable for matrix cracking, and this damage variable is used to estimate the reduced cross-sectional stiffness. The matrix micro-cracking is modeled using two different approaches: (i) element-wise and (ii) node-wise. The procedure presented in this paper is implemented in VABS as a matrix micro-cracking modeling module. Three examples are presented to investigate the matrix failure model and illustrate the effect of matrix cracking on cross-sectional stiffness as the applied cyclic load is varied.

  18. Random matrix theory

    CERN Document Server

    Deift, Percy

    2009-01-01

    This book features a unified derivation of the mathematical theory of the three classical types of invariant random matrix ensembles: orthogonal, unitary, and symplectic. The authors follow the approach of Tracy and Widom, but the exposition here contains a substantial amount of additional material, in particular, facts from functional analysis and the theory of Pfaffians. The main result in the book is a proof of universality for orthogonal and symplectic ensembles corresponding to generalized Gaussian type weights, following the authors' prior work. New, quantitative error estimates are derived.

  19. Matrix vector analysis

    CERN Document Server

    Eisenman, Richard L

    2005-01-01

    This outstanding text and reference applies matrix ideas to vector methods, using physical ideas to illustrate and motivate mathematical concepts but employing a mathematical continuity of development rather than a physical approach. The author, who taught at the U.S. Air Force Academy, dispenses with the artificial barrier between vectors and matrices, and more generally, between pure and applied mathematics. Motivated examples introduce each idea, with interpretations in physical, algebraic, and geometric contexts, in addition to generalizations to theorems that reflect the essential structure.

  20. An early approach for the evaluation of repair processes in fish after exposure to sediment contaminated by an oil spill.

    Science.gov (United States)

    Salamanca, Maria J; Jimenez-Tenorio, Natalia; Reguera, Diana F; Morales-Caselles, Carmen; Delvalls, T Angel

    2008-12-01

    A chronic bioassay was carried out under laboratory conditions using juvenile Solea senegalensis to determine the toxicity of contaminants from an oil spill (Prestige). The repair processes in fish affected by contaminants due to oil exposure were also evaluated. For 30 days individuals were exposed to clean sediment (control) and to sediment contaminated by a mixture of polyaromatic hydrocarbons (PAHs) and other substances. The physicochemical parameters of the tanks (salinity, temperature, pH and dissolved oxygen) were controlled during the exposure period. Clean sediment from the Bay of Cadiz (Spain) was used as negative control and was mixed with fuel oil to prepare the dilution (0.5% w:w dry weight). After the exposure period, fish were labeled and transferred to "clean tanks" (tanks without sediment) in order to study the recovery and the repair processes in the exposed organisms. A biomarker of exposure (ethoxyresorufin-O-deethylase activity - EROD activity) and a biomarker of effect (histopathology) were analyzed during the exposure and recovery periods. After 10, 20 and 30 days of exposure, individuals showed significant induction of EROD activity. The transfer to the "clean tank" enabled a first evaluation of the repair of the damage induced by the fuel oil exposure. After the recovery phase, control individuals showed a more significant decrease; the repair processes probably need longer recovery periods before significant improvement of the affected organs can be observed. This will be further investigated in the future.

  1. Green's matrix for a second-order self-adjoint matrix differential operator

    International Nuclear Information System (INIS)

    Sisman, Tahsin Cagri; Tekin, Bayram

    2010-01-01

    A systematic construction of the Green's matrix for a second-order self-adjoint matrix differential operator from the linearly independent solutions of the corresponding homogeneous differential equation set is carried out. We follow the general approach of extracting the Green's matrix from the Green's matrix of the corresponding first-order system. This construction is required in cases where the differential equation set cannot be turned into an algebraic equation set via transform techniques.
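
    As a reminder of the standard construction (textbook conventions; the paper's own normalizations may differ), the Green's matrix of the associated first-order system \Psi'(x) = A(x)\Psi(x) can be assembled from a fundamental matrix \Phi(x), whose columns are the linearly independent homogeneous solutions, and a constant projector P fixed by the boundary conditions:

```latex
G(x,\xi) =
\begin{cases}
  \Phi(x)\, P \,\Phi^{-1}(\xi), & x > \xi, \\
  -\,\Phi(x)\,\bigl(\mathbf{1} - P\bigr)\,\Phi^{-1}(\xi), & x < \xi,
\end{cases}
\qquad
G(\xi^{+},\xi) - G(\xi^{-},\xi) = \mathbf{1}.
```

    With this jump condition, \partial_x G(x,\xi) - A(x)G(x,\xi) = \mathbf{1}\,\delta(x-\xi), and the Green's matrix of the second-order operator is then read off from the appropriate block of G.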

  2. Supersymmetry in random matrix theory

    International Nuclear Information System (INIS)

    Kieburg, Mario

    2010-01-01

    I study the applications of supersymmetry in random matrix theory. I generalize the supersymmetry method and develop three new approaches to calculate eigenvalue correlation functions. These correlation functions are averages over ratios of characteristic polynomials. In the first part of this thesis, I derive a relation between integrals over anti-commuting variables (Grassmann variables) and differential operators with respect to commuting variables. With this relation I rederive Cauchy-like integral theorems. As a new application I trace the supermatrix Bessel function back to a product of two ordinary matrix Bessel functions. In the second part, I apply the generalized Hubbard-Stratonovich transformation to arbitrary rotation invariant ensembles of real symmetric and Hermitian self-dual matrices. This extends the approach for unitarily rotation invariant matrix ensembles. For the k-point correlation functions I derive supersymmetric integral expressions in a unifying way. I prove the equivalence between the generalized Hubbard-Stratonovich transformation and the superbosonization formula. Moreover, I develop an alternative mapping from ordinary space to superspace. After comparing the results of this approach with the other two supersymmetry methods, I obtain explicit functional expressions for the probability densities in superspace. If the probability density of the matrix ensemble factorizes, then the generating functions exhibit determinantal and Pfaffian structures. For some matrix ensembles this was already shown with the help of other approaches. I show that these structures appear by a purely algebraic manipulation. In this new approach I use structures naturally appearing in superspace. I derive determinantal and Pfaffian structures for three types of integrals without actually mapping onto superspace. These three types of integrals are quite general and, thus, they are applicable to a broad class of matrix ensembles. (orig.)
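
    One generic identity behind this approach (conventions and factors of i differ between references, and this is a textbook relation rather than a result specific to the thesis) is that an averaged ratio of characteristic polynomials acts as a generating function for the resolvent and hence for the level density:

```latex
\left\langle \operatorname{Tr} \frac{1}{x - i\epsilon - H} \right\rangle
  = \left. \frac{\partial}{\partial J}
    \left\langle \frac{\det(x + J - H)}{\det(x - i\epsilon - H)} \right\rangle
    \right|_{J = -i\epsilon},
\qquad
R_1(x) = \frac{1}{\pi} \lim_{\epsilon \to 0^{+}} \operatorname{Im}
  \left\langle \operatorname{Tr} \frac{1}{x - i\epsilon - H} \right\rangle .
```

    Higher k-point correlation functions are obtained analogously from averages over k such ratios.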

  3. Supersymmetry in random matrix theory

    Energy Technology Data Exchange (ETDEWEB)

    Kieburg, Mario

    2010-05-04

    I study the applications of supersymmetry in random matrix theory. I generalize the supersymmetry method and develop three new approaches to calculate eigenvalue correlation functions. These correlation functions are averages over ratios of characteristic polynomials. In the first part of this thesis, I derive a relation between integrals over anti-commuting variables (Grassmann variables) and differential operators with respect to commuting variables. With this relation I rederive Cauchy-like integral theorems. As a new application I trace the supermatrix Bessel function back to a product of two ordinary matrix Bessel functions. In the second part, I apply the generalized Hubbard-Stratonovich transformation to arbitrary rotation invariant ensembles of real symmetric and Hermitian self-dual matrices. This extends the approach for unitarily rotation invariant matrix ensembles. For the k-point correlation functions I derive supersymmetric integral expressions in a unifying way. I prove the equivalence between the generalized Hubbard-Stratonovich transformation and the superbosonization formula. Moreover, I develop an alternative mapping from ordinary space to superspace. After comparing the results of this approach with the other two supersymmetry methods, I obtain explicit functional expressions for the probability densities in superspace. If the probability density of the matrix ensemble factorizes, then the generating functions exhibit determinantal and Pfaffian structures. For some matrix ensembles this was already shown with the help of other approaches. I show that these structures appear by a purely algebraic manipulation. In this new approach I use structures naturally appearing in superspace. I derive determinantal and Pfaffian structures for three types of integrals without actually mapping onto superspace. These three types of integrals are quite general and, thus, they are applicable to a broad class of matrix ensembles. (orig.)

  4. P-matrix description of charged particles interaction

    International Nuclear Information System (INIS)

    Babenko, V.A.; Petrov, N.M.

    1992-01-01

    The paper deals with the formalism of the P-matrix description of the interaction of two charged particles. A separation, in explicit form, of the background part of the P-matrix corresponding to the purely Coulomb interaction is proposed. Expressions for the purely Coulomb P-matrix, its poles and residues, and the eigenfunctions of the purely Coulomb P-matrix approach are obtained. (author). 12 refs
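
    For orientation only: in one common (Jaffe-Low type) convention, the single-channel P-matrix is the logarithmic derivative of the interior radial wave function at a matching radius b; the conventions used in the paper may differ:

```latex
P(E; b) \;=\; b\,\frac{u'(b, E)}{u(b, E)} ,
```

    and the purely Coulomb background part would then correspond to evaluating this expression with the regular Coulomb wave function in place of u.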

  5. Dietary exposure to endocrine disrupting chemicals in metropolitan population from China: a risk assessment based on probabilistic approach.

    Science.gov (United States)

    He, Dongliang; Ye, Xiaolei; Xiao, Yonghua; Zhao, Nana; Long, Jia; Zhang, Piwei; Fan, Ying; Ding, Shibin; Jin, Xin; Tian, Chong; Xu, Shunqing; Ying, Chenjiang

    2015-11-01

    The intake of contaminated food is an important exposure pathway for endocrine disrupting chemicals (EDCs). However, data on the occurrence of EDCs in foodstuffs are sporadic, and the resulting risk of co-exposure is rarely addressed. In this study, 450 food samples representing 7 food categories (mainly raw and fresh food), collected from three cities in China, were analyzed for eight EDCs using high performance liquid chromatography tandem mass spectrometry (HPLC-MS/MS). Apart from estrone (E1), the other EDCs, including diethylstilbestrol (DES), nonylphenol (NP), bisphenol A (BPA), octylphenol (OP), 17β-estradiol (E2), 17α-ethinylestradiol (EE2), and estriol (E3), were ubiquitous in food. Dose-dependent relationships were found between NP and EE2 (r=0.196). Dietary exposure was evaluated with the Monte Carlo Risk Assessment (MCRA) system. The 50th and 95th percentile exposures to each individual EDC were far below the corresponding tolerable daily intake (TDI) values. However, the population exposure expressed as the sum of 17β-estradiol equivalents (∑EEQ) was considerably larger than the exposure to E2 alone, which implies that the combined risk of multiple EDCs in food should be of concern. In conclusion, health risk evaluation should consider co-exposure via food consumption rather than individual EDCs. Copyright © 2015 Elsevier Ltd. All rights reserved.
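
    A probabilistic dietary exposure assessment of this kind can be sketched in a few lines. All distributions, concentrations, consumption figures and E2-equivalency factors below are invented placeholders, not the survey data or the MCRA configuration of the study; the sketch only illustrates the percentile-versus-TDI and summed-EEQ logic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residue concentrations (ng/g), daily intake (g/day) and body weight (kg).
concentrations = {          # (mean, sigma) of a lognormal fit per compound, ng/g
    "BPA": (0.8, 0.6),
    "NP":  (2.0, 1.2),
    "E2":  (0.05, 0.04),
}
consumption_g = (250.0, 120.0)          # mean, sd of daily intake of the food group
body_weight_kg = (60.0, 10.0)
relative_potency = {"BPA": 1e-4, "NP": 3e-5, "E2": 1.0}   # assumed E2-equivalency factors

n = 100_000
bw = rng.normal(*body_weight_kg, n).clip(min=30.0)
intake = rng.normal(*consumption_g, n).clip(min=0.0)

exposure = {}                            # ng per kg body weight per day
for edc, (mu, sd) in concentrations.items():
    c = rng.lognormal(np.log(mu), sd, n)
    exposure[edc] = c * intake / bw

eeq = sum(relative_potency[e] * x for e, x in exposure.items())   # sum of E2 equivalents

for edc, x in exposure.items():
    print(edc, np.percentile(x, [50, 95]))     # compare these percentiles with each TDI
print("EEQ P50/P95:", np.percentile(eeq, [50, 95]))
```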

  6. The quasi experimental study of the influence of advertising creativity and exposure intensity toward buying action with aida approach

    Directory of Open Access Journals (Sweden)

    Ramdan Dede Budiawan

    2017-06-01

    Full Text Available Advertising is one of the forms of marketing communication used by companies to reach the sales goals of a product. Advertising creativity is one of the important factors determining the success of a television advertisement; exposure intensity is also a determining factor in getting the advertisement noticed by viewers. To measure viewer response to the advertisement, this research used the AIDA model. AIDA (attention, interest, desire, action) is a popular response-hierarchy model that guides marketers in implementing marketing communication activities. The thesis analyzes the influence of advertising creativity and exposure intensity on buying action, the final stage at which the consumer decides to buy the product. The research used a quasi-experimental study of 80 respondents in the target market of the advertised product, Haan brand ice cream. The experiment was done with 2 treatments, treatment 1: one advertising exposure, and treatment 2: three advertising exposures. The Mann-Whitney difference test, run in SPSS, showed no significant differences between the one-exposure and three-exposure treatments. The SEM-PLS analysis showed that advertising creativity significantly influenced attention, interest, desire and action in buying the product.
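
    The group comparison step can be illustrated with a small sketch. The data below are simulated placeholders (not the study's 80 respondents), and the study itself used SPSS rather than Python; this only shows the kind of Mann-Whitney comparison described.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical AIDA "action" scores (1-5 Likert) for the two exposure treatments.
one_exposure = rng.integers(1, 6, size=40)
three_exposures = rng.integers(1, 6, size=40)

stat, p_value = mannwhitneyu(one_exposure, three_exposures, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
# A p-value above the chosen alpha (e.g. 0.05) would mirror the finding of no
# significant difference between one and three advertising exposures.
```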

  7. Independent assessment of matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) sample preparation quality: A novel statistical approach for quality scoring.