WorldWideScience

Sample records for theoretical analyse based

  1. Theoretical analyses of superconductivity in iron based ...

    African Journals Online (AJOL)

    This paper focuses on the theoretical analysis of superconductivity in the iron-based superconductor Ba1−xKxFe2As2. After reviewing the current findings on this system, we suggest that a combined phonon-exciton mechanism gives the right order of the superconducting transition temperature (TC) for Ba1−xKxFe2As2. By developing ...

  2. Theoretical and Empirical Analyses of an Improved Harmony Search Algorithm Based on Differential Mutation Operator

    Directory of Open Access Journals (Sweden)

    Longquan Yong

    2012-01-01

    Full Text Available Harmony search (HS) is an emerging metaheuristic optimization algorithm. In this paper, an improved harmony search method based on a differential mutation operator (IHSDE) is proposed to deal with optimization problems. Since population diversity plays an important role in the behavior of evolutionary algorithms, the aim of this paper is to calculate the expected population mean and variance of the IHSDE from a theoretical viewpoint. Numerical results, compared with the HSDE and NGHS, show that the IHSDE method has good convergence properties over a test suite of well-known benchmark functions.
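    The record above describes the IHSDE only at a high level. As a rough illustration of the idea, the sketch below replaces the classical pitch-adjustment step of harmony search with a DE/rand/1-style differential mutation; all parameter names and defaults (hms, hmcr, f_scale) are illustrative assumptions, not values from the paper.

```python
import random

def ihsde(objective, dim, bounds, hms=10, hmcr=0.9, f_scale=0.5, iters=2000, seed=1):
    """Sketch of harmony search with a DE/rand/1-style mutation in place of
    the classical pitch adjustment (parameter names/defaults are illustrative)."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Harmony memory: a small pool of candidate solutions.
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    fitness = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:
                # Differential mutation from three distinct memory members.
                r1, r2, r3 = rng.sample(range(hms), 3)
                val = memory[r1][d] + f_scale * (memory[r2][d] - memory[r3][d])
            else:
                val = rng.uniform(lo, hi)  # random consideration
            new.append(min(hi, max(lo, val)))
        f_new = objective(new)
        worst = max(range(hms), key=lambda i: fitness[i])
        if f_new < fitness[worst]:  # greedy replacement of the worst harmony
            memory[worst], fitness[worst] = new, f_new
    best = min(range(hms), key=lambda i: fitness[i])
    return memory[best], fitness[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = ihsde(sphere, dim=5, bounds=(-10.0, 10.0))
```

    The greedy replace-the-worst update is the standard HS memory rule; only the mutation step differs from plain HS.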

  3. Conformational determination of [Leu]enkephalin based on theoretical and experimental VA and VCD spectral analyses

    DEFF Research Database (Denmark)

    Abdali, Salim; Jalkanen, Karl J.; Cao, X.

    2004-01-01

    Conformational determination of [Leu]enkephalin in DMSO-d6 is carried out using VA and VCD spectral analyses. Conformational energies, vibrational frequencies, and VA and VCD intensities are calculated using DFT at the B3LYP/6-31G* level of theory. Comparison between the measured spectra...

  4. Vibrational algorithms for quantitative crystallographic analyses of hydroxyapatite-based biomaterials: I, theoretical foundations.

    Science.gov (United States)

    Pezzotti, Giuseppe; Zhu, Wenliang; Boffelli, Marco; Adachi, Tetsuya; Ichioka, Hiroaki; Yamamoto, Toshiro; Marunaka, Yoshinori; Kanamura, Narisato

    2015-05-01

    The Raman spectroscopic method has been applied quantitatively to the analysis of local crystallographic orientation in both single-crystal hydroxyapatite and human teeth. Raman selection rules for all the vibrational modes of the hexagonal structure were expanded into explicit functions of Euler angles in space and six Raman tensor elements (RTE). A theoretical treatment has also been put forward according to the orientation distribution function (ODF) formalism, which allows one to resolve the statistical orientation patterns of the nm-sized hydroxyapatite crystallites comprised in the Raman microprobe. Closed-form solutions could be obtained for the Euler angles and their statistical distributions resolved with respect to the direction of the average texture axis. Polarized Raman spectra from single-crystalline hydroxyapatite and textured polycrystalline (tooth enamel) samples were compared, and the proposed Raman method was validated by confirming the agreement between RTE values obtained from different samples.
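    The selection-rule machinery described above amounts to rotating a mode's Raman tensor by Euler angles and projecting it onto the incident and scattered polarizations, I ∝ |e_s · (R T Rᵀ) · e_i|². The Z-X-Z convention and the uniaxial tensor below are illustrative assumptions, not the paper's explicit RTE expressions.

```python
import numpy as np

def euler_matrix(phi, theta, psi):
    """Z-X-Z Euler rotation (the convention here is an assumption; the paper
    derives explicit RTE expressions that are not reproduced)."""
    c, s = np.cos, np.sin
    Rz1 = np.array([[c(phi), -s(phi), 0.0], [s(phi), c(phi), 0.0], [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, c(theta), -s(theta)], [0.0, s(theta), c(theta)]])
    Rz2 = np.array([[c(psi), -s(psi), 0.0], [s(psi), c(psi), 0.0], [0.0, 0.0, 1.0]])
    return Rz1 @ Rx @ Rz2

def raman_intensity(T, angles, e_i, e_s):
    """I ~ |e_s . (R T R^T) . e_i|^2 for a mode with Raman tensor T."""
    R = euler_matrix(*angles)
    return float(np.abs(e_s @ (R @ T @ R.T) @ e_i) ** 2)

# Illustrative uniaxial tensor diag(a, a, b); not actual hydroxyapatite RTE values.
T = np.diag([1.0, 1.0, 2.0])
e_z = np.array([0.0, 0.0, 1.0])
I_aligned = raman_intensity(T, (0.0, 0.0, 0.0), e_z, e_z)        # axis along z -> b^2 = 4
I_tilted = raman_intensity(T, (0.0, np.pi / 2, 0.0), e_z, e_z)   # axis tipped 90 deg -> a^2 = 1
```

    Sweeping the angles and fitting measured intensities is, in miniature, how orientation is recovered from polarized spectra.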

  5. Using meta-analytic path analysis to test theoretical predictions in health behavior: An illustration based on meta-analyses of the theory of planned behavior

    OpenAIRE

    Hagger, Martin; Chan, Dervin K. C.; Protogerou, Cleo; Chatzisarantis, Nikos L. D.

    2016-01-01

    Objective Synthesizing research on social cognitive theories applied to health behavior is an important step in the development of an evidence base of psychological factors as targets for effective behavioral interventions. However, few meta-analyses of research on social cognitive theories in health contexts have conducted simultaneous tests of theoretically-stipulated pattern effects using path analysis. We argue that conducting path analyses of meta-analytic effects among constructs fr...

  6. A theoretical framework for analysing preschool teaching

    DEFF Research Database (Denmark)

    Chaiklin, Seth

    2014-01-01

    This article introduces a theoretical framework for analysing preschool teaching as a historically-grounded societal practice. The aim is to present a unified framework that can be used to analyse and compare both historical and contemporary examples of preschool teaching practice within and across...... national traditions. The framework has two main components, an analysis of preschool teaching as a practice, formed in relation to societal needs, and an analysis of the categorical relations which necessarily must be addressed in preschool teaching activity. The framework is introduced and illustrated...

  7. Theoretical and computational analyses of LNG evaporator

    Science.gov (United States)

    Chidambaram, Palani Kumar; Jo, Yang Myung; Kim, Heuy Dong

    2017-04-01

    Theoretical and numerical analyses of the fluid flow and heat transfer inside an LNG evaporator are conducted in this work. Methane is used instead of LNG as the operating fluid, because methane constitutes over 80% of natural gas. The analytical calculations are performed using simple mass and energy balance equations and assess the pressure and temperature variations in the steam tube. Multiphase numerical simulations are performed by solving the governing equations (continuity, momentum, and energy) in a portion of the evaporator domain consisting of a single steam pipe. The flow equations are solved along with equations of species transport. Multiphase modeling is incorporated using the VOF method. Liquid methane is the primary phase; it vaporizes into the secondary phase, gaseous methane. Steam is another secondary phase, which flows through the heating coils. Turbulence is modeled by a two-equation turbulence model. The theoretical and numerical predictions are seen to match well with each other. Further parametric studies are planned based on the current research.
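    The "simple mass and energy balance" step can be illustrated in a few lines; the property values below are rounded textbook numbers used only for illustration, not the paper's data.

```python
# Property values are rounded textbook numbers, used only for illustration.
H_FG_CH4 = 511e3       # J/kg, latent heat of vaporization of methane near 1 atm
CP_CH4_GAS = 2.2e3     # J/(kg K), specific heat of gaseous methane
H_FG_STEAM = 2257e3    # J/kg, latent heat of condensation of steam at 100 C

def evaporator_duty(m_dot_ch4, superheat_K):
    """Heat rate (W) to vaporize and superheat a saturated methane stream."""
    return m_dot_ch4 * (H_FG_CH4 + CP_CH4_GAS * superheat_K)

def steam_demand(duty_W):
    """Condensing-steam flow rate (kg/s) needed to supply that duty."""
    return duty_W / H_FG_STEAM

q = evaporator_duty(m_dot_ch4=1.0, superheat_K=20.0)   # 1 kg/s of methane
m_steam = steam_demand(q)
```

    Balances of this kind bound the steam-side flow before any CFD is run.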

  8. Validating experimental and theoretical Langmuir probe analyses

    Science.gov (United States)

    Pilling, L. S.; Carnegie, D. A.

    2007-08-01

    Analysis of Langmuir probe characteristics contains a paradox in that it is unknown a priori which theory is applicable before it is applied. Often theories are assumed to be correct when certain criteria are met although they may not validate the approach used. We have analysed the Langmuir probe data from cylindrical double and single probes acquired from a dc discharge plasma over a wide variety of conditions. This discharge contains a dual-temperature distribution and hence fitting a theoretically generated curve is impractical. To determine the densities, an examination of the current theories was necessary. For the conditions where the probe radius is the same order of magnitude as the Debye length, the gradient expected for orbital-motion limited (OML) is approximately the same as the radial-motion gradients. An analysis of the 'gradients' from the radial-motion theory was able to resolve the differences from the OML gradient value of two. The method was also able to determine whether radial or OML theories applied without knowledge of the electron temperature, or separation of the ion and electron contributions. Only the value of the space potential is necessary to determine the applicable theory.
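    In OML theory the ion current scales as I ∝ V^(1/2), equivalently V ∝ I², which is one way to read the "gradient value of two" quoted above (the paper's exact gradient definition is not reproduced here). A minimal numerical check on synthetic data:

```python
import numpy as np

# Synthetic ion-saturation data following the OML scaling I ~ V**0.5
# (prefactor and bias range are arbitrary illustration values).
bias = np.linspace(5.0, 60.0, 40)        # probe bias magnitude (V)
current = 2.3e-6 * np.sqrt(bias)         # ion current (A)

slope = np.polyfit(np.log(bias), np.log(current), 1)[0]   # log-log gradient
oml_gradient = 1.0 / slope               # V ~ I**n with n ~ 2 under OML
```

    On real data, a fitted exponent well away from the OML value signals that a radial-motion treatment is needed instead.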

  9. Theoretical behaviorism meets embodied cognition : Two theoretical analyses of behavior

    NARCIS (Netherlands)

    Keijzer, F.A.

    2005-01-01

    This paper aims to do three things: First, to provide a review of John Staddon's book Adaptive dynamics: The theoretical analysis of behavior. Second, to compare Staddon's behaviorist view with current ideas on embodied cognition. Third, to use this comparison to explicate some outlines for a

  10. Using meta-analytic path analysis to test theoretical predictions in health behavior: An illustration based on meta-analyses of the theory of planned behavior.

    Science.gov (United States)

    Hagger, Martin S; Chan, Derwin K C; Protogerou, Cleo; Chatzisarantis, Nikos L D

    2016-08-01

    Synthesizing research on social cognitive theories applied to health behavior is an important step in the development of an evidence base of psychological factors as targets for effective behavioral interventions. However, few meta-analyses of research on social cognitive theories in health contexts have conducted simultaneous tests of theoretically-stipulated pattern effects using path analysis. We argue that conducting path analyses of meta-analytic effects among constructs from social cognitive theories is important to test nomological validity, account for mediation effects, and evaluate unique effects of theory constructs independent of past behavior. We illustrate our points by conducting new analyses of two meta-analyses of a popular theory applied to health behaviors, the theory of planned behavior. We conducted meta-analytic path analyses of the theory in two behavioral contexts (alcohol and dietary behaviors) using data from the primary studies included in the original meta-analyses augmented to include intercorrelations among constructs and relations with past behavior missing from the original analysis. Findings supported the nomological validity of the theory and its hypotheses for both behaviors, confirmed important model processes through mediation analysis, demonstrated the attenuating effect of past behavior on theory relations, and provided estimates of the unique effects of theory constructs independent of past behavior. Our analysis illustrates the importance of conducting a simultaneous test of theory-stipulated effects in meta-analyses of social cognitive theories applied to health behavior. We recommend researchers adopt this analytic procedure when synthesizing evidence across primary tests of social cognitive theories in health.
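    The meta-analytic path-analysis step can be sketched with the standard identity that standardized regression weights follow from a pooled correlation matrix. The correlations below are invented placeholders, not the values estimated by the authors.

```python
import numpy as np

# Pooled correlations among TPB constructs -- illustrative placeholder values.
vars_ = ["attitude", "norm", "pbc", "intention", "behavior"]
R = np.array([
    [1.00, 0.40, 0.35, 0.55, 0.30],
    [0.40, 1.00, 0.30, 0.45, 0.25],
    [0.35, 0.30, 1.00, 0.50, 0.35],
    [0.55, 0.45, 0.50, 1.00, 0.45],
    [0.30, 0.25, 0.35, 0.45, 1.00],
])

def path_coefficients(R, predictors, outcome):
    """Standardized weights implied by a correlation matrix: solve Rxx b = rxy,
    the OLS-on-correlations identity used in meta-analytic path models."""
    idx = [vars_.index(p) for p in predictors]
    y = vars_.index(outcome)
    return np.linalg.solve(R[np.ix_(idx, idx)], R[idx, y])

# TPB structure: attitude/norm/PBC -> intention; intention + PBC -> behavior.
beta_intention = path_coefficients(R, ["attitude", "norm", "pbc"], "intention")
beta_behavior = path_coefficients(R, ["intention", "pbc"], "behavior")
```

    Fitting all paths to one pooled matrix, rather than interpreting bivariate correlations one at a time, is the simultaneous test the abstract argues for; adding a past-behavior row to the matrix extends the model the same way.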

  11. Towards a theoretical framework for analysing organisational processes

    DEFF Research Database (Denmark)

    Rocha, Robson Silva

    2003-01-01

    In this paper, I discuss how a theoretical framework can be built to analyse social processes of transformation, making the link between macro and micro processes, in which this dichotomy can be overcome. The aim of this theoretical framework is to account for the transformation in societal...... characteristics and changes in actors' strategies at the micro level, in a way that links macro changes and micro processes - the cognitive structures of the individual and the social structures of society. In order to build this framework, I draw on the figurational sociology of Norbert Elias, the praxeology of...

  12. Pathway-based analyses.

    Science.gov (United States)

    Kent, Jack W

    2016-02-03

    New technologies for acquisition of genomic data, while offering unprecedented opportunities for genetic discovery, also impose severe burdens of interpretation and penalties for multiple testing. The Pathway-based Analyses Group of the Genetic Analysis Workshop 19 (GAW19) sought reduction of multiple-testing burden through various approaches to aggregation of high-dimensional data in pathways informed by prior biological knowledge. Experimental methods tested included the use of "synthetic pathways" (random sets of genes) to estimate power and false-positive error rate of methods applied to simulated data; data reduction via independent components analysis, single-nucleotide polymorphism (SNP)-SNP interaction, and use of gene sets to estimate genetic similarity; and general assessment of the efficacy of prior biological knowledge to reduce the dimensionality of complex genomic data. The work of this group explored several promising approaches to managing high-dimensional data, with the caveat that these methods are necessarily constrained by the quality of external bioinformatic annotation.

  13. Theoretical Analyses of Superconductivity in Iron Based ...

    African Journals Online (AJOL)

    expulsion of magnetic field from the interior of a given superconducting material for temperatures below the critical ... replacing lanthanum by magnetic rare earth elements such as Ce, Sm, Nd or Pr and the critical temperature could be ... addition to a small anomaly in the dc magnetic susceptibility. Optical conductivity and.

  14. Theoretical Analyses of Superconductivity in Iron Based ...

    African Journals Online (AJOL)

    using double time temperature dependent Green's function formalism and a suitable decoupling approximation technique, we ... phenomenon of zero electric resistivity in mercury was soon followed by the observation of the superconducting state in ... The iron, Fe2+ forms tetrahedron within the layers. This means that, iron-.

  15. Theoretical analyses of the refractive implications of transepithelial PRK ablations.

    Science.gov (United States)

    Arba Mosquera, Samuel; Awwad, Shady T

    2013-07-01

    To analyse the refractive implications of single-step, transepithelial photorefractive keratectomy (TransPRK) ablations. A simulation for quantifying the refractive implications of TransPRK ablations has been developed. The simulation includes a simple modelling of corneal epithelial profiles, epithelial ablation profiles and refractive ablation profiles, and allows the analytical quantification of the refractive implications of TransPRK in terms of wasted tissue, achieved optical zone (OZ) and induced refractive error. Wasted tissue occurs whenever the actual corneal epithelial profile is thinner than the applied epithelial ablation profile, the achieved OZ is reduced whenever the actual corneal epithelial profile is thicker than the applied epithelial ablation profile, and additional refractive errors are induced whenever the actual centre-to-periphery difference in the corneal epithelial profile deviates from the difference in the applied epithelial ablation profile. The refractive implications of TransPRK ablations can be quantified using simple theoretical simulations. These implications can be wasted tissue (∼14 µm, if the corneal epithelial profile is thinner than the ablated one), a reduced OZ (if the corneal epithelial profile is thicker than the ablated one, very severe for low corrections) and additional refractive errors (∼0.66 D, if the centre-to-periphery progression of the corneal epithelial profile deviates from the progression of the ablated one). When TransPRK profiles are applied to normal, not previously treated, non-pathologic corneas, no specific refractive implications associated with the transepithelial profile can be anticipated; TransPRK would provide refractive outcomes equal to those of standard PRK. Adjustments for the planned OZ and, in the case of retreatments, for the target sphere can easily be derived.
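    The wasted-tissue and optical-zone arguments above can be made concrete with two toy radial profiles; the linear profiles and numbers below are invented for illustration and are not the paper's epithelial model.

```python
import numpy as np

# Toy radial profiles in microns vs radius in mm (invented linear profiles,
# not the paper's epithelial model).
r = np.linspace(0.0, 4.0, 401)
epithelium = 55.0 - 2.0 * r       # actual epithelial thickness
epi_ablation = 55.0 - 3.0 * r     # programmed epithelial ablation depth

# Programmed ablation deeper than the epithelium -> stromal tissue wasted;
# shallower -> residual epithelium that shrinks the achieved optical zone.
wasted_depth = np.maximum(epi_ablation - epithelium, 0.0)
residual_epi = np.maximum(epithelium - epi_ablation, 0.0)

# Mismatch in centre-to-periphery progression drives induced refractive error.
progression_mismatch = (epithelium[0] - epithelium[-1]) - (epi_ablation[0] - epi_ablation[-1])
```

    Here the programmed ablation is everywhere shallower than the epithelium, so no tissue is wasted but residual epithelium grows toward the periphery, shrinking the achieved OZ, and the 4 µm progression mismatch would surface as a refractive offset.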

  16. A theoretical model for analysing gender bias in medicine

    Directory of Open Access Journals (Sweden)

    Johansson Eva E

    2009-08-01

    Full Text Available Abstract During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change through facts only. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.

  17. A theoretical model for analysing gender bias in medicine.

    Science.gov (United States)

    Risberg, Gunilla; Johansson, Eva E; Hamberg, Katarina

    2009-08-03

    During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change through facts only. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.

  18. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)
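    A minimal sketch of the "nuisance" treatment described above: the theoretical bias may take any value in a fixed range, and the quoted p value is the most conservative one over that range. The Gaussian test statistic and the bias range are illustrative assumptions, not the paper's full construction.

```python
import math

def p_value_stat_only(x, mu, sigma):
    """Two-sided Gaussian p value with statistical uncertainty only."""
    return math.erfc(abs(x - mu) / sigma / math.sqrt(2.0))

def p_value_nuisance_bias(x, mu, sigma, delta):
    """Bias treated as a nuisance parameter confined to [-delta, +delta]:
    report the most conservative p value over the allowed bias range."""
    residual = max(abs(x - mu) - delta, 0.0)   # bias absorbs part of the discrepancy
    return math.erfc(residual / sigma / math.sqrt(2.0))

p_stat = p_value_stat_only(3.0, 0.0, 1.0)            # the usual 3-sigma p value
p_bias = p_value_nuisance_bias(3.0, 0.0, 1.0, 1.0)   # weaker: effectively 2-sigma
```

    The arbitrariness the authors highlight is visible here: the conclusion depends directly on the chosen delta, which motivates their adaptive choice of range.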

  19. Theoretical modeling and experimental analyses of laminated wood composite poles

    Science.gov (United States)

    Cheng Piao; Todd F. Shupe; Vijaya Gopu; Chung Y. Hse

    2005-01-01

    Wood laminated composite poles consist of trapezoid-shaped wood strips bonded with synthetic resin. The thick-walled hollow poles had adequate strength and stiffness properties and were a promising substitute for solid wood poles. It was necessary to develop theoretical models to facilitate the manufacture and future installation and maintenance of this novel...

  20. Historical Consciousness in Youth. Theoretical and Exemplary Empirical Analyses

    Directory of Open Access Journals (Sweden)

    Carlos Kölbl

    2001-09-01

    justified and to reflect this is strongly developed in the young persons we analysed. Their thinking proves to be specifically "modern" also in other respects. In addition to the description of the historical knowledge in content and the historical interests of the young people, this finding is described in detail. Finally it is discussed to which degree the central finding can be applied against the widespread lamentation of an alleged poor historical consciousness of pupils. URN: urn:nbn:de:0114-fqs010397

  1. High-level radioactive waste disposal type and theoretical analyses

    International Nuclear Information System (INIS)

    Lu Yingfa; Wu Yanchun; Luo Xianqi; Cui Yujun

    2006-01-01

    Study of high-level radioactive waste disposal is necessary for the development of nuclear power; the determination of the nuclear waste repository type is an important safety issue. Based on the high-level radioactive waste disposal type, the relevant research subjects are proposed. The fundamental research characteristics of nuclear waste disposal, for instance the mechanical and hydraulic properties of rock mass, saturated and unsaturated seepage, chemical behaviors, the behavior of special soils, and gas behavior, are then introduced; the relevant coupling equations are suggested, and a one-dimensional result is presented. (authors)

  2. Theoretical and Computational Analyses of Bernoulli Levitation Flows

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Jong Soon; Kim, Gyu Wan; Kim, Jin Hyeon; Kim, Heuy Dong [Andong Nat' l Univ., Andong (Korea, Republic of)

    2013-07-15

    Pneumatic levitation is based upon Bernoulli's principle. However, this method is known to require a large gas flow rate that can lead to an increase in the cost of products. In this case, the gas flow rate should be increased, and the compressible effects of the gas may be of practical importance. In the present study, a computational fluid dynamics method has been used to obtain insights into Bernoulli levitation flows. Three-dimensional compressible Navier-Stokes equations in combination with the SST k-ω turbulence model were solved using a fully implicit finite volume scheme. The gas flow rate, work piece diameter, and clearance gap between the work piece and the circular cylinder were varied to investigate the flow characteristics inside. It is known that there is an optimal clearance gap for the lifting force and that increasing the supply gas flow rate results in a larger lifting force.
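    The attraction underlying Bernoulli levitation can be sketched with an incompressible radial-flow model: gas injected at the centre escapes through the clearance gap, and the high radial velocity lowers the pressure between the faces below ambient. The geometry and flow numbers are invented, and compressibility, the paper's focus, is neglected.

```python
import numpy as np

RHO = 1.2                    # kg/m^3, air density (illustrative)
Q = 2.0e-4                   # m^3/s, supply gas flow rate
R_IN, R_OUT = 2e-3, 20e-3    # m, injection port radius and pad outer radius
H = 2.0e-4                   # m, clearance gap

r = np.linspace(R_IN, R_OUT, 2000)
v = Q / (2.0 * np.pi * r * H)                # radial gap velocity from continuity
p_deficit = 0.5 * RHO * (v**2 - v[-1]**2)    # Bernoulli drop below ambient (p = ambient at rim)
f = p_deficit * 2.0 * np.pi * r              # integrand for the axial force
lift = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))   # N, trapezoid rule
```

    Because v scales as 1/(r·H), shrinking the gap H raises the suction for a fixed flow rate, which is the qualitative origin of the optimal-gap behaviour the abstract mentions.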

  3. Theoretical and Computational Analyses of Bernoulli Levitation Flows

    International Nuclear Information System (INIS)

    Nam, Jong Soon; Kim, Gyu Wan; Kim, Jin Hyeon; Kim, Heuy Dong

    2013-01-01

    Pneumatic levitation is based upon Bernoulli's principle. However, this method is known to require a large gas flow rate that can lead to an increase in the cost of products. In this case, the gas flow rate should be increased, and the compressible effects of the gas may be of practical importance. In the present study, a computational fluid dynamics method has been used to obtain insights into Bernoulli levitation flows. Three-dimensional compressible Navier-Stokes equations in combination with the SST k-ω turbulence model were solved using a fully implicit finite volume scheme. The gas flow rate, work piece diameter, and clearance gap between the work piece and the circular cylinder were varied to investigate the flow characteristics inside. It is known that there is an optimal clearance gap for the lifting force and that increasing the supply gas flow rate results in a larger lifting force.

  4. Using a Theoretical Framework of Institutional Culture to Analyse an Institutional Strategy Document

    Science.gov (United States)

    Jacobs, Anthea Hydi Maxine

    2016-01-01

    This paper builds on a conceptual analysis of institutional culture in higher education. A theoretical framework was proposed to analyse institutional documents of two higher education institutions in the Western Cape, for the period 2002 to 2012 (Jacobs 2012). The elements of this theoretical framework are "shared values and beliefs",…

  5. PC based uranium enrichment analyser

    International Nuclear Information System (INIS)

    Madan, V.K.; Gopalakrishana, K.R.; Bairi, B.R.

    1991-01-01

    It is important to measure the enrichment of unirradiated nuclear fuel elements during production as a quality control measure. An IBM PC based system has recently been tested for enrichment measurements for the Nuclear Fuel Complex (NFC), Hyderabad. As required by NFC, the system is easy to calibrate, and it is simple to switch the system from measuring the enrichment of fuel elements to that of pellets; the data and the results are stored automatically. The system uses an IBM PC plug-in card to acquire data. The card incorporates programmable interval timers (8253-5), and the counter/timer devices are accessed through I/O-mapped I/O. A novel algorithm has been incorporated to make the system more reliable. The application software has been written in BASIC. (author). 9 refs., 1 fig

  6. Analysing Theoretical Frameworks of Moral Education through Lakatos's Philosophy of Science

    Science.gov (United States)

    Han, Hyemin

    2014-01-01

    The structure of studies of moral education is basically interdisciplinary; it includes moral philosophy, psychology, and educational research. This article systematically analyses the structure of studies of moral education from the vantage point of the philosophy of science. Among the various theoretical frameworks in the field of philosophy of…

  7. Analysing Everyday Online Political Talk in China: Theoretical and Methodological Reflections

    NARCIS (Netherlands)

    Wright, Scott; Graham, Todd; Sun, Yu; Yang Wang, Wilfred; Luo, Xiantian; Carson, Andrea

    2016-01-01

    This article explores the theoretical and methodological challenges of collecting and analysing everyday online political talk in China, and our approach to defining and coding such talk. In so doing, the article is designed to encourage further research in this area, taking forward a new agenda for

  8. Experimental and theoretical analyses of package-on-package structure under three-point bending loading

    International Nuclear Information System (INIS)

    Jia Su; Wang Xi-Shu; Ren Huai-Hui

    2012-01-01

    High-density packaging is developing toward miniaturization and integration, which causes many difficulties in design, manufacturing, and reliability testing. Package-on-Package (PoP) is a promising three-dimensional high-density packaging method that integrates a chip scale package (CSP) in the top package and a fine-pitch ball grid array (FBGA) in the bottom package. In this paper, in-situ scanning electron microscopy (SEM) observation is carried out to detect the deformation and damage of the PoP structure under three-point bending loading. The results indicate that cracks occur in the die of the top package and then cause crack deflection and bridging in the die-attach layer. Furthermore, mechanical principles are used to analyse the cracking process of the PoP structure based on the multi-layer laminating hypothesis, and the theoretical analysis results are found to be in good agreement with the experimental results. (condensed matter: structural, mechanical, and thermal properties)

  9. Analysing the differences between theoretical and implemented supply chain strategies in selected organisations

    OpenAIRE

    Danie J. Nel; Johanna A. Badenhorst-Weiss

    2011-01-01

    Organisations can use supply chain strategies to gain a competitive advantage for the supply chain. A competitive advantage can be achieved by means of low cost or by means of differentiation. However, organisations have to implement the correct supply chain strategy. Returns on investment can be compromised if organisations implement an incorrect supply chain strategy. The objective of the article is to analyse the differences between theoretically implied and implemented supply chain strate...

  10. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 3

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Corum, J.M.; Bryson, J.W.

    1975-06-01

    The third in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: the experimental data provide design information directly applicable to nozzles in cylindrical vessels; and the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 3 had a 10 in. OD and the nozzle had a 1.29 in. OD, giving a d0/D0 ratio of 0.129. The OD/thickness ratios for the cylinder and the nozzle were 50 and 7.68, respectively. Thirteen separate loading cases were analyzed. In each, one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for all the loadings were obtained using 158 three-gage strain rosettes located on the inner and outer surfaces. The loading cases were also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  11. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 4

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-06-01

    The last in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models in the series are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: (1) the experimental data provide design information directly applicable to nozzles in cylindrical vessels, and (2) the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 4 had an outside diameter of 10 in., and the nozzle had an outside diameter of 1.29 in., giving a d0/D0 ratio of 0.129. The OD/thickness ratios were 50 and 20.2 for the cylinder and nozzle, respectively. Thirteen separate loading cases were analyzed. For each loading condition one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for each of the 13 loadings were obtained using 157 three-gage strain rosettes located on the inner and outer surfaces. Each of the 13 loading cases was also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  12. Analysing Buyers' and Sellers' Strategic Interactions in Marketplaces: An Evolutionary Game Theoretic Approach

    Science.gov (United States)

    Vytelingum, Perukrishnen; Cliff, Dave; Jennings, Nicholas R.

    We develop a new model to analyse the strategic behaviour of buyers and sellers in market mechanisms. In particular, we wish to understand how the different strategies they adopt affect their economic efficiency in the market and to understand the impact of these choices on the overall efficiency of the marketplace. To this end, we adopt a two-population evolutionary game theoretic approach, where we consider how the behaviours of both buyers and sellers evolve in marketplaces. In so doing, we address the shortcomings of the previous state-of-the-art analytical model that assumes that buyers and sellers have to adopt the same mixed strategy in the market. Finally, we apply our model in one of the most common market mechanisms, the Continuous Double Auction, and demonstrate how it allows us to provide new insights into the strategic interactions of such trading agents.
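
The two-population evolution described above can be sketched with standard replicator dynamics, one population of buyers and one of sellers updated independently. The payoff matrices and the two strategies below are illustrative assumptions, not values taken from the paper.

```python
# Two-population replicator dynamics: a minimal sketch of buyer/seller
# co-evolution in a market game.  The payoff matrices and strategies are
# illustrative assumptions, not taken from the paper.

def replicator_step(x, y, A, B, dt=0.01):
    """Advance the buyer strategy mix x and seller mix y by one Euler step.

    A[i][j]: payoff to a buyer playing strategy i against seller strategy j.
    B[j][i]: payoff to a seller playing strategy j against buyer strategy i.
    """
    # Expected payoff of each pure strategy against the opposing population.
    fx = [sum(A[i][j] * y[j] for j in range(len(y))) for i in range(len(x))]
    fy = [sum(B[j][i] * x[i] for i in range(len(x))) for j in range(len(y))]
    avg_x = sum(xi * fi for xi, fi in zip(x, fx))
    avg_y = sum(yj * fj for yj, fj in zip(y, fy))
    # Strategies that beat the population average grow in share.
    x = [xi + dt * xi * (fi - avg_x) for xi, fi in zip(x, fx)]
    y = [yj + dt * yj * (fj - avg_y) for yj, fj in zip(y, fy)]
    sx, sy = sum(x), sum(y)          # renormalise against Euler drift
    return [xi / sx for xi in x], [yj / sy for yj in y]

# Illustrative game in which strategy 0 strictly dominates for both sides,
# so both populations should converge to it.
A = [[3.0, 1.0], [2.0, 0.0]]
B = [[3.0, 1.0], [2.0, 0.0]]
x, y = [0.5, 0.5], [0.5, 0.5]
for _ in range(2000):
    x, y = replicator_step(x, y, A, B)
# x[0] and y[0] both approach 1: the dominant strategy takes over.
```

The point of the two-population setup is that the buyer mix x and the seller mix y evolve separately, unlike single-population analyses that force both sides onto the same mixed strategy.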

  13. IASI's sensitivity to near-surface carbon monoxide (CO): Theoretical analyses and retrievals on test cases

    Science.gov (United States)

    Bauduin, Sophie; Clarisse, Lieven; Theunissen, Michael; George, Maya; Hurtmans, Daniel; Clerbaux, Cathy; Coheur, Pierre-François

    2017-03-01

    Separating concentrations of carbon monoxide (CO) in the boundary layer from the rest of the atmosphere with nadir satellite measurements is of particular importance to differentiate emission from transport. Although thermal infrared (TIR) satellite sounders are considered to have limited sensitivity to the composition of the near-surface atmosphere, previous studies show that they can provide information on CO close to the ground in case of high thermal contrast. In this work we investigate the capability of IASI (Infrared Atmospheric Sounding Interferometer) to retrieve near-surface CO concentrations, and we quantitatively assess the influence of thermal contrast on such retrievals. We present a 3-part analysis, which relies on both theoretical forward simulations and retrievals on real data, performed for a large range of negative and positive thermal contrast situations. First, we derive theoretically the IASI detection threshold of CO enhancement in the boundary layer, and we assess its dependence on thermal contrast. Then, using the optimal estimation formalism, we quantify the role of thermal contrast on the error budget and information content of near-surface CO retrievals. We demonstrate that, contrary to what is usually accepted, large negative thermal contrast values (ground cooler than air) lead to a better decorrelation between CO concentrations in the low and the high troposphere than large positive thermal contrast (ground warmer than the air). In the last part of the paper we use Mexico City and Barrow as test cases to contrast our theoretical predictions with real retrievals, and to assess the accuracy of IASI surface CO retrievals through comparisons to ground-based in-situ measurements.

  14. Energy and exergy analyses of Photovoltaic/Thermal flat transpired collectors: Experimental and theoretical study

    International Nuclear Information System (INIS)

    Gholampour, Maysam; Ameri, Mehran

    2016-01-01

    Highlights: • A Photovoltaic/Thermal flat transpired collector was theoretically and experimentally studied. • Performance of PV/Thermal flat transpired plate was evaluated using equivalent thermal, first, and second law efficiencies. • According to the actual exergy gain, a critical radiation level was defined and its effect was investigated. • As an appropriate tool, equivalent thermal efficiency was used to find optimum suction velocity and PV coverage percent. - Abstract: PV/Thermal flat transpired plate is a kind of air-based hybrid Photovoltaic/Thermal (PV/T) system concurrently producing both thermal and electrical energy. In order to develop a predictive model, validate, and investigate the PV/Thermal flat transpired plate capabilities, a prototype was fabricated and tested under outdoor conditions at Shahid Bahonar University of Kerman in Kerman, Iran. In order to develop a mathematical model, correlations for Nusselt numbers for PV panel and transpired plate were derived using CFD technique. Good agreement was obtained between measured and simulated values, with the maximum relative root mean square percent deviation (RMSE) being 9.13% and minimum correlation coefficient (R-squared) 0.92. Based on the critical radiation level defined in terms of the actual exergy gain, it was found that with proper fan and MPPT devices, there is no concern about the critical radiation level. To provide a guideline for designers, using equivalent thermal efficiency as an appropriate tool, optimum values for suction velocity and PV coverage percent under different conditions were obtained.

  15. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic prog...

  16. Theoretical Semi-Empirical AM1 studies of Schiff Bases

    International Nuclear Information System (INIS)

    Arora, K.; Burman, K.

    2005-01-01

    The present communication reports the theoretical semi-empirical studies of schiff bases of 2-amino pyridine along with their comparison with their parent compounds. Theoretical studies reveal that it is the azomethine group, in the schiff bases under study, that acts as site for coordination to metals as it is reported by many coordination chemists. (author)

  17. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic...... programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically...

  18. Developing a theoretical framework for complex community-based interventions.

    Science.gov (United States)

    Angeles, Ricardo N; Dolovich, Lisa; Kaczorowski, Janusz; Thabane, Lehana

    2014-01-01

    Applying existing theories to research, in the form of a theoretical framework, is necessary to advance knowledge from what is already known toward the next steps to be taken. This article proposes a guide on how to develop a theoretical framework for complex community-based interventions using the Cardiovascular Health Awareness Program as an example. Developing a theoretical framework starts with identifying the intervention's essential elements. Subsequent steps include the following: (a) identifying and defining the different variables (independent, dependent, mediating/intervening, moderating, and control); (b) postulating mechanisms how the independent variables will lead to the dependent variables; (c) identifying existing theoretical models supporting the theoretical framework under development; (d) scripting the theoretical framework into a figure or sets of statements as a series of hypotheses, if/then logic statements, or a visual model; (e) content and face validation of the theoretical framework; and (f) revising the theoretical framework. In our example, we combined the "diffusion of innovation theory" and the "health belief model" to develop our framework. Using the Cardiovascular Health Awareness Program as the model, we demonstrated a stepwise process of developing a theoretical framework. The challenges encountered are described, and an overview of the strategies employed to overcome these challenges is presented.

  19. A Critique of the Meta-theoretical Explanations and Analyses of the ...

    African Journals Online (AJOL)

    conditions for the Stimulation and Attraction of Foreign Direct Investments. ... these explanations and analyses are able to sufficiently account for, and capture the critical forces, processes and factors that tend to shape the movement of capital globally.

  20. Analysing organic transistors based on interface approximation

    International Nuclear Information System (INIS)

    Akiyama, Yuto; Mori, Takehiko

    2014-01-01

    Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region

  1. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 2

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-10-01

    Model 2 in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. Both the cylinder and the nozzle of model 2 had outside diameters of 10 in., giving a d0/D0 ratio of 1.0, and both had outside diameter/thickness ratios of 100. Sixteen separate loading cases in which one end of the cylinder was rigidly held were analyzed. An internal pressure loading, three mutually perpendicular force components, and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. In addition to these 13 loadings, 3 additional loads were applied to the nozzle (in-plane bending moment, out-of-plane bending moment, and axial force) with the free end of the cylinder restrained. The experimental stress distributions for each of the 16 loadings were obtained using 152 three-gage strain rosettes located on the inner and outer surfaces. All the 16 loading cases were also analyzed theoretically using a finite-element shell analysis. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good general agreement, and it is felt that the analysis would be satisfactory for most engineering purposes. (auth)

  2. Self-condensation of n-(N-propyl)butanimine: NMR and mass spectral analyses and investigation by theoretical calculation

    Energy Technology Data Exchange (ETDEWEB)

    Manfrini, Rozangela Magalhaes; Teixeira, Flavia Rodrigues; Pilo-Veloso, Dorila; Alcantara, Antonio Flavio de Carvalho, E-mail: aalcantara@zeus.qui.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Inst. de Ciencias Exatas. Dept. de Quimica; Nelson, David Lee [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Fac. de Farmacia. Dept. de Quimica; Siqueira, Ezequias Pessoa de [Centro de Pesquisas Rene Rachou (FIOCRUZ), Belo Horizonte, MG (Brazil)

    2012-07-01

    The stability of N-propylbutanimine (1) was investigated under different experimental conditions. The acid-catalyzed self-condensation that produced the E-enimine (4) and Z-enimine (5) was studied by experimental analyses and theoretical calculations. Since the calculations indicated that 5 had a lower energy than 4, yet 4 was the principal product, the self-condensation of 1 must be kinetically controlled. (author)

  3. Experimental and theoretical analyses of temperature polarization effect in vacuum membrane distillation

    KAUST Repository

    Alsaadi, Ahmad Salem; Francis, Lijo; Amy, Gary L.; Ghaffour, NorEddine

    2014-01-01

    This paper discusses the effect of temperature polarization in Vacuum Membrane Distillation (VMD). The main motivation for using VMD in this work is that this module configuration is much simpler and more suitable for this kind of investigation than the other MD configurations such as Direct Contact Membrane Distillation (DCMD). The coupling between heat and mass transfer mechanisms at the feed-membrane interface is presented from a theoretical point of view. In addition, a new simple graphical method and a mathematical model for determining VMD flux are presented. The two methods used in evaluating the extent of temperature polarization effect on water vapor flux (flux sensitivity factors and temperature polarization coefficient (TPC)) are also analyzed and compared. The effect of integrating a heat recovery system in a large scale module on the TPC coefficient has also been studied and presented in this paper. © 2014 Elsevier B.V.

  4. Theoretical analyses of an injection-locked diode-pumped rubidium vapor laser.

    Science.gov (United States)

    Cai, He; Gao, Chunqing; Liu, Xiaoxu; Wang, Shunyan; Yu, Hang; Rong, Kepeng; An, Guofei; Han, Juhong; Zhang, Wei; Wang, Hongyuan; Wang, You

    2018-04-02

    Diode-pumped alkali lasers (DPALs) have drawn much attention since they were proposed in 2001. The narrow-linewidth DPAL can potentially be applied in the fields of coherent communication, laser radar, and atomic spectroscopy. In this study, we propose a novel protocol to narrow the linewidth of one kind of DPAL, the diode-pumped rubidium vapor laser (DPRVL), by use of an injection-locking technique. A kinetic model is first set up for an injection-locked DPRVL with an end-pumped configuration. The laser tunable duration is also analyzed for a continuous wave (CW) injection-locked DPRVL system. Then, the influences of the pump power, the power of the master laser, and the reflectance of the output coupler on the output performance are theoretically analyzed. The study should be useful for the design of a narrow-linewidth DPAL with relatively high output.

  5. Experimental and theoretical analyses of temperature polarization effect in vacuum membrane distillation

    KAUST Repository

    Alsaadi, Ahmad Salem

    2014-08-13

    This paper discusses the effect of temperature polarization in Vacuum Membrane Distillation (VMD). The main motivation for using VMD in this work is that this module configuration is much simpler and more suitable for this kind of investigation than the other MD configurations such as Direct Contact Membrane Distillation (DCMD). The coupling between heat and mass transfer mechanisms at the feed-membrane interface is presented from a theoretical point of view. In addition, a new simple graphical method and a mathematical model for determining VMD flux are presented. The two methods used in evaluating the extent of temperature polarization effect on water vapor flux (flux sensitivity factors and temperature polarization coefficient (TPC)) are also analyzed and compared. The effect of integrating a heat recovery system in a large scale module on the TPC coefficient has also been studied and presented in this paper. © 2014 Elsevier B.V.

  6. Auction-theoretic analyses of the first offshore wind energy auction in Germany

    Science.gov (United States)

    Kreiss, J.; Ehrhart, K.-M.; Hanke, A.-K.

    2017-11-01

    The first offshore wind energy auction in Germany led to a striking result. The average award price was 0.44 ct/kWh and, even more interestingly, 3 out of 4 awarded projects had a strike price of 0.0 ct/kWh. That implies that those projects will receive only the actual wholesale market price for electricity as revenue. Although there has been a strong decline in the costs of offshore wind projects, such a result is still surprising. We analyzed this result auction-theoretically and showed how the auction design and the market environment can explain part of the outcome. Another part of the explanation, however, is the high risk that the awarded bidders take regarding the future development of both the project costs and the wholesale market price.

  7. Theoretical and practical bases of transfer pricing formation at the microlevel in terms of national economy

    OpenAIRE

    Oksana Desyatniuk; Olga Cherevko

    2015-01-01

    The theoretical and methodological bases of transfer pricing formation at microlevel are studied. The factors acting upon transfer pricing are analysed and the algorithm to form transfer price at an enterprise is suggested. The model example to choose the method of transfer pricing and calculate the profitability interval meeting modern legal requirements is considered.

  8. Theoretical study for a digital transfer function analyser; Etude theorique pour un transferometre digital

    Energy Technology Data Exchange (ETDEWEB)

    Freycenon, J [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1964-07-01

    This study deals with the harmonic analysis of the instantaneous counting rate of a pulse train, a problem that arises when a fission chamber is used for reactivity-to-power transfer function measurements by oscillation methods in reactors. The systematic errors due to the sampling process are computed: the integration carried out when sampling the signal modifies the formulae of the Nyquist theorem on spectrum folding. The statistical errors due to noise are analysed, and it is shown that the bandwidth of the spectral window applied to the noise frequency spectrum is equal to the inverse of the duration of the experiment. A dead time of 25 per cent of the sampling time does not appreciably increase the bandwidth. A new method is then proposed that yields close approximations to the Fourier analysis during the experiment. The systematic errors arising from this measuring process are determined, and it is shown that the bandwidth of the corresponding spectral window is still given by the inverse of the duration of the experiment. (author)
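
The way bin integration modifies the spectrum-folding formulae can be sketched numerically: integrating the counting rate over each sampling interval, as a digital scaler does, attenuates the recovered harmonic by a sinc factor. The signal model and all parameter values below are illustrative assumptions, not values from the report.

```python
import math, cmath

# Sketch: a sinusoidally modulated counting rate is sampled by *integrating*
# over bins of width delta.  The Fourier amplitude recovered from the binned
# samples is attenuated by exactly sinc(pi*f*delta).

def binned_fourier_amplitude(f, n0, a, delta, nbins):
    """Estimate the modulation amplitude at frequency f from counts
    integrated over bins of width delta, for n(t) = n0*(1 + a*cos(2*pi*f*t))."""
    est = 0j
    for k in range(nbins):
        t0, t1 = k * delta, (k + 1) * delta
        # Exact number of counts in the bin (integral of the rate).
        counts = n0 * (t1 - t0) + n0 * a / (2 * math.pi * f) * (
            math.sin(2 * math.pi * f * t1) - math.sin(2 * math.pi * f * t0))
        tc = 0.5 * (t0 + t1)
        est += (counts / delta) * cmath.exp(-2j * math.pi * f * tc)
    return 2.0 * abs(est) / nbins

f, n0, a = 1.0, 1000.0, 0.2      # oscillation frequency, mean rate, modulation
delta = 0.25                     # integration (sampling) time: 1/4 period
nbins = 400                      # 100 full periods of measurement
amp = binned_fourier_amplitude(f, n0, a, delta, nbins)
attenuation = amp / (n0 * a)
# Compare with the predicted sinc(pi*f*delta) attenuation factor.
expected = math.sin(math.pi * f * delta) / (math.pi * f * delta)
```

With a quarter-period integration time the recovered amplitude is reduced to about 90% of its true value, which is the kind of correction that must be folded into the classical sampling formulae.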

  9. Theoretical analyses of a 1.617-μm laser with a MOPA configuration

    Science.gov (United States)

    Cai, He; Han, Juhong; Wang, You; Rong, Kepeng; Yu, Hang; Wang, Shunyan; An, Guofei; Zhang, Wei; Wu, Peng; Yu, Qiang; Wang, Hongyuan

    2018-01-01

    In recent years, lasers around 1.6 μm have attracted much attention since their wavelengths fit the atmospheric transmission window and can be used for applications in a range of fields including laser radar, gas sensing, and free-space communications. As one of the lasing wavelengths of an Er:YAG medium is located in the 1.6 μm region, such lasers have been gaining more and more extensive applications in the near infrared. Until now, few reports have addressed MOPA (Master Oscillator Power Amplifier) operation of a 1.617 μm Er:YAG laser, because the effect of upconversion becomes greater when a higher doping concentration is adopted. In this study, we theoretically analyze the amplification features of a 1.617 μm Er:YAG seed laser in a multiple-stage MOPA configuration. In the simulation, a kinetic model is established to investigate how the doping concentration, crystal length, and pump power affect the amplification efficiency of the seed laser. The results should be helpful for constructing a feasible 1.617 μm laser system.

  10. Theoretical bases analysis of scientific prediction on marketing principles

    OpenAIRE

    A.S. Rosohata

    2012-01-01

    The article presents an overview of the categorical apparatus of scientific prediction and the theoretical foundations of scientific forecasting, which are an integral part of effective management of economic activities. Approaches to prediction taken by scientists in different fields of social science are examined, and modifications of the categories of scientific prediction based on marketing principles are proposed.

  11. Theoretical bases on thermal stability of layered metallic systems

    International Nuclear Information System (INIS)

    Kadyrzhanov, K.K.; Rusakov, V.S.; Turkebaev, T.Eh.; Zhankadamova, A.M.; Ensebaeva, M.Z.

    2003-01-01

    The paper develops the theoretical bases for the thermal stabilization of layered metallic systems. The theory is based on a stabilization mechanism involving the formation of an intermediate two-phase field. The parameters of the computational model are the mutual diffusion coefficients and the sizes of the inclusions of the phases generated in the two-phase fields. As an example, the dependence of the stabilization time of a beryllium-iron (Be (1.1 μm)-Fe (5.5 μm)) layered system on the iron and beryllium diffusion coefficients and on the inclusion sizes is shown. A conclusion is drawn about the possible change of mechanisms at the transition from a microscopic description to the level of nanocrystal physics.

  12. Theoretical Investigation of Bismuth-Based Semiconductors for Photocatalytic Applications

    KAUST Repository

    Laradhi, Shaikhah

    2017-11-01

    Converting solar energy to clean fuel has gained remarkable attention as an emerging renewable energy resource, but optimum efficiency in photocatalytic applications has not yet been reached. One of the dominant factors is the design of efficient photocatalytic semiconductors. This research presents a theoretical investigation of the optoelectronic properties of bismuth-based metal oxide and oxysulfide semiconductors using a highly accurate first-principles quantum method based on density functional theory along with the range-separated hybrid HSE06 exchange-correlation functional. First, bismuth titanate compounds including Bi12TiO20, Bi4Ti3O12, and Bi2Ti2O7 were studied in a combined experimental and theoretical approach to prove their photocatalytic activity under UV light. These compounds have unique bismuth-layered structures, tunable electronic properties, high dielectric constants and low electron and hole effective masses in one crystallographic direction, allowing for good charge separation and carrier diffusion properties. The accuracy of the investigation was determined by the good agreement between experimental and theoretical values. Next, BiVO4, with the highest efficiency for oxygen evolution, was investigated. A discrepancy between the experimental and theoretical bandgap was reported, which inspired a systematic study of all intrinsic defects of the material and their effect on the optical and transport properties. A candidate defective structure was proposed for an efficient photocatalytic performance. To overcome the carrier transport limitation, a mild hydrogen treatment was also introduced. Carrier lifetime was enhanced due to a significant reduction of trap-assisted recombination, either via passivation of deep trap states or reduction of trap state density. Finally, an accurate theoretical approach to design a new family of semiconductors with enhanced optoelectronic properties for water splitting was proposed.
We simulated the solid solutions Bi1−xRExCuOS (RE = Y, La

  13. Transparency in Transcribing: Making Visible Theoretical Bases Impacting Knowledge Construction from Open-Ended Interview Records

    Directory of Open Access Journals (Sweden)

    Audra Skukauskaite

    2012-01-01

    This article presents a reflexive analysis of two transcripts of an open-ended interview and argues for transparency in transcribing processes and outcomes. By analyzing ways in which a researcher's theories become consequential in producing and using transcripts of an open-ended interview, this paper makes visible the importance of examining and presenting the theoretical bases of transcribing decisions. While scholars across disciplines have argued that transcribing is a theoretically laden process (GREEN, FRANQUIZ & DIXON, 1997; KVALE & BRINKMAN, 2009), few have engaged in reflexive analyses of the data history to demonstrate the consequences particular theoretical and methodological approaches pose in producing knowledge claims and inciting dialogues across traditions. The article demonstrates how theory-method-claim relationships in transcribing influence research transparency and warrantability. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1201146

  14. Unsupervised active learning based on hierarchical graph-theoretic clustering.

    Science.gov (United States)

    Hu, Weiming; Hu, Wei; Xie, Nianhua; Maybank, Steve

    2009-10-01

    Most existing active learning approaches are supervised. Supervised active learning has the following problems: inefficiency in dealing with the semantic gap between the distribution of samples in the feature space and their labels, lack of ability in selecting new samples that belong to new categories that have not yet appeared in the training samples, and lack of adaptability to changes in the semantic interpretation of sample categories. To tackle these problems, we propose an unsupervised active learning framework based on hierarchical graph-theoretic clustering. In the framework, two promising graph-theoretic clustering algorithms, namely, dominant-set clustering and spectral clustering, are combined in a hierarchical fashion. Our framework has some advantages, such as ease of implementation, flexibility in architecture, and adaptability to changes in the labeling. Evaluations on data sets for network intrusion detection, image classification, and video classification have demonstrated that our active learning framework can effectively reduce the workload of manual classification while maintaining a high accuracy of automatic classification. It is shown that, overall, our framework outperforms the support-vector-machine-based supervised active learning, particularly in terms of dealing much more efficiently with new samples whose categories have not yet appeared in the training samples.

  15. In service monitoring based on fatigue analyses, possibilities and limitations

    International Nuclear Information System (INIS)

    Dittmar, S.; Binder, F.

    2004-01-01

    German LWR reactors are equipped with monitoring systems that are designed to enable a comparison of real transients with load case catalogues and fatigue catalogues for fatigue analyses. The accuracy of this information depends on the accuracy of the measurements, on the consideration of parameters influencing fatigue (medium, component surface, component size, etc.), and on the accuracy of the load analyses. This contribution attempts a critical evaluation, also in view of the fact that real fatigue damage is often impossible to quantify on the basis of fatigue analyses at a later stage. The effects of considering or neglecting various influencing factors are discussed, as well as the consequences of the scatter of the material characteristics on which the analyses are based. Possible measures to be taken in operational monitoring are derived. (orig.)

  16. A bromine-based dichroic X-ray polarization analyser

    CERN Document Server

    Collins, S P; Brown, S D; Thompson, P

    2001-01-01

    We have demonstrated the advantages offered by dichroic X-ray polarization filters for linear polarization analysis, and describe such a device, based on a dibromoalkane/urea inclusion compound. The polarizer has been successfully tested by analysing the polarization of magnetic diffraction from holmium.

  17. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.

    Science.gov (United States)

    Johnson, Shane D; Groff, Elizabeth R

    2014-07-01

    The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research and discuss a complementary approach that is gaining popularity, agent-based computational modeling, which may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs, though not without issues of its own, may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.
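
A toy example of the kind of ABM the review discusses, a routine-activity sketch in which a crime event requires an offender and a target to meet with no guardian present, might look like the following. The grid size, agent counts, and movement rules are illustrative assumptions, not taken from any of the reviewed studies.

```python
import random

# Minimal agent-based crime model: offenders, targets, and guardians walk
# randomly on a torus grid; a crime is recorded whenever an offender shares
# a cell with a target and no guardian is present (routine activity theory).

def run_abm(size=10, offenders=5, targets=20, guardians=10,
            steps=500, seed=42):
    rng = random.Random(seed)   # fixed seed for a reproducible run

    def place():
        return (rng.randrange(size), rng.randrange(size))

    pos = {'offender': [place() for _ in range(offenders)],
           'target':   [place() for _ in range(targets)],
           'guardian': [place() for _ in range(guardians)]}
    crimes = 0
    for _ in range(steps):
        # Every agent takes one random step (N/S/E/W), wrapping at the edges.
        for group in pos.values():
            for i, (x, y) in enumerate(group):
                dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
                group[i] = ((x + dx) % size, (y + dy) % size)
        guarded = set(pos['guardian'])
        target_cells = set(pos['target'])
        for cell in pos['offender']:
            if cell in target_cells and cell not in guarded:
                crimes += 1
    return crimes

# A falsifiable implication of the theory: adding guardians should
# suppress the crime count.
low_guard = run_abm(guardians=0)
high_guard = run_abm(guardians=60)
```

This is what makes ABMs useful for theory testing: the model encodes a mechanism, and its aggregate output (here, the crime count as guardianship varies) is a prediction that can be checked against the theory's claims.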

  18. Theoretical model and optimization of a novel temperature sensor based on quartz tuning fork resonators

    International Nuclear Information System (INIS)

    Xu Jun; You Bo; Li Xin; Cui Juan

    2007-01-01

    To measure temperatures accurately, a novel temperature sensor based on a quartz tuning fork resonator has been designed. The principle of the quartz tuning fork temperature sensor is that the resonant frequency of the quartz resonator changes with temperature. A tuning fork resonator with a new doubly rotated cut, operating in a flexural vibration mode, has been designed as the temperature sensor. The characteristics of the sensor were evaluated, and the results met the development targets. A theoretical model for temperature sensing has been developed. The sensor structure was analysed by the finite element method (FEM) and optimized, including the tuning fork geometry, the tine electrode pattern and the size of the sensor's elements. The performance curve of output versus measured temperature is given. The results from theoretical analysis and experiments indicate that the sensor's sensitivity can reach 60 ppm/°C over a measured temperature range of 0 to 100 °C.
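
Assuming the linear frequency-temperature characteristic and the ~60 ppm/°C sensitivity reported above, converting a measured resonant frequency to temperature is a one-line inversion. The 32768 Hz reference frequency at 0 °C is an assumed value chosen for illustration only.

```python
# Frequency-to-temperature conversion for a resonant quartz sensor with a
# linear characteristic.  F0 (reference frequency at 0 degC) is an assumed
# illustrative value; SENS is the ~60 ppm/degC sensitivity from the abstract.

F0 = 32768.0    # resonant frequency at the 0 degC reference point (Hz), assumed
SENS = 60e-6    # fractional frequency shift per degC (60 ppm/degC)

def frequency_from_temperature(t):
    """Linear sensor characteristic f(T) = F0 * (1 + SENS*T)."""
    return F0 * (1.0 + SENS * t)

def temperature_from_frequency(f_meas):
    """Invert the characteristic: T = (f/F0 - 1) / SENS."""
    return (f_meas / F0 - 1.0) / SENS

# Round trip at 25 degC: the 60 ppm/degC slope gives a shift of ~49 Hz.
f25 = frequency_from_temperature(25.0)
t_back = temperature_from_frequency(f25)
```

The small fractional shift (60 ppm per degree) is why the resonant approach works well: frequency can be counted to far better than ppm precision, so the 0-100 °C span maps onto an easily resolvable 0.6% frequency change.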

  19. Analyser-based phase contrast image reconstruction using geometrical optics

    International Nuclear Information System (INIS)

    Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A

    2007-01-01

    Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 μm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
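
    The rocking-curve fitting step can be sketched with SciPy's `curve_fit` and a symmetric Pearson type VII profile; the synthetic data, parameter values and noise level below are invented for illustration and are not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def pearson_vii(theta, amp, theta0, hwhm, m):
    # Symmetric Pearson type VII profile; hwhm is the half width at half maximum
    return amp * (1.0 + ((theta - theta0) / hwhm) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** (-m)

# Synthetic rocking curve: analyser angle in arbitrary angular units
theta = np.linspace(-20.0, 20.0, 401)
true_params = (1.0, 0.5, 4.0, 1.6)          # amp, centre, half-width, shape
rng = np.random.default_rng(0)
measured = pearson_vii(theta, *true_params) + rng.normal(0.0, 0.005, theta.size)

popt, _ = curve_fit(pearson_vii, theta, measured, p0=(0.8, 0.0, 3.0, 1.2))
```

    The shape parameter m interpolates between Lorentzian (m = 1) and Gaussian (m → ∞) limits, which is one reason a Pearson VII can outperform either pure form on a measured rocking curve.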

  20. Analyser-based phase contrast image reconstruction using geometrical optics.

    Science.gov (United States)

    Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A

    2007-07-21

    Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 μm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.

  1. THEORETICAL BASES OF DIVERSIFICATION OF PENITENTIARY EDUCATIONAL SYSTEM

    Directory of Open Access Journals (Sweden)

    Нэилэ Каюмовна Щепкина

    2013-08-01

    Full Text Available The article presents the main results of scientific research devoted to the theoretical bases of diversification of the penitentiary educational system in institutions of confinement. The urgency of the research stems from the social importance of convicts' education. The article draws attention to the fact that the problem of diversification of the penitentiary educational system has not yet been considered in pedagogy. It also identifies the main contradictions, tasks and methods of the research. A retrospective analysis of the criminal system in Russia helps to define the existing tendencies in convicts' education and the unsolved problems in this field, and to formulate perspective ideas for modernizing the penitentiary educational system. The article explains the essence of diversification of the penitentiary educational system and presents it as a model, gives a detailed analysis of the model's components and depicts some practical ways of embodying it in institutions of confinement. Moreover, it describes the determinants of diversification of the penitentiary educational system, understood as the factors and conditions of its effective development. DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-20

  2. Organizing the public health-clinical health interface: theoretical bases.

    Science.gov (United States)

    St-Pierre, Michèle; Reinharz, Daniel; Gauthier, Jacques-Bernard

    2006-01-01

    This article addresses the issue of the interface between public health and clinical health within the context of the search for networking approaches geared to a more integrated delivery of health services. The articulation of an operative interface is complicated by the fact that the definition of networking modalities involves complex intra- and interdisciplinary and intra- and interorganizational systems across which a new transversal dynamics of intervention practices and exchanges between service structures must be established. A better understanding of the situation is reached by shedding light on the rationale underlying the organizational methods that form the bases of the interface between these two sectors of activity. The Quebec experience demonstrates that neither the structural-functionalist approach, which emphasizes remodelling establishment structures and functions as determinants of integration, nor the structural-constructivist approach, which prioritizes distinct fields of practice in public health and clinical health, adequately serves the purpose of networking and integration. Consequently, a theoretical reframing is imperative. In this regard, structuration theory, which fosters the simultaneous study of methods of inter-structure coordination and inter-actor cooperation, paves the way for a better understanding of the situation and, in turn, to the emergence of new integration possibilities.

  3. Theoretical bases of individualization of training in wrestling

    Directory of Open Access Journals (Sweden)

    S.V. Latyshev

    2013-04-01

    Full Text Available Theoretical bases of individualization of training in wrestling are developed. They include the structure of the research, the positions of the conception and the system of individualization of training. The system of individualization of training is designed as an aggregate of elements and subsystems which together assist the identification, formation, development and perfection of the wrestler's own style of opposition. In the system of training activity, substantially more attention is paid to the development of special endurance and related qualities. In the system of post-training and post-competition activity, the emphasis shifted towards the search for means of more effective recovery and stimulation of special capacity, for new optimal nutrition rations and food additives, and for new methods of weight reduction for wrestlers. In the system of competition activity, the tactics of conducting bouts changed, foreseeing a more rational and economical expenditure of energy in a single bout and in a competition as a whole.

  4. Model-based Recursive Partitioning for Subgroup Analyses

    OpenAIRE

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-01-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by...

  5. Engineering design and exergy analyses for combustion gas turbine based power generation system

    International Nuclear Information System (INIS)

    Sue, D.-C.; Chuang, C.-C.

    2004-01-01

    This paper presents the engineering design and theoretical exergetic analyses of combustion gas turbine based power generation plants. Exergy analysis is performed based on the first and second laws of thermodynamics for power generation systems. The results show that exergy analyses for a steam cycle system predict the plant efficiency more precisely. The plant efficiency for partial load operation is lower than for full load operation. Increasing the pinch points will decrease the combined cycle plant efficiency. The engineering design is based on inlet air cooling and natural gas preheating for increasing the net power output and efficiency. To evaluate the energy utilization, one combined cycle unit and one cogeneration system, consisting of gas turbine generators, heat recovery steam generators and one steam turbine generator with steam extracted for process use, have been analyzed. The analytical results are used for engineering design and component selection.
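
    The exergy bookkeeping underlying such analyses reduces, for each stream, to the specific flow exergy relative to a dead state. A minimal sketch with made-up property values (not taken from the paper):

```python
def specific_flow_exergy(h, s, h0, s0, t0):
    """Specific flow exergy ex = (h - h0) - T0*(s - s0), in kJ/kg.

    Kinetic and potential terms are neglected; h in kJ/kg, s in kJ/(kg K),
    t0 in K. All property values used below are illustrative.
    """
    return (h - h0) - t0 * (s - s0)

def second_law_efficiency(product_exergy, fuel_exergy):
    # Ratio of recovered to supplied exergy
    return product_exergy / fuel_exergy

# Illustrative superheated-steam state against a 298.15 K dead state
ex_steam = specific_flow_exergy(h=3230.0, s=6.92, h0=104.9, s0=0.367, t0=298.15)
```

    Summing such terms over inlet and outlet streams of each component (compressor, combustor, turbine, HRSG) locates where the exergy destruction, and hence the efficiency loss, occurs.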

  6. Integrating cross-case analyses and process tracing in set-theoretic research: Strategies and parameters of debate

    DEFF Research Database (Denmark)

    Beach, Derek; Rohlfing, Ingo

    2018-01-01

    In recent years, there has been increasing interest in the combination of two methods on the basis of set theory. In our introduction and this special issue, we focus on two variants of cross-case set-theoretic methods – Qualitative Comparative Analysis (QCA) and typological theory – and their combination with process tracing. Our goal is to broaden and deepen set-theoretic empirical research and equip scholars with guidance on how to implement it in multi-method research (MMR). At first glance, set-theoretic cross-case methods and process tracing seem to be highly compatible when causal relationships are conceptualized in terms of set theory. However, multiple issues have not so far been thoroughly addressed. Our paper builds on the emerging MMR literature and seeks to enhance it in four ways. First, we offer a comprehensive and coherent elaboration of the two sequences in which case studies

  7. Quantitative metagenomic analyses based on average genome size normalization

    DEFF Research Database (Denmark)

    Frank, Jeremy Alexander; Sørensen, Søren Johannes

    2011-01-01

    provide not just a census of the community members but direct information on metabolic capabilities and potential interactions among community members. Here we introduce a method for the quantitative characterization and comparison of microbial communities based on the normalization of metagenomic data...... marine sources using both conventional small-subunit (SSU) rRNA gene analyses and our quantitative method to calculate the proportion of genomes in each sample that are capable of a particular metabolic trait. With both environments, to determine what proportion of each community they make up and how......). These analyses demonstrate how genome proportionality compares to SSU rRNA gene relative abundance and how factors such as average genome size and SSU rRNA gene copy number affect sampling probability and therefore both types of community analysis....
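
    The abstract's normalization idea (quantify a trait as the fraction of "genome equivalents" that carry it, using average genome size) can be sketched as follows; the function names and the single-copy-gene assumption are illustrative, not the authors' code:

```python
def genome_equivalents(total_bp_sequenced, average_genome_size_bp):
    # How many genome-length units the metagenome represents
    return total_bp_sequenced / average_genome_size_bp

def trait_proportion(gene_hit_bp, gene_length_bp, total_bp_sequenced, average_genome_size_bp):
    """Estimated fraction of community genomes carrying a single-copy trait gene.

    Gene coverage (hit base pairs / gene length) divided by genome equivalents.
    """
    gene_coverage = gene_hit_bp / gene_length_bp
    return gene_coverage / genome_equivalents(total_bp_sequenced, average_genome_size_bp)

# 1 Gbp metagenome with a 4 Mbp average genome -> 250 genome equivalents;
# a 1 kbp gene hit by 100 kbp of reads has coverage 100 -> 40% of genomes
p = trait_proportion(100_000, 1_000, 1_000_000_000, 4_000_000)
```

    Normalizing by genome equivalents rather than read counts corrects for the sampling bias that large genomes (and multi-copy SSU rRNA genes) otherwise introduce, which is the comparison the abstract draws.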

  8. Process for carrying out analyses based on concurrent reactions

    Energy Technology Data Exchange (ETDEWEB)

    Glover, J S; Shepherd, B P

    1980-01-03

    The invention refers to a process for carrying out analyses based on concurrent reactions. A part of a compound to be analysed is subjected with a standard quantity of this compound in a labelled form to a common reaction with a standard quantity of a reagent, which must be less than the sum of the two parts of the reacting compound. The parts of the marked reaction compound and the labelled final compound resulting from the concurrence are separated in a tube (e.g. by centrifuging) after forced phase change (precipitation, absorption etc.) and the radio-activity of both phases in contact is measured separately. The shielded measuring device developed for this and suitable for centrifuge tubes of known dimensions is also included in the patent claims. The insulin concentration of a defined serum is measured as an example of the applications of the method (Radioimmunoassay).
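
    The competition at the heart of the assay (a reagent deliberately scarcer than the sum of labelled and unlabelled compound) can be sketched with a simple proportional-partition model; this is an illustrative simplification, not the patented procedure:

```python
def bound_label_fraction(labelled, unlabelled, reagent):
    """Fraction of the labelled compound captured when the reagent is limiting.

    Simplifying assumption: the reagent binds labelled and unlabelled
    compound in proportion to their amounts (equal affinity).
    """
    total = labelled + unlabelled
    if reagent >= total:
        raise ValueError("reagent must be less than the sum of both compound parts")
    return reagent / total  # bound labelled / total labelled

# More unlabelled analyte in the sample -> less radioactivity in the bound phase
b_low_analyte = bound_label_fraction(labelled=1.0, unlabelled=1.0, reagent=1.0)   # 0.5
b_high_analyte = bound_label_fraction(labelled=1.0, unlabelled=9.0, reagent=1.0)  # 0.1
```

    Measuring the radioactivity of the bound phase against a standard curve of known analyte amounts then yields the unknown concentration, as in the insulin example cited in the patent.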

  9. Integrating Cross-Case Analyses and Process Tracing in Set-Theoretic Research: Strategies and Parameters of Debate

    Science.gov (United States)

    Beach, Derek; Rohlfing, Ingo

    2018-01-01

    In recent years, there has been increasing interest in the combination of two methods on the basis of set theory. In our introduction and this special issue, we focus on two variants of cross-case set-theoretic methods--"qualitative comparative analysis" (QCA) and typological theory (TT)--and their combination with process tracing (PT).…

  10. Relating system-to-CFD coupled code analyses to theoretical framework of a multi-scale method

    International Nuclear Information System (INIS)

    Cadinu, F.; Kozlowski, T.; Dinh, T.N.

    2007-01-01

    Over past decades, analyses of transient processes and accidents in a nuclear power plant have been performed, to a significant extent and with a great success, by means of so called system codes, e.g. RELAP5, CATHARE, ATHLET codes. These computer codes, based on a multi-fluid model of two-phase flow, provide an effective, one-dimensional description of the coolant thermal-hydraulics in the reactor system. For some components in the system, wherever needed, the effect of multi-dimensional flow is accounted for through approximate models. The latter are derived from scaled experiments conducted for selected accident scenarios. Increasingly, however, we have to deal with newer and ever more complex accident scenarios. In some such cases the system codes fail to serve as simulation vehicle, largely due to their deficient treatment of multi-dimensional flow (in e.g. downcomer, lower plenum). A possible way of improvement is to use the techniques of Computational Fluid Dynamics (CFD). Based on solving Navier-Stokes equations, CFD codes have been developed and used, broadly, to perform analysis of multi-dimensional flow, dominantly in non-nuclear industry and for single-phase flow applications. It is clear that CFD simulations cannot substitute for system codes but just complement them. Given the intrinsic multi-scale nature of this problem, we propose to relate it to the more general field of research on multi-scale simulations. Even though multi-scale methods are developed on a case-by-case basis, the need for a unified framework led to the development of the heterogeneous multi-scale method (HMM)

  11. Unconscious analyses of visual scenes based on feature conjunctions.

    Science.gov (United States)

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

    To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved.

  12. Vocational Teachers and Professionalism - A Model Based on Empirical Analyses

    DEFF Research Database (Denmark)

    Duch, Henriette Skjærbæk; Andreasen, Karen E

    Vocational Teachers and Professionalism - A Model Based on Empirical Analyses Several theorists have developed models to illustrate the processes of adult learning and professional development (e.g. Illeris, Argyris, Engeström; Wahlgren & Aarkorg, Kolb and Wenger). Models can sometimes be criticized...... emphasis on the adult employee, the organization, its surroundings as well as other contextual factors. Our concern is adult vocational teachers attending a pedagogical course and teaching at vocational colleges. The aim of the paper is to discuss different models and develop a model concerning teachers...... at vocational colleges based on empirical data in a specific context, the vocational teacher-training course in Denmark. By offering a basis and concepts for the analysis of practice, such a model is meant to support the development of vocational teachers' professionalism in courses and in organizational contexts...

  13. Theoretical study of the structure and reactivity of lanthanide and actinide based organometallic complexes

    International Nuclear Information System (INIS)

    Barros, N.

    2007-06-01

    In this PhD thesis, lanthanide and actinide based organometallic complexes are studied using quantum chemistry methods. In a first part, the catalytic properties of organo-lanthanide compounds are evaluated by studying two types of reactions: the catalytic hydro-functionalization of olefins and the polymerisation of polar monomers. The reaction mechanisms are theoretically determined and validated, and the influence of possible secondary non productive reactions is envisaged. A second part focuses on uranium-based complexes. Firstly, the electronic structure of uranium metallocenes is analysed. An analogy with the uranyl compounds is proposed. In a second chapter, two isoelectronic complexes of uranium IV are studied. After validating the use of DFT methods for describing the electronic structure and the reactivity of these compounds, it is shown that their reactivity difference can be related to a different nature of chemical bonding in these complexes. (author)

  14. Patient centredness in integrated care: results of a qualitative study based on a systems theoretical framework

    Directory of Open Access Journals (Sweden)

    Daniel Lüdecke

    2014-11-01

    Full Text Available Introduction: Health care providers seek to improve patient-centred care. Due to fragmentation of services, this can only be achieved by establishing integrated care partnerships. The challenge is both to control costs while enhancing the quality of care and to coordinate this process in a setting with many organisations involved. The problem is to establish control mechanisms, which ensure sufficient consideration of patient centredness. Theory and methods: Seventeen qualitative interviews have been conducted in hospitals of metropolitan areas in northern Germany. The documentary method, embedded into a systems theoretical framework, was used to describe and analyse the data and to provide an insight into the specific perception of organisational behaviour in integrated care. Results: The findings suggest that integrated care partnerships rely on networks based on professional autonomy in the context of reliability. The relationships of network partners are heavily based on informality. This correlates with a systems theoretical conception of organisations, which are assumed autonomous in their decision-making. Conclusion and discussion: Networks based on formal contracts may restrict professional autonomy and competition. Contractual bindings that suppress the competitive environment have negative consequences for patient-centred care. Drawbacks remain due to missing self-regulation of the network. To conclude, less regimentation of integrated care partnerships is recommended.

  15. Patient centredness in integrated care: results of a qualitative study based on a systems theoretical framework

    Directory of Open Access Journals (Sweden)

    Daniel Lüdecke

    2014-11-01

    Full Text Available Introduction: Health care providers seek to improve patient-centred care. Due to fragmentation of services, this can only be achieved by establishing integrated care partnerships. The challenge is both to control costs while enhancing the quality of care and to coordinate this process in a setting with many organisations involved. The problem is to establish control mechanisms, which ensure sufficient consideration of patient centredness. Theory and methods: Seventeen qualitative interviews have been conducted in hospitals of metropolitan areas in northern Germany. The documentary method, embedded into a systems theoretical framework, was used to describe and analyse the data and to provide an insight into the specific perception of organisational behaviour in integrated care. Results: The findings suggest that integrated care partnerships rely on networks based on professional autonomy in the context of reliability. The relationships of network partners are heavily based on informality. This correlates with a systems theoretical conception of organisations, which are assumed autonomous in their decision-making. Conclusion and discussion: Networks based on formal contracts may restrict professional autonomy and competition. Contractual bindings that suppress the competitive environment have negative consequences for patient-centred care. Drawbacks remain due to missing self-regulation of the network. To conclude, less regimentation of integrated care partnerships is recommended.

  16. A theoretical global optimization method for vapor-compression refrigeration systems based on entransy theory

    International Nuclear Information System (INIS)

    Xu, Yun-Chao; Chen, Qun

    2013-01-01

    The vapor-compression refrigeration systems have been one of the essential energy conversion systems for humankind, consuming huge amounts of energy nowadays. Many effective methods exist for optimizing the energy efficiency of these systems, but they rely mainly on engineering experience and computer simulations rather than theoretical analysis, owing to the complex and vague physical essence of the processes involved. We attempt to propose a theoretical global optimization method based on in-depth physical analysis of the involved physical processes, i.e. heat transfer analysis for the condenser and evaporator, through introducing the entransy theory, and thermodynamic analysis for the compressor and expansion valve. The integration of heat transfer and thermodynamic analyses forms the overall physical optimization model for the systems, describing the relation between all the unknown parameters and the known conditions, which makes theoretical global optimization possible. With the aid of mathematical conditional-extremum solutions, an optimization equation group and the optimal configuration of all the unknown parameters are analytically obtained. Eventually, via the optimization of a typical vapor-compression refrigeration system under various working conditions to minimize the total heat transfer area of the heat exchangers, the validity and superiority of the newly proposed optimization method are proved. - Highlights: • A global optimization method for vapor-compression systems is proposed. • Integrating heat transfer and thermodynamic analyses forms the optimization model. • A mathematical relation between design parameters and requirements is derived. • Entransy dissipation is introduced into heat transfer analysis. • The validity of the method is proved via optimization of practical cases

  17. PROCESS-BASED LEARNING: TOWARDS THEORETICAL AND LECTURE-BASED COURSEWORK IN STUDIO STYLE

    Directory of Open Access Journals (Sweden)

    Hatem Ezzat Nabih

    2010-07-01

    Full Text Available This article presents a process-based learning approach to design education where theoretical coursework is taught in studio style. Lecture-based coursework is sometimes regarded as lacking in challenge and as broadening the gap between theory and practice. Furthermore, lecture-based curricula tend to be detached from the studio and prevent students from applying their theoretically gained knowledge. Following the belief that student motivation is increased by establishing a higher level of autonomy in the learning process, I argue for a design education that links theory with applied design work within the studio setting. By synthesizing principles of Constructivist Learning and Problem-Based Learning (PBL), students are given greater autonomy by being actively involved in their education. Accordingly, I argue for a studio setting that incorporates theoretical learning in studio style, presenting three design applications that involve students in investigation and experimentation in order to experience the design process for themselves.

  18. Theoretical comparison between solar combisystems based on bikini tanks and tank-in-tank solar combisystems

    DEFF Research Database (Denmark)

    Yazdanshenas, Eshagh; Furbo, Simon; Bales, Chris

    2008-01-01

    Theoretical investigations have shown that solar combisystems based on bikini tanks for low energy houses perform better than solar domestic hot water systems based on mantle tanks. Tank-in-tank solar combisystems are also attractive from a thermal performance point of view. In this paper, theoretical comparisons between solar combisystems based on bikini tanks and tank-in-tank solar combisystems are presented.

  19. Theoretical Investigations of Plasma-Based Accelerators and Other Advanced Accelerator Concepts

    International Nuclear Information System (INIS)

    Shuets, G.

    2004-01-01

    Theoretical investigations of plasma-based accelerators and other advanced accelerator concepts. The focus of the work was on the development of plasma based and structure based accelerating concepts, including laser-plasma, plasma channel, and microwave driven plasma accelerators

  20. Theoretical and methodological bases of the cooperation and the cooperative

    Directory of Open Access Journals (Sweden)

    Claudio Alberto Rivera Rodríguez

    2013-12-01

    Full Text Available The present work aims to address the theoretical and methodological foundations of the rise of the cooperatives. The article studies the logical antecedents of cooperativism and the premises established by the Industrial Revolution for the emergence of the first modern cooperative, “The Pioneers of Rochdale”, the inflection point of cooperativism, and analyzes the contributions of the whole thinking of the time that sustains this process.

  1. Theoretical analysis of noncanonical base pairing interactions in ...

    Indian Academy of Sciences (India)

    PRAKASH KUMAR

    Noncanonical base pairs in RNA have strong structural and functional implications but are currently not considered ..... Full optimizations of the systems were also carried out using ... of the individual bases in the base pair through the equation.

  2. Operational Satellite-based Surface Oil Analyses (Invited)

    Science.gov (United States)

    Streett, D.; Warren, C.

    2010-12-01

    During the Deepwater Horizon spill, NOAA imagery analysts in the Satellite Analysis Branch (SAB) issued more than 300 near-real-time satellite-based oil spill analyses. These analyses were used by the oil spill response community for planning, issuing surface oil trajectories and tasking assets (e.g., oil containment booms, skimmers, overflights). SAB analysts used both Synthetic Aperture Radar (SAR) and high resolution visible/near IR multispectral satellite imagery as well as a variety of ancillary datasets. Satellite imagery used included ENVISAT ASAR (ESA), TerraSAR-X (DLR), Cosmo-Skymed (ASI), ALOS (JAXA), Radarsat (MDA), ENVISAT MERIS (ESA), SPOT (SPOT Image Corp.), Aster (NASA), MODIS (NASA), and AVHRR (NOAA). Ancillary datasets included ocean current information, wind information, location of natural oil seeps and a variety of in situ oil observations. The analyses were available as jpegs, pdfs, shapefiles and through Google, KML files and also available on a variety of websites including Geoplatform and ERMA. From the very first analysis issued just 5 hours after the rig sank through the final analysis issued in August, the complete archive is still publicly available on the NOAA/NESDIS website http://www.ssd.noaa.gov/PS/MPS/deepwater.html SAB personnel also served as the Deepwater Horizon International Disaster Charter Project Manager (at the official request of the USGS). The Project Manager’s primary responsibility was to acquire and oversee the processing and dissemination of satellite data generously donated by numerous private companies and nations in support of the oil spill response including some of the imagery described above. SAB has begun to address a number of goals that will improve our routine oil spill response as well as help assure that we are ready for the next spill of national significance. We hope to (1) secure a steady, abundant and timely stream of suitable satellite imagery even in the absence of large-scale emergencies such as

  3. Theoretic base of Edge Local Mode triggering by vertical displacements

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Z. T. [Southwestern Institute of Physics, Chengdu 610041 (China); College of Physics Science and Technology, Sichuan University, Chengdu 610065 (China); He, Z. X.; Wang, Z. H. [Southwestern Institute of Physics, Chengdu 610041 (China); Wu, N.; Tang, C. J. [College of Physics Science and Technology, Sichuan University, Chengdu 610065 (China)

    2015-05-15

    Vertical instability is studied with R-dependent displacement. For Solovev's configuration, the stability boundary of the vertical instability is calculated. The pressure gradient is a destabilizing factor, which is contrary to Rebhan's result. Equilibrium parallel current density, j{sub //}, at the plasma boundary is a drive of the vertical instability similar to Peeling-ballooning modes; however, the vertical instability cannot be stabilized by the magnetic shear, which tends towards infinity near the separatrix. The induced current observed in the Edge Local Mode (ELM) triggering experiment by vertical modulation is derived. The theory provides a theoretical explanation for the mitigation of type-I ELMs on ASDEX Upgrade. The principle could also be used for ITER.

  4. Optimization of Investment Planning Based on Game-Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Elena Vladimirovna Butsenko

    2018-03-01

    Full Text Available The game-theoretic approach has a vast potential in solving economic problems. On the other hand, the theory of games itself can be enriched by the studies of real problems of decision-making. Hence, this study is aimed at developing and testing a game-theoretic technique to optimize the management of investment planning. This technique enables forecasting the results and managing the processes of investment planning. The proposed method of optimizing the management of investment planning allows choosing the best development strategy of an enterprise. This technique uses the “game with nature” model together with the Wald (maximin) criterion, the maximax criterion and the Hurwitz criterion. The article presents a new algorithm for constructing the proposed econometric method to optimize investment project management. This algorithm combines the methods of matrix games. Furthermore, I show the implementation of this technique in a block diagram. The algorithm includes the formation of initial data and the elements of the payment matrix, as well as the definition of maximin, maximax, compromise and optimal management strategies. The methodology is tested on the example of the passenger transportation enterprise of the Sverdlovsk Railway in Ekaterinburg. The application of the proposed methodology and the corresponding algorithm allowed us to obtain an optimal price strategy for passenger transportation on one traffic direction. This price strategy contributes to an increase in the company's income with minimal risk from the launch of this direction. The obtained results and conclusions show the effectiveness of using the developed methodology for optimizing the management of investment processes in the enterprise. The results of the research can be used as a basis for the development of an appropriate tool and applied by any economic entity in its investment activities.
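
    The decision criteria named above can be sketched for a small "game with nature" payoff matrix (rows are strategies, columns are states of nature); the payoff values are invented for illustration:

```python
def wald(payoff):
    # Maximin (Wald): choose the row whose worst outcome is best
    return max(range(len(payoff)), key=lambda i: min(payoff[i]))

def maximax(payoff):
    # Optimistic: choose the row with the best best-case outcome
    return max(range(len(payoff)), key=lambda i: max(payoff[i]))

def hurwitz(payoff, alpha):
    # Blend optimism (weight alpha) with pessimism (weight 1 - alpha)
    return max(range(len(payoff)),
               key=lambda i: alpha * max(payoff[i]) + (1.0 - alpha) * min(payoff[i]))

# Hypothetical payoffs for three price strategies under three demand states
payoff = [[30, 10, -5],
          [20, 15, 5],
          [50, 0, -20]]
cautious = wald(payoff)          # strategy 1: best worst case
optimistic = maximax(payoff)     # strategy 2: best best case
balanced = hurwitz(payoff, 0.5)  # strategy 2 at alpha = 0.5
```

    Varying alpha in the Hurwitz criterion spans the range between the Wald and maximax recommendations, which is how a compromise strategy of the kind the abstract mentions can be located.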

  5. Theoretical foundations for evidence-based health informatics

    DEFF Research Database (Denmark)

    Scott, P.; Georgiou, A.; Hypponen, H.

    2016-01-01

    such as conditional imputation and a “missing” category in multiple regression analyses. … Methods: A cross-sectional survey in a random sample of Danish fishermen was done in 2015 with application of the Nordic questionnaire… considering relevant confounders were used to look at each single pain site, with missing as an additional outcome. In all analyses, sideline occupations, work position, vessel type, education, and duration at sea were considered as further predictors. Results: The prevalence of pain was high for all… excluding all missings revealed similar results. Multinomial regression models showed that workload was the only consistent predictor for musculoskeletal pain, in particular regarding upper and lower limb pain. Two additional predictors were found for the nine different pain-location models: sideline…

  6. Value-based management: Theoretical base, shareholders' request and the concept

    Directory of Open Access Journals (Sweden)

    Kaličanin Đorđe M.

    2005-01-01

    Full Text Available The pressure of financial markets, a consequence of the shareholder revolution, directly affects the solution to the following dilemma: is the mission of corporations to maximize shareholders' wealth or to satisfy the interests of other stakeholders? The domination of shareholder theory has caused the appearance of the value-based management concept. Value-based management is a relevant concept and process of management in the modern environment. The importance of shareholder value requires the transformation of a traditional enterprise into a value-driven enterprise. This paper addresses the theoretical base, the shareholder revolution and the main characteristics of value-based management.

  7. Conducting Meta-Analyses Based on p Values

    Science.gov (United States)

    van Aert, Robbie C. M.; Wicherts, Jelte M.; van Assen, Marcel A. L. M.

    2016-01-01

    Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform. PMID:27694466
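    The overestimation that p-uniform and p-curve are designed to correct is easy to reproduce. The sketch below is not either method; it is a toy simulation (invented parameters, one-sided z-test) showing that averaging only the "significant" studies inflates the true effect.

    ```python
    # Toy illustration of publication bias: only studies with a significant
    # result are "published", and their naive average overestimates the
    # true effect. All numbers here are invented for demonstration.
    import math
    import random

    random.seed(42)

    TRUE_EFFECT = 0.2               # true standardized mean difference
    N_PER_GROUP = 30
    SE = math.sqrt(2 / N_PER_GROUP) # approximate standard error of d

    observed, published = [], []
    for _ in range(10_000):
        d_obs = random.gauss(TRUE_EFFECT, SE)
        observed.append(d_obs)
        if d_obs / SE > 1.96:       # only one-sided "significant" results survive
            published.append(d_obs)

    naive_all = sum(observed) / len(observed)
    naive_pub = sum(published) / len(published)
    print(f"mean of all studies:       {naive_all:.3f}")  # close to 0.2
    print(f"mean of published studies: {naive_pub:.3f}")  # well above 0.2
    ```

    Methods such as p-uniform exploit the distribution of the significant p-values themselves to undo this selection effect; the simulation only shows why a correction is needed.
    
    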

  8. PC based 8K multichannel analyser for nuclear spectroscopy

    International Nuclear Information System (INIS)

    Jain, S.K.; Gupta, J.D.; Suman Kumari, B.

    1989-01-01

    An IBM PC-based 8K multichannel analyser (MCA) has been developed which incorporates all the features of an advanced system: very high throughput for data acquisition in PHA as well as MCS modes, fast real-time display, extensive display manipulation facilities, various preset controls and concurrent data processing. The compact system hardware consists of a 2-unit wide NIM module and a PC add-on card. Because of the external acquisition hardware, the system, after initial programming by the PC, can acquire data independently, allowing the PC to be switched off. To attain very high throughput, the most desirable feature of an MCA, a dual-port memory architecture has been used. The asymmetric dual-port RAM, housed in the NIM module, offers 24-bit parallel access to the ADC and 8-bit wide access to the PC, which results in a fast real-time histogram display on the monitor. The PC emulation software is menu-driven and user-friendly. It integrates a comprehensive set of commonly required application routines for concurrent data processing. After the transfer of know-how to the Electronics Corporation of India Ltd. (ECIL), this system is being produced at ECIL. (author). 5 refs., 4 figs

  9. Analyser-based x-ray imaging for biomedical research

    International Nuclear Information System (INIS)

    Suortti, Pekka; Keyriläinen, Jani; Thomlinson, William

    2013-01-01

    Analyser-based imaging (ABI) is one of several phase-contrast x-ray imaging techniques being pursued at synchrotron radiation facilities. With advancements in compact source technology, there is a possibility that ABI will become a clinical imaging modality. This paper presents the history of ABI as it has developed from laboratory sources to synchrotron imaging. The fundamental physics of phase-contrast imaging is presented, both in a general sense and specifically for ABI. The technology is dependent on the use of perfect-crystal monochromator optics. The theory of the x-ray optics is developed and presented in a way that will allow optimization of the imaging for specific biomedical systems. The advancement of analytical algorithms to produce separate images of the sample absorption, refraction-angle map and small-angle x-ray scattering is detailed. Several detailed applications to biomedical imaging are presented to illustrate the broad range of systems and body sites studied preclinically to date: breast, cartilage and bone, soft tissue and organs. Ultimately, the application of ABI in clinical imaging will depend partly on the availability of compact sources with x-ray intensity comparable with that of the current synchrotron environment. (paper)

  10. Progress Report on Computational Analyses of Water-Based NSTF

    Energy Technology Data Exchange (ETDEWEB)

    Lv, Q. [Argonne National Lab. (ANL), Argonne, IL (United States); Kraus, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Hu, R. [Argonne National Lab. (ANL), Argonne, IL (United States); Bucknor, M. [Argonne National Lab. (ANL), Argonne, IL (United States); Lisowski, D. [Argonne National Lab. (ANL), Argonne, IL (United States); Nunez, D. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-08-01

    CFD analysis has been focused on important component-level phenomena using STARCCM+ to supplement the system analysis of integral system behavior. A notable area of interest was the cavity region. This area is of particular interest for CFD analysis due to the multi-dimensional flow and complex heat transfer (thermal radiation heat transfer and natural convection), which are not simulated directly by RELAP5. CFD simulations allow for the estimation of the boundary heat flux distribution along the riser tubes, which is needed in the RELAP5 simulations. The CFD results can also provide additional data to help establish what level of modeling detail is necessary in RELAP5. It was found that the flow profiles in the cavity region are simpler for the water-based concept than for the air-cooled concept. The local heat flux noticeably increases axially, and is higher in the fins than in the riser tubes. These results were utilized in RELAP5 simulations as boundary conditions, to provide better temperature predictions in the system level analyses. It was also determined that temperatures were higher in the fins than the riser tubes, but within design limits for thermal stresses. Higher temperature predictions were identified in the edge fins, in part due to additional thermal radiation from the side cavity walls.

  11. Model-Based Recursive Partitioning for Subgroup Analyses.

    Science.gov (United States)

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-05-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems of subgroup analyses and formulates challenges for the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by predictive factors. The method starts with a model for the overall treatment effect, as defined for the primary analysis in the study protocol, and uses measures for detecting parameter instabilities in this treatment effect. The procedure produces a segmented model with differential treatment parameters corresponding to each patient subgroup. The subgroups are linked to predictive factors by means of a decision tree. The method is applied to the search for subgroups of patients suffering from amyotrophic lateral sclerosis that differ with respect to their treatment effect of Riluzole, the only drug currently approved for this disease.
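    A heavily simplified illustration of the idea, not the authors' procedure (which uses formal parameter-instability tests rather than an exhaustive scan): on synthetic trial data, scanning cutpoints of one covariate recovers the boundary where the treatment effect changes.

    ```python
    # Toy subgroup detection: find the biomarker cutpoint where the estimated
    # treatment effect differs most between the two sides. Data are synthetic.
    import random

    random.seed(1)

    # Synthetic trial: the treatment only works for patients with biomarker >= 0.5.
    data = []
    for _ in range(400):
        biomarker = random.random()
        treated = random.random() < 0.5
        effect = 1.0 if (treated and biomarker >= 0.5) else 0.0
        outcome = effect + random.gauss(0, 0.3)
        data.append((biomarker, treated, outcome))

    def treatment_effect(rows):
        t = [y for _, tr, y in rows if tr]
        c = [y for _, tr, y in rows if not tr]
        if not t or not c:
            return 0.0
        return sum(t) / len(t) - sum(c) / len(c)

    def best_split(rows, grid=20):
        # Scan candidate cutpoints; keep the one where the subgroup treatment
        # effects differ most (a crude stand-in for an instability test).
        best_cut, best_gap = None, 0.0
        for k in range(1, grid):
            cut = k / grid
            left = [r for r in rows if r[0] < cut]
            right = [r for r in rows if r[0] >= cut]
            if len(left) < 30 or len(right) < 30:
                continue
            gap = abs(treatment_effect(left) - treatment_effect(right))
            if gap > best_gap:
                best_cut, best_gap = cut, gap
        return best_cut, best_gap

    cut, gap = best_split(data)
    # The detected cutpoint should land near the true threshold of 0.5.
    print(f"detected cutpoint ~ {cut}, effect difference {gap:.2f}")
    ```

    The real method replaces this scan with statistically principled instability tests and recurses within each subgroup to grow a full decision tree.
    
    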

  12. THE THEORETICAL ASTROPHYSICAL OBSERVATORY: CLOUD-BASED MOCK GALAXY CATALOGS

    Energy Technology Data Exchange (ETDEWEB)

    Bernyk, Maksym; Croton, Darren J.; Tonini, Chiara; Hodkinson, Luke; Hassan, Amr H.; Garel, Thibault; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Hegarty, Sarah [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, P.O. Box 218, Hawthorn, Victoria, 3122 (Australia)

    2016-03-15

    We introduce the Theoretical Astrophysical Observatory (TAO), an online virtual laboratory that houses mock observations of galaxy survey data. Such mocks have become an integral part of the modern analysis pipeline. However, building them requires expert knowledge of galaxy modeling and simulation techniques, significant investment in software development, and access to high performance computing. These requirements make it difficult for a small research team or individual to quickly build a mock catalog suited to their needs. To address this TAO offers access to multiple cosmological simulations and semi-analytic galaxy formation models from an intuitive and clean web interface. Results can be funnelled through science modules and sent to a dedicated supercomputer for further processing and manipulation. These modules include the ability to (1) construct custom observer light cones from the simulation data cubes; (2) generate the stellar emission from star formation histories, apply dust extinction, and compute absolute and/or apparent magnitudes; and (3) produce mock images of the sky. All of TAO’s features can be accessed without any programming requirements. The modular nature of TAO opens it up for further expansion in the future.

  13. THE THEORETICAL ASTROPHYSICAL OBSERVATORY: CLOUD-BASED MOCK GALAXY CATALOGS

    International Nuclear Information System (INIS)

    Bernyk, Maksym; Croton, Darren J.; Tonini, Chiara; Hodkinson, Luke; Hassan, Amr H.; Garel, Thibault; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Hegarty, Sarah

    2016-01-01

    We introduce the Theoretical Astrophysical Observatory (TAO), an online virtual laboratory that houses mock observations of galaxy survey data. Such mocks have become an integral part of the modern analysis pipeline. However, building them requires expert knowledge of galaxy modeling and simulation techniques, significant investment in software development, and access to high performance computing. These requirements make it difficult for a small research team or individual to quickly build a mock catalog suited to their needs. To address this TAO offers access to multiple cosmological simulations and semi-analytic galaxy formation models from an intuitive and clean web interface. Results can be funnelled through science modules and sent to a dedicated supercomputer for further processing and manipulation. These modules include the ability to (1) construct custom observer light cones from the simulation data cubes; (2) generate the stellar emission from star formation histories, apply dust extinction, and compute absolute and/or apparent magnitudes; and (3) produce mock images of the sky. All of TAO’s features can be accessed without any programming requirements. The modular nature of TAO opens it up for further expansion in the future

  14. Dashboard auditing of ABC (Activity-Based Costing). Theoretical approaches

    OpenAIRE

    Căpuşneanu, Sorinel I.

    2009-01-01

    This article aims to define dashboard auditing according to the specifics of the Activity-Based Costing (ABC) method. It describes the main objectives of dashboard auditing, the criteria that a dashboard auditor should meet, and the step-by-step stages of the entire dashboard auditing process of an enterprise from the steel industry according to the ABC method.

  15. Theoretical study of GC+/GC base pair derivatives

    International Nuclear Information System (INIS)

    Meng Fancui; Wang Huanjie; Xu Weiren; Liu Chengbu

    2005-01-01

    The geometries of R-substituted (R = CH₃, CH₃O, F, NO₂) GC base pair derivatives and their cations have been optimized at the B3LYP/6-31G* level, and the substituent effects on the neutral and cationic geometric structures and energies are discussed. The inner reorganization energies of the various base pair derivatives and of the native GC base pair have been calculated to assess the substituent effects on the reorganization energy. NBO (natural bond orbital) analysis has been carried out on both the neutral and the cationic systems to investigate the differences in charge distributions and electronic structures. The outcomes indicate that 8-CH₃O-G:C has the greatest reorganization energy and 8-NO₂-G:C the least, while the other substituted base pairs have reorganization energies close to that of G:C. After ionization, the positive charge is mostly localized on the guanine part, reaching as much as 0.95e. Compared with the corresponding bond distances in the neutral GC base pair derivatives, the N1-N3′ and N2-O2′ bond distances in the cationic base pair derivatives are shortened and that of O6-N4′ is elongated.

  16. Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation.

    Science.gov (United States)

    Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina

    2017-01-01

    stimuli into short-latency saccades, possibly moving the stimuli into visual detection regions. The retina-SC-extrastriate circuit is related to restitutive effects: visual stimuli can directly elicit visual detection with no need for eye movements. Model predictions and assumptions are critically discussed in view of existing behavioral and neurophysiological data, forecasting that other oculomotor compensatory mechanisms, beyond short-latency saccades, are likely involved, and stimulating future experimental and theoretical investigations.

  17. Complementary Theoretical Perspectives on Task-Based Classroom Realities

    Science.gov (United States)

    Jackson, Daniel O.; Burch, Alfred Rue

    2017-01-01

    Tasks are viewed as a principled foundation for classroom teaching, social interaction, and language development. This special issue sheds new light on how task-based classroom practices are supported by a diverse range of principles. This introduction describes current trends in classroom practice and pedagogic research in relation to task-based…

  18. Proto-ribosome: a theoretical approach based on RNA relics

    OpenAIRE

    Demongeot, Jacques

    2017-01-01

    We describe in this paper, based on already published articles, a contribution to the theory postulating the existence of a proto-ribosome, which could have appeared early at the origin of life and we discuss the interest of this notion in an evolutionary perspective, taking into account the existence of possible RNA relics of this proto-ribosome.

  19. Simulation-based Investigations of Electrostatic Beam Energy Analysers

    CERN Document Server

    Pahl, Hannes

    2015-01-01

    An energy analyser is needed to measure the beam energy profile behind the REX-EBIS at ISOLDE. The device should be able to operate with an accuracy of 1 V at voltages up to 30 kV. In order to find a working concept for an electrostatic energy analyser different designs were evaluated with simulations. A spherical device and its design issues are presented. The potential deformation effects of grids at high voltages and their influence on the energy resolution were investigated. First tests were made with a grid-free ring electrode device and show promising results.

  20. Techniques for Scaling Up Analyses Based on Pre-interpretations

    DEFF Research Database (Denmark)

    Gallagher, John Patrick; Henriksen, Kim Steen; Banda, Gourinath

    2005-01-01

    a variety of analyses, both generic (such as mode analysis) and program-specific (with respect to a type describing some particular property of interest). Previous work demonstrated the approach using pre-interpretations over small domains. In this paper we present techniques that allow the method...

  1. Experimental and Theoretical Study of Microturbine-Based BCHP System

    International Nuclear Information System (INIS)

    Fairchild, P.D.

    2001-01-01

    On-site and near-site distributed power generation (DG), as part of a Buildings Cooling, Heating and Power (BCHP) system, brings both electricity and waste heat from the DG sources closer to the end user's electric and thermal loads. Consequently, the waste heat can be used as input power for heat-activated air conditioners, chillers, and desiccant dehumidification systems; to generate steam for space heating; or to provide hot water for laundry, kitchen, cleaning services and/or rest rooms. By making use of what is normally waste heat, BCHP systems meet a building's electrical and thermal loads with a lower input of fossil fuel, yielding resource efficiencies of 40 to 70% or more. To ensure the success of BCHP systems, the interactions of a DG system (such as a microturbine) and thermal heat recovery units under steady-state modes of operation with various exhaust back pressures must be considered. This article studies the performance and emissions of a 30-kW microturbine over a range of design and off-design conditions in steady-state operating mode with various back pressures. In parallel with the experimental part of the project, a BCHP mathematical model was developed describing the basic thermodynamic and hydraulic processes in the system, the heat and material balances, and the relationship of the balances to the system configuration. The model can determine the efficiency of energy conversion both for an individual microturbine unit and for the entire BCHP system for various system configurations and external loads. Based on actual data from a 30-kW microturbine, linear analysis was used to obtain an analytical relationship between the changes in the thermodynamic and hydraulic parameters of the system. The actual data show that, when the backpressure at the microturbine exhaust outlet is increased to the maximum of 7 in. WC (0.017 atm), the microturbine's useful power output decreases by 3.5% at a full power setting of 30 kW and by 5.5% at a one-third power setting (10 kW).
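    The reported derating figures (a 3.5% loss at the 30 kW setting and 5.5% at the one-third setting, both at the maximum 7 in. WC backpressure) can be folded into a small helper that interpolates linearly in both power setting and backpressure. The linearity is only the article's linearization assumption, and the function itself is a hypothetical illustration, not part of the published model.

    ```python
    # Hypothetical derating helper built from the two loss fractions quoted in
    # the abstract; assumes losses scale linearly with both power setting and
    # backpressure, which is only an illustrative linearization.

    def derated_output(power_setting_kw, backpressure_inwc,
                       max_backpressure_inwc=7.0):
        # Interpolate the loss fraction between the reported endpoints:
        # 3.5% at 30 kW and 5.5% at 10 kW, both at 7 in. WC backpressure.
        loss_at_30, loss_at_10 = 0.035, 0.055
        frac = (30.0 - power_setting_kw) / 20.0      # 0 at 30 kW, 1 at 10 kW
        loss_at_max = loss_at_30 + frac * (loss_at_10 - loss_at_30)
        loss = loss_at_max * (backpressure_inwc / max_backpressure_inwc)
        return power_setting_kw * (1.0 - loss)

    print(derated_output(30, 7.0))  # 30 * (1 - 0.035) = 28.95 kW
    print(derated_output(10, 7.0))  # 10 * (1 - 0.055) = 9.45 kW
    ```
    
    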

  2. Theoretical analyses of (n,xn) reactions on ²³⁵U, ²³⁸U, ²³⁷Np, and ²³⁹Pu for ENDF/B-VI

    Energy Technology Data Exchange (ETDEWEB)

    Young, P.G.; Arthur, E.D.

    1991-01-01

    Theoretical analyses were performed of neutron-induced reactions on ²³⁵U, ²³⁸U, ²³⁷Np, and ²³⁹Pu between 0.01 and 20 MeV in order to calculate neutron emission cross sections and spectra for ENDF/B-VI evaluations. Coupled-channel optical model potentials were obtained for each target nucleus by fitting total, elastic, and inelastic scattering cross section data, as well as low-energy average resonance data. The resulting deformed optical model potentials were used to calculate direct (n,n′) cross sections and transmission coefficients for use in Hauser-Feshbach statistical theory analyses. A fission model with multiple barrier representation, width fluctuation corrections, and preequilibrium corrections were included in the analyses. Direct cross sections for higher-lying vibrational states were calculated using DWBA theory, normalized using B(Eℓ) values determined from (d,d′) and Coulomb excitation data, where available, and from systematics otherwise. Initial fission barrier parameters and transition state density enhancements appropriate to the compound systems involved were obtained from previous analyses, especially fits to charged-particle fission probability data. The parameters for the fission model were adjusted for each target system to obtain optimum agreement with direct (n,f) cross section measurements, taking account of the various multichance fission channels, that is, the different compound systems involved. The results from these analyses were used to calculate most of the neutron (n,n), (n,n′), and (n,xn) cross section data in the ENDF/B-VI evaluations for the above nuclei, and all of the energy-angle correlated spectra. The deformed optical model and fission model parameterizations are described. Comparisons are given between the results of these analyses and the previous ENDF/B-V evaluations as well as with the available experimental data. 14 refs., 3 figs., 1 tab.

  3. Imitative Modeling as a Theoretical Base for Instructing Language-Disordered Children

    Science.gov (United States)

    Courtright, John A.; Courtright, Illene C.

    1976-01-01

    A modification of A. Bandura's social learning theory (imitative modeling) was employed as a theoretical base for language instruction with eight language-disordered children (5 to 10 years old). (Author/SBH)

  4. Awareness-based game-theoretic space resource management

    Science.gov (United States)

    Chen, Genshe; Chen, Huimin; Pham, Khanh; Blasch, Erik; Cruz, Jose B., Jr.

    2009-05-01

    Over recent decades, the space environment has become more complex, with a significant increase in space debris and a greater density of spacecraft, which poses great difficulties for efficient and reliable space operations. In this paper we present a Hierarchical Sensor Management (HSM) method for space operations by (a) accommodating awareness modeling and updating and (b) collaboratively searching for and tracking space objects. The basic approach is as follows. First, partition the relevant region of interest into distinct cells. Second, initialize and model the dynamics of each cell, with awareness and object covariance set according to prior information. Third, explicitly assign sensing resources to objects with user-specified requirements. Note that when an object responds intelligently to the sensing event, the sensor assigned to observe it may switch from time to time between a strong, active signal mode and a passive mode to maximize the total amount of information obtained over a multi-step time horizon and avoid risks. Fourth, if all explicitly specified requirements are satisfied and there are still sensing resources available, we assign the additional sensing resources to objects without explicitly specified requirements via an information-based approach. Finally, sensor scheduling is applied to each sensor-object or sensor-cell pair according to the object type. We demonstrate our method on a realistic space resources management scenario using NASA's General Mission Analysis Tool (GMAT) for space object search and track with multiple space-borne observers.
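    The information-based assignment of leftover sensing resources described above can be sketched as a greedy loop. The gain scores and the halving rule for repeated looks are invented for illustration; the paper's actual scheme scores cells by awareness and covariance models.

    ```python
    # Greedy information-based assignment of remaining sensing resources to
    # cells, in the spirit of the hierarchical scheme described above.

    def assign_remaining(cell_gain, budget):
        # cell_gain: {cell_id: expected gain of the FIRST extra look at that cell}.
        # Each additional look at the same cell is assumed to halve the gain
        # (diminishing returns) -- an illustrative assumption, not from the paper.
        gains = dict(cell_gain)
        assignment = {c: 0 for c in cell_gain}
        for _ in range(budget):
            best = max(gains, key=gains.get)
            assignment[best] += 1
            gains[best] /= 2.0
        return assignment

    # Three cells with invented expected-information-gain scores, four spare looks.
    cells = {"A": 8.0, "B": 3.0, "C": 5.0}
    print(assign_remaining(cells, budget=4))  # {'A': 2, 'B': 1, 'C': 1}
    ```

    With submodular (diminishing-returns) gains like these, a greedy allocation is a standard and provably near-optimal heuristic.
    
    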

  5. Game-Theoretic Models for Usage-based Maintenance Contract

    Science.gov (United States)

    Husniah, H.; Wangsaputra, R.; Cakravastia, A.; Iskandar, B. P.

    2018-03-01

    A usage-based maintenance contract with coordination and non-coordination between two parties is studied in this paper. The contract is applied to a dump truck operated in a mining industry. The situation under study is that an agent offers a service contract to the owner of the truck after the warranty ends. This contract has only a time limit but no usage limit; if the total usage per period exceeds the maximum usage allowed in the contract, the owner is charged an additional cost. In general, the agent (the Original Equipment Manufacturer, OEM) provides full maintenance coverage, which includes PM and CM, under the lease contract. The decision problem for the owner is to select the best option offered that fits its requirements, and the decision problem for the agent is to find the optimal maintenance effort for a given price of the service option offered. We first find the optimal decisions under a coordination scheme and then under a non-coordination scheme for both parties.

  6. Theoretical study of impurity effects in iron-based superconductors

    Science.gov (United States)

    Navarro Gastiasoro, Maria; Hirschfeld, Peter; Andersen, Brian

    2013-03-01

    Several open questions remain unanswered for the iron-based superconductors (FeSC), including the importance of electronic correlations and the symmetry of the superconducting order parameter. Motivated by recent STM experiments which show a fascinating variety of resonant defect states in FeSC, we adopt a realistic five-band model including electronic Coulomb correlations to study local effects of disorder in the FeSC. In order to minimize the number of free parameters, we use the pairing interactions obtained from spin-fluctuation exchange to determine the homogeneous superconducting state. The ability of local impurity potentials to induce resonant states depends on their scattering strength Vimp; in addition, for appropriate Vimp, such states are associated with local orbital and magnetic order. We investigate the density of states near such impurities and show how tunneling experiments may be used to probe local induced order. In the SDW phase, we show how C₂ symmetry-breaking dimers are naturally formed around impurities, which also form cigar-like (π,π) structures embedded in the (π,0) magnetic bulk phase. Such electronic dimers have been shown to be candidates for explaining the so-called nematogens observed previously by QPI in Co-doped CaFe₂As₂.

  7. A Web-based Tool Combining Different Type Analyses

    DEFF Research Database (Denmark)

    Henriksen, Kim Steen; Gallagher, John Patrick

    2006-01-01

    of both, and they can be goal-dependent or goal-independent. We describe a prototype tool that can be accessed from a web browser, allowing various type analyses to be run. The first goal of the tool is to allow the analysis results to be examined conveniently by clicking on points in the original program...... the minimal "domain model" of the program with respect to the corresponding pre-interpretation, which can give more precise information than the original descriptive type....

  8. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Science.gov (United States)

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for human musculoskeletal system physiology. This simulation technology unites expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models, including prosthetic implants and fracture fixation devices, and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions is also available, and it can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will have an impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  9. An Apple II -based bidimensional pulse height analyser

    International Nuclear Information System (INIS)

    Bateman, J.E.; Flesher, A.C.; Honeyman, R.N.; Pritchard, T.E.; Price, W.P.R.

    1984-06-01

    The implementation of a pulse height analyser function in an Apple II microcomputer using minimal purpose-built hardware is described. Except for a small interface module, the system consists of two suites of software, one giving a conventional one-dimensional analysis on a span of 1024 channels, and the other a two-dimensional analysis on a 128 x 128 image format. Using the recently introduced ACCELERATOR coprocessor card, the system performs with a dead time per event of less than 50 μs. Full software facilities are provided for display, storage and processing of the data using standard Applesoft BASIC. (author)

  10. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel-based versions of these transformations. This meant implementing the kernel-based methods and developing new theory, since kernel-based MAF and MNF are not yet described in the literature. The traditional methods only… have two factors that are useful for segmentation, and none of them can be used to segment the two types of meat. The kernel-based methods have many useful factors and are able to capture the subtle differences in the images. This is illustrated in Figure 1. A comparison of the most… useful factor of PCA and of kernel-based PCA, respectively, is shown in Figure 2. The factor of the kernel-based PCA turned out to be able to segment the two types of meat, and in general that factor is much more distinct compared to the traditional factor. After the orthogonal transformation, a simple thresholding…
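    A from-scratch RBF kernel PCA of the kind contrasted above fits in a few lines of numpy. The two concentric rings below are a toy stand-in for two classes that no linear factor can separate (such as the two meat types); the gamma value is an arbitrary choice for this geometry, not a parameter from the paper.

    ```python
    # Minimal RBF kernel PCA sketch (numpy only). Toy data, invented parameters.
    import numpy as np

    def rbf_kernel_pca(X, n_components=2, gamma=2.0):
        # Pairwise squared distances -> RBF kernel matrix.
        sq = np.sum(X**2, axis=1)
        K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
        # Centre the kernel matrix in feature space.
        n = K.shape[0]
        one = np.full((n, n), 1.0 / n)
        Kc = K - one @ K - K @ one + one @ K @ one
        # Eigendecomposition; eigh returns ascending order, so take the largest.
        vals, vecs = np.linalg.eigh(Kc)
        idx = np.argsort(vals)[::-1][:n_components]
        return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

    # Two concentric rings: linearly inseparable in the input space.
    rng = np.random.default_rng(0)
    theta = rng.uniform(0, 2 * np.pi, 100)
    inner = np.c_[0.3 * np.cos(theta), 0.3 * np.sin(theta)]
    outer = np.c_[1.0 * np.cos(theta), 1.0 * np.sin(theta)]
    X = np.vstack([inner, outer]) + rng.normal(0, 0.02, (200, 2))

    scores = rbf_kernel_pca(X, n_components=1)
    # The leading kernel factor separates the two rings, which no single
    # linear projection of the raw coordinates can do.
    ```
    
    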

  11. Sensitivity and uncertainty analyses in aging risk-based prioritizations

    International Nuclear Information System (INIS)

    Hassan, M.; Uryas'ev, S.; Vesely, W.E.

    1993-01-01

    Aging risk evaluations of nuclear power plants using Probabilistic Risk Analyses (PRAs) involve assessments of the impact of aging structures, systems, and components (SSCs) on plant core damage frequency (CDF). These assessments can be used to prioritize the contributors to aging risk, reflecting the relative risk potential of the SSCs. Aging prioritizations are important for identifying the SSCs contributing most to plant risk and can provide a systematic basis on which aging risk control and management strategies for a plant can be developed. However, these prioritizations are subject to variabilities arising from uncertainties in data and from various modeling assumptions. The objective of this paper is to present an evaluation of the sensitivity of aging prioritizations of active components to uncertainties in aging risk quantifications. Approaches for robust prioritization of SSCs that are less susceptible to these uncertainties are also presented.

  12. Evaluation of an optoacoustic based gas analysing device

    Science.gov (United States)

    Markmann, Janine; Lange, Birgit; Theisen-Kunde, Dirk; Danicke, Veit; Mayorov, Fedor; Eckert, Sebastian; Kettmann, Pascal; Brinkmann, Ralf

    2017-07-01

    The relative occurrence of volatile organic compounds in human respiratory gas is disease-specific (ppb range). A prototype of a gas analysing device using two tuneable laser systems, an OPO-laser (2.5 to 10 μm) and a CO2-laser (9 to 11 μm), and an optoacoustic measurement cell was developed to detect concentrations in the ppb range. The sensitivity and resolution of the system were determined by test gas measurements, measuring ethylene and sulfur hexafluoride with the CO2-laser and butane with the OPO-laser. System sensitivity was found to be 13 ppb for sulfur hexafluoride and 17 ppb for ethylene. Respiratory gas samples of 8 healthy volunteers were investigated by irradiation with 17 laser lines of the CO2-laser. Several of those lines overlap with strong absorption bands of ammonia. As ammonia concentration is known to increase with age, a separation of subjects around the age of 35 was striven for. To evaluate the data, the first seven gas samples were used to train a discriminant analysis algorithm. The eighth subject, aged 49 years, was then correctly assigned to the group >35 years.

  13. A Fuzzy Logic Based Method for Analysing Test Results

    Directory of Open Access Journals (Sweden)

    Le Xuan Vinh

    2017-11-01

    Full Text Available Network operators must perform many tasks to ensure smooth operation of the network, such as planning, monitoring, etc. Among those tasks, regular testing of network performance, network errors and troubleshooting is very important. Meaningful test results will allow the operators to evaluate network performance, identify any shortcomings and better plan for network upgrades. Due to the diverse and mainly unquantifiable nature of network testing results, there is a need to develop a method for systematically and rigorously analysing these results. In this paper, we present STAM (System Test-result Analysis Method), which employs a bottom-up hierarchical processing approach using fuzzy logic. STAM is capable of combining all test results into a quantitative description of the network performance in terms of network stability, the significance of various network errors, and the performance of each function block within the network. The validity of this method has been successfully demonstrated in assisting the testing of a VoIP system at the Research Institute of Post and Telecoms in Vietnam. The paper is organized as follows. The first section gives an overview of fuzzy logic theory, the concepts of which will be used in the development of STAM. The next section describes STAM. The last section, demonstrating STAM's capability, presents a success story in which STAM is successfully applied.
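The bottom-up fuzzy aggregation idea behind STAM can be illustrated with a minimal sketch. The linguistic terms, triangular membership shapes, term centres, and weights below are hypothetical stand-ins, not the actual STAM rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(score):
    """Map a raw test score in [0, 100] to memberships in three
    linguistic terms (a hypothetical term set)."""
    return {"poor": tri(score, -1, 0, 50),
            "fair": tri(score, 25, 50, 75),
            "good": tri(score, 50, 100, 101)}

CENTERS = {"poor": 0.0, "fair": 50.0, "good": 100.0}

def defuzzify(memberships):
    """Centroid-style defuzzification of one fuzzy rating."""
    den = sum(memberships.values()) or 1.0
    return sum(CENTERS[t] * mu for t, mu in memberships.items()) / den

def aggregate(children, weights):
    """One bottom-up step: weighted combination of child ratings,
    e.g. test results -> function block -> overall network score."""
    total = sum(weights)
    return sum(w * defuzzify(m) for w, m in zip(weights, children)) / total
```

Applying `aggregate` repeatedly up the hierarchy turns many qualitative test outcomes into a single quantitative network score, which is the essence of the bottom-up approach described in the abstract.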

  14. Visualizing Confidence in Cluster-Based Ensemble Weather Forecast Analyses.

    Science.gov (United States)

    Kumpf, Alexander; Tost, Bianca; Baumgart, Marlene; Riemer, Michael; Westermann, Rudiger; Rautenhaus, Marc

    2018-01-01

    In meteorology, cluster analysis is frequently used to determine representative trends in ensemble weather predictions in a selected spatio-temporal region, e.g., to reduce a set of ensemble members to simplify and improve their analysis. Identified clusters (i.e., groups of similar members), however, can be very sensitive to small changes of the selected region, so that clustering results can be misleading and bias subsequent analyses. In this article, we, a team of visualization scientists and meteorologists, deliver visual analytics solutions to analyze the sensitivity of clustering results with respect to changes of a selected region. We propose an interactive visual interface that enables simultaneous visualization of (a) the variation in composition of identified clusters (i.e., their robustness), (b) the variability in cluster membership for individual ensemble members, and (c) the uncertainty in the spatial locations of identified trends. We demonstrate that our solution shows meteorologists how representative a clustering result is, and with respect to which changes in the selected region it becomes unstable. Furthermore, our solution helps to identify those ensemble members which stably belong to a given cluster and can thus be considered similar. In a real-world application case we show how our approach is used to analyze the clustering behavior of different regions in a forecast of "Tropical Cyclone Karl", guiding the user towards the cluster robustness information required for subsequent ensemble analysis.
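One simple way to quantify the cluster robustness discussed above is a best-match Jaccard score between the clusters found for the original region and those found after perturbing it. This is a generic sketch under that assumption; the article's actual robustness measure may differ.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of ensemble-member ids."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cluster_robustness(ref_clusters, perturbed_clusters):
    """For each cluster of the reference region, the best Jaccard match
    among the clusters obtained after perturbing the selected region.
    Values near 1 indicate robust clusters; low values indicate that the
    cluster composition changed substantially."""
    return [max(jaccard(c, p) for p in perturbed_clusters)
            for c in ref_clusters]
```

Members that keep appearing in well-matched clusters across many perturbed regions are the ones that "stably belong to a given cluster" in the sense of the abstract.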

  15. A Cyber-Attack Detection Model Based on Multivariate Analyses

    Science.gov (United States)

    Sakai, Yuto; Rinsaka, Koichiro; Dohi, Tadashi

    In the present paper, we propose a novel cyber-attack detection model that applies two multivariate-analysis methods to the audit data observed on a host machine. The statistical techniques used here are the well-known Hayashi's quantification method IV and the cluster analysis method. We quantify the observed qualitative audit event sequence via the quantification method IV, and collect similar audit event sequences into the same groups based on the cluster analysis. It is shown in simulation experiments that our model can improve the cyber-attack detection accuracy in some realistic cases where both normal and attack activities are intermingled.

  16. Analysing co-articulation using frame-based feature trajectories

    CSIR Research Space (South Africa)

    Badenhorst, J

    2010-11-01

    Full Text Available The authors investigate several approaches aimed at a more detailed understanding of co-articulation in spoken utterances. They find that the Euclidean difference between instantaneous frame-based feature values and the mean values of these features...

  17. PCR and RFLP analyses based on the ribosomal protein operon

    Science.gov (United States)

    Differentiation and classification of phytoplasmas have been primarily based on the highly conserved 16Sr RNA gene. RFLP analysis of 16Sr RNA gene sequences has identified 31 16Sr RNA (16Sr) groups and more than 100 16Sr subgroups. Classification of phytoplasma strains can however, become more refin...

  18. Theoretical frameworks informing family-based child and adolescent obesity interventions

    DEFF Research Database (Denmark)

    Alulis, Sarah; Grabowski, Dan

    2017-01-01

    into focus. However, the use of theoretical frameworks to strengthen these interventions is rare and very uneven. OBJECTIVE AND METHOD: To conduct a qualitative meta-synthesis of family-based interventions for child and adolescent obesity to identify the theoretical frameworks applied, thus understanding how...... inconsistencies and a significant void between research results and health care practice. Based on the analysis, this article proposes three themes to be used as focus points when designing future interventions and when selecting theories for the development of solid, theory-based frameworks for application...... cognitive, self-efficacy and Family Systems Theory appeared most frequently. The remaining 24 were classified as theory-related as theoretical elements of self-monitoring; stimulus control, reinforcement and modelling were used. CONCLUSION: The designs of family-based interventions reveal numerous...

  19. Analysing Leontiev Tube Capabilities in the Space-based Plants

    Directory of Open Access Journals (Sweden)

    N. L. Shchegolev

    2017-01-01

    Full Text Available The paper presents a review of publications dedicated to the gas-dynamic temperature stratification device (the Leontiev tube) and shows the main factors affecting its efficiency. It describes an experimental installation used to obtain data on the magnitude of energy separation in air, proving the operability of this device. The assumption that there is an optimal relationship between the flow velocities in the subsonic and supersonic channels of the gas-dynamic temperature stratification device is experimentally confirmed. The paper analyses possible ways to raise the efficiency of power plants of various basing (including space basing) and shows that, currently, the mainstream approach to increasing their operational efficiency is to complicate design solutions. A scheme of a closed gas-turbine space-based plant using a mixture of inert gases (a helium-xenon one) for operation is proposed. What distinguishes it from the simplest variants is the lack of a cooler-radiator and the integration of a gas-dynamic temperature stratification device and a heat compressor. Based on the equations of one-dimensional gas dynamics, it is shown that the restorability of total pressure when removing heat in a thermal compressor determines the operating capability of this scheme. An exploratory study of creating a heat compressor is performed, and it is shown that when operating on gases with a Prandtl number close to 1, the total pressure does not increase. The operating-capability conditions of the heat compressor are operation on gases with a low value of the Prandtl number (a helium-xenon mixture) at high supersonic velocities and with a longitudinal pressure gradient available. It is shown that there is a region of low Prandtl numbers (Pr < 0.3) for which, with a longitudinal pressure gradient available in supersonic flows of a viscous gas, the total pressure can be restored.

  20. Theoretical Model for the Performance of Liquid Ring Pump Based on the Actual Operating Cycle

    Directory of Open Access Journals (Sweden)

    Si Huang

    2017-01-01

    Full Text Available Liquid ring pumps are widely applied in many industrial fields due to the advantages of an isothermal compression process, simple structure, and liquid sealing. Based on the actual operating cycle of “suction-compression-discharge-expansion,” a universal theoretical model for the performance of liquid ring pumps was established in this study, to address the problem that earlier theoretical models deviate from the actual performance over the operating cycle. With the major geometric parameters and operating conditions of a liquid ring pump, performance parameters such as the actual capacity for suction and discharge, shaft power, and global efficiency can be conveniently predicted by the proposed theoretical model, without the limitation of empirical ranges, performance data, or the detailed 3D geometry of pumps. The proposed theoretical model was verified by experimental performances of liquid ring pumps and could provide a feasible tool for the application of liquid ring pumps.

  1. Design of the storage location based on the ABC analyses

    Science.gov (United States)

    Jemelka, Milan; Chramcov, Bronislav; Kříž, Pavel

    2016-06-01

    The paper focuses on process efficiency and saving storage costs. Maintaining inventory through a putaway strategy takes personnel time and costs money. The aim is to control inventory in the best way. The ABC classification, based on Vilfredo Pareto's theory, is used for the design of the warehouse layout. The new design of storage locations reduces the travel distance of fork-lifts and total costs, and it increases inventory process efficiency. The suggested solutions and the evaluation of achieved results are described in detail. The proposed solutions were realized in a real warehouse operation.
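The ABC idea, ranking articles by their cumulative Pareto share of turnover or pick frequency, can be sketched as follows. The 80 %/95 % cut-offs are the conventional ones, not values taken from the paper, and the SKU data are invented.

```python
def abc_classify(items, a_cut=0.80, b_cut=0.95):
    """Assign Pareto-based ABC classes from a list of
    (sku, annual turnover or pick frequency) pairs."""
    total = sum(v for _, v in items)
    ranked = sorted(items, key=lambda kv: kv[1], reverse=True)
    classes, cum = {}, 0.0
    for sku, v in ranked:
        cum += v / total                 # cumulative share, descending order
        if cum <= a_cut:
            classes[sku] = "A"           # fast movers -> closest locations
        elif cum <= b_cut:
            classes[sku] = "B"
        else:
            classes[sku] = "C"           # slow movers -> remote locations
    return classes
```

Placing class-A articles nearest the dispatch area is what shortens the fork-lift travel distance described in the abstract.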

  2. Economic evaluation of algae biodiesel based on meta-analyses

    Science.gov (United States)

    Zhang, Yongli; Liu, Xiaowei; White, Mark A.; Colosi, Lisa M.

    2017-08-01

    The objective of this study is to elucidate the economic viability of algae-to-energy systems at a large scale, by developing a meta-analysis of five previously published economic evaluations of systems producing algae biodiesel. Data from the original studies were harmonised into a standardised framework using financial and technical assumptions. Results suggest that the selling price of algae biodiesel under the base case would be $5.00-10.31/gal, higher than the selected benchmarks: $3.77/gal for petroleum diesel, and $4.21/gal for commercial biodiesel (B100) from conventional vegetable oil or animal fat. However, the projected selling price of algal biodiesel ($2.76-4.92/gal), following anticipated improvements, would be competitive. A scenario-based sensitivity analysis reveals that the price of algae biodiesel is most sensitive to algae biomass productivity, algae oil content, and algae cultivation cost. This indicates that improvements in the yield, quality, and cost of algae feedstock could be the key factors in making algae-derived biodiesel economically viable.
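The kind of sensitivity analysis described above can be mimicked with a one-at-a-time perturbation of a toy cost model. The model form and every number below are hypothetical stand-ins, not the harmonised framework or data from the study.

```python
def selling_price(params):
    """Toy cost model (hypothetical): price rises with cultivation cost
    and falls with biomass productivity and oil content."""
    return params["cultivation_cost"] / (params["productivity"] * params["oil_content"])

base = {"productivity": 25.0,       # e.g. g/m^2/day (illustrative)
        "oil_content": 0.25,        # mass fraction (illustrative)
        "cultivation_cost": 30.0}   # $ per unit biomass (illustrative)

def oat_sensitivity(model, base_params, rel=0.10):
    """One-at-a-time sensitivity: relative change in the model output
    for a +10 % change in each parameter, holding the others fixed."""
    p0 = model(base_params)
    out = {}
    for k in base_params:
        p = dict(base_params)
        p[k] *= (1.0 + rel)
        out[k] = (model(p) - p0) / p0
    return out
```

Parameters with the largest absolute sensitivities are the ones the abstract identifies as key levers: productivity, oil content, and cultivation cost.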

  3. Evidence for Endothermy in Pterosaurs Based on Flight Capability Analyses

    Science.gov (United States)

    Jenkins, H. S.; Pratson, L. F.

    2005-12-01

    Previous attempts to constrain flight capability in pterosaurs have relied heavily on the fossil record, using bone articulation and apparent muscle allocation to evaluate flight potential (Frey et al., 1997; Padian, 1983; Bramwell, 1974). However, broad definitions of the physical parameters necessary for flight in pterosaurs remain loosely defined, and few systematic approaches to constraining flight capability have been synthesized (Templin, 2000; Padian, 1983). Here we present a new method to assess flight capability in pterosaurs as a function of humerus length and flight velocity. By creating an energy-balance model to evaluate the power required for flight against the power available to the animal, we derive a 'U'-shaped power curve and infer optimal flight speeds and maximal wingspan lengths for the pterosaurs Quetzalcoatlus northropi and Pteranodon ingens. Our model corroborates empirically derived power curves for the modern black-billed magpie (Pica pica) and accurately reproduces the mechanical power curve for modern cockatiels (Nymphicus hollandicus) (Tobalske et al., 2003). When we adjust our model to include an endothermic metabolic rate for pterosaurs, we find a maximal wingspan length of 18 meters for Q. northropi. Model runs using an ectothermic metabolism derive maximal wingspans of 6-8 meters. As estimates based on fossil evidence show total wingspan lengths reaching up to 15 meters for Q. northropi, we conclude that large pterosaurs may have been endothermic and therefore more metabolically similar to birds than to reptiles.
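The 'U'-shaped power curve logic can be sketched with the standard two-term form of animal flight mechanics: induced power falling as 1/v and parasite/profile power rising as v³. The coefficients here are arbitrary placeholders, not values fitted for Q. northropi or P. ingens.

```python
def flight_power(v, A=200.0, B=0.01):
    """Power required for level flight at speed v (m/s).
    A/v  : induced power term (supporting weight, dominant at low speed)
    B*v^3: parasite/profile power term (drag, dominant at high speed)
    Coefficients A and B are illustrative placeholders."""
    return A / v + B * v**3

def optimal_speed(A=200.0, B=0.01):
    """Minimum of the U-shaped curve: setting
    dP/dv = -A/v**2 + 3*B*v**2 = 0 gives v* = (A / (3B)) ** 0.25."""
    return (A / (3.0 * B)) ** 0.25
```

Comparing this required-power curve with the power an animal can supply (which depends on whether its metabolism is endothermic or ectothermic) bounds the feasible speeds and, via scaling of A and B with wingspan, the maximal wingspan, which is the comparison the abstract describes.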

  4. Theoretical and numerical studies of TWR based on ESFR core design

    International Nuclear Information System (INIS)

    Zhang, Dalin; Chen, Xue-Nong; Flad, Michael; Rineiski, Andrei; Maschek, Werner

    2013-01-01

    Highlights: • The traveling wave reactor (TWR) is studied based on the core design of the European Sodium-cooled Fast Reactor (ESFR). • The conventional fuel shuffling technique is used to produce a continuous radial fuel movement. • A stationary self-sustainable nuclear fission power can be established asymptotically by only loading natural or depleted uranium. • The multi-group deterministic neutronic code ERANOS is applied. - Abstract: This paper deals with the so-called traveling wave reactor (TWR) based on the core design of the European Sodium-cooled Fast Reactor (ESFR). The current concept of TWR is to use the conventional radial fuel shuffling technique to produce a continuous radial fuel movement so that a stationary self-sustainable nuclear fission power can be established asymptotically by only loading fertile material consisting of natural or depleted uranium. The core design of ESFR loaded with metallic uranium fuel, without considering the control mechanism, is used as a practical application example. The theoretical studies focus mainly on qualitative feasibility analyses, i.e. to identify, in general, the essential parameter dependences of such a reactor. The numerical studies are carried out more specifically on a certain core design. The multi-group deterministic neutronic code ERANOS with the JEFF3.1 data library is applied as a basic tool to perform the neutronics and burn-up calculations. The calculations are performed in a 2-D R-Z geometry, which is sufficient for the current core layout. Numerical results of radial fuel shuffling indicate that the asymptotic k_eff varies parabolically with the shuffling period, while the burn-up increases linearly. Typical shuffling periods investigated in this study are in the range of 300–1000 days. The important parameters, e.g. k_eff, the burn-up, the power peaking factor, and safety coefficients are calculated

  5. Theoretical and Experimental Analysis of Adsorption in Surface-based Biosensors

    DEFF Research Database (Denmark)

    Hansen, Rasmus

    The present Ph.D. dissertation concerns the application of surface plasmon resonance (SPR) spectroscopy, which is a surface-based biosensor technology, for studies of adsorption dynamics. The thesis contains both experimental and theoretical work. In the theoretical part we develop the theory...... cell of the surface-based biosensor, in addition to the sensor surface, is investigated. In the experimental part of the thesis we use a Biacore SPR sensor to study lipase adsorption on model substrate surfaces, as well as competitive adsorption of lipase and surfactants. A part of the experimental...

  6. Theoretical strengthening of the concept of appealing in analysed sermons on Matthew 25:31–46 in the context of poverty in South Africa

    Directory of Open Access Journals (Sweden)

    Hennie J.C. Pieterse

    2013-08-01

    Full Text Available From a qualitative grounded theory analysis of a sample of 26 sermons with Matthew 25:31–46 as sermon text, a rhetorical structure of how the preachers try to convince their listeners to care for the poor emerged. The homiletical concept of appealing related to all the categories borne out of the analysis of the inner world of the 26 sermons, and also to the categories showing this rhetorical structure in the sermons. The article discusses what the dimensions are in the concept of appealing borne out of the sermons in which the rhetorical structure was apparent, which rhetorical theory would fit as a theoretical base for the concept of appealing in its relationship with the rhetorical structure in the sermons, and what dilemma the preachers face when they try to convince their listeners to participate in the care for the poor. The rhetorical theory of deliberative rhetoric (Aristotle) and the classical theory with the three dimensions logos, ethos and pathos is discussed in this article as theoretical thickening of the concept of appealing to the listeners of the sermons. This article attempts to demonstrate how to go about theorising from a grounded theory analysis of sermons with Matthew 25:31–46 as a sermon text with, as a result, a theory that could help preachers in preaching from this text in the context of poverty in South Africa.

  7. FY01 Supplemental Science and Performance Analyses, Volume 1: Scientific Bases and Analyses, Part 1 and 2

    International Nuclear Information System (INIS)

    Dobson, David

    2001-01-01

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S and ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S and ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S and ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23 013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054] [DIRS 124754]). 
By making the large amount of information developed on Yucca Mountain available in stages, the DOE intends to provide the public and interested parties with time to review the available materials and to formulate

  8. Theoretical and empirical bases for dialect-neutral language assessment: contributions from theoretical and applied linguistics to communication disorders.

    Science.gov (United States)

    Pearson, Barbara Zurer

    2004-02-01

    Three avenues of theoretical research provide insights for discovering abstract properties of language that are subject to disorder and amenable to assessment: (1) the study of universal grammar and its acquisition; (2) descriptions of African American English (AAE) Syntax, Semantics, and Phonology within theoretical linguistics; and (3) the study of specific language impairment (SLI) cross-linguistically. Abstract linguistic concepts were translated into a set of assessment protocols that were used to establish normative data on language acquisition (developmental milestones) in typically developing AAE children ages 4 to 9 years. Testing AAE-speaking language impaired (LI) children and both typically developing (TD) and LI Mainstream American English (MAE)-learning children on these same measures provided the data to select assessments for which (1) TD MAE and AAE children performed the same, and (2) TD performance was reliably different from LI performance in both dialect groups.

  9. Substituent effects on hydrogen bonding in Watson-Crick base pairs. A theoretical study

    NARCIS (Netherlands)

    Fonseca Guerra, C.; van der Wijst, T.; Bickelhaupt, F.M.

    2005-01-01

    We have theoretically analyzed Watson-Crick AT and GC base pairs in which purine C8 and/or pyrimidine C6 positions carry a substituent X = H, F, Cl or Br, using the generalized gradient approximation (GGA) of density functional theory at BP86/TZ2P. The purpose is to study the effects on structure

  10. Study on the Theoretical Foundation of Business English Curriculum Design Based on ESP and Needs Analysis

    Science.gov (United States)

    Zhu, Wenzhong; Liu, Dan

    2014-01-01

    Based on a review of the literature on ESP and needs analysis, this paper is intended to offer some theoretical supports and inspirations for BE instructors to develop BE curricula for business contexts. It discusses how the theory of need analysis can be used in Business English curriculum design, and proposes some principles of BE curriculum…

  11. Theoretical and Experimental Study on Secondary Piezoelectric Effect Based on PZT-5

    International Nuclear Information System (INIS)

    Zhang, Z H; Sun, B Y; Shi, L P

    2006-01-01

    The purpose of this paper is to confirm the existence of the secondary and multiple piezoelectric effects theoretically and experimentally. Based on the Heckmann model, showing the relationship among mechanical, electric and heat energy, and on the physical model of mechanical, electric, heat, and magnetic energy, a theoretical analysis of the multiple piezoelectric effect is made through four kinds of piezoelectric equations. Experimental research on the secondary direct piezoelectric effect is conducted using PZT-5 piles. The result of the experiment indicates that the charge generated by the secondary direct piezoelectric effect, as well as the displacement caused by the first converse piezoelectric effect, maintains good linearity with the applied voltage

  12. How can results from macro economic analyses of the energy consumption of households be used in macro models? A discussion of theoretical and empirical literature about aggregation

    International Nuclear Information System (INIS)

    Halvorsen, Bente; Larsen, Bodil M.; Nesbakken, Runa

    2001-01-01

    The literature on energy demand shows that there are systematic differences in income and price elasticities between analyses based on macro data and micro data. Even if one estimates models with the same explanatory variables, the results may differ with respect to estimated price and income sensitivity. These differences may be caused by problems involved in transferring micro properties to macro properties, or the estimated macro relationships may have failed to adequately consider the fact that households behave differently in their energy demand. Political goals are often directed towards the entire household sector. Partial equilibrium models do not capture important equilibrium effects and feedback through the energy markets and the economy in general. Thus, it is very interesting, politically and scientifically, to do macroeconomic model analyses of different political measures that affect energy consumption. The results of behavioural analyses, in which one investigates the heterogeneity of energy demand, must be based on information about individual households. When demand is studied based on micro data, it is difficult to aggregate its properties to a total demand function for the entire household sector if different household groups behave differently. Such heterogeneity of behaviour may for instance arise when households in different regions have different heating equipment because of regional differences in the price of electricity. The subject of aggregation arises immediately when one wants to draw conclusions about the household sector based on information about individual households, whether the discussion is about the whole population or a selection of households. Thus, aggregation is a topic of interest in a wide range of problems

  13. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    KAUST Repository

    Pan, B.

    2016-03-22

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work. © 2016 Elsevier Ltd. All rights reserved.

  14. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    Science.gov (United States)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning of life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJC's with high base resistivity (greater than 10 ohm-cm).

  15. Study of network resource allocation based on market and game theoretic mechanism

    Science.gov (United States)

    Liu, Yingmei; Wang, Hongwei; Wang, Gang

    2004-04-01

    We address the issue of network resource allocation, a function of the network management system, based on a market-oriented mechanism. The scheme is to model telecommunication network resources as trading goods, in which the various network components could be owned by different competitive, real-world entities. This is a multidisciplinary framework concentrating on the similarity between resource allocation in a network environment and the market mechanism in economic theory. By taking an economic (market-based and game-theoretic) approach to the routing of communication networks, we study the dynamic behavior of network resource allocation under a game-theoretic framework. Based on the prior work of Gibney and Jennings, we apply the concepts of utility and fitness to the market mechanism with the intention of closing the gap between the experimental environment and the real-world situation.

  16. A theoretically based evaluation of HIV/AIDS prevention campaigns along the trans-Africa highway in Kenya.

    Science.gov (United States)

    Witte, K; Cameron, K A; Lapinski, M K; Nzyuko, S

    1998-01-01

    Print HIV/AIDS prevention campaign materials (e.g., posters, pamphlets, stickers) from 10 public health organizations in Kenya were evaluated according to the Extended Parallel Process Model (EPPM), a health behavior change theory based on the fear appeal literature, at various sites along the Trans-Africa Highway in Kenya. Three groups each of commercial sex workers (CSWs), truck drivers (TDs) and their assistants (ASSTs), and young men (YM) who live and work at the truck stops participated in focus group discussions where reactions to the campaign materials were gathered according to this theoretical base. Reactions to campaign materials varied substantially, according to the poster or pamphlet viewed. Overall, most participants wanted more detailed information about (a) the proper way to use condoms, (b) ideas for how to negotiate condom use with reluctant partners, and (c) accurate information on symptoms of AIDS and what to do once one contracted HIV. Both quantitative and qualitative analyses of the campaign materials are reported.

  17. Orbitrap-based mass analyser for in-situ characterization of asteroids: ILMA, Ion Laser Mass Analyser

    Science.gov (United States)

    Briois, C.; Cotti, H.; Thirkell, L.; French Space Orbitrap Consortium [Aradj, K.; Bouabdellah, A.; Boukrara, A.; Carrasco, N.; Chalumeau, G.; Chapelon, O.; Colin, F.; Coll, P.; Engrand, C.; Grand, N.; Kukui, A.; Lebreton, J.-P.; Pennanech, C.; Szopa, C.; Thissen, R.; Vuitton, V.; Zapf, P.]; Makarov, A.

    2014-07-01

    For about a decade, the boundaries between comets and carbonaceous asteroids have been fading [1,2]. No doubt the Rosetta mission should bring a new wealth of data on the composition of comets. But as promising as it may look, the mass resolving power of the mass spectrometers onboard (so far the best on a space mission) will only partially account for the diversity of chemical structures present. ILMA (Ion-Laser Mass Analyser) is a new-generation high mass resolution LDI-MS (Laser Desorption-Ionization Mass Spectrometer) instrument concept using the Orbitrap technique, which has been developed in the frame of the two Marco Polo & Marco Polo-R proposals to the ESA Cosmic Vision program. Flagged by ESA as an instrument concept of interest for the mission in 2012, it has been under study for a few years in the frame of a Research and Technology (R&T) development programme between 5 French laboratories (LPC2E, IPAG, LATMOS, LISA, CSNSM) [3,4], partly funded by the French Space Agency (CNES). The work is undertaken in close collaboration with the Thermo Fisher Scientific company, which commercialises Orbitrap-based laboratory instruments. The R&T activities currently concentrate on the core elements of the Orbitrap analyser that must reach a sufficient maturity level to allow design studies of future space instruments. A prototype is under development at LPC2E, and a mass resolution (m/Δm FWHM) of 100,000 has been obtained at m/z = 150 for a background pressure of 10^{-8} mbar. ILMA would be a key instrument to measure the molecular, elemental and isotopic composition of objects such as carbonaceous asteroids, comets, or other bodies devoid of atmosphere such as the surface of an icy satellite, the Moon, or Mercury.

  18. Clouds and the Earth's Radiant Energy System (CERES) algorithm theoretical basis document. volume 2; Geolocation, calibration, and ERBE-like analyses (subsystems 1-3)

    Science.gov (United States)

    Wielicki, B. A. (Principal Investigator); Barkstrom, B. R. (Principal Investigator); Charlock, T. P.; Baum, B. A.; Green, R. N.; Minnis, P.; Smith, G. L.; Coakley, J. A.; Randall, D. R.; Lee, R. B., III

    1995-01-01

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 2 details the techniques used to geolocate and calibrate the CERES scanning radiometer measurements of shortwave and longwave radiance, to invert the radiances to top-of-the-atmosphere (TOA) and surface fluxes following the Earth Radiation Budget Experiment (ERBE) approach, and to average the fluxes over various time and spatial scales to produce an ERBE-like product. Spacecraft ephemeris and sensor telemetry are used with calibration coefficients to produce a chronologically ordered data product called bidirectional scan (BDS) radiances. A spatially organized instrument Earth scan product is developed for the cloud-processing subsystem. The ERBE-like inversion subsystem converts BDS radiances to unfiltered instantaneous TOA and surface fluxes. The TOA fluxes are determined by using established ERBE techniques. Hourly TOA fluxes are computed from the instantaneous values by using ERBE methods. Hourly surface fluxes are estimated from TOA fluxes by using simple parameterizations based on recent research. The averaging process produces daily, monthly-hourly, and monthly means of TOA and surface fluxes at various scales. This product provides a continuation of the ERBE record.

  19. Theoretical analysis of rolled joints

    International Nuclear Information System (INIS)

    Sinha, R.K.

    1975-01-01

    A procedure for theoretically analysing the case of an externally restrained sandwich joint formed by a hypothetical uniform hydrostatic expansion process is outlined. Reference is made to a computer program based on this theory. Results illustrating the effect of major joint variables on residual contact pressure are presented and analysed. The applicability and limitations of this theory are discussed. (author)

  20. Bulgarian ethnos according to A.Kh. Khalikov's works: scientific concept and its theoretical bases

    Directory of Open Access Journals (Sweden)

    Izmaylov Iskander L.

    2017-02-01

    Full Text Available The article is devoted to the problems of Bulgar and Tatar ethnogenesis studied in the works of the prominent Kazan archaeologist A.Kh. Khalikov. His concept was based on the fact that a number of ethnic groups (Turkic, Finno-Ugric, and East Slavic participated in the formation of these peoples and that the key role in these processes was played by their mutual cultural influence. The concept of ethnogenesis and ethnic history of the Tatar people offered by A.Kh. Khalikov was a serious theoretical breakthrough against the background of both ideology-biased historical schemes of the Soviet era and the various nationalist ideas, differing from them by a comprehensive, integral scientific analysis of predominantly archaeological data. At present, however, when theoretical and factual bases of historical and ethnological research have considerably expanded, a number of conflicting issues have arisen in the framework of this concept, which, therefore, require new approaches to their solution.

  1. Theoretical bases and possibilities of program BRASIER for experimental data fitting and management

    International Nuclear Information System (INIS)

    Quintero, B.; Santos, J.; Garcia Yip, F.; Lopez, I.

    1992-01-01

    In this paper the theoretical bases and primary capabilities of the program BRASIER are shown. It was developed for the management and fitting of experimental data. Relevant characteristics are: utilization of several regression methods, error treatment, the 'Point-Drop Technique', multidimensional fitting, friendly interactivity, graphical capabilities and file management. The use of various regression methods results in a greater possibility of convergence with respect to other similar programs that use a single algorithm.

  2. Game Theoretic Analysis of Road Traffic Problems in Nigeria ...

    African Journals Online (AJOL)

    Game Theoretic Analysis of Road Traffic Problems in Nigeria. ... problems in Nigeria are analysed in the context of a social dilemma. Game theoretic models based on the famous ...

  3. A quantum theoretical study of reactions of methyldiazonium ion with DNA base pairs

    International Nuclear Information System (INIS)

    Shukla, P.K.; Ganapathy, Vinay; Mishra, P.C.

    2011-01-01

    Graphical abstract: Reactions of methyldiazonium ion at the different sites of the DNA bases in the Watson-Crick GC and AT base pairs were investigated employing density functional and second order Moller-Plesset (MP2) perturbation theories. Highlights: → Methylation of the DNA bases is important as it can cause mutation and cancer. → Methylation reactions of the GC and AT base pairs with CH3N2+ were not studied earlier theoretically. → Experimental observations have been explained using theoretical methods. - Abstract: Methylation of the DNA bases in the Watson-Crick GC and AT base pairs by the methyldiazonium ion was investigated employing density functional and second order Moller-Plesset (MP2) perturbation theories. Methylation at the N3, N7 and O6 sites of guanine, the N1, N3 and N7 sites of adenine, the O2 and N3 sites of cytosine and the O2 and O4 sites of thymine was considered. The computed reactivities for methylation follow the order N7(guanine) > N3(adenine) > O6(guanine), which is in agreement with experiment. The base pairing in DNA is found to play a significant role with regard to the reactivities of the different sites.

  4. Franchise Business Model: Theoretical Insights

    OpenAIRE

    Levickaitė, Rasa; Reimeris, Ramojus

    2010-01-01

    The article is based on literature review, theoretical insights, and deals with the topic of franchise business model. The objective of the paper is to analyse peculiarities of franchise business model and its developing conditions in Lithuania. The aim of the paper is to make an overview on franchise business model and its environment in Lithuanian business context. The overview is based on international and local theoretical insights. In terms of practical meaning, this article should be re...

  5. Theoretical thermal dosimetry produced by an annular phased array system in CT-based patient models

    International Nuclear Information System (INIS)

    Paulsen, K.D.; Strohbehn, J.W.; Lynch, D.R.

    1984-01-01

    Theoretical calculations for the specific absorption rate (SAR) and the resulting temperature distributions produced by an annular phased array (APA) type system are made. The finite element numerical method is used in the formulation of both the electromagnetic (EM) and the thermal boundary value problems. A number of detailed patient models based on CT-scan data from the pelvic, visceral, and thoracic regions are generated to simulate a variety of tumor locations and surrounding normal tissues. The SAR values from the EM solution are input into the bioheat transfer equation, and steady-state temperature distributions are calculated for a wide variety of blood flow rates. Based on theoretical modeling, the APA shows no preferential heating of superficial over deep-seated tumors. However, in most cases satisfactory thermal profiles (therapeutic volume near 60%) are obtained in all three regions of the human trunk only for tumors with little or no blood flow. Unsatisfactory temperature patterns (therapeutic volume <50%) are found for tumors with moderate to high perfusion rates. These theoretical calculations should aid the clinician in the evaluation of the effectiveness of APA type devices in heating tumors located in the trunk region.
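The pipeline this record describes, feeding SAR into the bioheat transfer equation to obtain steady-state temperatures, can be illustrated in one dimension with the Pennes equation k·T'' − w_b·c_b·(T − T_a) + q = 0 solved by finite differences (a deliberate simplification of the paper's finite element treatment of CT-based geometries; every tissue value below is an assumed illustration value):

```python
import numpy as np

# 1-D steady-state Pennes bioheat sketch: k*T'' - wb_cb*(T - Ta) + q = 0,
# discretized with central finite differences. All parameter values are
# assumed for illustration, not taken from the paper.
L, n = 0.1, 101                  # 10 cm tissue slab, number of grid points
dx = L / (n - 1)
k = 0.5                          # W/(m K), tissue thermal conductivity
wb_cb = 2000.0                   # W/(m^3 K), perfusion rate x blood heat capacity
Ta = 37.0                        # deg C, arterial blood temperature
q = np.zeros(n)
q[40:61] = 3.0e4                 # W/m^3, absorbed power (SAR x density) in the "tumor"

# Assemble the linear system A @ T = b for the interior nodes
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = k / dx**2
    A[i, i] = -2.0 * k / dx**2 - wb_cb
    b[i] = -wb_cb * Ta - q[i]
A[0, 0] = A[-1, -1] = 1.0        # fixed body-core temperature at both ends
b[0] = b[-1] = Ta
T = np.linalg.solve(A, b)
print(f"peak steady-state temperature: {T.max():.1f} deg C")
```

Raising the perfusion term `wb_cb` pulls the peak back toward 37 °C, which mirrors the record's finding that well-perfused tumors are hard to heat: the steady-state rise can never exceed q/(w_b·c_b).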

  6. Knowledge-based immunosuppressive therapy for kidney transplant patients--from theoretical model to clinical integration.

    Science.gov (United States)

    Seeling, Walter; Plischke, Max; de Bruin, Jeroen S; Schuh, Christian

    2015-01-01

    Immunosuppressive therapy is a risky necessity after a patient receives a kidney transplant. To reduce risks, a knowledge-based system was developed that determines the right dosage of the immunosuppressive agent Tacrolimus. A theoretical model to classify medication blood levels as well as medication adaptations was created using data from almost 500 patients and over 13,000 examinations. This model was then translated into an Arden Syntax knowledge base and integrated directly into the hospital information system of the Vienna General Hospital. In this paper we give an overview of the construction and integration of such a system.

  7. Substituent effect on redox potential of nitrido technetium complexes with Schiff base ligand. Theoretical calculations

    International Nuclear Information System (INIS)

    Takayama, T.; Sekine, T.; Kudo, H.

    2003-01-01

    Theoretical calculations based on the density functional theory (DFT) were performed to understand the effect of substituents on the molecular and electronic structures of technetium nitrido complexes with salen type Schiff base ligands. Optimized structures of these complexes are square pyramidal. The electron density on a Tc atom of the complex with electron withdrawing substituents is lower than that of the complex with electron donating substituents. The HOMO energy is lower in the complex with electron withdrawing substituents than that in the complex with electron donating substituents. The charge on Tc atoms is a good measure that reflects the redox potential of [TcN(L)] complex. (author)

  8. Complementary Exploratory and Confirmatory Factor Analyses of the French WISC-V: Analyses Based on the Standardization Sample.

    Science.gov (United States)

    Lecerf, Thierry; Canivez, Gary L

    2017-12-28

    Interpretation of the French Wechsler Intelligence Scale for Children-Fifth Edition (French WISC-V; Wechsler, 2016a) is based on a 5-factor model including Verbal Comprehension (VC), Visual Spatial (VS), Fluid Reasoning (FR), Working Memory (WM), and Processing Speed (PS). Evidence for the French WISC-V factorial structure was established exclusively through confirmatory factor analyses (CFAs). However, as recommended by Carroll (1995); Reise (2012), and Brown (2015), factorial structure should derive from both exploratory factor analysis (EFA) and CFA. The first goal of this study was to examine the factorial structure of the French WISC-V using EFA. The 15 French WISC-V primary and secondary subtest scaled scores intercorrelation matrix was used and factor extraction criteria suggested from 1 to 4 factors. To disentangle the contribution of first- and second-order factors, the Schmid and Leiman (1957) orthogonalization transformation (SLT) was applied. Overall, no EFA evidence for 5 factors was found. Results indicated that the g factor accounted for about 67% of the common variance and that the contributions of the first-order factors were weak (3.6 to 11.9%). CFA was used to test numerous alternative models. Results indicated that bifactor models produced better fit to these data than higher-order models. Consistent with previous studies, findings suggested dominance of the general intelligence factor and that users should thus emphasize the Full Scale IQ (FSIQ) when interpreting the French WISC-V. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
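The Schmid and Leiman (1957) orthogonalization used in this record has a compact linear-algebra form: general-factor loadings are the product of the first- and second-order loadings, and group-factor loadings are residualized by the variance the general factor leaves behind. A minimal sketch with made-up loadings (not the French WISC-V values):

```python
import numpy as np

# Hypothetical first-order pattern loadings (6 subtests on 2 group factors)
# and second-order loadings of those factors on g; all values are
# illustrative assumptions, not the French WISC-V estimates.
F = np.array([[0.8, 0.0],
              [0.7, 0.0],
              [0.6, 0.0],
              [0.0, 0.8],
              [0.0, 0.7],
              [0.0, 0.6]])
g2 = np.array([0.9, 0.8])        # second-order loadings on the general factor

# Schmid-Leiman orthogonalization: general-factor loadings are F @ g2;
# residualized group-factor loadings are F scaled by sqrt(1 - g2**2).
general = F @ g2
groups = F * np.sqrt(1.0 - g2**2)

common = np.sum(general**2) + np.sum(groups**2)
print("g share of common variance:", np.sum(general**2) / common)
```

With these (assumed) strong second-order loadings, the general factor absorbs most of the common variance, the same pattern the record reports for the French WISC-V.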

  9. Principles and software realization of a multimedia course on theoretical electrical engineering based on enterprise technology

    Directory of Open Access Journals (Sweden)

    Penev Krasimir

    2003-01-01

    Full Text Available The Department of Theoretical Electrical Engineering (TEE) of the Technical University of Sofia has been developing an interactive enterprise-technology-based course on Theoretical Electrical Engineering. One side of the project is the development of multimedia teaching modules for the core undergraduate electrical engineering courses (Circuit Theory and Electromagnetic Fields), and the other side is the development of the software architecture of the web site on which the modules are deployed. Initial efforts have been directed at the development of multimedia modules for the subject Electrical Circuits and on developing the web site structure. The objective is to develop teaching materials that will enhance lectures and laboratory exercises and will allow computerized examinations on the subject. This article outlines the framework used to develop the web site structure, the Circuit Theory teaching modules, and the strategy of their use as a teaching tool.

  10. Theoretical Bound of CRLB for Energy Efficient Technique of RSS-Based Factor Graph Geolocation

    Science.gov (United States)

    Kahar Aziz, Muhammad Reza; Heriansyah; Saputra, Efa Maydhona; Musa, Ardiansyah

    2018-03-01

    To support the growth of wireless geolocation development as a key technology in the future, this paper proposes a theoretical bound derivation, i.e., the Cramer-Rao lower bound (CRLB), for the energy-efficient received signal strength (RSS)-based factor graph wireless geolocation technique. The theoretical bound derivation is crucially important to evaluate whether the energy-efficient RSS-based factor graph wireless geolocation technique is effective, as well as to open the opportunity for further innovation of the technique. The CRLB is derived in this paper by using the Fisher information matrix (FIM) of the main formula of the RSS-based factor graph geolocation technique, which relies on the Jacobian matrix. The simulation result shows that the derived CRLB has the highest accuracy as a bound, shown by its lowest root mean squared error (RMSE) curve compared to the RMSE curve of the RSS-based factor graph geolocation technique. Hence, the derived CRLB becomes the lower bound for the energy-efficient RSS-based factor graph wireless geolocation technique.
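The derivation pattern described here, building the Fisher information matrix from the Jacobian of the RSS model, can be sketched for a standard log-distance path-loss model with Gaussian shadowing. The anchor layout, path-loss exponent, and noise level below are assumed illustration values, not the paper's settings:

```python
import numpy as np

# CRLB sketch for RSS-based positioning: mean RSS follows the log-distance
# model P0 - 10*n*log10(d), shadowing is N(0, sigma^2). Anchors, exponent,
# and sigma are assumed values for illustration.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
target = np.array([30.0, 40.0])
n_exp = 3.0                      # path-loss exponent
sigma = 4.0                      # shadowing standard deviation, dB

d = np.linalg.norm(anchors - target, axis=1)
# Jacobian of the mean RSS w.r.t. the target position: each row is
# -(10*n/ln 10) * (target - anchor) / d^2.
J = -(10.0 * n_exp / np.log(10.0)) * (target - anchors) / d[:, None]**2
fim = J.T @ J / sigma**2         # Fisher information matrix
crlb = np.linalg.inv(fim)        # covariance lower bound for any unbiased estimator
rmse_bound = np.sqrt(np.trace(crlb))
print(f"CRLB position RMSE bound: {rmse_bound:.2f} m")
```

The trace of the inverse FIM bounds the achievable position error variance, which is why the paper's CRLB curve sits below the RMSE curve of the factor graph estimator.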

  11. Model-based performance and energy analyses of reverse osmosis to reuse wastewater in a PVC production site.

    Science.gov (United States)

    Hu, Kang; Fiedler, Thorsten; Blanco, Laura; Geissen, Sven-Uwe; Zander, Simon; Prieto, David; Blanco, Angeles; Negro, Carlos; Swinnen, Nathalie

    2017-11-10

    A pilot-scale reverse osmosis (RO) unit downstream of a membrane bioreactor (MBR) was developed for desalination to reuse wastewater in a PVC production site. The solution-diffusion-film model (SDFM), based on the solution-diffusion model (SDM) and film theory, was proposed to describe rejections of electrolyte mixtures in the MBR effluent, which consists of dominant ions (Na+ and Cl-) and several trace ions (Ca2+, Mg2+, K+ and SO42-). A universal global optimisation method was used to estimate the ion permeability coefficients (B) and mass transfer coefficients (K) in the SDFM. The membrane performance was then evaluated based on the estimated parameters, which demonstrated that the theoretical simulations were in line with the experimental results for the dominant ions. Moreover, an energy analysis model accounting for the limitation imposed by the thermodynamic restriction was proposed to analyse the specific energy consumption of the pilot-scale RO system in various scenarios.
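Combining the solution-diffusion flux law J_s = B(c_m − c_p) with film-theory concentration polarization, c_m = c_p + (c_f − c_p)·exp(J_w/K), gives an observed rejection R_obs = J_w / (J_w + B·exp(J_w/K)). A small sketch of this relation (the flux, B, and K values are assumed for illustration, not the coefficients estimated in the paper):

```python
import math

# Sketch of the solution-diffusion-film model (SDFM) observed rejection:
#   R_obs = J_w / (J_w + B * exp(J_w / K))
# obtained by combining the solution-diffusion flux J_s = B*(c_m - c_p)
# with film-theory concentration polarization. Values are assumed.

def observed_rejection(jw, B, K):
    """Observed rejection for permeate flux jw (m/s), ion permeability
    coefficient B (m/s), and mass transfer coefficient K (m/s)."""
    return jw / (jw + B * math.exp(jw / K))

jw = 1.0e-5   # permeate flux
B = 1.0e-7    # ion permeability coefficient
K = 2.0e-5    # film mass transfer coefficient
print(f"R_obs = {observed_rejection(jw, B, K):.4f}")
```

As K grows (negligible polarization), the expression reduces to the intrinsic rejection J_w/(J_w + B); finite K lowers the observed rejection, which is what the film term captures.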

  12. Theoretical Aspects of Cross-border Integration-based Economic Cooperation

    Directory of Open Access Journals (Sweden)

    Bilchak V.

    2014-09-01

    Full Text Available In this article the author analyses theoretical aspects of border economy in the conditions of modern processes of integration. The author describes the existing schools and concepts of integration, stressing the role of government regulation relating to the deformations in the development of the world economic mechanism. Modern studies focus on the evolution of integration processes, which has largely affected the key elements of the world economic mechanism: from classical political economy and monopoly regulation, through monopolistic competition, imperfect competition and oligopoly, across all fields and poles of economic growth, to certain elements of government regulation and social reproduction on the international scale. The author examines the key elements and stages of economic integration. These stages assume a number of consecutive forms: free trade zone, customs union, common market, complete economic integration, and economic union. The article shows that the transition occurs from the lowest to the highest stages, from the processes of integration involving, firstly, the trade market and then capital and labour markets, to the integration of the social sphere. The theoretical aspects of all these transformations can be easily traced in the case of EU integration processes.

  13. Conceptual aspects: analyses law, ethical, human, technical, social factors of development ICT, e-learning and intercultural development in different countries setting out the previous new theoretical model and preliminary findings

    NARCIS (Netherlands)

    Kommers, Petrus A.M.; Smyrnova-Trybulska, Eugenia; Morze, Natalia; Issa, Tomayess; Issa, Theodora

    2015-01-01

    This paper, prepared by an international team of authors, focuses on the conceptual aspects: it analyses the legal, ethical, human, technical and social factors of ICT development, e-learning and intercultural development in different countries, setting out the previous and new theoretical model and preliminary findings.

  14. Meta-Theoretical Contributions to the Constitution of a Model-Based Didactics of Science

    Science.gov (United States)

    Ariza, Yefrin; Lorenzano, Pablo; Adúriz-Bravo, Agustín

    2016-10-01

    There is nowadays consensus in the community of didactics of science (i.e. science education understood as an academic discipline) regarding the need to include the philosophy of science in didactical research, science teacher education, curriculum design, and the practice of science education in all educational levels. Some authors have identified an ever-increasing use of the concept of `theoretical model', stemming from the so-called semantic view of scientific theories. However, it can be recognised that, in didactics of science, there are over-simplified transpositions of the idea of model (and of other meta-theoretical ideas). In this sense, contemporary philosophy of science is often blurred or distorted in the science education literature. In this paper, we address the discussion around some meta-theoretical concepts that are introduced into didactics of science due to their perceived educational value. We argue for the existence of a `semantic family', and we characterise four different versions of semantic views existing within the family. In particular, we seek to contribute to establishing a model-based didactics of science mainly supported in this semantic family.

  15. Theoretical strengthening of the concept of appealing in analysed sermons on Matthew 25:31–46 in the context of poverty in South Africa

    Directory of Open Access Journals (Sweden)

    Hennie J.C. Pieterse

    2013-08-01

    Full Text Available From a qualitative grounded theory analysis of a sample of 26 sermons with Matthew 25:31–46 as sermon text, a rhetorical structure of how the preachers try to convince their listeners to care for the poor emerged. The homiletical concept of appealing related to all the categories borne out of the analysis of the inner world of the 26 sermons, and also to the categories showing this rhetorical structure in the sermons. The article discusses what the dimensions are in the concept of appealing borne out of the sermons in which the rhetorical structure was apparent, which rhetorical theory would fit as theoretical base for the concept of appealing in its relationship with the rhetorical structure in the sermons, and what dilemma the preachers face when they try to convince their listeners to participate in the care for the poor. The rhetorical theory of deliberative rhetoric (Aristotle) and the classical theory with the three dimensions logos, ethos and pathos is discussed in this article as theoretical thickening of the concept of appealing to the listeners of the sermons. This article attempts to demonstrate how to go about theorising from a grounded theory analysis of sermons with Matthew 25:31–46 as a sermon text with, as result, a theory that could help preachers in preaching from this text in the context of poverty in South Africa.

  16. Advanced exergy-based analyses applied to a system including LNG regasification and electricity generation

    Energy Technology Data Exchange (ETDEWEB)

    Morosuk, Tatiana; Tsatsaronis, George; Boyano, Alicia; Gantiva, Camilo [Technische Univ. Berlin (Germany)

    2012-07-01

    Liquefied natural gas (LNG) will contribute more in the future than in the past to the overall energy supply in the world. The paper discusses the application of advanced exergy-based analyses to a recently developed LNG-based cogeneration system. These analyses include advanced exergetic, advanced exergoeconomic, and advanced exergoenvironmental analyses in which thermodynamic inefficiencies (exergy destruction), costs, and environmental impacts have been split into avoidable and unavoidable parts. With the aid of these analyses, the potentials for improving the thermodynamic efficiency and for reducing the overall cost and the overall environmental impact are revealed. The objectives of this paper are to demonstrate (a) the potential for generating electricity while regasifying LNG and (b) some of the capabilities associated with advanced exergy-based methods. The most important subsystems and components are identified, and suggestions for improving them are made. (orig.)
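The avoidable/unavoidable split mentioned in this record is commonly computed, in advanced exergetic analysis, as E_D,k^UN = E_P,k · (E_D/E_P)_k^UN, where the ratio comes from the component operated at its best practically achievable conditions; the remainder of the exergy destruction is the avoidable part. A minimal sketch with assumed numbers (not the LNG system's values):

```python
# Sketch of the avoidable/unavoidable split of exergy destruction used in
# advanced exergetic analysis: E_D_un = E_P * (E_D/E_P)_un, and the
# avoidable part is the remainder. All numbers are illustrative.

def split_exergy_destruction(e_p, e_d, ratio_unavoidable):
    """Return (unavoidable, avoidable) exergy destruction, in kW, for a
    component with product exergy e_p, total destruction e_d, and the
    (E_D/E_P) ratio of its unavoidable-conditions design."""
    e_d_un = e_p * ratio_unavoidable
    return e_d_un, e_d - e_d_un

e_p, e_d = 500.0, 120.0          # kW: product exergy and exergy destruction
un, av = split_exergy_destruction(e_p, e_d, ratio_unavoidable=0.15)
print(f"unavoidable: {un:.1f} kW, avoidable: {av:.1f} kW")
```

Only the avoidable part signals a genuine improvement potential, which is how the analysis ranks the subsystems worth redesigning.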

  17. The neural mediators of kindness-based meditation: a theoretical model

    Directory of Open Access Journals (Sweden)

    Jennifer Streiffer Mascaro

    2015-02-01

    Full Text Available Although kindness-based contemplative practices are increasingly employed by clinicians and cognitive researchers to enhance prosocial emotions, social cognitive skills, and well-being, and as a tool to understand the basic workings of the social mind, we lack a coherent theoretical model with which to test the mechanisms by which kindness-based meditation may alter the brain and body. Here we link contemplative accounts of compassion and loving-kindness practices with research from social cognitive neuroscience and social psychology to generate predictions about how diverse practices may alter brain structure and function and related aspects of social cognition. Contingent on the nuances of the practice, kindness-based meditation may enhance the neural systems related to faster and more basic perceptual or motor simulation processes, simulation of another’s affective body state, slower and higher-level perspective-taking, modulatory processes such as emotion regulation and self/other discrimination, and combinations thereof. This theoretical model will be discussed alongside best practices for testing such a model and potential implications and applications of future work.

  18. Training-Based Interventions in Motor Rehabilitation after Stroke: Theoretical and Clinical Considerations

    Directory of Open Access Journals (Sweden)

    Annette Sterr

    2004-01-01

    Full Text Available Basic neuroscience research on brain plasticity, motor learning and recovery has stimulated new concepts in neurological rehabilitation. Combined with the development of methodological standards in clinical outcome research, these findings have led to a double paradigm shift in motor rehabilitation: (a) the move towards evidence-based procedures for the assessment of clinical outcome and the employment of disablement models to anchor outcome parameters, and (b) the introduction of practice-based concepts that are derived from testable models that specify treatment mechanisms. In this context, constraint-induced movement therapy (CIT) has played a catalytic role in taking motor rehabilitation forward into the scientific arena. As a theoretically founded and hypothesis-driven intervention, CIT research focuses on two main issues. The first issue is the assessment of long-term clinical benefits in an increasing range of patient groups, and the second is the investigation of neuronal and behavioural treatment mechanisms and their interactive contribution to treatment success. These studies are mainly conducted in the research environment and will eventually lead to increased treatment benefits for patients in standard health care. However, gradual but presumably more immediate benefits for patients may be achieved by introducing and testing derivatives of the CIT concept that are more compatible with current clinical practice. Here, we summarize the theoretical and empirical issues related to the translation of research-based CIT work into the clinical context of standard health care.

  19. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    Directory of Open Access Journals (Sweden)

    Ross S Williamson

    2015-04-01

    Full Text Available Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
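The equivalence stated in this record can be checked numerically: with a histogram ("plugin") estimate of the nonlinearity, the per-spike Poisson log-likelihood and the empirical single-spike information differ only by a constant across candidate filters, so both objectives select the same stimulus dimension. A simulation sketch under assumed settings (2-D Gaussian stimuli, exponential nonlinearity; none of these are the paper's examples):

```python
import numpy as np

# Numerical check of the MID / maximum-likelihood equivalence for an LNP
# neuron with a histogram ("plugin") nonlinearity. Simulation settings are
# assumed for illustration.
rng = np.random.default_rng(0)
N, dt = 200_000, 0.01
X = rng.standard_normal((N, 2))                 # 2-D Gaussian stimuli
k = rng.poisson(np.exp(1.5 * X[:, 0]) * dt)     # true filter = x-axis
nsp = k.sum()

def objectives(theta, bins=25):
    """(single-spike information, Poisson log-likelihood per spike) for a
    candidate filter at angle theta from the true filter."""
    s = X @ np.array([np.cos(theta), np.sin(theta)])
    edges = np.linspace(s.min(), s.max(), bins + 1)
    idx = np.clip(np.digitize(s, edges) - 1, 0, bins - 1)
    n_all = np.bincount(idx, minlength=bins)                # stimulus counts
    n_spk = np.bincount(idx, weights=k, minlength=bins)     # spike counts
    nz = n_spk > 0
    # empirical single-spike information: KL(P(s|spike) || P(s))
    info = np.sum((n_spk[nz] / nsp) *
                  np.log((n_spk[nz] / nsp) / (n_all[nz] / N)))
    # Poisson log-likelihood with the plugin rate; the integrated-rate term
    # sum(lam * dt * n_all) collapses to nsp for the histogram estimator
    lam = n_spk[nz] / (n_all[nz] * dt)
    ll = np.sum(n_spk[nz] * np.log(lam)) - nsp
    return info, ll / nsp

thetas = np.linspace(0.0, np.pi / 2, 7)
vals = np.array([objectives(t) for t in thetas])
# per-spike log-likelihood = single-spike information + constant:
print("offset spread across directions:", np.ptp(vals[:, 1] - vals[:, 0]))
```

Because the offset is constant in the filter direction, maximizing either quantity recovers the same filter, which is the MID-as-MLE claim of the record.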

  20. Theoretical analysis and experimental evaluation of a CsI(Tl) based electronic portal imaging system

    International Nuclear Information System (INIS)

    Sawant, Amit; Zeman, Herbert; Samant, Sanjiv; Lovhoiden, Gunnar; Weinberg, Brent; DiBianca, Frank

    2002-01-01

    This article discusses the design and analysis of a portal imaging system based on a thick transparent scintillator. A theoretical analysis using Monte Carlo simulation was performed to calculate the x-ray quantum detection efficiency (QDE), signal to noise ratio (SNR) and the zero frequency detective quantum efficiency [DQE(0)] of the system. A prototype electronic portal imaging device (EPID) was built, using a 12.7 mm thick, 20.32 cm diameter, CsI(Tl) scintillator, coupled to a liquid nitrogen cooled CCD TV camera. The system geometry of the prototype EPID was optimized to achieve high spatial resolution. The experimental evaluation of the prototype EPID involved the determination of contrast resolution, depth of focus, light scatter and mirror glare. Images of humanoid and contrast detail phantoms were acquired using the prototype EPID and were compared with those obtained using conventional and high contrast portal film and a commercial EPID. A theoretical analysis was also carried out for a proposed full field of view system using a large area, thinned CCD camera and a 12.7 mm thick CsI(Tl) crystal. Results indicate that this proposed design could achieve DQE(0) levels up to 11%, due to its order of magnitude higher QDE compared to phosphor screen-metal plate based EPID designs, as well as significantly higher light collection compared to conventional TV camera based systems

  1. Theoretical study of solar combisystems based on bikini tanks and tank-in-tank stores

    DEFF Research Database (Denmark)

    Yazdanshenas, Eshagh; Furbo, Simon

    2012-01-01

    Purpose - Low flow bikini solar combisystems and high flow tank-in-tank solar combisystems have been studied theoretically. The aim of the paper is to study which of these two solar combisystem designs is suitable for different houses. The thermal performance of solar combisystems based on the two ... Originality/value - Many different solar combisystem designs have been commercialized over the years. In the IEA-SHC Task 26, twenty-one solar combisystems have been described and analyzed. Maybe the mantle tank approach can also be used with advantage for solar combisystems? This might be possible if the solar heating system is based on a so-called bikini tank. Therefore the newly developed solar combisystems based on bikini tanks are compared to the tank-in-tank solar combisystems to elucidate which one is suitable for three different houses: with a low energy heating demand, a medium heating demand and a high heating demand.

  2. GIS-based Approaches to Catchment Area Analyses of Mass Transit

    DEFF Research Database (Denmark)

    Andersen, Jonas Lohmann Elkjær; Landex, Alex

    2009-01-01

Catchment area analyses of stops or stations are used to investigate the potential number of travelers to public transportation. These analyses are considered a strong decision tool in the planning process of mass transit, especially railroads. Catchment area analyses are GIS-based buffer and overlay analyses with different approaches depending on the desired level of detail. A simple but straightforward approach to implement is the Circular Buffer Approach, where catchment areas are circular. A more detailed approach is the Service Area Approach, where catchment areas are determined by a street network search to simulate the actual walking distances. A refinement of the Service Area Approach is to implement additional time resistance in the network search to simulate obstacles in the walking environment. This paper reviews and compares the different GIS-based catchment area approaches, their level …
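The two approaches contrasted in this abstract can be sketched in a few lines. Below, the Circular Buffer Approach is a Euclidean distance test in projected coordinates, and the Service Area Approach is a Dijkstra search over a street network; the stop location, points and street graph are hypothetical illustrations, not data from the paper.

```python
import heapq
from math import hypot

def circular_catchment(stop_xy, points, radius):
    """Circular Buffer Approach: all points within Euclidean distance
    `radius` (metres, projected coordinates) of the stop."""
    sx, sy = stop_xy
    return [p for p in points if hypot(p[0] - sx, p[1] - sy) <= radius]

def network_catchment(graph, stop_node, radius):
    """Service Area Approach: nodes reachable within `radius` metres of
    walking distance along the street network (Dijkstra search).
    `graph` maps node -> list of (neighbour, edge_length) pairs."""
    dist = {stop_node: 0.0}
    heap = [(0.0, stop_node)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd <= radius and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return set(dist)

# Hypothetical street network: the detour via node A makes B lie outside
# a 500 m service area even though it may fall inside a 500 m circle.
street = {"stop": [("A", 400.0)],
          "A": [("stop", 400.0), ("B", 400.0)],
          "B": [("A", 400.0)]}
```

Adding a time penalty per crossed obstacle to the edge weights gives the refined variant mentioned in the abstract.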

  3. Meta-Analyses of Human Cell-Based Cardiac Regeneration Therapies

    DEFF Research Database (Denmark)

    Gyöngyösi, Mariann; Wojakowski, Wojciech; Navarese, Eliano P

    2016-01-01

    In contrast to multiple publication-based meta-analyses involving clinical cardiac regeneration therapy in patients with recent myocardial infarction, a recently published meta-analysis based on individual patient data reported no effect of cell therapy on left ventricular function or clinical...

  4. Transport simulations TFTR: Theoretically-based transport models and current scaling

    International Nuclear Information System (INIS)

    Redi, M.H.; Cummings, J.C.; Bush, C.E.; Fredrickson, E.; Grek, B.; Hahm, T.S.; Hill, K.W.; Johnson, D.W.; Mansfield, D.K.; Park, H.; Scott, S.D.; Stratton, B.C.; Synakowski, E.J.; Tang, W.M.; Taylor, G.

    1991-12-01

In order to study the microscopic physics underlying observed L-mode current scaling, 1-1/2-d BALDUR has been used to simulate density and temperature profiles for high and low current, neutral beam heated discharges on TFTR with several semi-empirical, theoretically-based models previously compared for TFTR, including several versions of trapped electron drift wave driven transport. Experiments at TFTR, JET and DIII-D show that Ip scaling of τE does not arise from edge modes as previously thought, and is most likely to arise from nonlocal processes or from the Ip-dependence of local plasma core transport. Consistent with this, it is found that strong current scaling does not arise from any of several edge models of resistive ballooning. Simulations with the profile-consistent drift wave model and with a new model for toroidal collisionless trapped electron mode core transport in a multimode formalism lead to strong current scaling of τE for the L-mode cases on TFTR. None of the theoretically-based models succeeded in simulating the measured temperature and density profiles for both high and low current experiments.

  5. Theoretical and experimental studies on ionic currents in nanopore-based biosensors.

    Science.gov (United States)

    Liu, Lei; Li, Chu; Ma, Jian; Wu, Yingdong; Ni, Zhonghua; Chen, Yunfei

    2014-12-01

A novel generation of analytical technology based on nanopores has provided possibilities to fabricate nanofluidic devices for low-cost DNA sequencing and rapid biosensing. In this paper, a simplified model is proposed to describe DNA translocation through a nanopore, and the internal potential, ion concentration, ionic flow velocity and ionic current in nanopores of different sizes are theoretically calculated and discussed on the basis of the Poisson-Boltzmann, Navier-Stokes and Nernst-Planck equations, considering several important parameters such as the applied voltage, the thickness and the electric potential distribution in the nanopores. In this way, the basic ionic currents, the modulated ionic currents and the current drops induced by translocation were obtained, and the size effects of the nanopores were carefully compared and discussed based on the calculated results and experimental data, which indicated that nanopores with a size of about 10 nm are more advantageous for achieving high-quality ionic current signals in DNA sensing.
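The paper solves the coupled continuum equations numerically, but the size effect on the baseline current and blockade depth can be illustrated with the widely used closed-form approximation for a cylindrical pore with access resistance. This is a simplified stand-in for the full Poisson-Boltzmann/Nernst-Planck treatment, and the conductivity, geometry and voltage values below are illustrative only.

```python
from math import pi, sqrt

def pore_conductance(sigma, d, L):
    """Open-pore conductance of a cylindrical nanopore of diameter d and
    length L in electrolyte of conductivity sigma, including access
    resistance: G = sigma * [4L/(pi d^2) + 1/d]^(-1)."""
    return sigma / (4.0 * L / (pi * d ** 2) + 1.0 / d)

def blockade_current_drop(sigma, d, L, d_rod, voltage):
    """Approximate current drop when a rod of diameter d_rod (dsDNA is
    ~2.2 nm) occupies the pore: the open cross-section shrinks to an
    effective diameter sqrt(d^2 - d_rod^2)."""
    d_eff = sqrt(d ** 2 - d_rod ** 2)
    return (pore_conductance(sigma, d, L)
            - pore_conductance(sigma, d_eff, L)) * voltage

# Illustrative numbers: ~1 M KCl (sigma ~ 10.5 S/m), a 10 nm diameter,
# 20 nm long pore, 100 mV bias.
g_open = pore_conductance(10.5, 10e-9, 20e-9)        # tens of nS
delta_i = blockade_current_drop(10.5, 10e-9, 20e-9, 2.2e-9, 0.1)
```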

  6. Graph theoretical analysis and application of fMRI-based brain network in Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    LIU Xue-na

    2012-08-01

Alzheimer's disease (AD), a progressive neurodegenerative disease, is clinically characterized by impaired memory and many other cognitive functions. However, the pathophysiological mechanisms underlying the disease are not thoroughly understood. In recent years, using functional magnetic resonance imaging (fMRI) as well as advanced graph-theory-based network analysis approaches, several studies of patients with AD suggested abnormal topological organization in both global and regional properties of functional brain networks, specifically demonstrated by a loss of small-world network characteristics. These studies provide novel insights into the pathophysiological mechanisms of AD and could be helpful in developing imaging biomarkers for disease diagnosis. In this paper we introduce the essential concepts of complex brain network theory and review recent advances in the study of human functional brain networks in AD, especially focusing on graph-theoretical analysis of small-world networks based on fMRI. We also outline open problems and future research directions.
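The small-world characteristics mentioned here are usually quantified by the mean clustering coefficient C and the characteristic path length L, compared against random-graph expectations. The sketch below is a generic illustration on an adjacency-set graph, not the authors' pipeline; the random-graph estimates C_rand ≈ k/n and L_rand ≈ ln(n)/ln(k) are the standard approximations used in the Humphries-Gurney small-world index.

```python
from collections import deque
from math import log

def avg_clustering(adj):
    """Mean local clustering coefficient of an undirected graph given as
    {node: set(neighbours)} with orderable (e.g. integer) node labels."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all connected node pairs (BFS)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def small_world_sigma(adj):
    """Small-world index sigma = (C/C_rand)/(L/L_rand); sigma > 1
    indicates small-world organization."""
    n = len(adj)
    k = sum(len(v) for v in adj.values()) / n
    c_rand, l_rand = k / n, log(n) / log(k)
    return (avg_clustering(adj) / c_rand) / (avg_path_length(adj) / l_rand)
```

In fMRI studies the adjacency sets would come from thresholding a region-by-region correlation matrix; a loss of small-worldness in AD shows up as sigma moving toward 1.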

7. Activity-based costing in sport organizations: Theoretical background & future prospects

    Directory of Open Access Journals (Sweden)

    PANAGIOTIS E. DIMITROPOULOS

    2007-01-01

Costing systems have shown significant development in recent years, and activity-based costing (ABC) specifically has been considered a major contribution to cost management, particularly in service businesses. The sport sector is composed to a great extent of service functions, yet considerably less has been reported on the use of activity-based costing to support cost management in sport organizations. Since the power of information becomes continuously crucial for the implementation of effective business administration, the traditional methods of cost measurement proved insufficient on this issue, leading to the invention of ABC. The aim of this paper is twofold: first, to present the main theoretical background of ABC and its substantiated benefits, and second, to present some practical steps for the implementation of ABC in sport organizations.

  8. Calculating the Fee-Based Services of Library Institutions: Theoretical Foundations and Practical Challenges

    Directory of Open Access Journals (Sweden)

    Sysіuk Svitlana V.

    2017-05-01

The article aims to highlight the features of the provision of fee-based services by library institutions, to identify problems related to the legal and regulatory framework for their calculation and the methods to implement it, and to develop recommendations to improve the calculation of fee-based library services. The theoretical foundations have been systematized, and the need to develop a Provision on the procedure for fee-based services by library institutions has been substantiated. Such a Provision would protect library institutions from errors in fixing the fee for a paid service and would serve as an informational source explaining it. The appropriateness of applying the market pricing law based on demand and supply has been substantiated. The development and improvement of accounting and calculation, taking into consideration both industry-specific and market-based conditions, would optimize the costs and revenues generated by the provision of fee-based services. In addition, combining calculation levers with the development of a system of internal accounting and the use of its methodology provides another equally efficient way of improving the efficiency of library institutions' activity.

  9. Metamaterial-based theoretical description of light scattering by metallic nano-hole array structures

    Energy Technology Data Exchange (ETDEWEB)

Singh, Mahi R. [Department of Physics and Astronomy, University of Western Ontario, London N6A 3K7 (Canada); Najiminaini, Mohamadreza; Carson, Jeffrey J. L. [Lawson Health Research Institute, St. Joseph's Health Care, 268 Grosvenor Street, London N6A 4V2 (Canada); Department of Medical Biophysics, University of Western Ontario, London N6A 3K7 (Canada); Balakrishnan, Shankar [Department of Physics and Astronomy, University of Western Ontario, London N6A 3K7 (Canada); Lawson Health Research Institute, St. Joseph's Health Care, 268 Grosvenor Street, London N6A 4V2 (Canada); Department of Medical Biophysics, University of Western Ontario, London N6A 3K7 (Canada)

    2015-05-14

We have experimentally and theoretically investigated the light-matter interaction in metallic nano-hole array structures. The scattering cross section spectrum was measured for three samples, each having a unique nano-hole array radius and periodicity. Each measured spectrum had several peaks due to surface plasmon polaritons. The dispersion relation and the effective dielectric constant of the structure were calculated using transmission line theory and Bloch's theorem. Using the effective dielectric constant and the transfer matrix method, the surface plasmon polariton energies were calculated and found to be quantized. Using these quantized energies, a Hamiltonian for the surface plasmon polaritons was written in second-quantized form. Working with the Hamiltonian, a theory of the scattering cross section was developed based on quantum scattering theory and the Green's function method. For both theory and experiment, the location of the surface plasmon polariton spectral peaks was dependent on the array periodicity and the radii of the nano-holes. Good agreement was observed between the experimental and theoretical results. It is proposed that the newly developed theory can be used to facilitate optimization of nanosensors for medical and engineering applications.
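The transfer-matrix machinery invoked here is easiest to see in its simplest setting: the normal-incidence characteristic matrix of a planar layer stack. The sketch below is a much-simplified cousin of the paper's treatment (no nano-holes, no effective dielectric constant from transmission-line theory), included only to illustrate how layer matrices are chained and a reflectance extracted.

```python
import cmath
from math import pi

def reflectance(n_layers, d_layers, n_in, n_sub, wavelength):
    """Normal-incidence reflectance of a stack of non-magnetic layers
    (refractive indices n_layers, thicknesses d_layers) between an
    incident medium n_in and substrate n_sub, via the characteristic
    (transfer) matrix of each layer."""
    M = [[1.0, 0.0], [0.0, 1.0]]  # identity: empty stack
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * pi * n * d / wavelength      # layer phase thickness
        m = [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
             [1j * n * cmath.sin(delta), cmath.cos(delta)]]
        M = [[M[0][0] * m[0][0] + M[0][1] * m[1][0],
              M[0][0] * m[0][1] + M[0][1] * m[1][1]],
             [M[1][0] * m[0][0] + M[1][1] * m[1][0],
              M[1][0] * m[0][1] + M[1][1] * m[1][1]]]
    B = M[0][0] + M[0][1] * n_sub
    C = M[1][0] + M[1][1] * n_sub
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# Bare glass interface: R = ((1 - 1.5)/(1 + 1.5))^2 = 0.04.
r_bare = reflectance([], [], 1.0, 1.5, 550e-9)
# Quarter-wave antireflection layer with n1 = sqrt(n_in * n_sub): R ~ 0.
r_ar = reflectance([1.5], [550e-9 / 6.0], 1.0, 2.25, 550e-9)
```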

  10. Theoretical rationalization for reduced charge recombination in bulky carbazole-based sensitizers in solar cells.

    Science.gov (United States)

    Surakhot, Yaowarat; Laszlo, Viktor; Chitpakdee, Chirawat; Promarak, Vinich; Sudyoadsuk, Taweesak; Kungwan, Nawee; Kowalczyk, Tim; Irle, Stephan; Jungsuttiwong, Siriporn

    2017-05-05

The search for greater efficiency in organic dye-sensitized solar cells (DSCs) and in their perovskite cousins is greatly aided by a more complete understanding of the spectral and morphological properties of the photoactive layer. This investigation resolves a discrepancy in the observed photoconversion efficiency (PCE) of two closely related DSCs based on carbazole-containing D-π-A organic sensitizers. Detailed theoretical characterization of the absorption spectra, dye adsorption on TiO 2 , and electronic couplings for charge separation and recombination permit a systematic determination of the origin of the difference in PCE. Although the two dyes produce similar spectral features, ground- and excited-state density functional theory (DFT) simulations reveal that the dye with the bulkier donor group adsorbs more strongly to TiO 2 , experiences limited π-π aggregation, and is more resistant to loss of excitation energy via charge recombination on the dye. The effects of conformational flexibility on absorption spectra and on the electronic coupling between the bright exciton and charge-transfer states are revealed to be substantial and are characterized through density-functional tight-binding (DFTB) molecular dynamics sampling. These simulations offer a mechanistic explanation for the superior open-circuit voltage and short-circuit current of the bulky-donor dye sensitizer and provide theoretical justification of an important design feature for the pursuit of greater photocurrent efficiency in DSCs. © 2017 Wiley Periodicals, Inc.

  11. A Game-theoretic Framework for Network Coding Based Device-to-Device Communications

    KAUST Repository

    Douik, Ahmed S.; Sorour, Sameh; Tembine, Hamidou; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2016-01-01

This paper investigates the delay minimization problem for instantly decodable network coding (IDNC) based device-to-device (D2D) communications. In D2D-enabled systems, users cooperate to recover all their missing packets. The paper proposes a game-theoretic framework as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. The session is modeled by self-interested players in a non-cooperative potential game. The utility functions are designed so that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and a Nash equilibrium. Three games are developed: the first reduces the completion time, the second the maximum decoding delay, and the third the sum decoding delay. The paper further improves the formulations by including a punishment policy upon collision occurrence so as to achieve the Nash bargaining solution. Learning algorithms are proposed for systems with complete and incomplete information, and for the imperfect feedback scenario. Numerical results suggest that the proposed game-theoretic formulation provides appreciable performance gain over the conventional point-to-multipoint (PMP) scheme, especially for reliable user-to-user channels.
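The key property being exploited in the abstract is that of a potential game: sequential selfish improvement steps are guaranteed to converge to a pure Nash equilibrium. The toy congestion game below illustrates that convergence with best-response dynamics; it uses hypothetical load-based costs, not the paper's IDNC delay utilities.

```python
def best_response_dynamics(n_players=5, n_resources=2, rounds=50):
    """Toy congestion (potential) game: each player's cost is the load on
    its chosen resource; players sequentially switch to a resource that
    strictly lowers their own cost. In finite potential games this
    converges to a pure Nash equilibrium."""
    choice = [0] * n_players  # everyone starts on resource 0

    def load(r):
        return sum(1 for c in choice if c == r)

    for _ in range(rounds):
        moved = False
        for i in range(n_players):
            cur = choice[i]
            # Cost of staying is load(cur); cost of switching to r is load(r)+1.
            best = min(range(n_resources),
                       key=lambda r: load(r) + (0 if r == cur else 1))
            if load(best) + (0 if best == cur else 1) < load(cur):
                choice[i] = best
                moved = True
        if not moved:  # no player can improve: Nash equilibrium reached
            break
    return choice
```

At equilibrium the players split as evenly as possible across the resources, mirroring how the paper's games drive self-interested users toward a system-wide desirable operating point.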

  12. Feasibility of theoretical formulas on the anisotropy of shale based on laboratory measurement and error analysis

    Science.gov (United States)

    Xie, Jianyong; Di, Bangrang; Wei, Jianxin; Luan, Xinyuan; Ding, Pinbo

    2015-04-01

This paper designs a full-angle ultrasonic test method to measure the P-wave velocities (vp) and the vertically and horizontally polarized shear-wave velocities (vsv and vsh) at all angles to the bedding plane on different kinds of strongly anisotropic shale. Comparisons are made between the observations and the corresponding theoretical curves calculated from the various vertical transversely isotropic (TI) medium theories, discussing how well each characterizes the dynamic behavior of a TI medium, in order to conclude which of the theoretical formulas is the most accurate and precise, as well as its suitable range for characterizing the strong anisotropy of shale. At a low phase angle (θ), the Berryman expressions provide a relatively much better agreement with the measured data for vp and vsv on shale. All three theories also lead to larger deviations in the approximation of vsv than of vp and vsh. Furthermore, we created synthetic comparative ideal physical models (from coarse bakelite, cambric bakelite and paper bakelite) as supplementary models to natural shale, modeling shale with different anisotropy, to research the effects of the anisotropic parameters on the applicability of the former optimal TI theories, especially for vsv. We found that when the P-wave anisotropy ε and the S-wave anisotropy γ exceed 0.25, the Berryman curve is the best fit for vp and vsv on shale.
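One of the TI formulations such comparisons start from is Thomsen's weak-anisotropy approximation, which relates the angle-dependent phase velocities to the parameters ε, δ and γ discussed above. The sketch below implements those standard relations as an illustration (the paper compares several formulations, including Berryman's, which differs mainly in vsv); the velocity and parameter values in the test are hypothetical.

```python
from math import sin, cos, radians

def thomsen_velocities(vp0, vs0, eps, delta, gamma, theta_deg):
    """Weak-anisotropy phase velocities in a TI medium (Thomsen, 1986).
    theta_deg is the phase angle measured from the symmetry axis
    (the bedding-plane normal); vp0, vs0 are the axis velocities."""
    t = radians(theta_deg)
    s2, c2 = sin(t) ** 2, cos(t) ** 2
    vp = vp0 * (1.0 + delta * s2 * c2 + eps * s2 * s2)
    vsv = vs0 * (1.0 + (vp0 / vs0) ** 2 * (eps - delta) * s2 * c2)
    vsh = vs0 * (1.0 + gamma * s2)
    return vp, vsv, vsh
```

Along the axis (θ = 0°) all anisotropy terms vanish; at θ = 90° the P and SH velocities are amplified by (1 + ε) and (1 + γ), while vsv returns to vs0, which is why vsv is the hardest of the three to approximate at intermediate angles.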

  13. A Game-theoretic Framework for Network Coding Based Device-to-Device Communications

    KAUST Repository

    Douik, Ahmed

    2016-06-29

    This paper investigates the delay minimization problem for instantly decodable network coding (IDNC) based deviceto- device (D2D) communications. In D2D enabled systems, users cooperate to recover all their missing packets. The paper proposes a game theoretic framework as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. The session is modeled by self-interested players in a non-cooperative potential game. The utility functions are designed so as increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Nash equilibrium. Three games are developed whose first reduces the completion time, the second the maximum decoding delay and the third the sum decoding delay. The paper, further, improves the formulations by including a punishment policy upon collision occurrence so as to achieve the Nash bargaining solution. Learning algorithms are proposed for systems with complete and incomplete information, and for the imperfect feedback scenario. Numerical results suggest that the proposed game-theoretical formulation provides appreciable performance gain against the conventional point-to-multipoint (PMP), especially for reliable user-to-user channels.

  14. Applications of high lateral and energy resolution imaging XPS with a double hemispherical analyser based spectromicroscope

    International Nuclear Information System (INIS)

    Escher, M.; Winkler, K.; Renault, O.; Barrett, N.

    2010-01-01

The design and applications of an instrument for imaging X-ray photoelectron spectroscopy (XPS) are reviewed. The instrument is based on a photoelectron microscope and a double hemispherical analyser whose symmetric configuration avoids the spherical aberration (α²-term) inherent to standard analysers. The analyser allows high-transmission imaging without sacrificing the lateral and energy resolution of the instrument. The importance of high transmission, especially for highest-resolution imaging XPS with monochromated laboratory X-ray sources, is outlined, and the close interrelation of energy resolution, lateral resolution and analyser transmission is illustrated. Chemical imaging applications using a monochromatic laboratory Al Kα source are shown, with a lateral resolution of 610 nm. Examples of measurements made using synchrotron and laboratory ultraviolet light show the broad field of applications, from imaging of core-level electrons with chemical-shift identification to high-resolution threshold photoelectron emission microscopy (PEEM), work function imaging and band structure imaging.

  15. Theoretical approach to cell-impedance-controlled lithium transport through Li1-δMn2O4 film electrode with partially inactive fractal surface by analyses of potentiostatic current transient and linear sweep voltammogram

    International Nuclear Information System (INIS)

    Jung, Kyu-Nam; Pyun, Su-Il

    2007-01-01

Lithium transport through the partially inactive fractal Li1-δMn2O4 film electrode under the cell-impedance-controlled constraint was theoretically investigated by using the kinetic Monte Carlo method based upon a random walk approach. Under the cell-impedance-controlled constraint, all the potentiostatic current transients calculated from the totally active and partially inactive fractal electrodes hardly exhibited the generalised Cottrell behaviour and they were significantly affected in shape by the interfacial charge-transfer kinetics. In the case of the linear sweep voltammogram determined from the totally active and partially inactive fractal electrodes, all the power dependence of the peak current on the scan rate above the characteristic scan rate deviated from the generalised Randles-Sevcik behaviour. From the analyses of the current transients and the linear sweep voltammograms simulated with various values of the simulation parameters, it was further recognised that the cell-impedance-controlled lithium transport through the partially inactive fractal Li1-δMn2O4 film electrode strongly deviates from the generalised diffusion-controlled transport behaviour of the electrode with the totally active surface, which is attributed to the impeded interfacial charge-transfer kinetics governed by the surface inhomogeneities including the fractal dimension of the surface and the surface coverage by active sites and by the kinetic parameters including the internal cell resistance.

  16. Theoretical and experimental determination of mass attenuation coefficients of lead-based ceramics and their comparison with simulation

    Directory of Open Access Journals (Sweden)

    Vejdani-Noghreiyan Alireza

    2016-01-01

Mass attenuation coefficients of lead-based ceramics have been measured experimentally and compared with theoretical and Monte Carlo simulation results. The lead-based ceramics were prepared using the mixed-oxide method, and X-ray diffraction analysis was performed to evaluate the crystal structure of the handmade ceramics. The experimental results show good agreement with the theoretical and simulation results, although at two gamma-ray energies small differences between experimental and theoretical results were observed. By adding other additives to the ceramics and observing the changes in properties such as flexibility, one can synthesize and optimize ceramics as a radiation shield.
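The theoretical values in such comparisons typically come from the standard mixture rule combined with the Beer-Lambert law. The sketch below shows that calculation; the weight fractions and elemental coefficients are placeholder numbers, not tabulated data for any real ceramic.

```python
from math import exp

def mixture_mu_rho(components):
    """Mixture rule: (mu/rho)_mix = sum_i w_i * (mu/rho)_i, where w_i is
    the weight fraction and (mu/rho)_i the mass attenuation coefficient
    (cm^2/g) of component i at the photon energy of interest."""
    return sum(w * mu for w, mu in components)

def transmission(mu_rho, density, thickness):
    """Beer-Lambert attenuation I/I0 of a slab: exp(-(mu/rho) * rho * t),
    with density in g/cm^3 and thickness in cm."""
    return exp(-mu_rho * density * thickness)

# Hypothetical two-component ceramic: 60 wt% of a component with
# mu/rho = 0.20 cm^2/g and 40 wt% with 0.10 cm^2/g at some energy.
mu_mix = mixture_mu_rho([(0.6, 0.20), (0.4, 0.10)])  # 0.16 cm^2/g
```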

  17. Theoretical bases of project management in conditions of innovative economy based on fuzzy modeling

    Science.gov (United States)

    Beilin, I. L.; Khomenko, V. V.

    2018-05-01

In recent years, more and more Russian enterprises (both private and public) have been trying to organize their activities on the basis of modern scientific research in order to improve the management of economic processes. Business planning, financial and investment analysis, and modern software products based on the latest scientific developments are being introduced everywhere. At the same time, there is a growing demand for market research (at both the microeconomic and macroeconomic levels) and for financial and general economic information.

  18. Analysis of theoretical security level of PDF Encryption mechanism based on X.509 certificates

    Directory of Open Access Journals (Sweden)

    Joanna Dmitruk

    2017-12-01

PDF Encryption is a content security mechanism developed and used by Adobe in their products. In this paper, we have assessed the theoretical security level of a variant that uses a public key infrastructure and X.509 certificates. We have described the basis of this mechanism and performed a simple security analysis. We have then shown possible tweaks and security improvements. Finally, we have given some recommendations that can improve the security of content protected with PDF Encryption based on X.509 certificates. Keywords: DRM, cryptography, security level, PDF Encryption, Adobe, X.509

  19. The theoretical model of the school-based prevention programme Unplugged.

    Science.gov (United States)

    Vadrucci, Serena; Vigna-Taglianti, Federica D; van der Kreeft, Peer; Vassara, Maro; Scatigna, Maria; Faggiano, Fabrizio; Burkhart, Gregor

    2016-12-01

Unplugged is a school-based prevention programme designed and tested in the EU-Dap trial. The programme consists of 12 units delivered by class teachers to adolescents 12-14 years old. It is a strongly interactive programme that includes training in personal and social skills, with a specific focus on normative beliefs. The aim of this work is to define the theoretical model of the programme, the contribution of the theories to the units, and the targeted mediators. The programme integrates several theories: Social Learning, Social Norms, the Health Belief model, the theory of Reasoned Action-Attitude, and Problem Behaviour theory. Every theory contributes to the development of the units' contents, with specific weights. Knowledge, risk perception, attitudes towards drugs, normative beliefs, critical and creative thinking, relationship skills, communication skills, assertiveness, refusal skills, the ability to manage emotions and cope with stress, empathy, problem solving and decision-making skills are the targeted mediators of the programme. © The Author(s) 2015.

  20. A Theoretical Analysis of the Mission Statement Based on the Axiological Approach

    Directory of Open Access Journals (Sweden)

    Marius-Costel EŞI

    2016-12-01

This work presents a theoretical analysis of the formulation of business organizations' mission statements in relation to the idea of an organizational axiological core. On the one hand, we consider corporate social responsibility (CSR), which, in our view, must be brought into direct connection both with moral entrepreneurship (which should support the philosophical perspective of the mission statement of business organizations) and with purely economic entrepreneurship based on profit maximization (which should support the pragmatic perspective). On the other hand, an analysis of the moral concepts that should underpin business becomes fundamental, in our view, insofar as the specific social value of social entrepreneurship is evidenced. Our approach therefore highlights a number of epistemic explanations in relation to the actual practical dimension.

  1. ANTI-CRISIS MANAGEMENT IN CONTEXT OF ITS THEORETICAL AND METHODOLOGICAL RESEARCH BASES

    Directory of Open Access Journals (Sweden)

    A. V. Trapitsyn

    2011-01-01

Effective economic management at the country and enterprise level determines the survival of tens of thousands of Russian enterprises under market conditions at any time, but management based on marketing elements gains particular importance during crises. Anti-crisis management is a complex preventive management model created and operated to neutralize or mitigate crisis phenomena. Different theoretical and practical concepts of anti-crisis management used around the world are discussed and compared, along with the research approaches to management. The article describes the author's approach to anti-crisis enterprise management, based on the need for an enterprise to study the relations between certain marketing elements and project efficiency indices and to take them into account.

  2. Basing assessment and treatment of problem behavior on behavioral momentum theory: Analyses of behavioral persistence.

    Science.gov (United States)

    Schieltz, Kelly M; Wacker, David P; Ringdahl, Joel E; Berg, Wendy K

    2017-08-01

    The connection, or bridge, between applied and basic behavior analysis has been long-established (Hake, 1982; Mace & Critchfield, 2010). In this article, we describe how clinical decisions can be based more directly on behavioral processes and how basing clinical procedures on behavioral processes can lead to improved clinical outcomes. As a case in point, we describe how applied behavior analyses of maintenance, and specifically the long-term maintenance of treatment effects related to problem behavior, can be adjusted and potentially enhanced by basing treatment on Behavioral Momentum Theory. We provide a brief review of the literature including descriptions of two translational studies that proposed changes in how differential reinforcement of alternative behavior treatments are conducted based on Behavioral Momentum Theory. We then describe current clinical examples of how these translations are continuing to impact the definitions, designs, analyses, and treatment procedures used in our clinical practice. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Theoretical analysis of a YBCO squirrel-cage type induction motor based on an equivalent circuit

    International Nuclear Information System (INIS)

    Morita, G; Nakamura, T; Muta, I

    2006-01-01

An HTS induction motor with an HTS squirrel-cage rotor is analysed using an electrical equivalent circuit. The squirrel-cage winding in the rotor consists of rotor bars and end rings, both considered to be made of YBCO film conductors. The electric field versus current density characteristic of the YBCO film is formulated over a wide range based on the Weibull function and analysed as a non-linear resistance in the equivalent circuit. It is shown that the starting and accelerating torques of the HTS induction motor are improved drastically compared to those of a conventional induction motor. Furthermore, a large synchronous torque can also be realized by trapping the magnetic flux in the rotor circuit because of the persistent-current mode.
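The non-linear resistance idea can be illustrated with a simple stand-in for the paper's Weibull-based E-J formulation: the commonly used power law E = E0 (J/Jc)^n, which likewise makes a rotor bar's effective resistance rise steeply once the current exceeds the critical value. All geometry and material numbers below are hypothetical.

```python
def ej_power_law(J, Jc, n, E0=1e-4):
    """E-J characteristic of a superconductor, power-law form (a common
    substitute for the paper's Weibull formulation): E0 = 1 uV/cm
    = 1e-4 V/m is the electric-field criterion reached at J = Jc."""
    return E0 * (J / Jc) ** n

def bar_resistance(I, area, length, Jc, n, E0=1e-4):
    """Effective non-linear resistance R(I) = V/I of a rotor bar of
    cross-section `area` (m^2) and `length` (m) carrying current I."""
    J = I / area
    V = ej_power_law(J, Jc, n, E0) * length
    return V / I
```

With a typical steep index (n around 20), doubling the current multiplies the effective resistance by 2^(n-1), which is why the bar behaves almost as a persistent-current short below Jc and as a strongly dissipative element above it.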

  4. Theoretical and numerical studies on the transport of transverse beam quality in plasma-based accelerators

    International Nuclear Information System (INIS)

    Mehrling, Timon Johannes

    2014-11-01

This work examines effects that impact the transverse quality of electron beams in plasma-based accelerators, by means of theoretical and numerical methods. Plasma-based acceleration is a promising candidate for future particle accelerator technologies. In plasma-based acceleration, highly intense laser beams or high-current relativistic particle beams are focused into a plasma to excite plasma waves with extreme transverse and longitudinal electric fields. The amplitudes of these fields, at 10-100 GV/m, exceed those in today's radio-frequency accelerators by several orders of magnitude, hence in principle allowing for correspondingly shorter and cheaper accelerators based on plasma. Despite the tremendous progress of the recent decade, beams from plasma accelerators do not yet achieve the quality demanded for pivotal applications of relativistic electron beams, e.g. free-electron lasers (FELs). Studies within this work examine how the quality can be optimized in the production of the beams and preserved during the acceleration and transport to the interaction region. Such studies cannot be approached purely analytically but necessitate numerical methods, such as the particle-in-cell (PIC) method, which can model kinetic, electrodynamic and relativistic plasma phenomena. However, this method is computationally too expensive for parameter scans in three-dimensional geometries. Hence, a quasi-static PIC code was developed in connection with this work, which is significantly more efficient than the full PIC method for a class of problems in plasma-based acceleration. The evolution of the emittance of beams injected into plasma modules was studied by means of theoretical and the above numerical methods. It was shown that the beam parameters need to be matched accurately into the focusing plasma channel in order to allow for beam-quality preservation. This suggests that new extraction and injection techniques are required in staged plasma

  5. Theoretical Analysis on Mechanical Deformation of Membrane-Based Photomask Blanks

    Science.gov (United States)

    Marumoto, Kenji; Aya, Sunao; Yabe, Hedeki; Okada, Tatsunori; Sumitani, Hiroaki

    2012-04-01

Membrane-based photomasks are used in proximity X-ray lithography, including in the LIGA (Lithographie, Galvanoformung und Abformung) process, and in near-field photolithography. In this article, the out-of-plane deformation (OPD) and in-plane displacement (IPD) of membrane-based photomask blanks are theoretically analyzed to obtain mask blanks with a flat front surface and a low-stress absorber film. First, we derived equations for the OPD and IPD for the processing steps of a membrane-based photomask, such as film deposition, back-etching and bonding, using the theory of symmetrical bending of circular plates with a coaxial circular hole and that of the deformation of a cylinder under hydrostatic pressure. The validity of the equations was proved by comparing the calculated results with experimental ones. Using these equations, we investigated the relation between the geometry of the mask blanks and the distortions in general, and gave the criterion for attaining a flat front surface. Moreover, the absorber stress bias required to obtain zero stress on the finished mask blanks was also calculated, and it was found that only a small stress bias was required for an adequate hole size of the support plate.

  6. PCA-based algorithm for calibration of spectrophotometric analysers of food

    International Nuclear Information System (INIS)

    Morawski, Roman Z; Miekina, Andrzej

    2013-01-01

    Spectrophotometric analysers of food, i.e. instruments for determining the composition of food products and ingredients, are today of growing importance for the food industry, as well as for food distributors and consumers. Their metrological performance depends significantly on the numerical performance of the available means for spectrophotometric data processing, in particular the means for calibration of the analysers. In this paper, a new algorithm for this purpose is proposed, viz. an algorithm using principal components analysis (PCA). It is almost as efficient as PLS-based calibration algorithms, but much simpler.
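
    The general idea behind PCA-based calibration can be sketched in a few lines: project the measured spectra onto their leading principal components, then fit a linear map from the component scores to the known concentrations. This is a generic illustration on synthetic data, not the exact algorithm of the paper:

    ```python
    import numpy as np

    # Sketch of PCA-based calibration: PCA on training spectra, then a
    # least-squares regression from the PCA scores to concentrations.
    rng = np.random.default_rng(0)
    n_samples, n_wavelengths, n_components = 40, 200, 3

    # Synthetic training set: spectra = concentrations @ pure spectra + noise
    pure = rng.random((n_components, n_wavelengths))
    conc = rng.random((n_samples, n_components))
    spectra = conc @ pure + 0.01 * rng.standard_normal((n_samples, n_wavelengths))

    # PCA via SVD of the mean-centred spectra
    mean = spectra.mean(axis=0)
    U, S, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
    scores = (spectra - mean) @ Vt[:n_components].T   # sample scores

    # Calibration: affine least-squares map from scores to concentrations
    B, *_ = np.linalg.lstsq(np.c_[np.ones(n_samples), scores], conc, rcond=None)

    # Prediction for a (here: known) spectrum
    new_scores = (spectra[0] - mean) @ Vt[:n_components].T
    pred = np.r_[1.0, new_scores] @ B
    print(pred, conc[0])
    ```

    In PLS the projection directions would be chosen using the concentrations as well; PCA uses only the spectra, which is what makes the calibration step simpler.
    
    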

  7. A Server-Client-Based Graphical Development Environment for Physics Analyses (VISPA)

    International Nuclear Information System (INIS)

    Bretz, H-P; Erdmann, M; Fischer, R; Hinzmann, A; Klingebiel, D; Komm, M; Müller, G; Rieger, M; Steffens, J; Steggemann, J; Urban, M; Winchen, T

    2012-01-01

    The Visual Physics Analysis (VISPA) project provides a graphical development environment for data analysis. It addresses the typical development cycle of (re-)designing, executing, and verifying an analysis. We present the new server-client-based web application of the VISPA project to perform physics analyses via a standard internet browser. This enables individual scientists to work with a large variety of devices including touch screens, and teams of scientists to share, develop, and execute analyses on a server via the web interface.

  8. Automatic image-based analyses using a coupled quadtree-SBFEM/SCM approach

    Science.gov (United States)

    Gravenkamp, Hauke; Duczek, Sascha

    2017-10-01

    Quadtree-based domain decomposition algorithms offer an efficient option to create meshes for automatic image-based analyses. Without introducing hanging nodes, the scaled boundary finite element method (SBFEM) can directly operate on such meshes by discretizing only the edges of each subdomain. However, the convergence of a numerical method that relies on a quadtree-based geometry approximation is often suboptimal due to the inaccurate representation of the boundary. To overcome this problem, a combination of the SBFEM with the spectral cell method (SCM) is proposed. The basic idea is to treat each uncut quadtree cell as an SBFEM polygon, while all cut quadtree cells are computed employing the SCM. This methodology not only reduces the required number of degrees of freedom but also avoids a two-dimensional quadrature in all uncut quadtree cells. Numerical examples including static, harmonic, modal and transient analyses of complex geometries are studied, highlighting the performance of this novel approach.
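
    The quadtree decomposition step that such methods build on can be sketched as a recursive split of the image until each cell is homogeneous or reaches a minimum size. The sketch below only illustrates the cut/uncut classification, not the SBFEM/SCM coupling itself:

    ```python
    import numpy as np

    # Generic quadtree decomposition of a binary image. Uniform leaves
    # correspond to "uncut" cells, non-uniform minimum-size leaves to
    # "cut" cells in the paper's terminology.
    def quadtree(img, x, y, size, min_size, leaves):
        block = img[y:y + size, x:x + size]
        if block.min() == block.max() or size <= min_size:
            cut = bool(block.min() != block.max())
            leaves.append((x, y, size, cut))
            return
        h = size // 2
        for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
            quadtree(img, x + dx, y + dy, h, min_size, leaves)

    # Example: a 16x16 image containing a filled quarter-circle
    n = 16
    yy, xx = np.mgrid[0:n, 0:n]
    img = ((xx**2 + yy**2) < (n - 1)**2).astype(int)
    leaves = []
    quadtree(img, 0, 0, n, 2, leaves)
    print(len(leaves), "cells,", sum(cut for *_, cut in leaves), "cut")
    ```

    The uncut leaves can be arbitrarily large, which is exactly why treating them as single SBFEM polygons saves degrees of freedom compared with meshing them uniformly.
    
    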

  9. Design by theoretical and CFD analyses of a multi-blade screw pump evolving liquid lead for a Generation IV LFR

    Energy Technology Data Exchange (ETDEWEB)

    Ferrini, Marcello [GeNERG - DIME/TEC, University of Genova, via all’Opera Pia 15/a, 16145 Genova (Italy); Borreani, Walter [Ansaldo Nucleare S.p.A., Corso F.M. Perrone 25, 16152 Genova (Italy); INFN, Via Dodecaneso 33, 16146 Genova (Italy); Lomonaco, Guglielmo, E-mail: guglielmo.lomonaco@unige.it [GeNERG - DIME/TEC, University of Genova, via all’Opera Pia 15/a, 16145 Genova (Italy); INFN, Via Dodecaneso 33, 16146 Genova (Italy); Magugliani, Fabrizio [Ansaldo Nucleare S.p.A., Corso F.M. Perrone 25, 16152 Genova (Italy)

    2016-02-15

    Lead-cooled fast reactor (LFR) technology has both a long history and a penchant for innovation. With early work related to its use for submarine propulsion dating to the 1950s, Russian scientists pioneered the development of reactors cooled by heavy liquid metals (HLM). More recently, there has been substantial interest in both critical and subcritical reactors cooled by lead (Pb) or lead–bismuth eutectic (LBE), not only in Russia, but also in Europe, Asia, and the USA. The growing knowledge of the thermal-fluid-dynamic properties of these fluids and the choice of the LFR as one of the six reactor types selected by the Generation IV International Forum (GIF) for further research and development have fostered the exploration of new geometries and new concepts aimed at optimizing the key components that will be adopted in the Advanced Lead Fast Reactor European Demonstrator (ALFRED), the 300 MW{sub t} pool-type reactor aimed at proving the feasibility of the design concept adopted for the European Lead-cooled Fast Reactor (ELFR). In this paper, a theoretical and computational analysis is presented of a multi-blade screw pump evolving liquid lead as the primary pump for the adopted reference conceptual design of ALFRED. The pump is first analyzed at design operating conditions from the theoretical point of view to determine the optimal geometry according to the velocity triangles, and then modeled with a 3D CFD code (ANSYS CFX). The choice of a 3D simulation is dictated by the need to perform a detailed spatial simulation taking into account the peculiar geometry of the pump as well as the boundary layers and turbulence effects of the flow, which are typically three-dimensional. The use of liquid lead significantly impacts the fluid-dynamic design of the pump because of the key requirement to avoid any erosion effects, which have a major impact on the performance, reliability and lifespan of the pump. Although some erosion-related issues remain to be fully addressed, the results

  10. Design by theoretical and CFD analyses of a multi-blade screw pump evolving liquid lead for a Generation IV LFR

    International Nuclear Information System (INIS)

    Ferrini, Marcello; Borreani, Walter; Lomonaco, Guglielmo; Magugliani, Fabrizio

    2016-01-01

    Lead-cooled fast reactor (LFR) technology has both a long history and a penchant for innovation. With early work related to its use for submarine propulsion dating to the 1950s, Russian scientists pioneered the development of reactors cooled by heavy liquid metals (HLM). More recently, there has been substantial interest in both critical and subcritical reactors cooled by lead (Pb) or lead–bismuth eutectic (LBE), not only in Russia, but also in Europe, Asia, and the USA. The growing knowledge of the thermal-fluid-dynamic properties of these fluids and the choice of the LFR as one of the six reactor types selected by the Generation IV International Forum (GIF) for further research and development have fostered the exploration of new geometries and new concepts aimed at optimizing the key components that will be adopted in the Advanced Lead Fast Reactor European Demonstrator (ALFRED), the 300 MW t pool-type reactor aimed at proving the feasibility of the design concept adopted for the European Lead-cooled Fast Reactor (ELFR). In this paper, a theoretical and computational analysis is presented of a multi-blade screw pump evolving liquid lead as the primary pump for the adopted reference conceptual design of ALFRED. The pump is first analyzed at design operating conditions from the theoretical point of view to determine the optimal geometry according to the velocity triangles, and then modeled with a 3D CFD code (ANSYS CFX). The choice of a 3D simulation is dictated by the need to perform a detailed spatial simulation taking into account the peculiar geometry of the pump as well as the boundary layers and turbulence effects of the flow, which are typically three-dimensional. The use of liquid lead significantly impacts the fluid-dynamic design of the pump because of the key requirement to avoid any erosion effects, which have a major impact on the performance, reliability and lifespan of the pump. Although some erosion-related issues remain to be fully addressed, the results of

  11. Coalescent-based genome analyses resolve the early branches of the euarchontoglires.

    Directory of Open Access Journals (Sweden)

    Vikas Kumar

    Full Text Available Despite numerous large-scale phylogenomic studies, certain parts of the mammalian tree are extraordinarily difficult to resolve. We used the coding regions from 19 completely sequenced genomes to study the relationships within the super-clade Euarchontoglires (Primates, Rodentia, Lagomorpha, Dermoptera and Scandentia), because the placement of Scandentia within this clade is controversial. The difficulty in resolving this issue is due to the short time spans between the early divergences of Euarchontoglires, which may cause incongruent gene trees. The conflict in the data can be depicted by network analyses, and the contentious relationships are best reconstructed by coalescent-based analyses. This method is expected to be superior to analyses of concatenated data in reconstructing a species tree from numerous gene trees. The total concatenated dataset used to study the relationships in this group comprises 5,875 protein-coding genes (9,799,170 nucleotides) from all orders except Dermoptera (flying lemurs). Reconstruction of the species tree from 1,006 gene trees using coalescent models placed Scandentia as sister group to the primates, which is in agreement with maximum likelihood analyses of concatenated nucleotide sequence data. Additionally, both analytical approaches favoured the tarsier as sister taxon to Anthropoidea, thus belonging to the haplorrhine clade. When divergence times are as short as in radiations over periods of a few million years, even genome-scale analyses struggle to resolve phylogenetic relationships. On these short branches, processes such as incomplete lineage sorting and possibly hybridization occur, making it preferable to base phylogenomic analyses on coalescent methods.

  12. THE GOAL OF VALUE-BASED MEDICINE ANALYSES: COMPARABILITY. THE CASE FOR NEOVASCULAR MACULAR DEGENERATION

    Science.gov (United States)

    Brown, Gary C.; Brown, Melissa M.; Brown, Heidi C.; Kindermann, Sylvia; Sharma, Sanjay

    2007-01-01

    Purpose To evaluate the comparability of articles in the peer-reviewed literature assessing the (1) patient value and (2) cost-utility (cost-effectiveness) associated with interventions for neovascular age-related macular degeneration (ARMD). Methods A search was performed in the National Library of Medicine database of 16 million peer-reviewed articles using the key words cost-utility, cost-effectiveness, value, verteporfin, pegaptanib, laser photocoagulation, ranibizumab, and therapy. All articles that used an outcome of quality-adjusted life-years (QALYs) were studied in regard to (1) percent improvement in quality of life, (2) utility methodology, (3) utility respondents, (4) types of costs included (eg, direct healthcare, direct nonhealthcare, indirect), (5) cost bases (eg, Medicare, National Health Service in the United Kingdom), and (6) study cost perspective (eg, government, societal, third-party insurer). To qualify as a value-based medicine analysis, the patient value had to be measured using the outcome of the QALYs conferred by respective interventions. As with value-based medicine analyses, patient-based time tradeoff utility analysis had to be utilized, patient utility respondents were necessary, and direct medical costs were used. Results Among 21 cost-utility analyses performed on interventions for neovascular macular degeneration, 15 (71%) met value-based medicine criteria. The 6 others (29%) were not comparable owing to (1) varying utility methodology, (2) varying utility respondents, (3) differing costs utilized, (4) differing cost bases, and (5) varying study perspectives. Among value-based medicine studies, laser photocoagulation confers a 4.4% value gain (improvement in quality of life) for the treatment of classic subfoveal choroidal neovascularization. 
Intravitreal pegaptanib confers a 5.9% value gain (improvement in quality of life) for classic, minimally classic, and occult subfoveal choroidal neovascularization, and photodynamic therapy

  13. The goal of value-based medicine analyses: comparability. The case for neovascular macular degeneration.

    Science.gov (United States)

    Brown, Gary C; Brown, Melissa M; Brown, Heidi C; Kindermann, Sylvia; Sharma, Sanjay

    2007-01-01

    To evaluate the comparability of articles in the peer-reviewed literature assessing the (1) patient value and (2) cost-utility (cost-effectiveness) associated with interventions for neovascular age-related macular degeneration (ARMD). A search was performed in the National Library of Medicine database of 16 million peer-reviewed articles using the key words cost-utility, cost-effectiveness, value, verteporfin, pegaptanib, laser photocoagulation, ranibizumab, and therapy. All articles that used an outcome of quality-adjusted life-years (QALYs) were studied in regard to (1) percent improvement in quality of life, (2) utility methodology, (3) utility respondents, (4) types of costs included (eg, direct healthcare, direct nonhealthcare, indirect), (5) cost bases (eg, Medicare, National Health Service in the United Kingdom), and (6) study cost perspective (eg, government, societal, third-party insurer). To qualify as a value-based medicine analysis, the patient value had to be measured using the outcome of the QALYs conferred by respective interventions. As with value-based medicine analyses, patient-based time tradeoff utility analysis had to be utilized, patient utility respondents were necessary, and direct medical costs were used. Among 21 cost-utility analyses performed on interventions for neovascular macular degeneration, 15 (71%) met value-based medicine criteria. The 6 others (29%) were not comparable owing to (1) varying utility methodology, (2) varying utility respondents, (3) differing costs utilized, (4) differing cost bases, and (5) varying study perspectives. Among value-based medicine studies, laser photocoagulation confers a 4.4% value gain (improvement in quality of life) for the treatment of classic subfoveal choroidal neovascularization. 
Intravitreal pegaptanib confers a 5.9% value gain (improvement in quality of life) for classic, minimally classic, and occult subfoveal choroidal neovascularization, and photodynamic therapy with verteporfin confers

  14. Computational Modeling of Oxygen Transport in the Microcirculation: From an Experiment-Based Model to Theoretical Analyses

    OpenAIRE

    Lücker, Adrien

    2017-01-01

    Oxygen supply to cells by the cardiovascular system involves multiple physical and chemical processes that aim to satisfy fluctuating metabolic demand. Regulation mechanisms range from increased heart rate to minute adaptations in the microvasculature. The challenges and limitations of experimental studies in vivo make computational models an invaluable complement. In this thesis, oxygen transport from capillaries to tissue is investigated using a new numerical model that is tailored for vali...

  15. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    Science.gov (United States)

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    Due to the upcoming deluge of genome data, the needs for storing and processing large-scale genome data, easy access to biomedical analysis tools, and efficient data sharing and retrieval present significant challenges. The variability in data volume results in variable computing and storage requirements; therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on the Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via the HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. THE EUROPEAN PUBLIC PROSECUTOR’S OFFICE: A theoretical analyse of the proposed regulation for establishment of a European Public Prosecutor’s Office

    Directory of Open Access Journals (Sweden)

    Oana – Măriuca PETRESCU

    2014-05-01

    Full Text Available Establishing a new European body, namely the European Public Prosecutor’s Office, could be described as one of the sensitive issues currently under discussion within the European Union, especially in the context of the serious economic crisis that has hit both the European Union and national economies in the last 5 years, and of their difficult recovery. The author’s intention is to describe the events preceding the discussions on regulating the possibility of establishing this body in the Treaty of Lisbon, including a theoretical analysis of the regulation proposed by the European Commission in accordance with Article 86 TFEU to establish this new body of the European Union, whose main goal is to investigate, prosecute and bring to justice those who commit criminal offences affecting the Union’s financial interests. The author’s conviction is that, if its establishment is agreed, this new body could change Europe’s judicial landscape, for various reasons. One of these reasons is that the European Public Prosecutor’s Office would constitute a network of around 100 prosecutors in charge of investigating and prosecuting suspects for defrauding EU funding programs.

  17. Theoretical analysis of transcranial Hall-effect stimulation based on passive cable model

    International Nuclear Information System (INIS)

    Yuan Yi; Li Xiao-Li

    2015-01-01

    Transcranial Hall-effect stimulation (THS) is a new stimulation method in which an ultrasonic wave in a static magnetic field generates an electric field in an area of interest, such as the brain, to modulate neuronal activities. However, the biophysical basis of stimulating the neurons remains unknown. To address this problem, we perform a theoretical analysis based on a passive cable model to investigate the THS mechanism of neurons. Nerve tissues are conductive; an ultrasonic wave can move ions embedded in the tissue in a static magnetic field to generate an electric field (due to the Lorentz force). In this study, a simulation model for an ultrasonically induced electric field in a static magnetic field is derived. Then, based on the passive cable model, the analytical solution for the voltage distribution in a nerve tissue is determined. The simulation results show that THS can generate a voltage to stimulate neurons. Because the THS method possesses a higher spatial resolution and a deeper penetration depth, it shows promise as a tool for treating or rehabilitating neuropsychiatric disorders. (paper)
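
    The order of magnitude of the Lorentz-force field can be estimated from E = v × B, using the textbook plane-wave relation v = p/(ρc) for the tissue particle velocity. The parameter values below are assumed typical figures, not numbers from the paper, and this estimate replaces neither the derived field model nor the cable-model solution:

    ```python
    # Order-of-magnitude estimate of the ultrasonically induced electric
    # field E = v * B (v perpendicular to B assumed). Assumed typical
    # values; not the paper's full passive-cable analysis.

    p   = 1.0e6     # acoustic pressure amplitude, Pa (~1 MPa focused ultrasound)
    rho = 1.0e3     # soft-tissue density, kg/m^3
    c   = 1540.0    # speed of sound in soft tissue, m/s
    B   = 1.0       # static magnetic flux density, T

    v = p / (rho * c)   # particle velocity of a plane acoustic wave, m/s
    E = v * B           # induced electric field amplitude, V/m
    print(f"particle velocity ~ {v:.3f} m/s, induced field ~ {E:.3f} V/m")
    ```

    A field of a fraction of a volt per metre is small compared with conventional electrical stimulation, which is why the cable-model analysis of how this field integrates along a neuron matters.
    
    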

  18. Tetraphenylpyrimidine-Based AIEgens: Facile Preparation, Theoretical Investigation and Practical Application

    Directory of Open Access Journals (Sweden)

    Junkai Liu

    2017-10-01

    Full Text Available Aggregation-induced emission (AIE has become a hot research area and tremendous amounts of AIE-active luminogens (AIEgens have been generated. To further promote the development of AIE, new AIEgens are highly desirable. Herein, new AIEgens based on tetraphenylpyrimidine (TPPM are rationally designed according to the AIE mechanism of restriction of intramolecular motion, and facilely prepared under mild reaction conditions. The photophysical properties of the generated TPPM, TPPM-4M and TPPM-4P are systematically investigated and the results show that they feature aggregation-enhanced emission (AEE characteristics. Theoretical study shows that the high-frequency bending vibrations in the central pyrimidine ring of TPPM derivatives dominate the nonradiative decay channels. Thanks to the AEE feature, their aggregates can be used to detect explosives with super-amplification quenching effects, and the sensing ability is higher than that of the typical AIE-active tetraphenylethene. It is anticipated that TPPM derivatives could serve as a new type of widely used AIEgen based on their facile preparation and good thermo-, photo- and chemostabilities.

  19. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    Science.gov (United States)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration-curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as the experimental subjects, or derived from mathematical equations. However, determining the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to reproduce real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratios (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. Optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the ray flux absorbed at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration-curve studies with other wavelength pairs.
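
    The optical-density-ratio idea behind such calibration curves can be illustrated with the standard pulse-oximetry "ratio of ratios": the pulsatile (AC) and static (DC) signal components at two wavelengths form a ratio R that maps to saturation. The linear map SpO2 ≈ 110 − 25R used below is a commonly quoted empirical approximation, not the curve derived in this paper:

    ```python
    import numpy as np

    # Ratio-of-ratios from synthetic red/infrared photoplethysmography
    # signals, plus a commonly quoted empirical linear calibration.
    def ratio_of_ratios(red, ir):
        """R = (AC/DC)_red / (AC/DC)_ir for two pulsatile signals."""
        ac = lambda s: s.max() - s.min()   # peak-to-peak pulsatile component
        dc = lambda s: s.mean()            # static baseline
        return (ac(red) / dc(red)) / (ac(ir) / dc(ir))

    t = np.linspace(0.0, 1.0, 500)
    red = 1.00 + 0.010 * np.sin(2 * np.pi * 1.2 * t)   # synthetic red PPG
    ir  = 1.00 + 0.020 * np.sin(2 * np.pi * 1.2 * t)   # synthetic infrared PPG

    R = ratio_of_ratios(red, ir)
    spo2 = 110.0 - 25.0 * R    # empirical linear approximation (assumed)
    print(f"R = {R:.2f}, SpO2 ~ {spo2:.1f} %")
    ```

    A Monte Carlo derived curve, as in the paper, replaces the assumed linear map with one computed from simulated tissue optics.
    
    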

  20. A Versatile Software Package for Inter-subject Correlation Based Analyses of fMRI

    Directory of Open Access Journals (Sweden)

    Jukka-Pekka eKauppi

    2014-01-01

    Full Text Available In the inter-subject correlation (ISC) based analysis of functional magnetic resonance imaging (fMRI) data, the extent of shared processing across subjects during the experiment is determined by calculating correlation coefficients between the fMRI time series of the subjects in the corresponding brain locations. This implies that ISC can be used to analyse fMRI data without explicitly modelling the stimulus, and thus ISC is a potential method for analysing fMRI data acquired under complex naturalistic stimuli. Despite the suitability of the ISC-based approach to analyse complex fMRI data, no generic software tools have been made available for this purpose, limiting widespread use of ISC-based analysis techniques among the neuroimaging community. In this paper, we present a graphical user interface (GUI) based software package, ISC Toolbox, implemented in Matlab for computing various ISC-based analyses. Many advanced computations such as comparison of ISCs between different stimuli, time-window ISC, and inter-subject phase synchronization are supported by the toolbox. The analyses are coupled with re-sampling based statistical inference. The ISC-based analyses are data and computation intensive, and the ISC Toolbox is equipped with mechanisms to execute the parallel computations in a cluster environment automatically, with automatic detection of the cluster environment in use. Currently, SGE-based (Oracle Grid Engine, Son of a Grid Engine or Open Grid Scheduler) and Slurm environments are supported. In this paper, we present a detailed account of the methods behind the ISC Toolbox and its implementation, and demonstrate the possible use of the toolbox by summarizing selected example applications. We also report computation-time experiments using both a single desktop computer and two grid environments, demonstrating that parallelization effectively reduces the computing time.
The ISC Toolbox is available in https://code.google.com/p/isc-toolbox/.

  1. A versatile software package for inter-subject correlation based analyses of fMRI.

    Science.gov (United States)

    Kauppi, Jukka-Pekka; Pajula, Juha; Tohka, Jussi

    2014-01-01

    In the inter-subject correlation (ISC) based analysis of functional magnetic resonance imaging (fMRI) data, the extent of shared processing across subjects during the experiment is determined by calculating correlation coefficients between the fMRI time series of the subjects in the corresponding brain locations. This implies that ISC can be used to analyze fMRI data without explicitly modeling the stimulus, and thus ISC is a potential method for analyzing fMRI data acquired under complex naturalistic stimuli. Despite the suitability of the ISC-based approach to analyze complex fMRI data, no generic software tools have been made available for this purpose, limiting widespread use of ISC-based analysis techniques among the neuroimaging community. In this paper, we present a graphical user interface (GUI) based software package, ISC Toolbox, implemented in Matlab for computing various ISC-based analyses. Many advanced computations such as comparison of ISCs between different stimuli, time-window ISC, and inter-subject phase synchronization are supported by the toolbox. The analyses are coupled with re-sampling based statistical inference. The ISC-based analyses are data and computation intensive, and the ISC Toolbox is equipped with mechanisms to execute the parallel computations in a cluster environment automatically, with automatic detection of the cluster environment in use. Currently, SGE-based (Oracle Grid Engine, Son of a Grid Engine, or Open Grid Scheduler) and Slurm environments are supported. In this paper, we present a detailed account of the methods behind the ISC Toolbox and its implementation, and demonstrate the possible use of the toolbox by summarizing selected example applications. We also report computation-time experiments using both a single desktop computer and two grid environments, demonstrating that parallelization effectively reduces the computing time. The ISC Toolbox is available in https://code.google.com/p/isc-toolbox/
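
    The core ISC computation, averaging pairwise Pearson correlations of the subjects' time series at each brain location, can be sketched in a few lines of numpy. This is a generic illustration, not the ISC Toolbox implementation:

    ```python
    import numpy as np
    from itertools import combinations

    # Inter-subject correlation: for each voxel, average the Pearson
    # correlation of the fMRI time series over all subject pairs.
    def isc(data):
        """data: array (n_subjects, n_timepoints, n_voxels) -> ISC per voxel."""
        n_subj = data.shape[0]
        # z-score each subject's time series per voxel (population std)
        z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
        pairs = list(combinations(range(n_subj), 2))
        r = np.zeros(data.shape[2])
        for i, j in pairs:
            r += (z[i] * z[j]).mean(axis=0)   # Pearson r for this pair, per voxel
        return r / len(pairs)

    # Toy data: 5 subjects share a stimulus-driven signal plus private noise
    rng = np.random.default_rng(1)
    common = rng.standard_normal((200, 10))            # shared response, 10 "voxels"
    data = common + 0.5 * rng.standard_normal((5, 200, 10))
    print(isc(data).round(2))                          # high ISC at every voxel
    ```

    Because no stimulus model enters the computation, the same code applies unchanged to naturalistic-stimulus data; the toolbox adds the statistical inference and cluster parallelization on top.
    
    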

  2. Analysing task design and students' responses to context-based problems through different analytical frameworks

    Science.gov (United States)

    Broman, Karolina; Bernholt, Sascha; Parchmann, Ilka

    2015-05-01

    Background: Context-based learning approaches are used to enhance students' interest in, and knowledge about, science. According to different empirical studies, students' interest is improved by applying these more non-conventional approaches, while effects on learning outcomes are less coherent. Hence, further insights are needed into the structure of context-based problems in comparison to traditional problems, and into students' problem-solving strategies. Therefore, a suitable framework is necessary, both for the analysis of tasks and strategies. Purpose: The aim of this paper is to explore traditional and context-based tasks as well as students' responses to exemplary tasks to identify a suitable framework for future design and analyses of context-based problems. The paper discusses different established frameworks and applies the Higher-Order Cognitive Skills/Lower-Order Cognitive Skills (HOCS/LOCS) taxonomy and the Model of Hierarchical Complexity in Chemistry (MHC-C) to analyse traditional tasks and students' responses. Sample: Upper secondary students (n=236) at the Natural Science Programme, i.e. possible future scientists, are investigated to explore learning outcomes when they solve chemistry tasks, both more conventional as well as context-based chemistry problems. Design and methods: A typical chemistry examination test has been analysed, first the test items in themselves (n=36), and thereafter 236 students' responses to one representative context-based problem. Content analysis using HOCS/LOCS and MHC-C frameworks has been applied to analyse both quantitative and qualitative data, allowing us to describe different problem-solving strategies. Results: The empirical results show that both frameworks are suitable to identify students' strategies, mainly focusing on recall of memorized facts when solving chemistry test items. Almost all test items were also assessing lower order thinking. The combination of frameworks with the chemistry syllabus has been

  3. A protein relational database and protein family knowledge bases to facilitate structure-based design analyses.

    Science.gov (United States)

    Mobilio, Dominick; Walker, Gary; Brooijmans, Natasja; Nilakantan, Ramaswamy; Denny, R Aldrin; Dejoannis, Jason; Feyfant, Eric; Kowticwar, Rupesh K; Mankala, Jyoti; Palli, Satish; Punyamantula, Sairam; Tatipally, Maneesh; John, Reji K; Humblet, Christine

    2010-08-01

    The Protein Data Bank is the most comprehensive source of experimental macromolecular structures. It can, however, be difficult at times to locate relevant structures with the Protein Data Bank search interface. This is particularly true when searching for complexes containing specific interactions between protein and ligand atoms. Moreover, searching within a family of proteins can be tedious. For example, one cannot simply search for a conserved residue, as residue numbers vary across structures. We describe herein three databases, Protein Relational Database, Kinase Knowledge Base, and Matrix Metalloproteinase Knowledge Base, containing protein structures from the Protein Data Bank. In Protein Relational Database, atom-atom distances between protein and ligand have been precalculated, allowing for millisecond retrieval based on atom identity and distance constraints. Ring centroids, centroid-centroid and centroid-atom distances and angles have also been included, permitting queries for pi-stacking interactions and other structural motifs involving rings. Other geometric features can be searched through the inclusion of residue pair and triplet distances. In Kinase Knowledge Base and Matrix Metalloproteinase Knowledge Base, the catalytic domains have been aligned into common residue numbering schemes. Thus, by searching across Protein Relational Database and Kinase Knowledge Base, one can easily retrieve structures wherein, for example, a ligand of interest is making contact with the gatekeeper residue.
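
    The kind of precomputed distance query the database supports can be mimicked with a pairwise distance table filtered by atom identity and a distance cutoff. The atom names and coordinates below are invented for illustration; this is a toy sketch, not the actual Protein Relational Database schema:

    ```python
    import numpy as np

    # Toy version of a protein-ligand contact query: precompute all
    # protein-ligand atom-atom distances, then filter by atom type and
    # a distance constraint. Hypothetical atoms and coordinates.
    protein_atoms = [("OG",  np.array([0.0, 0.0, 0.0])),
                     ("NE2", np.array([3.0, 0.0, 0.0])),
                     ("CB",  np.array([0.0, 5.0, 0.0]))]
    ligand_atoms  = [("N1",  np.array([2.0, 0.0, 0.0])),
                     ("O2",  np.array([0.0, 0.0, 8.0]))]

    # "Precalculated" table: (protein atom, ligand atom, distance in A)
    table = [(p, l, float(np.linalg.norm(pc - lc)))
             for p, pc in protein_atoms for l, lc in ligand_atoms]

    # Query: hydrogen-bond-like contacts between polar atoms within 3.5 A
    polar = {"OG", "NE2", "N1", "O2"}
    hits = [(p, l, round(d, 2)) for p, l, d in table
            if p in polar and l in polar and d <= 3.5]
    print(hits)   # OG-N1 at 2.0 A and NE2-N1 at 1.0 A
    ```

    Precomputing the table once and indexing it by atom identity and distance is what turns this O(n·m) geometric search into the millisecond lookups described in the abstract.
    
    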

  4. Comprehensive Characterization of Palygorskite from Torrejon el Rubio (Spain) Based on Experimental Techniques and Theoretical DFT Studies

    International Nuclear Information System (INIS)

    Fernandez, A.M.; Timon, V.; Cubero, J. J.; Sanchez-Ledesma, D. M.; Gutierrez-Nebot, L.; Martinez, J. J.; Romero, C.; Labajo, M.; Melon, A.; Barrios, I.

    2013-01-01

    New data on the physico-chemical, microstructural and crystal-chemical properties of the mineral palygorskite from Torrejon el Rubio (Caceres, Spain) were obtained by a combination of experimental techniques (XRD, FRX, FTIR, TG-DSC, SEM and chemical analyses), as well as geometry optimization by means of density functional theory (DFT). This study demonstrates the applicability of combined theoretical-experimental work to characterize and understand the properties of clay minerals used in technological applications and environmental remediation. (Author)

  5. Comprehensive Characterization of Palygorskite from Torrejon el Rubio (Spain) Based on Experimental Techniques and Theoretical DFT Studies

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez, A.M.; Timon, V.; Cubero, J. J.; Sanchez-Ledesma, D. M.; Gutierrez-Nebot, L.; Martinez, J. J.; Romero, C.; Labajo, M.; Melon, A.; Barrios, I.

    2013-10-01

    New data on the physico-chemical, microstructural and crystal-chemical properties of the mineral palygorskite from Torrejon el Rubio (Caceres, Spain) were obtained by a combination of experimental techniques (XRD, XRF, FTIR, TG-DSC, SEM and chemical analyses), together with geometry optimization by means of density functional theory (DFT). This study demonstrates the applicability of combined theoretical-experimental work to characterize and understand the properties of clay minerals used in technological applications and environmental remediation. (Author)

  6. Estimation of effective block conductivities based on discrete network analyses using data from the Aespoe site

    International Nuclear Information System (INIS)

    La Pointe, P.R.; Wallmann, P.; Follin, S.

    1995-09-01

    Numerical continuum codes may be used for assessing the role of regional groundwater flow in far-field safety analyses of a nuclear waste repository at depth. The focus of this project is to develop and evaluate one method based on Discrete Fracture Network (DFN) models to estimate block-scale permeability values for continuum codes. Data from the Aespoe HRL and surrounding area are used. 57 refs, 76 figs, 15 tabs

  7. Information-theoretic discrepancy based iterative reconstructions (IDIR) for polychromatic x-ray tomography

    International Nuclear Information System (INIS)

    Jang, Kwang Eun; Lee, Jongha; Sung, Younghun; Lee, SeongDeok

    2013-01-01

    Purpose: X-ray photons generated from a typical x-ray source for clinical applications exhibit a broad range of wavelengths, and the interactions between individual particles and biological substances depend on particles' energy levels. Most existing reconstruction methods for transmission tomography, however, neglect this polychromatic nature of measurements and rely on the monochromatic approximation. In this study, we developed a new family of iterative methods that incorporates the exact polychromatic model into tomographic image recovery, which improves the accuracy and quality of reconstruction.Methods: The generalized information-theoretic discrepancy (GID) was employed as a new metric for quantifying the distance between the measured and synthetic data. By using special features of the GID, the objective function for polychromatic reconstruction which contains a double integral over the wavelength and the trajectory of incident x-rays was simplified to a paraboloidal form without using the monochromatic approximation. More specifically, the original GID was replaced with a surrogate function with two auxiliary, energy-dependent variables. Subsequently, the alternating minimization technique was applied to solve the double minimization problem. Based on the optimization transfer principle, the objective function was further simplified to the paraboloidal equation, which leads to a closed-form update formula. Numerical experiments on the beam-hardening correction and material-selective reconstruction were conducted to compare and assess the performance of conventional methods and the proposed algorithms.Results: The authors found that the GID determines the distance between its two arguments in a flexible manner. In this study, three groups of GIDs with distinct data representations were considered. The authors demonstrated that one type of GIDs that comprises “raw” data can be viewed as an extension of existing statistical reconstructions; under a
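The optimization-transfer step the record describes — replace the objective with a paraboloidal surrogate that touches it at the current iterate, then minimize the surrogate in closed form — can be illustrated on a one-dimensional toy objective. The function and update below are a generic majorize-minimize example, not the paper's polychromatic objective:

```python
import math

# f(x) = log(cosh(x)) has f''(x) = sech(x)**2 <= 1, so at any point x0 the
# paraboloid  f(x0) + f'(x0)*(x - x0) + 0.5*(x - x0)**2  lies above f and
# touches it at x0.  Minimizing that surrogate gives the closed-form
# update  x <- x0 - f'(x0) = x0 - tanh(x0), which monotonically decreases f.

def mm_minimize(x0, iters=50):
    x = x0
    for _ in range(iters):
        x = x - math.tanh(x)   # closed-form minimizer of the surrogate
    return x

print(mm_minimize(3.0))  # approaches 0, the minimizer of log(cosh(x))
```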

  8. Effectiveness of a theoretically-based judgment and decision making intervention for adolescents.

    Science.gov (United States)

    Knight, Danica K; Dansereau, Donald F; Becan, Jennifer E; Rowan, Grace A; Flynn, Patrick M

    2015-05-01

    Although adolescents demonstrate capacity for rational decision making, their tendency to be impulsive, place emphasis on peers, and ignore potential consequences of their actions often translates into higher risk-taking including drug use, illegal activity, and physical harm. Problems with judgment and decision making contribute to risky behavior and are core issues for youth in treatment. Based on theoretical and empirical advances in cognitive science, the Treatment Readiness and Induction Program (TRIP) represents a curriculum-based decision making intervention that can be easily inserted into a variety of content-oriented modalities as well as administered as a separate therapeutic course. The current study examined the effectiveness of TRIP for promoting better judgment among 519 adolescents (37 % female; primarily Hispanic and Caucasian) in residential substance abuse treatment. Change over time in decision making and premeditation (i.e., thinking before acting) was compared among youth receiving standard operating practice (n = 281) versus those receiving standard practice plus TRIP (n = 238). Change in TRIP-specific content knowledge was examined among clients receiving TRIP. Premeditation improved among youth in both groups; TRIP clients showed greater improvement in decision making. TRIP clients also reported significant increases over time in self-awareness, positive-focused thinking (e.g., positive self-talk, goal setting), and recognition of the negative effects of drug use. While both genders showed significant improvement, males showed greater gains in metacognitive strategies (i.e., awareness of one's own cognitive process) and recognition of the negative effects of drug use. These results suggest that efforts to teach core thinking strategies and apply/practice them through independent intervention modules may benefit adolescents when used in conjunction with content-based programs designed to change problematic behaviors.

  9. Theoretical study of the electronic structure of f-element complexes by quantum chemical methods; Analyse de la structure electronique des complexes contenant des elements F par des methodes de la chimie quantique

    Energy Technology Data Exchange (ETDEWEB)

    Vetere, V

    2002-09-15

    This thesis presents comparative studies of the chemical properties of molecular complexes containing lanthanide or actinide trivalent cations, in the context of nuclear waste disposal. More precisely, our aim was a quantum chemical analysis of the metal-ligand bonding in such species. Various theoretical approaches were compared for the inclusion of correlation (density functional theory, multiconfigurational methods) and of relativistic effects (relativistic scalar and 2-component Hamiltonians, relativistic pseudopotentials). The performance of these methods was checked by comparing computed structural properties to published experimental data on small model systems: lanthanide and actinide tri-halides and X3M-L species (X = F, Cl; M = La, Nd, U; L = NH3, acetonitrile, CO). We have thus shown the good performance of density functionals combined with a quasi-relativistic method, as well as of gradient-corrected functionals associated with relativistic pseudopotentials. In contrast, functionals including some part of exact exchange are less reliable in reproducing experimental trends, and we have given a possible explanation for this result. A detailed analysis of the bonding then allowed us to interpret the discrepancies observed in the structural properties of uranium and lanthanide complexes in terms of a covalent contribution to the bonding in the case of uranium(III), which does not exist in the lanthanide(III) homologues. Finally, we have examined larger systems, closer to experimental species, to analyse the influence of the coordination number, of the counter-ions and of the oxidation state of uranium on the metal-ligand bonding. (author)

  10. Theoretical justification of space-mapping-based modeling utilizing a database and on-demand parameter extraction

    DEFF Research Database (Denmark)

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2006-01-01

    the surrogate, we perform parameter extraction with weighting coefficients dependent on the distance between the point of interest and base points. We provide theoretical results showing that the new methodology can assure any accuracy that is required (provided the base set is dense enough), which...

  11. Theoretical Conversions of Different Hardness and Tensile Strength for Ductile Materials Based on Stress-Strain Curves

    Science.gov (United States)

    Chen, Hui; Cai, Li-Xun

    2018-04-01

    Based on the power-law stress-strain relation and the equivalent energy principle, theoretical equations for converting between Brinell hardness (HB), Rockwell hardness (HR), and Vickers hardness (HV) were established. Combining these with the pre-existing relation between the tensile strength (σ_b) and the Hollomon parameters (K, N), theoretical conversions between hardness (HB/HR/HV) and tensile strength (σ_b) were obtained as well. In addition, to confirm the pre-existing σ_b-(K, N) relation, a large number of uniaxial tensile tests were conducted on various ductile materials. Finally, to verify the theoretical conversions, a large body of statistical data listed in ASTM and ISO standards was used to test the robustness of the conversion equations across various hardness and tensile strength values. The results show that both the hardness conversions and the hardness-strength conversions calculated from the theoretical equations accord well with the standard data.
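As a rough illustration of a σ_b-(K, N) relation, the classical textbook result obtained from the Considère necking criterion can be coded directly. This may differ from the exact relation used by the authors, and the parameter values are invented:

```python
import math

def hollomon_uts(K, N):
    """Engineering tensile strength from Hollomon parameters (K, N).

    Classical Considere result for sigma_true = K * eps**N: necking starts
    at true strain eps = N, giving sigma_b = K * N**N * exp(-N).  This is
    the standard textbook relation, not necessarily the paper's exact form.
    """
    return K * N**N * math.exp(-N)

# Illustrative mild-steel-like parameters: K = 600 MPa, N = 0.2
print(round(hollomon_uts(600.0, 0.2), 1))  # ~356 MPa for these values
```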

  12. Grand Canonical adaptive resolution simulation for molecules with electrons: A theoretical framework based on physical consistency

    Science.gov (United States)

    Delle Site, Luigi

    2018-01-01

    A theoretical scheme for the treatment of an open molecular system with electrons and nuclei is proposed. The idea is based on the Grand Canonical description of a quantum region embedded in a classical reservoir of molecules. Electronic properties of the quantum region are calculated at a constant electronic chemical potential equal to that of the corresponding (large) bulk system treated at the full quantum level. Instead, the exchange of molecules between the quantum region and the classical environment occurs at the chemical potential of the macroscopic thermodynamic conditions. The Grand Canonical Adaptive Resolution Scheme is proposed for the treatment of the classical environment; such an approach can treat the exchange of molecules according to first principles of statistical mechanics and thermodynamics. The overall scheme is built on the basis of physical consistency, with the corresponding definition of numerical criteria for controlling the approximations implied by the coupling. Given the wide range of expertise required, this work is intended to provide guiding principles for the construction of a well-founded computational protocol for actual multiscale simulations from the electronic to the mesoscopic scale.

  13. Experimental and theoretical investigation of vibrational spectra of coordination polymers based on TCE-TTF.

    Science.gov (United States)

    Olejniczak, Iwona; Lapiński, Andrzej; Swietlik, Roman; Olivier, Jean; Golhen, Stéphane; Ouahab, Lahcène

    2011-08-01

    The room-temperature infrared and Raman spectra of a series of four isostructural polymeric salts of 2,3,6,7-tetrakis(2-cyanoethylthio)-tetrathiafulvalene (TCE-TTF) with paramagnetic (Co(II), Mn(II)) and diamagnetic (Zn(II), Cd(II)) ions, together with BF(4)(-) or ClO(4)(-) anions are reported. Infrared and Raman-active modes are identified and assigned based on theoretical calculations for neutral and ionized TCE-TTF using density functional theory (DFT) methods. It is confirmed that the TCE-TTF molecules in all the materials investigated are fully ionized and interact in the crystal structure through cyanoethylthio groups. The vibrational modes related to the C=C stretching vibrations of TCE-TTF are analyzed assuming the occurrence of electron-molecular vibration coupling (EMV). The presence of the antisymmetric C=C dimeric mode provides evidence that charge transfer takes place between TCE-TTF molecules belonging to neighboring polymeric networks. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Triphenylamine-based fluorescent NLO phores with ICT characteristics: Solvatochromic and theoretical study

    Science.gov (United States)

    Katariya, Santosh B.; Patil, Dinesh; Rhyman, Lydia; Alswaidan, Ibrahim A.; Ramasami, Ponnadurai; Sekar, Nagaiyan

    2017-12-01

    The static first and second hyperpolarizabilities and their related properties were calculated for triphenylamine-based "push-pull" dyes using the B3LYP, CAM-B3LYP and BHHLYP functionals in conjunction with the 6-311+G(d,p) basis set. The electronic couplings for the electron transfer reactions of the dyes were calculated with the generalized Mulliken-Hush method. The results obtained were correlated with the polarizability parameter αCT, the first hyperpolarizability parameter βCT, and the solvatochromic descriptor 〈γ〉SD obtained by the solvatochromic method. The dyes studied show a high total first-order hyperpolarizability (70-238 times) and second-order hyperpolarizability (412-778 times) compared to urea. Among the three functionals, CAM-B3LYP and BHHLYP give hyperpolarizability values closer to the experimental ones. Experimental absorption and emission wavelengths measured for all the synthesized dyes are in good agreement with those predicted using time-dependent density functional theory. The theoretical examination of the non-linear optical properties focused on the key parameters of polarizability and hyperpolarizability. A remarkable increase in non-linear optical response is observed on insertion of a benzothiazole unit compared to a benzimidazole unit.

  15. Margins of freedom: a field-theoretic approach to class-based health dispositions and practices.

    Science.gov (United States)

    Burnett, Patrick John; Veenstra, Gerry

    2017-09-01

    Pierre Bourdieu's theory of practice situates social practices in the relational interplay between experiential mental phenomena (habitus), resources (capitals) and objective social structures (fields). When applied to class-based practices in particular, the overarching field of power within which social classes are potentially made manifest is the primary field of interest. Applying relational statistical techniques to original survey data from Toronto and Vancouver, Canada, we investigated whether smoking, engaging in physical activity and consuming fruit and vegetables are dispersed in a three-dimensional field of power shaped by economic and cultural capitals and cultural dispositions and practices. We find that aesthetic dispositions and flexibility of developing and established dispositions are associated with positioning in the Canadian field of power and embedded in the logics of the health practices dispersed in the field. From this field-theoretic perspective, behavioural change requires the disruption of existing relations of harmony between the habitus of agents, the fields within which the practices are enacted and the capitals that inform and enforce the mores and regularities of the fields. The three-dimensional model can be explored at: http://relational-health.ca/margins-freedom. © 2017 Foundation for the Sociology of Health & Illness.

  16. Sequential game-theoretical analysis of safeguards systems based on the principle of material accountability

    International Nuclear Information System (INIS)

    Abel, V.; Avenhaus, R.

    1981-01-01

    The international control of fissile material used in nuclear technology is based on the principle of material accountability, i.e., on the difference between the book inventory and the physical inventory of a plant at the end of an inventory period. Since statistical measurement errors cannot be avoided, this comparison calls for methods of statistical hypothesis testing. Moreover, game-theoretical methods are needed as an analytical tool, since a plant operator who wilfully diverts material will do so in an optimal manner. In this article, the optimal test strategy is determined for the case of two inventory periods. Two assumptions are made: the operation of the plant is stopped after the first inventory period if the test indicates wilful diversion, and the plant operator chooses the probability of wilfully diverting material for both inventory periods before the start of the first period. The resulting assertions, which depend on the payoff parameters, and the substitutes that must be used when the payoff parameters cannot be estimated are discussed. (orig.)
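The single-period accountability test underlying this analysis — alarm when the book-minus-physical inventory difference (MUF) exceeds a threshold fixed by the false-alarm probability — looks like this in miniature. All numbers are illustrative:

```python
from statistics import NormalDist

# Under H0 (no diversion), MUF = book - physical inventory is normally
# distributed around 0 with standard deviation sigma from measurement
# error alone.  The inspector alarms when MUF exceeds a threshold chosen
# to fix the false-alarm probability alpha.
sigma, alpha = 2.0, 0.05                       # illustrative: kg, 5%
threshold = sigma * NormalDist().inv_cdf(1 - alpha)
print(round(threshold, 2))                     # ~3.29 kg for these numbers

# Detection probability if M = 5 kg were diverted in the period:
detect = 1 - NormalDist(mu=5.0, sigma=sigma).cdf(threshold)
print(round(detect, 2))
```

The game-theoretic layer of the article then treats the diversion strategy (and, over two periods, its timing) as chosen optimally against this test.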

  17. Power Transmission Scheduling for Generators in a Deregulated Environment Based on a Game-Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Bingtuan Gao

    2015-12-01

    In a deregulated power market, in order to lower their energy price and guarantee the stability of the power network, electricity generators have to choose appropriate transmission lines over which to sell their energy to end users. This paper proposes a game-theoretic power transmission scheduling scheme that allows multiple generators to lower their wheeling costs. Based on the embedded cost method, a wheeling cost model consisting of congestion cost, cost of losses and cost of transmission capacity is presented. By assuming each generator behaves in a selfish and rational way, the competition among the multiple generators is formulated as a non-cooperative game, where the players are the generators and the strategies are their daily schedules of power transmission. We prove that there exists at least one pure-strategy Nash equilibrium of the formulated power transmission game. Moreover, a distributed algorithm is provided to realize the optimization in terms of minimizing the wheeling cost. Finally, simulations were performed and discussed to verify the feasibility and effectiveness of the proposed non-cooperative game approach for generators in a deregulated environment.
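A minimal sketch of the alternating best-response dynamics such a distributed algorithm relies on: two generators, two lines, and invented quadratic congestion costs, not the paper's wheeling cost model:

```python
# Each generator routes 1 unit of power over two lines; a line's per-unit
# wheeling cost rises with its total flow (congestion).  Each generator
# repeatedly minimizes its own cost given the rival's current schedule.

def wheeling_cost(x, y):
    """Cost to a generator sending fraction x on line 1 (rest on line 2),
    while the rival sends fraction y on line 1.  Coefficients invented."""
    line1 = 1.0 + 0.5 * (x + y)              # cheap base price, congestible
    line2 = 1.2 + 0.5 * ((1 - x) + (1 - y))  # pricier base, also congestible
    return x * line1 + (1 - x) * line2

def best_response(y):
    # Coarse grid search keeps the sketch dependency-free.
    return min((i / 100 for i in range(101)), key=lambda x: wheeling_cost(x, y))

x = y = 1.0                      # both start by loading line 1
for _ in range(30):              # alternating best responses
    x = best_response(y)
    y = best_response(x)
print(round(x, 2), round(y, 2))  # settles near the symmetric Nash equilibrium
```

For these coefficients the iteration converges to roughly x = y = 0.57, the fixed point where neither generator can lower its wheeling cost unilaterally.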

  18. A Game Theoretic Framework for Incentive-Based Models of Intrinsic Motivation in Artificial Systems

    Directory of Open Access Journals (Sweden)

    Kathryn Elizabeth Merrick

    2013-10-01

    An emerging body of research is focusing on understanding and building artificial systems that can achieve open-ended development influenced by intrinsic motivations. In particular, research in robotics and machine learning is yielding systems and algorithms with increasing capacity for self-directed learning and autonomy. Traditional software architectures and algorithms are being augmented with intrinsic motivations to drive cumulative acquisition of knowledge and skills. Intrinsic motivations have recently been considered in reinforcement learning, active learning and supervised learning settings among others. This paper considers game theory as a novel setting for intrinsic motivation. A game theoretic framework for intrinsic motivation is formulated by introducing the concept of optimally motivating incentive as a lens through which players perceive a game. Transformations of four well-known mixed-motive games are presented to demonstrate the perceived games when players’ optimally motivating incentive falls in three cases corresponding to strong power, affiliation and achievement motivation. We use agent-based simulations to demonstrate that players with different optimally motivating incentive act differently as a result of their altered perception of the game. We discuss the implications of these results both for modeling human behavior and for designing artificial agents or robots.

  19. A game theoretic framework for incentive-based models of intrinsic motivation in artificial systems.

    Science.gov (United States)

    Merrick, Kathryn E; Shafi, Kamran

    2013-01-01

    An emerging body of research is focusing on understanding and building artificial systems that can achieve open-ended development influenced by intrinsic motivations. In particular, research in robotics and machine learning is yielding systems and algorithms with increasing capacity for self-directed learning and autonomy. Traditional software architectures and algorithms are being augmented with intrinsic motivations to drive cumulative acquisition of knowledge and skills. Intrinsic motivations have recently been considered in reinforcement learning, active learning and supervised learning settings among others. This paper considers game theory as a novel setting for intrinsic motivation. A game theoretic framework for intrinsic motivation is formulated by introducing the concept of optimally motivating incentive as a lens through which players perceive a game. Transformations of four well-known mixed-motive games are presented to demonstrate the perceived games when players' optimally motivating incentive falls in three cases corresponding to strong power, affiliation and achievement motivation. We use agent-based simulations to demonstrate that players with different optimally motivating incentive act differently as a result of their altered perception of the game. We discuss the implications of these results both for modeling human behavior and for designing artificial agents or robots.

  20. Physical characterization of biomass-based pyrolysis liquids. Application of standard fuel oil analyses

    Energy Technology Data Exchange (ETDEWEB)

    Oasmaa, A; Leppaemaeki, E; Koponen, P; Levander, J; Tapola, E [VTT Energy, Espoo (Finland). Energy Production Technologies

    1998-12-31

    The main purpose of the study was to test the applicability of standard fuel oil methods, developed for petroleum-based fuels, to pyrolysis liquids. In addition, research on sampling, homogeneity, stability, miscibility and corrosivity was carried out. The standard methods have been tested on several different pyrolysis liquids. Recommendations on sampling, sample size and small modifications of the standard methods are presented. In general, most of the methods can be used as such, but the accuracy of the analysis can be improved by minor modifications. Fuel oil analyses not suitable for pyrolysis liquids have been identified. Homogeneity of the liquid is the most critical factor for accurate analysis. The presence of air bubbles may disturb several analyses. Sample preheating and prefiltration should be avoided when possible: the former may cause changes in the composition and structure of the pyrolysis liquid, while the latter may remove part of the organic material along with the particles. The size of the sample should be determined on the basis of the homogeneity and the water content of the liquid. The basic analyses of the Technical Research Centre of Finland (VTT) include water, pH, solids, ash, Conradson carbon residue, heating value, CHN, density, viscosity, pour point, flash point, and stability. Additional analyses are carried out when needed. (orig.) 53 refs.

  1. A fluorescent sensor based on a dansyl-diethylenetriamine-thiourea conjugate: a thorough theoretical investigation

    International Nuclear Information System (INIS)

    Nguyen Khoa Hien; Nguyen Thi Ai Nhung; Duong Tuan Quang; Ho Quoc Dai; Nguyen Tien Trung

    2015-01-01

    A new dansyl-diethylenetriamine-thiourea conjugate (DT) for the detection of Hg2+ ions in aqueous solution has been theoretically designed and compared to our previously published results. The synthetic path, the optimized geometric structure and the characteristics of the DT were determined by theoretical calculations at the B3LYP/LanL2DZ level. Accordingly, the DT can react with an Hg2+ ion to form a product with quenched fluorescence. Remarkably, the experimental results are in excellent agreement with the theoretically evaluated data. (author)

  2. Theoretical study of some aspects of the nucleo-bases reactivity: definition of new theoretical tools for the study of chemical reactivity

    International Nuclear Information System (INIS)

    Labet, V.

    2009-09-01

    In this work, three kinds of nucleo-base damage were studied from a theoretical point of view with quantum chemistry methods based on density functional theory: the spontaneous deamination of cytosine and its derivatives, the formation of tandem lesions induced by hydroxyl radicals in anaerobic medium, and the formation of pyrimidine dimers under exposure to UV radiation. The complementary use of quantitative static methods, allowing the exploration of the potential energy surface of a chemical reaction, and of 'conceptual DFT' principles yields information on the mechanisms involved and rationalizes the differences in nucleo-base reactivity towards the formation of a given kind of damage. In parallel, a reflection was undertaken on the concept of the asynchronous concerted mechanism, in terms of the physical meaning of the transition state, compliance with the Maximum Hardness Principle, and determination of the number of primitive processes involved. Finally, a new local reactivity index was developed, relevant to understanding the reactivity of a molecular system in an excited state. (author)

  3. A New Optimization Method for Centrifugal Compressors Based on 1D Calculations and Analyses

    Directory of Open Access Journals (Sweden)

    Pei-Yuan Li

    2015-05-01

    This paper presents an optimization design method for centrifugal compressors based on one-dimensional calculations and analyses. It consists of two parts: (1) centrifugal compressor geometry optimization based on one-dimensional calculations and (2) matching optimization of the vaned diffuser with an impeller based on the required throat area. A low-pressure-stage centrifugal compressor in a MW-level gas turbine is optimized by this method. One-dimensional calculation results show that D3/D2 is too large in the original design, resulting in the low efficiency of the entire stage. Based on the one-dimensional optimization results, the geometry of the diffuser has been redesigned. The outlet diameter of the vaneless diffuser has been reduced, and the original single-stage diffuser has been replaced by a tandem vaned diffuser. After optimization, the entire stage pressure ratio is increased by approximately 4%, and the efficiency is increased by approximately 2%.

  4. An ecological and theoretical deconstruction of a school-based obesity prevention program in Mexico.

    Science.gov (United States)

    Safdie, Margarita; Cargo, Margaret; Richard, Lucie; Lévesque, Lucie

    2014-08-10

    Ecological intervention programs are recommended to prevent overweight and obesity in children. The National Institute of Public Health (INSP) in Mexico implemented a successful ecological intervention program to promote healthy lifestyle behaviors in school-age children. This study assessed the integration of ecological principles and Social Cognitive Theory (SCT) constructs in this effective school-based obesity prevention program implemented in 15 elementary schools in Mexico City. Two coders applied the Intervention Analysis Procedure (IAP) to "map" the program's integration of ecological principles. A checklist gauged the use of SCT in program activities. Thirty-two distinct intervention strategies were implemented in one setting (i.e., school) to engage four different target groups (students, parents, school representatives, government) across two domains (Nutrition and Physical Activity). Overall, 47.5% of the strategies targeted the school infrastructure and/or personnel; 37.5% targeted a key political actor, the Public Education Secretariat, while fewer strategies targeted parents (12.5%) and children (3%). More strategies were implemented in the Nutrition domain (69%) than in Physical Activity (31%). The most frequently used SCT construct within both intervention domains was Reciprocal Determinism (e.g., where changes to the environment influence changes in behavior and these behavioral changes influence further changes to the environment); no significant differences were observed in the use of SCT constructs across domains. Findings provide insight into a promising combination of strategies and theoretical constructs that can be used to implement a school-based obesity prevention program. Strategies emphasized school-level infrastructure/personnel change and strong political engagement and were most commonly underpinned by Reciprocal Determinism for both Nutrition and Physical Activity.

  5. Theoretical Issues

    Energy Technology Data Exchange (ETDEWEB)

    Marc Vanderhaeghen

    2007-04-01

    The theoretical issues in the interpretation of the precision measurements of the nucleon-to-Delta transition by means of electromagnetic probes are highlighted. The results of these measurements are confronted with the state-of-the-art calculations based on chiral effective-field theories (EFT), lattice QCD, large-Nc relations, perturbative QCD, and QCD-inspired models. The link of the nucleon-to-Delta form factors to generalized parton distributions (GPDs) is also discussed.

  6. Comparative study of a novel liquid–vapour separator incorporated gravitational loop heat pipe against the conventional gravitational straight and loop heat pipes – Part I: Conceptual development and theoretical analyses

    International Nuclear Information System (INIS)

    Zhang, Xingxing; Shen, Jingchun; He, Wei; Xu, Peng; Zhao, Xudong; Tan, Junyi

    2015-01-01

    Highlights: • We propose a liquid–vapour separator incorporated gravity-assisted loop heat pipe. • A comparative study of the thermal performance of three heat pipes was conducted. • A dedicated steady-state thermal model of the three heat pipes was developed. • Optimum operational settings of the new loop heat pipe are recommended. • The new loop heat pipe can achieve a significantly enhanced heat transfer effect. - Abstract: The aim of this paper is to investigate the thermal performance of a novel liquid–vapour separator incorporated gravity-assisted loop heat pipe (GALHP) (T1) against a conventional GALHP (T2) and a gravitational straight heat pipe (T3), from the conceptual and theoretical aspects. This involved dedicated conceptual development, thermo-fluid analyses, computer modelling and discussion of the results. The innovative feature of the new GALHP lies in the integration of a dedicated liquid–vapour separator on top of its evaporator section, which removes the potential entrainment between the heat pipe liquid and vapour flows and, meanwhile, resolves the inherent ‘dry-out’ problem exhibited in the conventional GALHP. Based on this recognised novelty, a dedicated steady-state thermal model covering the mass continuity, energy conservation and Darcy equations was established. The model was operated at different sets of conditions, thus generating the temperature/pressure contours of the vapour and liquid flows at the evaporator section, the overall thermal resistance, the effective thermal conductivity, and the flow resistances across the entire loop. Comparison among these results led to determination of the optimum operational settings of the new GALHP and assessment of the heat-transfer enhancement rate of the new GALHP against the conventional heat pipes. It was suggested that the overall thermal resistances of the three heat pipes (T1, T2, and T3) were 0.10 °C/W, 0.49 °C/W and 0.22 °C/W, while their effective thermal conductivities were

  7. FY01 Supplemental Science and Performance Analysis: Volume 1, Scientific Bases and Analyses

    International Nuclear Information System (INIS)

    Bodvarsson, G.S.; Dobson, David

    2001-01-01

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S and ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S and ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S and ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054 [DIRS 124754

  8. FY01 Supplemental Science and Performance Analysis: Volume 1,Scientific Bases and Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Bodvarsson, G.S.; Dobson, David

    2001-05-30

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S&ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S&ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S&ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054 [DIRS 124754

  9. Issues and approaches in risk-based aging analyses of passive components

    International Nuclear Information System (INIS)

    Uryasev, S.P.; Samanta, P.K.; Vesely, W.E.

    1994-01-01

    In previous NRC-sponsored work, a general methodology was developed to quantify the risk contributions from aging components at nuclear plants. The methodology allowed Probabilistic Risk Analyses (PRAs) to be modified to incorporate age-dependent component failure rates and aging maintenance models, so as to evaluate and prioritize the aging contributions from active components using the linear aging failure rate model and empirical component aging rates. In the present paper, this methodology is extended to passive components (for example, pipes, heat exchangers, and the vessel). The analyses of passive components raise issues different from those of active components. Here, we specifically focus on three aspects that need to be addressed in risk-based aging prioritization of passive components
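
The "linear aging failure rate model" referred to above has the standard form λ(t) = λ0 + a·t, with λ0 a baseline failure rate and a an aging rate. A minimal sketch of how such age-dependent rates might enter a prioritization calculation (the numerical values are purely illustrative, not from the paper):

```python
def linear_aging_failure_rate(lam0, a, t):
    """Linear aging model: lambda(t) = lam0 + a * t."""
    return lam0 + a * t

def mean_failure_rate(lam0, a, t1, t2):
    """Average of lambda(t) over an inspection interval [t1, t2]."""
    return lam0 + a * (t1 + t2) / 2.0

# Illustrative values only: baseline rate per hour, aging rate per hour^2.
lam0, a = 1.0e-6, 2.0e-9
print(linear_aging_failure_rate(lam0, a, 1000.0))
print(mean_failure_rate(lam0, a, 0.0, 1000.0))
```

In a PRA, the ratio of the aged mean rate to the baseline rate would scale the corresponding basic-event probability, which is one way aging contributions can be ranked.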

  10. DESIGNING EAP MATERIALS BASED ON INTERCULTURAL CORPUS ANALYSES: THE CASE OF LOGICAL MARKERS IN RESEARCH ARTICLES

    Directory of Open Access Journals (Sweden)

    Pilar Mur Dueñas

    2009-10-01

    The ultimate aim of intercultural analyses in English for Academic Purposes is to help non-native scholars function successfully in the international disciplinary community in English. The aim of this paper is to show how corpus-based intercultural analyses can be useful to design EAP materials on a particular metadiscourse category, logical markers, in research article writing. The paper first describes the analysis carried out of additive, contrastive and consecutive logical markers in a corpus of research articles in English and in Spanish in a particular discipline, Business Management. Differences were found in their frequency and also in the use of each of the sub-categories. Then, five activities designed on the basis of these results are presented. They are aimed at raising Spanish Business scholars' awareness of the specific uses and pragmatic function of frequent logical markers in international research articles in English.

  11. Integrated approach for fusion multi-physics coupled analyses based on hybrid CAD and mesh geometries

    Energy Technology Data Exchange (ETDEWEB)

    Qiu, Yuefeng, E-mail: yuefeng.qiu@kit.edu; Lu, Lei; Fischer, Ulrich

    2015-10-15

    Highlights: • Integrated approach for neutronics, thermal and structural analyses was developed. • MCNP5/6, TRIPOLI-4 were coupled with CFX, Fluent and ANSYS Workbench. • A novel meshing approach has been proposed for describing MC geometry. - Abstract: Coupled multi-physics analyses on fusion reactor devices require high-fidelity neutronic models, and flexible, accurate data exchanging between various calculation codes. An integrated coupling approach has been developed to enable the conversion of CAD, mesh, or hybrid geometries for Monte Carlo (MC) codes MCNP5/6, TRIPOLI-4, and translation of nuclear heating data for CFD codes Fluent, CFX and structural mechanical software ANSYS Workbench. The coupling approach has been implemented based on SALOME platform with CAD modeling, mesh generation and data visualization capabilities. A novel meshing approach has been developed for generating suitable meshes for MC geometry descriptions. The coupling approach has been concluded to be reliable and efficient after verification calculations of several application cases.

  12. Tracing common origins of Genomic Islands in prokaryotes based on genome signature analyses.

    Science.gov (United States)

    van Passel, Mark Wj

    2011-09-01

    Horizontal gene transfer constitutes a powerful and innovative force in evolution, but often little is known about the actual origins of transferred genes. Sequence alignments are generally of limited use in tracking the original donor, since still only a small fraction of the total genetic diversity is thought to be uncovered. Alternatively, approaches based on similarities in the genome specific relative oligonucleotide frequencies do not require alignments. Even though the exact origins of horizontally transferred genes may still not be established using these compositional analyses, it does suggest that compositionally very similar regions are likely to have had a common origin. These analyses have shown that up to a third of large acquired gene clusters that reside in the same genome are compositionally very similar, indicative of a shared origin. This brings us closer to uncovering the original donors of horizontally transferred genes, and could help in elucidating possible regulatory interactions between previously unlinked sequences.
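
Genome-signature comparisons of the kind described are commonly based on relative oligonucleotide (e.g. tetranucleotide) frequencies. A minimal sketch of such a comparison follows; it is a toy distance measure for illustration, not the author's actual method:

```python
from itertools import product
from math import sqrt

def kmer_freqs(seq, k=4):
    """Relative k-mer frequencies of a DNA sequence (the genome signature)."""
    counts = {''.join(p): 0 for p in product('ACGT', repeat=k)}
    total = 0
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:
            counts[kmer] += 1
            total += 1
    return {kmer: c / total for kmer, c in counts.items()}

def signature_distance(seq1, seq2, k=4):
    """Euclidean distance between two genome signatures; smaller values
    suggest compositional similarity (a possible common origin)."""
    f1, f2 = kmer_freqs(seq1, k), kmer_freqs(seq2, k)
    return sqrt(sum((f1[m] - f2[m]) ** 2 for m in f1))

# Toy sequences: a and b are compositionally close, c is very different.
a = "ATGCGCGTATGCGCGTATGCGCGT" * 10
b = "ATGCGCGTATGCGCGTATGCGCGA" * 10
c = "TTTTAAAACCCCGGGG" * 15
assert signature_distance(a, b) < signature_distance(a, c)
```

Real analyses of this kind work on much longer windows and typically correct for strand symmetry and genome-wide background composition.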

  13. Theoretical investigation of the energy performance of a novel MPCM (Microencapsulated Phase Change Material) slurry based PV/T module

    International Nuclear Information System (INIS)

    Qiu, Zhongzhu; Zhao, Xudong; Li, Peng; Zhang, Xingxing; Ali, Samira; Tan, Junyi

    2015-01-01

    The aim of the paper is to present a theoretical investigation into the energy performance of a novel PV/T module that employs an MPCM (Micro-encapsulated Phase Change Material) slurry as the working fluid. This involved (1) development of a dedicated mathematical model and computer program; (2) validation of the model using published data; (3) prediction of the energy performance of the MPCM-slurry-based PV/T module; and (4) investigation of the impacts of the slurry flow state, concentration ratio, Reynolds number and slurry serpentine size on the energy performance of the PV/T module. It was found that the established model, based on the Hottel–Whillier assumption, is able to predict the energy performance of the MPCM-slurry-based PV/T system with very good accuracy, showing a 0.3–0.4% difference compared to a validated model. Analyses of the simulation results indicated that laminar flow is not a favorable flow state in terms of the energy efficiency of the PV/T module. Instead, turbulent flow is the desired flow state, with the potential to enhance the energy performance of the PV/T module. Under the turbulent flow condition, increasing the slurry concentration ratio led to reduced PV cell temperature and increased thermal, electrical and overall efficiency of the PV/T module, as well as increased flow resistance. As a result, the net efficiency of the PV/T module peaked at a concentration ratio of 5% at a specified Reynolds number of 3,350. With all other parameters fixed, increasing the diameter of the serpentine piping led to increased slurry mass flow rate, decreased PV cell temperature and, consequently, increased thermal, electrical, overall and net efficiencies of the PV/T module. Overall, the MPCM-slurry-based PV/T module is a new, highly efficient solar thermal and power configuration, which has the potential to help reduce fossil fuel consumption and carbon emission to
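
The laminar/turbulent distinction discussed in the abstract is governed by the pipe-flow Reynolds number, Re = ρvD/μ. A small sketch follows; the slurry property values are assumptions chosen for illustration, not data from the paper, and the single 2300 threshold is a common simplification:

```python
def reynolds_number(rho, velocity, diameter, mu):
    """Pipe-flow Reynolds number: Re = rho * v * D / mu."""
    return rho * velocity * diameter / mu

def flow_state(re, critical=2300):
    """Simplified regime check: laminar below ~2300, otherwise
    treated as turbulent here (the transition band is ignored)."""
    return "laminar" if re < critical else "turbulent"

# Assumed MPCM-slurry properties (illustrative only):
rho, mu = 1030.0, 0.0028   # density kg/m^3, dynamic viscosity Pa*s
d, v = 0.010, 0.91         # pipe diameter m, mean velocity m/s
re = reynolds_number(rho, v, d, mu)
print(round(re), flow_state(re))
```

With these assumed values Re lands near the 3,350 operating point quoted in the abstract, i.e. on the turbulent side of the laminar limit.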

  14. Features of Computer-Based Decision Aids: Systematic Review, Thematic Synthesis, and Meta-Analyses.

    Science.gov (United States)

    Syrowatka, Ania; Krömker, Dörthe; Meguerditchian, Ari N; Tamblyn, Robyn

    2016-01-26

    Patient information and education, such as decision aids, are gradually moving toward online, computer-based environments. Considerable research has been conducted to guide content and presentation of decision aids. However, given the relatively new shift to computer-based support, little attention has been given to how multimedia and interactivity can improve upon paper-based decision aids. The first objective of this review was to summarize published literature into a proposed classification of features that have been integrated into computer-based decision aids. Building on this classification, the second objective was to assess whether integration of specific features was associated with higher-quality decision making. Relevant studies were located by searching MEDLINE, Embase, CINAHL, and CENTRAL databases. The review identified studies that evaluated computer-based decision aids for adults faced with preference-sensitive medical decisions and reported quality of decision-making outcomes. A thematic synthesis was conducted to develop the classification of features. Subsequently, meta-analyses were conducted based on standardized mean differences (SMD) from randomized controlled trials (RCTs) that reported knowledge or decisional conflict. Further subgroup analyses compared pooled SMDs for decision aids that incorporated a specific feature to other computer-based decision aids that did not incorporate the feature, to assess whether specific features improved quality of decision making. Of 3541 unique publications, 58 studies met the target criteria and were included in the thematic synthesis. The synthesis identified six features: content control, tailoring, patient narratives, explicit values clarification, feedback, and social support. A subset of 26 RCTs from the thematic synthesis was used to conduct the meta-analyses. 
As expected, computer-based decision aids performed better than usual care or alternative aids; however, some features performed better than
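
The pooled effect sizes described above are conventionally obtained by inverse-variance weighting of per-study standardized mean differences. A minimal fixed-effect sketch follows; the study values are invented for illustration, and the authors' actual analysis may have used a random-effects model:

```python
from math import sqrt

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference (Cohen's d, pooled SD)."""
    sp = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def smd_variance(d, n1, n2):
    """Approximate sampling variance of an SMD."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def pooled_smd(studies):
    """Fixed-effect inverse-variance pooled SMD.
    studies: list of (d, n1, n2) tuples."""
    weights = [1.0 / smd_variance(d, n1, n2) for d, n1, n2 in studies]
    return sum(w * d for w, (d, _, _) in zip(weights, studies)) / sum(weights)

# Invented per-study SMDs and arm sizes, for illustration only.
studies = [(0.45, 60, 58), (0.30, 120, 115), (0.60, 40, 42)]
print(round(pooled_smd(studies), 3))
```

Larger studies get proportionally more weight, so the pooled estimate sits closest to the biggest study's SMD.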

  15. Features of Computer-Based Decision Aids: Systematic Review, Thematic Synthesis, and Meta-Analyses

    Science.gov (United States)

    Krömker, Dörthe; Meguerditchian, Ari N; Tamblyn, Robyn

    2016-01-01

    Background Patient information and education, such as decision aids, are gradually moving toward online, computer-based environments. Considerable research has been conducted to guide content and presentation of decision aids. However, given the relatively new shift to computer-based support, little attention has been given to how multimedia and interactivity can improve upon paper-based decision aids. Objective The first objective of this review was to summarize published literature into a proposed classification of features that have been integrated into computer-based decision aids. Building on this classification, the second objective was to assess whether integration of specific features was associated with higher-quality decision making. Methods Relevant studies were located by searching MEDLINE, Embase, CINAHL, and CENTRAL databases. The review identified studies that evaluated computer-based decision aids for adults faced with preference-sensitive medical decisions and reported quality of decision-making outcomes. A thematic synthesis was conducted to develop the classification of features. Subsequently, meta-analyses were conducted based on standardized mean differences (SMD) from randomized controlled trials (RCTs) that reported knowledge or decisional conflict. Further subgroup analyses compared pooled SMDs for decision aids that incorporated a specific feature to other computer-based decision aids that did not incorporate the feature, to assess whether specific features improved quality of decision making. Results Of 3541 unique publications, 58 studies met the target criteria and were included in the thematic synthesis. The synthesis identified six features: content control, tailoring, patient narratives, explicit values clarification, feedback, and social support. A subset of 26 RCTs from the thematic synthesis was used to conduct the meta-analyses. 
As expected, computer-based decision aids performed better than usual care or alternative aids; however

  16. A web-based endpoint adjudication system for interim analyses in clinical trials.

    Science.gov (United States)

    Nolen, Tracy L; Dimmick, Bill F; Ostrosky-Zeichner, Luis; Kendrick, Amy S; Sable, Carole; Ngai, Angela; Wallace, Dennis

    2009-02-01

    A data monitoring committee (DMC) is often employed to assess trial progress and review safety data and efficacy endpoints throughout a trial. Interim analyses performed for the DMC should use data that are as complete and verified as possible. Such analyses are complicated when data verification involves subjective study endpoints or requires clinical expertise to determine each subject's status with respect to the study endpoint. Therefore, procedures are needed to obtain adjudicated data for interim analyses in an efficient manner. In the past, methods for handling such data included using locally reported results as surrogate endpoints, adjusting analysis methods for unadjudicated data, or simply performing the adjudication as rapidly as possible. These methods all have inadequacies that make their sole usage suboptimal. For a study of prophylaxis for invasive candidiasis, adjudication of both study eligibility criteria and clinical endpoints prior to two interim analyses was required. Because the study was expected to enroll at a moderate rate and the sponsor required adjudicated endpoints to be used for interim analyses, an efficient process for adjudication was required. We created a web-based endpoint adjudication system (WebEAS) that allows for expedited review by the endpoint adjudication committee (EAC). This system automatically identifies when a subject's data are complete, creates a subject profile from the study data, and assigns EAC reviewers. The reviewers use the WebEAS to review the subject profile and submit their completed review form. The WebEAS then compares the reviews, assigns an additional review as a tiebreaker if needed, and stores the adjudicated data. The study for which this system was originally built was administratively closed after 10 months with only 38 subjects enrolled. The adjudication process was finalized and the WebEAS system activated prior to study closure. Some website accessibility issues presented initially. However

  17. The theoretical advantage of affinity membrane-based immunoadsorption therapy of hypercholesterolemia

    International Nuclear Information System (INIS)

    Green, P.; Odell, R.; Schindhelm, K.

    1996-01-01

    Therapy of hypercholesterolemia using immunoadsorption of Low Density Lipoprotein (LDL) to a gel substrate is a current clinical technique (Bosch T., Biomat., Art. Cells and Immob. Biotech, 20: 1165-1169, 1992). Recently, affinity membranes have been proposed as an alternative substrate for immunoadsorption (Brandt S and others, Bio Technology, 6: 779-782, 1988). Potentially, the overall rate of adsorption to a membrane may be faster than to a gel because of the different geometry (ibid). This implies that, for the same conditions, a membrane-based device will have a higher Number of Transfer Units, more efficient adsorption and a smaller device size than a gel. To test this hypothesis, we calculated two key theoretical design parameters for a functioning clinical-scale affinity membrane device: the Separation Factor, R = Kd/(Kd + C0), where Kd is the equilibrium dissociation constant (M) and C0 the feed concentration (M); and the Number of Transfer Units, N = ka·Qmax·Vm/F, where ka is the intrinsic reaction rate constant (M⁻¹ min⁻¹), Qmax the substrate capacity (M), Vm the membrane volume (ml) and F the flow rate (ml min⁻¹). We assumed a 1 h treatment time during which 1 plasma volume (3 L) is treated, hence F = 50 ml min⁻¹. If we assume 2/3 of LDL is removed from an initial level of 3 g/L, we can calculate an average feed concentration C0 = 2 g/L. Some data are available in the literature for typical values of Kd (10⁻⁸ M) and ka (10³ M⁻¹ s⁻¹ to 3×10⁵ M⁻¹ s⁻¹) (Olsen WC and others, Molec. Immun. 26: 129-136, 1989). Since the intrinsic reaction kinetics may vary from very slow (10³ M⁻¹ s⁻¹) to very fast (3×10⁵ M⁻¹ s⁻¹), N may vary from small (2) to large (650). Hence for a membrane device we must select the antibody with the fastest reaction (ka) and highest capacity (Qmax); otherwise, there may be no advantage in a membrane-based device over a gel-based device
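
The abstract's order-of-magnitude argument can be checked directly. In the sketch below, an LDL particle mass of ~3×10⁶ g/mol is assumed in order to convert the 2 g/L feed concentration into molar units; that value is an assumption of this sketch, not stated in the abstract:

```python
def separation_factor(kd, c0):
    """R = Kd / (Kd + C0); R << 1 indicates strongly favourable binding."""
    return kd / (kd + c0)

def transfer_units(ka, qmax, vm, flow):
    """N = ka * Qmax * Vm / F (consistent units assumed)."""
    return ka * qmax * vm / flow

KD = 1.0e-8        # mol/L, typical value from the cited literature
C0 = 2.0 / 3.0e6   # mol/L; assumes ~3e6 g/mol per LDL particle
print(f"R = {separation_factor(KD, C0):.3f}")  # R << 1: favourable

# N scales linearly with ka, so the quoted kinetic range of
# 1e3 to 3e5 M^-1 s^-1 spans a ~300-fold range in N, consistent
# with the small (2) to large (650) values in the abstract.
print(transfer_units(3.0e5, 1, 1, 1) / transfer_units(1.0e3, 1, 1, 1))
```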

  18. Determination of the spatial response of neutron based analysers using a Monte Carlo based method

    International Nuclear Information System (INIS)

    Tickner, James

    2000-01-01

    One of the principal advantages of using thermal neutron capture (TNC, also called prompt gamma neutron activation analysis or PGNAA) or neutron inelastic scattering (NIS) techniques for measuring elemental composition is the high penetrating power of both the incident neutrons and the resultant gamma-rays, which means that large sample volumes can be interrogated. Gauges based on these techniques are widely used in the mineral industry for on-line determination of the composition of bulk samples. However, attenuation of both neutrons and gamma-rays in the sample and geometric (source/detector distance) effects typically result in certain parts of the sample contributing more to the measured composition than others. In turn, this introduces errors in the determination of the composition of inhomogeneous samples. This paper discusses a combined Monte Carlo/analytical method for estimating the spatial response of a neutron gauge. Neutron propagation is handled using a Monte Carlo technique which allows an arbitrarily complex neutron source and gauge geometry to be specified. Gamma-ray production and detection is calculated analytically which leads to a dramatic increase in the efficiency of the method. As an example, the method is used to study ways of reducing the spatial sensitivity of on-belt composition measurements of cement raw meal
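
A toy version of the hybrid scheme described above: neutron interaction sites are sampled by Monte Carlo, while gamma-ray escape toward the detector is weighted analytically. The one-dimensional geometry and the attenuation coefficients are simplifications for illustration, not the gauge model of the paper:

```python
import math
import random

def spatial_response(depths, detector_depth=0.0, mu_n=0.08, mu_g=0.06,
                     n_samples=20000, seed=1):
    """Relative contribution of each sample depth (cm) to the signal.
    Neutron transport: Monte Carlo sampling of exponential attenuation;
    gamma escape: analytic weight exp(-mu_g * path_to_detector)."""
    rng = random.Random(seed)
    response = {d: 0.0 for d in depths}
    for _ in range(n_samples):
        # Sample a neutron interaction depth from the exponential law.
        z = rng.expovariate(mu_n)
        # Attribute it to the nearest tabulated depth bin.
        d = min(depths, key=lambda dd: abs(dd - z))
        # Analytic gamma-ray escape probability back to the detector.
        response[d] += math.exp(-mu_g * abs(z - detector_depth))
    total = sum(response.values())
    return {d: w / total for d, w in response.items()}

resp = spatial_response([2.0, 10.0, 20.0, 30.0])
# Near-surface material dominates the measured composition.
assert resp[2.0] > resp[30.0]
```

Treating the gamma side analytically instead of by sampling is what gives the hybrid approach its efficiency gain: each neutron history contributes a smooth weight rather than a rare detection event.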

  19. Cell death following BNCT: A theoretical approach based on Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ballarini, F., E-mail: francesca.ballarini@pv.infn.it [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy)] [INFN (National Institute of Nuclear Physics)-Sezione di Pavia, via Bassi 6, Pavia (Italy); Bakeine, J. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy); Bortolussi, S. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy)] [INFN (National Institute of Nuclear Physics)-Sezione di Pavia, via Bassi 6, Pavia (Italy); Bruschi, P. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy); Cansolino, L.; Clerici, A.M.; Ferrari, C. [University of Pavia, Department of Surgery, Experimental Surgery Laboratory, Pavia (Italy); Protti, N.; Stella, S. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy)] [INFN (National Institute of Nuclear Physics)-Sezione di Pavia, via Bassi 6, Pavia (Italy); Zonta, A.; Zonta, C. [University of Pavia, Department of Surgery, Experimental Surgery Laboratory, Pavia (Italy); Altieri, S. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy)] [INFN (National Institute of Nuclear Physics)-Sezione di Pavia, via Bassi 6, Pavia (Italy)

    2011-12-15

    In parallel to boron measurements and animal studies, investigations on radiation-induced cell death are also in progress in Pavia, with the aim of better characterising the effects of a BNCT treatment down to the cellular level. Such studies are being carried out not only experimentally but also theoretically, based on a mechanistic model and a Monte Carlo code. The model assumes that: (1) only clustered DNA strand breaks can lead to chromosome aberrations; (2) only chromosome fragments within a certain threshold distance can undergo misrejoining; (3) the so-called 'lethal aberrations' (dicentrics, rings and large deletions) lead to cell death. After applying the model to normal cells exposed to monochromatic fields of different radiation types, the irradiation section of the code was purposely extended to mimic cell exposure to the mixed radiation field produced by the ¹⁰B(n,α)⁷Li reaction, which gives rise to alpha particles and Li ions of short range and high biological effectiveness, and by the ¹⁴N(n,p)¹⁴C reaction, which produces 0.58 MeV protons. Very good agreement between model predictions and literature data was found for human and animal cells exposed to X- or gamma-rays, protons and alpha particles, allowing the model to be validated for cell death induced by monochromatic radiation fields. The model predictions also showed good agreement with experimental data obtained by our group exposing DHD cells to thermal neutrons in the TRIGA Mark II reactor of the University of Pavia; this allowed the model to be validated for a BNCT exposure scenario as well, providing a useful predictive tool to bridge the gap between irradiation and cell death.

  20. The theoretical base of e-learning and its role in surgical education.

    Science.gov (United States)

    Evgeniou, Evgenios; Loizou, Peter

    2012-01-01

    The advances in Internet and computer technology offer many solutions that can enhance surgical education and increase the effectiveness of surgical teaching. E-learning plays an important role in surgical education today, with many e-learning projects already available on the Internet. E-learning is based on a mixture of educational theories that derive from behaviorist, cognitivist, and constructivist educational theoretical frameworks. CAN EDUCATIONAL THEORY IMPROVE E-LEARNING?: Conventional educational theory can be applied to improve the quality and effectiveness of e-learning. The theory of "threshold concepts" and educational theories on reflection, motivation, and communities of practice can be applied when designing e-learning material. E-LEARNING IN SURGICAL EDUCATION: E-learning has many advantages but also has weaknesses. Studies have shown that e-learning is an effective teaching method that offers high levels of learner satisfaction. Instead of trying to compare e-learning with traditional methods of teaching, it is better to integrate into e-learning those elements of traditional teaching that have been proven to be effective. E-learning can play an important role in surgical education as a blended approach, combined with more traditional methods of teaching, which offer better face-to-face interaction with patients and colleagues in different circumstances and hands-on practice of practical skills. National provision of e-learning can make evaluation easier. The correct utilization of Internet and computer resources, combined with the application of valid conventional educational theory to design e-learning relevant to the various levels of surgical training, can be effective in the training of future surgeons. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  1. CrusView: a Java-based visualization platform for comparative genomics analyses in Brassicaceae species.

    Science.gov (United States)

    Chen, Hao; Wang, Xiangfeng

    2013-09-01

    In plants and animals, chromosomal breakage and fusion events based on conserved syntenic genomic blocks lead to conserved patterns of karyotype evolution among species of the same family. However, karyotype information has not been well utilized in genomic comparison studies. We present CrusView, a Java-based bioinformatic application utilizing Standard Widget Toolkit/Swing graphics libraries and a SQLite database for performing visualized analyses of comparative genomics data in Brassicaceae (crucifer) plants. Compared with similar software and databases, one of the unique features of CrusView is its integration of karyotype information when comparing two genomes. This feature allows users to perform karyotype-based genome assembly and karyotype-assisted genome synteny analyses with preset karyotype patterns of the Brassicaceae genomes. Additionally, CrusView is a local program, which gives its users high flexibility when analyzing unpublished genomes and allows users to upload self-defined genomic information so that they can visually study the associations between genome structural variations and genetic elements, including chromosomal rearrangements, genomic macrosynteny, gene families, high-frequency recombination sites, and tandem and segmental duplications between related species. This tool will greatly facilitate karyotype, chromosome, and genome evolution studies using visualized comparative genomics approaches in Brassicaceae species. CrusView is freely available at http://www.cmbb.arizona.edu/CrusView/.

  2. A Game-based Corpus for Analysing the Interplay between Game Context and Player Experience

    DEFF Research Database (Denmark)

    Shaker, Noor; Yannakakis, Georgios N.; Asteriadis, Stylianos

    2011-01-01

    Recognizing players' affective state while playing video games has been the focus of many recent research studies. In this paper we describe the process that has been followed to build a corpus based on game events and recorded video sessions from human players while playing Super Mario Bros. We present different types of information that have been extracted from game context, player preferences and perception of the game, as well as user features, automatically extracted from video recordings. We run a number of initial experiments to analyse players' behavior while playing video games as a case study.

  3. How distributed processing produces false negatives in voxel-based lesion-deficit analyses.

    Science.gov (United States)

    Gajardo-Vidal, Andrea; Lorca-Puls, Diego L; Crinion, Jennifer T; White, Jitrachote; Seghier, Mohamed L; Leff, Alex P; Hope, Thomas M H; Ludersdorfer, Philipp; Green, David W; Bowman, Howard; Price, Cathy J

    2018-07-01

    In this study, we hypothesized that if the same deficit can be caused by damage to one or another part of a distributed neural system, then voxel-based analyses might miss critical lesion sites because preservation of each site will not be consistently associated with preserved function. The first part of our investigation used voxel-based multiple regression analyses of data from 359 right-handed stroke survivors to identify brain regions where lesion load is associated with picture naming abilities after factoring out variance related to object recognition, semantics and speech articulation so as to focus on deficits arising at the word retrieval level. A highly significant lesion-deficit relationship was identified in left temporal and frontal/premotor regions. Post-hoc analyses showed that damage to either of these sites caused the deficit of interest in less than half the affected patients (76/162 = 47%). After excluding all patients with damage to one or both of the identified regions, our second analysis revealed a new region, in the anterior part of the left putamen, which had not been previously detected because many patients had the deficit of interest after temporal or frontal damage that preserved the left putamen. The results illustrate how (i) false negative results arise when the same deficit can be caused by different lesion sites; (ii) some of the missed effects can be unveiled by adopting an iterative approach that systematically excludes patients with lesions to the areas identified in previous analyses, (iii) statistically significant voxel-based lesion-deficit mappings can be driven by a subset of patients; (iv) focal lesions to the identified regions are needed to determine whether the deficit of interest is the consequence of focal damage or much more extensive damage that includes the identified region; and, finally, (v) univariate voxel-based lesion-deficit mappings cannot, in isolation, be used to predict outcome in other patients

  4. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    Science.gov (United States)

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most
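
Several of the models listed above have closed-form solutions; simple linear regression, for instance, reduces to the textbook least-squares formulas. A sketch in plain Python (illustrative only, unrelated to the SOCR code base):

```python
def simple_linear_regression(x, y):
    """Least-squares fit y ≈ a + b*x; returns (intercept, slope, r2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                     # slope from normal equations
    a = my - b * mx                   # intercept through the means
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Toy data with an approximately linear trend.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b, r2 = simple_linear_regression(x, y)
print(round(a, 3), round(b, 3), round(r2, 3))
```

The same normal-equations idea generalizes to multiple linear regression via least-squares solutions over a design matrix.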

  5. The Case of Value Based Communication—Epistemological and Methodological Reflections from a System Theoretical Perspective

    Directory of Open Access Journals (Sweden)

    Victoria von Groddeck

    2010-09-01

    Full Text Available The aim of this paper is to reflect on the epistemological and methodological aspects of an empirical research study which analyzes the phenomenon of increased value communication within business organizations from a system theoretical perspective in the tradition of Niklas LUHMANN. Drawing on the theoretical term of observation, it shows how a research perspective can be developed which opens up the scope for an empirical analysis of communication practices. This analysis focuses on the reconstruction of these practices, first by understanding how these practices stabilize themselves and second by contrasting different practices to educe an understanding of different forms of observation of the relevant phenomenon and of the functions of these forms. Thus, this approach combines system theoretical epistemology, analytical research strategies, such as form and functional analysis, and qualitative research methods, such as narrative interviews, participant observation and document analysis. URN: urn:nbn:de:0114-fqs1003177

  6. Randomized Trial of ConquerFear: A Novel, Theoretically Based Psychosocial Intervention for Fear of Cancer Recurrence

    NARCIS (Netherlands)

    Butow, P.N.; Turner, J.; Gilchrist, J.; Sharpe, L.; Smith, A.B.; Fardell, J.E.; Tesson, S.; O'Connell, R.; Girgis, A.; Gebski, V.J.; Asher, R.; Mihalopoulos, C.; Bell, M.L.; Zola, K.G.; Beith, J.; Thewes, B.

    2017-01-01

    Purpose Fear of cancer recurrence (FCR) is prevalent, distressing, and long lasting. This study evaluated the impact of a theoretically/empirically based intervention (ConquerFear) on FCR. Methods Eligible survivors had curable breast or colorectal cancer or melanoma, had completed treatment (not

  7. A beginner's guide to writing the nursing conceptual model-based theoretical rationale.

    Science.gov (United States)

    Gigliotti, Eileen; Manister, Nancy N

    2012-10-01

    Writing the theoretical rationale for a study can be a daunting prospect for novice researchers. Nursing's conceptual models provide excellent frameworks for placement of study variables, but moving from the very abstract concepts of the nursing model to the less abstract concepts of the study variables is difficult. Similar to the five-paragraph essay used by writing teachers to assist beginning writers to construct a logical thesis, the authors of this column present guidelines that beginners can follow to construct their theoretical rationale. This guide can be used with any nursing conceptual model but Neuman's model was chosen here as the exemplar.

  8. Caprylate Salts Based on Amines as Volatile Corrosion Inhibitors for Metallic Zinc: Theoretical and Experimental Studies

    Science.gov (United States)

    Valente, Marco A. G.; Teixeira, Deiver A.; Azevedo, David L.; Feliciano, Gustavo T.; Benedetti, Assis V.; Fugivara, Cecílio S.

    2017-01-01

    The interaction of volatile corrosion inhibitors (VCI), caprylate salt derivatives from amines, with zinc metallic surfaces is assessed by density functional theory (DFT) computer simulations, electrochemical impedance (EIS) measurements and humid chamber tests. The results obtained by the different methods were compared, and linear correlations were obtained between theoretical and experimental data. The correlations between experimental and theoretical results showed that the molecular size is the determining factor in the inhibition efficiency. The models used and experimental results indicated that dicyclohexylamine caprylate is the most efficient inhibitor. PMID:28620602

  9. Individual-based analyses reveal limited functional overlap in a coral reef fish community.

    Science.gov (United States)

    Brandl, Simon J; Bellwood, David R

    2014-05-01

    Detailed knowledge of a species' functional niche is crucial for the study of ecological communities and processes. The extent of niche overlap, functional redundancy and functional complementarity is of particular importance if we are to understand ecosystem processes and their vulnerability to disturbances. Coral reefs are among the most threatened marine systems, and anthropogenic activity is changing the functional composition of reefs. The loss of herbivorous fishes is particularly concerning as the removal of algae is crucial for the growth and survival of corals. Yet, the foraging patterns of the various herbivorous fish species are poorly understood. Using a multidimensional framework, we present novel individual-based analyses of species' realized functional niches, which we apply to a herbivorous coral reef fish community. In calculating niche volumes for 21 species, based on their microhabitat utilization patterns during foraging, and computing functional overlaps, we provide a measurement of functional redundancy or complementarity. Complementarity is the inverse of redundancy and is defined as less than 50% overlap in niche volumes. The analyses reveal extensive complementarity with an average functional overlap of just 15.2%. Furthermore, the analyses divide herbivorous reef fishes into two broad groups. The first group (predominantly surgeonfishes and parrotfishes) comprises species feeding on exposed surfaces and predominantly open reef matrix or sandy substrata, resulting in small niche volumes and extensive complementarity. In contrast, the second group consists of species (predominantly rabbitfishes) that feed over a wider range of microhabitats, penetrating the reef matrix to exploit concealed surfaces of various substratum types. These species show high variation among individuals, leading to large niche volumes, more overlap and less complementarity. These results may have crucial consequences for our understanding of herbivorous processes on
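
    The study computes overlaps of multidimensional niche volumes; as a simplified stand-in, the same complementarity-versus-redundancy classification can be illustrated with a one-dimensional proportional-similarity (Schoener) index over microhabitat-use frequencies. The data below are illustrative, not from the paper; only the 50% cut-off follows the abstract:

    ```python
    def schoener_overlap(p, q):
        """Schoener's proportional-similarity index between two
        microhabitat-utilization distributions (each sums to 1)."""
        return 1.0 - 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

    def classify(overlap, threshold=0.5):
        # Per the abstract: <50% overlap in niche volumes is
        # treated as functional complementarity.
        return "complementary" if overlap < threshold else "redundant"

    # Two hypothetical species' use of four microhabitat categories
    grazer     = [0.80, 0.15, 0.03, 0.02]  # open matrix / sand feeder
    generalist = [0.25, 0.25, 0.25, 0.25]  # feeds across microhabitats
    ov = schoener_overlap(grazer, generalist)
    print(round(ov, 2), classify(ov))
    ```

    A specialist and a generalist thus land below the 50% threshold and count as complementary rather than redundant.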

  10. Genome-based comparative analyses of Antarctic and temperate species of Paenibacillus.

    Directory of Open Access Journals (Sweden)

    Melissa Dsouza

    Full Text Available Antarctic soils represent a unique environment characterised by extremes of temperature, salinity, elevated UV radiation, low nutrient and low water content. Despite the harshness of this environment, members of 15 bacterial phyla have been identified in soils of the Ross Sea Region (RSR). However, the survival mechanisms and ecological roles of these phyla are largely unknown. The aim of this study was to investigate whether strains of Paenibacillus darwinianus owe their resilience to substantial genomic changes. For this, genome-based comparative analyses were performed on three P. darwinianus strains, isolated from gamma-irradiated RSR soils, together with nine temperate, soil-dwelling Paenibacillus spp. The genome of each strain was sequenced to over 1,000-fold coverage, then assembled into contigs totalling approximately 3 Mbp per genome. Based on the occurrence of essential, single-copy genes, genome completeness was estimated at approximately 88%. Genome analysis revealed between 3,043 and 3,091 protein-coding sequences (CDSs), primarily associated with two-component systems, sigma factors, transporters, sporulation and genes induced by cold-shock, oxidative and osmotic stresses. These comparative analyses provide an insight into the metabolic potential of P. darwinianus, revealing potential adaptive mechanisms for survival in Antarctic soils. However, a large proportion of these mechanisms were also identified in temperate Paenibacillus spp., suggesting that these mechanisms are beneficial for growth and survival in a range of soil environments. These analyses have also revealed that the P. darwinianus genomes contain significantly fewer CDSs and have a lower paralogous content. Notwithstanding the incompleteness of the assemblies, the large differences in genome sizes, determined by the number of genes in paralogous clusters and the CDS content, are indicative of genome content scaling. Finally, these sequences are a resource for further

  11. Theoretical Mathematics

    Science.gov (United States)

    Stöltzner, Michael

    Answering to the double-faced influence of string theory on mathematical practice and rigour, the mathematical physicists Arthur Jaffe and Frank Quinn have contemplated the idea that there exists a `theoretical' mathematics (alongside `theoretical' physics) whose basic structures and results still require independent corroboration by mathematical proof. In this paper, I shall take the Jaffe-Quinn debate mainly as a problem of mathematical ontology and analyse it against the backdrop of two philosophical views that are appreciative towards informal mathematical development and conjectural results: Lakatos's methodology of proofs and refutations and John von Neumann's opportunistic reading of Hilbert's axiomatic method. The comparison of both approaches shows that mitigating Lakatos's falsificationism makes his insights about mathematical quasi-ontology more relevant to 20th century mathematics in which new structures are introduced by axiomatisation and not necessarily motivated by informal ancestors. The final section discusses the consequences of string theorists' claim to finality for the theory's mathematical make-up. I argue that ontological reductionism as advocated by particle physicists and the quest for mathematically deeper axioms do not necessarily lead to identical results.

  12. MULTI-DIMENSIONAL MASS SPECTROMETRY-BASED SHOTGUN LIPIDOMICS AND NOVEL STRATEGIES FOR LIPIDOMIC ANALYSES

    Science.gov (United States)

    Han, Xianlin; Yang, Kui; Gross, Richard W.

    2011-01-01

    Since our last comprehensive review on multi-dimensional mass spectrometry-based shotgun lipidomics (Mass Spectrom. Rev. 24 (2005), 367), many new developments in the field of lipidomics have occurred. These developments include new strategies and refinements for shotgun lipidomic approaches that use direct infusion, including novel fragmentation strategies, identification of multiple new informative dimensions for mass spectrometric interrogation, and the development of new bioinformatic approaches for enhanced identification and quantitation of the individual molecular constituents that comprise each cell’s lipidome. Concurrently, advances in liquid chromatography-based platforms and novel strategies for quantitative matrix-assisted laser desorption/ionization mass spectrometry for lipidomic analyses have been developed. Through the synergistic use of this repertoire of new mass spectrometric approaches, the power and scope of lipidomics has been greatly expanded to accelerate progress toward the comprehensive understanding of the pleiotropic roles of lipids in biological systems. PMID:21755525

  13. An Integrated Software Suite for Surface-based Analyses of Cerebral Cortex

    Science.gov (United States)

    Van Essen, David C.; Drury, Heather A.; Dickson, James; Harwell, John; Hanlon, Donna; Anderson, Charles H.

    2001-01-01

    The authors describe and illustrate an integrated trio of software programs for carrying out surface-based analyses of cerebral cortex. The first component of this trio, SureFit (Surface Reconstruction by Filtering and Intensity Transformations), is used primarily for cortical segmentation, volume visualization, surface generation, and the mapping of functional neuroimaging data onto surfaces. The second component, Caret (Computerized Anatomical Reconstruction and Editing Tool Kit), provides a wide range of surface visualization and analysis options as well as capabilities for surface flattening, surface-based deformation, and other surface manipulations. The third component, SuMS (Surface Management System), is a database and associated user interface for surface-related data. It provides for efficient insertion, searching, and extraction of surface and volume data from the database. PMID:11522765

  14. An integrated software suite for surface-based analyses of cerebral cortex

    Science.gov (United States)

    Van Essen, D. C.; Drury, H. A.; Dickson, J.; Harwell, J.; Hanlon, D.; Anderson, C. H.

    2001-01-01

    The authors describe and illustrate an integrated trio of software programs for carrying out surface-based analyses of cerebral cortex. The first component of this trio, SureFit (Surface Reconstruction by Filtering and Intensity Transformations), is used primarily for cortical segmentation, volume visualization, surface generation, and the mapping of functional neuroimaging data onto surfaces. The second component, Caret (Computerized Anatomical Reconstruction and Editing Tool Kit), provides a wide range of surface visualization and analysis options as well as capabilities for surface flattening, surface-based deformation, and other surface manipulations. The third component, SuMS (Surface Management System), is a database and associated user interface for surface-related data. It provides for efficient insertion, searching, and extraction of surface and volume data from the database.

  15. The Seismic Reliability of Offshore Structures Based on Nonlinear Time History Analyses

    International Nuclear Information System (INIS)

    Hosseini, Mahmood; Karimiyani, Somayyeh; Ghafooripour, Amin; Jabbarzadeh, Mohammad Javad

    2008-01-01

    Regarding past earthquake damage to offshore structures, which are vital structures in the oil and gas industries, it is important that their seismic design is performed with very high reliability. Accepting the Nonlinear Time History Analyses (NLTHA) as the most reliable seismic analysis method, in this paper an offshore platform of jacket type with the height of 304 feet, having a deck of 96 feet by 94 feet, and weighing 290 million pounds has been studied. At first, some Push-Over Analyses (POA) have been performed to recognize the more critical members of the jacket, based on the range of their plastic deformations. Then NLTHA have been performed by using the 3-components accelerograms of 100 earthquakes, covering a wide range of frequency content, and normalized to three Peak Ground Acceleration (PGA) levels of 0.3 g, 0.65 g, and 1.0 g. By using the results of NLTHA the damage and rupture probabilities of critical members have been studied to assess the reliability of the jacket structure. Since different structural members of the jacket have different effects on the stability of the platform, an ''importance factor'' has been considered for each critical member based on its location and orientation in the structure, and then the reliability of the whole structure has been obtained by combining the reliability of the critical members, each having its specific importance factor

  16. Analyses of criticality and reactivity for TRACY experiments based on JENDL-3.3 data library

    International Nuclear Information System (INIS)

    Sono, Hiroki; Miyoshi, Yoshinori; Nakajima, Ken

    2003-01-01

    The parameters on criticality and reactivity employed for computational simulations of the TRACY supercritical experiments were analyzed using a recently revised nuclear data library, JENDL-3.3. The parameters based on the JENDL-3.3 library were compared to those based on two formerly used libraries, JENDL-3.2 and ENDF/B-VI. In the analyses, the computational codes MVP, MCNP version 4C and TWOTRAN were used. The following conclusions were obtained from the analyses: (1) The computational biases of the effective neutron multiplication factor attributable to the nuclear data libraries and to the computational codes do not depend on the TRACY experimental conditions such as fuel conditions. (2) The fractional discrepancies in the kinetic parameters and coefficients of reactivity are within ∼5% between the three libraries. By comparison between calculations and measurements of the parameters, the JENDL-3.3 library is expected to give closer values to the measurements than the JENDL-3.2 and ENDF/B-VI libraries. (3) While the reactivity worth of transient rods expressed in the $ unit shows ∼5% discrepancy between the three libraries according to their respective β eff values, there is little discrepancy in that expressed in the Δk/k unit. (author)
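
    Conclusion (3) follows directly from the definitions of the two reactivity units: Δk/k reactivity depends only on k_eff, while the dollar value divides that reactivity by each library's β eff. A minimal sketch with illustrative, non-TRACY numbers (a ~5% spread in β eff between two hypothetical libraries):

    ```python
    def reactivity(k_eff):
        """Reactivity in Delta-k/k units: rho = (k - 1) / k."""
        return (k_eff - 1.0) / k_eff

    def reactivity_dollars(k_eff, beta_eff):
        """Reactivity in dollar units: $ = rho / beta_eff."""
        return reactivity(k_eff) / beta_eff

    # Same k_eff, two libraries whose delayed-neutron fractions
    # beta_eff differ by ~5%: rho agrees, the $ values do not.
    k = 1.005
    for lib, beta in [("library A", 0.00720), ("library B", 0.00684)]:
        rho = reactivity(k)
        print(lib, round(rho, 5), round(rho / beta, 3))
    ```

    The printed Δk/k value is identical for both libraries, while the dollar values differ by the same ~5% as the assumed β eff values.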

  17. Novel citation-based search method for scientific literature: application to meta-analyses.

    Science.gov (United States)

    Janssens, A Cecile J W; Gwinn, M

    2015-10-13

    Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of co-citation with one or more "known" articles before reviewing their eligibility. In two independent studies, we aimed to reproduce the results of literature searches for sets of published meta-analyses (n = 10 and n = 42). For each meta-analysis, we extracted co-citations for the randomly selected 'known' articles from the Web of Science database, counted their frequencies and screened all articles with a score above a selection threshold. In the second study, we extended the method by retrieving direct citations for all selected articles. In the first study, we retrieved 82% of the studies included in the meta-analyses while screening only 11% as many articles as were screened for the original publications. Articles that we missed were published in non-English languages, published before 1975, published very recently, or available only as conference abstracts. In the second study, we retrieved 79% of included studies while screening half the original number of articles. Citation searching appears to be an efficient and reasonably accurate method for finding articles similar to one or more articles of interest for meta-analysis and reviews.
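
    The ranking step described above can be sketched as follows, assuming citation data is available as a mapping from citing papers to their reference sets (the data model and names here are hypothetical, not from the Web of Science API):

    ```python
    from collections import Counter

    def rank_by_cocitation(known_articles, citing_map, threshold=3):
        """Rank candidate articles by how often they are co-cited with
        one or more 'known' articles, keeping those above a threshold.

        citing_map: citing paper -> set of articles it references.
        A candidate is co-cited with a known article whenever a single
        paper cites both of them.
        """
        scores = Counter()
        known = set(known_articles)
        for refs in citing_map.values():
            if known & refs:            # this paper cites a known article
                for r in refs - known:
                    scores[r] += 1      # co-citation frequency
        return [a for a, s in scores.most_common() if s >= threshold]

    # Toy corpus: papers P1..P4 and their reference lists
    citing_map = {
        "P1": {"K", "A", "B"},
        "P2": {"K", "A"},
        "P3": {"K", "A", "C"},
        "P4": {"B", "C"},
    }
    print(rank_by_cocitation(["K"], citing_map, threshold=2))
    ```

    Only articles co-cited with the known article "K" at least twice survive the threshold; the screened set is then reviewed for eligibility as in the paper.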

  18. Benefits of Exercise Training For Computer-Based Staff: A Meta Analyses

    Directory of Open Access Journals (Sweden)

    Mothna Mohammed

    2017-04-01

    Full Text Available Background: Office workers sit down to work for approximately 8 hours a day and, as a result, many of them do not have enough time for any form of physical exercise. This can lead to musculoskeletal discomforts, especially low back pain, and recently many researchers have focused on home/office-based exercise training for the prevention/treatment of low back pain among this population. Objective: This meta-analysis discusses the latest suggested exercises for office workers, based on the mechanisms and theories behind low back pain among office workers. Method: The author collected relevant papers published previously on the subject. Google Scholar, Scopus, and PubMed were used as sources to find the articles. Only articles that were published using the same methodology, including office workers, musculoskeletal discomforts, low back pain, and exercise training keywords, were selected. Studies that failed to report sufficient sample statistics, or lacked a substantial review of past academic scholarship and/or clear methodologies, were excluded. Results: Limited evidence regarding the prevention of, and treatment methods for, musculoskeletal discomfort, especially in the low back, among office workers is available. The findings showed that training exercises had a significant effect (p<0.05) on low back pain discomfort scores and decreased pain levels in response to office-based exercise training. Conclusion: Office-based exercise training can affect pain/discomfort scores among office workers through positive effects on flexibility and strength of muscles. As such, it should be suggested to occupational therapists as a practical way for the treatment/prevention of low back pain among office workers.

  19. Structural, photophysical, and theoretical studies of imidazole-based excited-state intramolecular proton transfer molecules

    Science.gov (United States)

    Somasundaram, Sivaraman; Kamaraj, Eswaran; Hwang, Su Jin; Park, Sanghyuk

    2018-02-01

    Imidazole-based excited state intramolecular proton transfer (ESIPT) blue fluorescent molecules, 2-(1-(4-chlorophenyl)-4,5-diphenyl-1H-imidazol-2-yl)phenol (BHPI-Cl) and 2-(1-(4-bromophenyl)-4,5-diphenyl-1H-imidazol-2-yl)phenol (BHPI-Br), were designed and synthesized by the Debus-Radziszewski method through a one-pot multicomponent reaction in high yield. The synthesized compounds were fully characterized by 1H NMR, 13C NMR, FT-IR, FT-Raman, GC-Mass, and elemental analysis. The molecular structures in the single crystal lattice were studied by X-ray crystallographic analysis. Because of the intramolecular hydrogen bonding, the hydroxyphenyl group is planar with the central imidazole ring, while the other phenyl rings adopt distorted conformations relative to the central heterocyclic ring. BHPI-Cl and BHPI-Br molecules showed intense ESIPT fluorescence at 480 nm, because the two twisted phenyl rings at the 4- and 5-positions reduce intermolecular interaction between adjacent molecules in each crystal through a head-to-tail packing manner. Quantum chemical calculations of energies were carried out by (TD-)DFT using the B3LYP/6-31G(d, p) basis set to predict the electronic absorption spectra of the compounds, and they showed good agreement between the computational and the experimental values. The thermal analyses of the synthesized molecules were also carried out by the TGA/DSC method.

  20. Theoretical analysis of the switching efficiency of a grating-based laser beam modulator

    International Nuclear Information System (INIS)

    Ramachandran, V.

    1983-03-01

    A theoretical interpretation of the digital beam deflection efficiency of an electro-optic modulator is described. Calculated switching voltages are in good agreement with the experimentally observed values. The computed percentage efficiencies to three successive positions are 57, 48 and 43, respectively. (author)

  1. Review of theoretical calculations of hydrogen storage in carbon-based materials

    Energy Technology Data Exchange (ETDEWEB)

    Meregalli, V.; Parrinello, M. [Max-Planck-Institut fuer Festkoerperforschung, Stuttgart (Germany)

    2001-02-01

    In this paper we review the existing theoretical literature on hydrogen storage in single-walled nanotubes and carbon nanofibers. The reported calculations indicate a hydrogen uptake smaller than some of the more optimistic experimental results. Furthermore the calculations suggest that a variety of complex chemical processes could accompany hydrogen storage and release. (orig.)

  2. Energy levels and quantum states of [Leu]enkephalin conformations based on theoretical and experimental investigations

    DEFF Research Database (Denmark)

    Abdali, Salim; Jensen, Morten Østergaard; Bohr, Henrik

    2003-01-01

    This paper describes a theoretical and experimental study of [Leu]enkephalin conformations with respect to the quantum estates of the atomic structure of the peptide. Results from vibrational absorption measurements and quantum calculations are used to outline a quantum picture and to assign vibr...

  3. [Theoretical modeling and experimental research on direct compaction characteristics of multi-component pharmaceutical powders based on the Kawakita equation].

    Science.gov (United States)

    Si, Guo-Ning; Chen, Lan; Li, Bao-Guo

    2014-04-01

    Based on the Kawakita powder compression equation, a general theoretical model for predicting the compression characteristics of multi-component pharmaceutical powders with different mass ratios was developed. Uniaxial flat-face compression tests of lactose, starch and microcrystalline cellulose powders were carried out separately, from which the Kawakita equation parameters of the powder materials were obtained. Uniaxial flat-face compression tests of powder mixtures of lactose, starch, microcrystalline cellulose and sodium stearyl fumarate with five mass ratios were then conducted, yielding the correlation between mixture density and loading pressure as well as the Kawakita equation curves. Finally, the theoretical predictions were compared with experimental results. The analysis showed that the errors in predicting mixture densities were less than 5.0% and the errors in the Kawakita vertical coordinate were within 4.6%, which indicated that the theoretical model could be used to predict the direct compaction characteristics of multi-component pharmaceutical powders.
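
    The Kawakita equation gives the relative volume reduction C = (V0 − V)/V0 = abP/(1 + bP) for a single powder under pressure P. A mixture model of the kind described can be sketched under a simple additivity assumption: each component compresses according to its own parameters, and specific volumes combine by mass fraction. All parameter values below are hypothetical, not the paper's fitted constants:

    ```python
    def kawakita_C(P, a, b):
        """Kawakita relative volume reduction:
        C = (V0 - V)/V0 = a*b*P / (1 + b*P)."""
        return a * b * P / (1.0 + b * P)

    def mixture_density(P, components):
        """Predict mixture density at pressure P (MPa), assuming each
        component compresses independently per its own Kawakita
        parameters and specific volumes add by mass fraction.

        components: list of (mass_fraction w, bulk density rho0, a, b).
        """
        v = sum(w / rho0 * (1.0 - kawakita_C(P, a, b))
                for w, rho0, a, b in components)
        return 1.0 / v

    # Hypothetical (w, rho0 g/cm^3, a, b 1/MPa) for a
    # lactose / starch / MCC blend -- illustrative only.
    blend = [(0.5, 0.60, 0.55, 0.06),
             (0.3, 0.48, 0.60, 0.04),
             (0.2, 0.35, 0.70, 0.05)]
    print(round(mixture_density(100.0, blend), 3))
    ```

    Comparing such predicted densities against measured compression curves is what yields the error figures quoted in the abstract.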

  4. Theoretical study of the noble metals on semiconductor surfaces and Ti-base shape memory alloys

    International Nuclear Information System (INIS)

    Ding, Yungui.

    1994-01-01

    The electronic and structural properties of the (√3 x √3) R30 degrees Ag/Si(111) and (√3 x √3) R30 degrees Au/Si(111) surfaces are investigated using first principles total energy calculations. We have tested almost all experimentally proposed structural models for both surfaces and found the energetically most favorable model for each of them. The lowest energy model structure of the (√3 x √3) R30 degrees Ag/Si(111) surface consists of a top layer of Ag atoms arranged as ''honeycomb-chained-trimers'' lying above a distorted ''missing top layer'' Si(111) substrate. The coverage of Ag is 1 monolayer (ML). We find that the honeycomb structure observed in STM images arises from the electronic charge densities of an empty surface band near the Fermi level. The electronic density of states of this model gives a ''pseudo-gap'' around the Fermi level, which is consistent with experimental results. The lowest energy model for the (√3 x √3) R30 degrees Au/Si(111) surface is a conjugate honeycomb-chained-trimer (CHCT-1) configuration which consists of a top layer of trimers formed by 1 ML Au atoms lying above a ''missing top layer'' Si(111) substrate with a honeycomb-chained-trimer structure for its first layer. The structures of Au and Ag are in fact quite similar and belong to the same class of structural models. However, small variation in the structural details gives rise to quite different observed STM images, as revealed in the theoretical calculations. The electronic charge density from bands around the Fermi level for the (√3 x √3) R30 degrees Au/Si(111) surface also gives a good description of the images observed in STM experiments. First principles calculations are performed to study the electronic and structural properties of a series of Ti-base binary alloys TiFe, TiNi, TiPd, TiMo, and TiAu in the B2 structure

  5. Reviewing PSA-based analyses to modify technical specifications at nuclear power plants

    International Nuclear Information System (INIS)

    Samanta, P.K.; Martinez-Guridi, G.; Vesely, W.E.

    1995-12-01

    Changes to Technical Specifications (TSs) at nuclear power plants (NPPs) require review and approval by the United States Nuclear Regulatory Commission (USNRC). Currently, many requests for changes to TSs use analyses that are based on a plant's probabilistic safety assessment (PSA). This report presents an approach to reviewing such PSA-based submittals for changes to TSs. We discuss the basic objectives of reviewing a PSA-based submittal to modify NPP TSs, the methodology of reviewing a TS submittal, and the differing roles of a PSA review, a PSA Computer Code review, and a review of a TS submittal. To illustrate this approach, we discuss our review of changes to allowed outage time (AOT) and surveillance test interval (STI) in the TS for the South Texas Project Nuclear Generating Station. Based on the experience gained, a checklist of items is given for future reviewers; it can be used to verify that the submittal contains sufficient information, and also that the review has addressed the relevant issues. Finally, recommended steps in the review process and the expected findings of each step are discussed

  6. Performance Analyses of Renewable and Fuel Power Supply Systems for Different Base Station Sites

    Directory of Open Access Journals (Sweden)

    Josip Lorincz

    2014-11-01

    Full Text Available Base station sites (BSSs) powered with renewable energy sources have gained the attention of cellular operators during the last few years. This is because such “green” BSSs impose significant reductions in the operational expenditures (OPEX) of telecom operators due to the possibility of on-site renewable energy harvesting. In this paper, the green BSS power supply system parameters detected through remote and centralized real time sensing are presented. An implemented sensing system based on a wireless sensor network enables reliable collection and post-processing analyses of many parameters, such as: total charging/discharging current of the power supply system, battery voltage and temperature, wind speed, etc. As an example, yearly sensing results for three different BSS configurations powered by solar and/or wind energy are discussed in terms of renewable energy supply (RES) system performance. In the case of powering those BSSs with standalone systems based on a fuel generator, fuel consumption models expressing the interdependence between generator load and fuel consumption are proposed. This has allowed energy-efficiency comparison of the fuel powered and RES systems, which is presented in terms of the OPEX and carbon dioxide (CO2) reductions. Additionally, approaches based on different BSS air-conditioning systems and the on/off regulation of daily fuel generator activity are proposed and validated in terms of energy and capital expenditure (CAPEX) savings.

  7. Exchange coupling and magnetic anisotropy of exchanged-biased quantum tunnelling single-molecule magnet Ni3Mn2 complexes using theoretical methods based on Density Functional Theory.

    Science.gov (United States)

    Gómez-Coca, Silvia; Ruiz, Eliseo

    2012-03-07

    The magnetic properties of a new family of single-molecule magnet Ni(3)Mn(2) complexes were studied using theoretical methods based on Density Functional Theory (DFT). The first part of this study is devoted to analysing the exchange coupling constants, focusing on the intramolecular as well as the intermolecular interactions. The calculated intramolecular J values were in excellent agreement with the experimental data, which show that all the couplings are ferromagnetic, leading to an S = 7 ground state. The intermolecular interactions were investigated because the two complexes studied do not show tunnelling at zero magnetic field. Usually, this exchange-biased quantum tunnelling is attributed to the presence of intermolecular interactions, which were calculated here with the help of theoretical methods. The results indicate the presence of weak intermolecular antiferromagnetic couplings that cannot explain the ferromagnetic value found experimentally for one of the systems. In the second part, the goal is to analyse magnetic anisotropy through the calculation of the zero-field splitting parameters (D and E), using DFT methods including the spin-orbit effect.

  8. Comparative Analyses of Zebrafish Anxiety-Like Behavior Using Conflict-Based Novelty Tests.

    Science.gov (United States)

    Kysil, Elana V; Meshalkina, Darya A; Frick, Erin E; Echevarria, David J; Rosemberg, Denis B; Maximino, Caio; Lima, Monica Gomes; Abreu, Murilo S; Giacomini, Ana C; Barcellos, Leonardo J G; Song, Cai; Kalueff, Allan V

    2017-06-01

    Modeling of stress and anxiety in adult zebrafish (Danio rerio) is increasingly utilized in neuroscience research and central nervous system (CNS) drug discovery. Representing the most commonly used zebrafish anxiety models, the novel tank test (NTT) focuses on zebrafish diving in response to potentially threatening stimuli, whereas the light-dark test (LDT) is based on fish scototaxis (innate preference for dark vs. bright areas). Here, we systematically evaluate the utility of these two tests, combining meta-analyses of published literature with comparative in vivo behavioral and whole-body endocrine (cortisol) testing. Overall, the NTT and LDT behaviors demonstrate a generally good cross-test correlation in vivo, whereas meta-analyses of published literature show that both tests have similar sensitivity to zebrafish anxiety-like states. Finally, NTT evokes higher levels of cortisol, likely representing a more stressful procedure than LDT. Collectively, our study reappraises NTT and LDT for studying anxiety-like states in zebrafish, and emphasizes their developing utility for neurobehavioral research. These findings can help optimize drug screening procedures by choosing more appropriate models for testing anxiolytic or anxiogenic drugs.

  9. Theoretical Bases of the Model of Interaction of the Government and Local Government Creation

    OpenAIRE

    Nikolay I. Churinov

    2015-01-01

    The article is devoted to the theoretical understanding of systems of interaction between government bodies at different levels. The author investigates the historical basis of the subject through a study of foreign and domestic scholarship in the theory of state and law. Much attention is paid to the scientific aspect of the question. An empirical interpretation of the theory of interaction between public authorities and local government is given, together with the author's own assessment.

  10. A THEORETICAL APPROACH TO THE TRANSITION FROM A RESOURCE BASED TO A KNOWLEDGE-ECONOMY

    Directory of Open Access Journals (Sweden)

    Diana GIOACASI

    2015-09-01

    Economic development and the emergence of new technologies have changed the perspective on the factors that generate added value. The transition from a resource-dependent economy to one focused on intangible, non-financial factors has progressed gradually, under the influence of globalization and the internet boom. The aim of this article is to provide a theoretical approach to this phenomenon from the perspective of the temporal evolution of enterprise resources.

  11. Chemometrical characterization of four italian rice varieties based on genetic and chemical analyses.

    Science.gov (United States)

    Brandolini, Vincenzo; Coïsson, Jean Daniel; Tedeschi, Paola; Barile, Daniela; Cereti, Elisabetta; Maietti, Annalisa; Vecchiati, Giorgio; Martelli, Aldo; Arlorio, Marco

    2006-12-27

    This paper describes a method for achieving qualitative identification of four rice varieties from two different Italian regions. To estimate the presence of genetic diversity among the four rice varieties, we used polymerase chain reaction-randomly amplified polymorphic DNA (PCR-RAPD) markers, and to elucidate whether a relationship exists between the ground and the specific characteristics of the product, we studied proximate composition, fatty acid composition, mineral content, and total antioxidant capacity. Using principal component analysis on genomic and compositional data, we were able to classify rice samples according to their variety and their district of production. This work also examined the discrimination ability of different parameters. It was found that genomic data give the best discrimination based on varieties, indicating that RAPD assays could be useful in discriminating among closely related species, while compositional analyses do not depend on the genetic characters only but are related to the production area.

  12. Stress and deflection analyses of floating roofs based on a load-modifying method

    International Nuclear Information System (INIS)

    Sun Xiushan; Liu Yinghua; Wang Jianbin; Cen Zhangzhi

    2008-01-01

    This paper proposes a load-modifying method for the stress and deflection analyses of floating roofs used in cylindrical oil storage tanks. The formulations of loads and deformations are derived according to the equilibrium analysis of floating roofs. Based on these formulations, the load-modifying method is developed to conduct a geometrically nonlinear analysis of floating roofs with the finite element (FE) simulation. In the procedure with the load-modifying method, the analysis is carried out through a series of iterative computations until a convergence is achieved within the error tolerance. Numerical examples are given to demonstrate the validity and reliability of the proposed method, which provides an effective and practical numerical solution to the design and analysis of floating roofs.
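The iterative procedure described above can be illustrated with a scalar stand-in for the FE solve: the load is re-evaluated from the current deflection (here a hypothetical buoyancy-relief term `gamma * w`; the paper's actual load formulation is more involved) and the loop repeats until successive solutions agree within the tolerance.

```python
def solve_roof_deflection(k, q0, gamma, tol=1e-10, max_iter=200):
    """Scalar sketch of a load-modifying iteration: modify the load from
    the current deflection, re-solve, and repeat until convergence."""
    w = 0.0
    for _ in range(max_iter):
        q = q0 - gamma * w      # modify the load using the current deflection
        w_new = q / k           # stand-in for the FE system solve
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    raise RuntimeError("load-modifying iteration did not converge")

w = solve_roof_deflection(k=100.0, q0=10.0, gamma=25.0)
# converges to the closed-form balance q0 / (k + gamma) = 0.08
```

The iteration is a contraction whenever the load-modification slope is smaller than the stiffness (here gamma/k = 0.25), which is what guarantees the convergence the abstract refers to.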

  13. Seismic fragility analyses of nuclear power plant structures based on the recorded earthquake data in Korea

    International Nuclear Information System (INIS)

    Cho, Sung Gook; Joe, Yang Hee

    2005-01-01

    By nature, the seismic fragility analysis results will be considerably affected by the statistical data of design information and site-dependent ground motions. The engineering characteristics of small magnitude earthquake spectra recorded in the Korean peninsula during the last several years are analyzed in this paper. An improved method of seismic fragility analysis is evaluated by comparative analyses to verify its efficiency for practical application to nuclear power plant structures. The effects of the recorded earthquake on the seismic fragilities of Korean nuclear power plant structures are also evaluated from the comparative studies. The obtained results show that the proposed method is more efficient for multi-mode structures. The case study results show that seismic fragility analysis based on Newmark's spectra in Korea might over-estimate the seismic capacities of Korean facilities.

  14. Seismic fragility analyses of nuclear power plant structures based on the recorded earthquake data in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Sung Gook [Department of Civil and Environmental System Engineering, University of Incheon, 177 Dohwa-dong, Nam-gu, Incheon 402-749 (Korea, Republic of)]. E-mail: sgcho@incheon.ac.kr; Joe, Yang Hee [Department of Civil and Environmental System Engineering, University of Incheon, 177 Dohwa-dong, Nam-gu, Incheon 402-749 (Korea, Republic of)

    2005-08-01

    By nature, the seismic fragility analysis results will be considerably affected by the statistical data of design information and site-dependent ground motions. The engineering characteristics of small magnitude earthquake spectra recorded in the Korean peninsula during the last several years are analyzed in this paper. An improved method of seismic fragility analysis is evaluated by comparative analyses to verify its efficiency for practical application to nuclear power plant structures. The effects of the recorded earthquake on the seismic fragilities of Korean nuclear power plant structures are also evaluated from the comparative studies. The obtained results show that the proposed method is more efficient for multi-mode structures. The case study results show that seismic fragility analysis based on Newmark's spectra in Korea might over-estimate the seismic capacities of Korean facilities.
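Seismic fragility analyses of the kind described in records 13 and 14 conventionally express the conditional failure probability as a lognormal function of the ground-motion level; the abstract gives no formulas, so the standard lognormal form below is an assumption, not the authors' specific method.

```python
from math import erf, log, sqrt

def fragility(a, median_capacity, beta):
    """Lognormal fragility curve: probability of failure conditional on
    ground-motion level `a` (same units as `median_capacity`), with
    `beta` the composite logarithmic standard deviation."""
    z = log(a / median_capacity) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# At the median capacity the conditional failure probability is 0.5.
p_med = fragility(0.5, median_capacity=0.5, beta=0.4)
```

Over-estimating the median capacity (as the abstract suggests Newmark-spectrum-based analyses may do for Korean facilities) shifts this whole curve to the right, understating failure probability at every ground-motion level.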

  15. Secondary Data Analyses of Subjective Outcome Evaluation Data Based on Nine Databases

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2012-01-01

    The purpose of this study was to evaluate the effectiveness of the Tier 1 Program of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong by analyzing 1,327 school-based program reports submitted by program implementers. In each report, program implementers were invited to write down five conclusions based on an integration of the subjective outcome evaluation data collected from the program participants and program implementers. Secondary data analyses were carried out by aggregating nine databases, with 14,390 meaningful units extracted from 6,618 conclusions. Results showed that most of the conclusions were positive in nature. The findings generally showed that the workers perceived the program and program implementers to be positive, and they also pointed out that the program could promote holistic development of the program participants in societal, familial, interpersonal, and personal aspects. However, difficulties encountered during program implementation (2.15%) and recommendations for improvement (16.26%) were also reported. In conjunction with the evaluation findings based on other strategies, the present study suggests that the Tier 1 Program of the Project P.A.T.H.S. is beneficial to the holistic development of the program participants.

  16. What is needed to eliminate new pediatric HIV infections: The contribution of model-based analyses

    Science.gov (United States)

    Doherty, Katie; Ciaranello, Andrea

    2013-01-01

    Purpose of Review Computer simulation models can identify key clinical, operational, and economic interventions that will be needed to achieve the elimination of new pediatric HIV infections. In this review, we summarize recent findings from model-based analyses of strategies for prevention of mother-to-child HIV transmission (MTCT). Recent Findings In order to achieve elimination of MTCT (eMTCT), model-based studies suggest that scale-up of services will be needed in several domains: uptake of services and retention in care (the PMTCT "cascade"), interventions to prevent HIV infections in women and reduce unintended pregnancies (the "four-pronged approach"), efforts to support medication adherence through long periods of pregnancy and breastfeeding, and strategies to make breastfeeding safer and/or shorter. Models also project the economic resources that will be needed to achieve these goals, and identify the most efficient ways to allocate limited resources for eMTCT. Results suggest that currently recommended PMTCT regimens (WHO Option A, Option B, and Option B+) will be cost-effective in most settings. Summary Model-based results can guide future implementation science, by highlighting areas in which additional data are needed to make informed decisions and by outlining critical interventions that will be necessary in order to eliminate new pediatric HIV infections. PMID:23743788

  17. Theoretical and Simulations-Based Modeling of Micellization in Linear and Branched Surfactant Systems

    Science.gov (United States)

    Mendenhall, Jonathan D.

    This work models critical micelle concentrations (cmc's) and other micellization properties for a variety of linear and branched surfactant chemical architectures which are commonly encountered in practice. Single-component surfactant solutions are investigated, in order to clarify the specific contributions of the surfactant head and tail to the free energy of micellization, a quantity which determines the cmc and all other aspects of micellization. First, a molecular-thermodynamic (MT) theory is presented which makes use of bulk-phase thermodynamics and a phenomenological thought process to describe the energetics related to the formation of a micelle from its constituent surfactant monomers. Second, a combined computer-simulation/molecular-thermodynamic (CSMT) framework is discussed which provides a more detailed quantification of the hydrophobic effect using molecular dynamics simulations. A novel computational strategy to identify surfactant head and tail using an iterative dividing surface approach, along with simulated micelle results, is proposed. Force-field development for novel surfactant structures is also discussed. Third, a statistical-thermodynamic, single-chain, mean-field theory for linear and branched tail packing is formulated, which enables quantification of the specific energetic penalties related to confinement and constraint of surfactant tails within micelles. Finally, these theoretical and simulations-based strategies are used to predict the micellization behavior of 55 linear surfactants and 28 branched surfactants. Critical micelle concentration and optimal micelle properties are reported and compared with experiment, demonstrating good agreement across a range of surfactant head and tail types. In particular, the CSMT framework is found to provide improved agreement with experimental cmc's for the branched surfactants considered. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  18. Validation of a fully autonomous phosphate analyser based on a microfluidic lab-on-a-chip

    DEFF Research Database (Denmark)

    Slater, Conor; Cleary, J.; Lau, K.T.

    2010-01-01

    The analyser demonstrated the capability of long-term operation. This was proven by a bench top calibration of the analyser using standard solutions and also by comparing the analyser's performance to a commercially available phosphate monitor installed at a waste water treatment plant. The output of the microfluidic lab-on-a-chip analyser...

  19. Genome based analyses of six hexacorallian species reject the “naked coral” hypothesis

    KAUST Repository

    Wang, Xin

    2017-09-23

    Scleractinian corals are the foundation species of the coral-reef ecosystem. Their calcium carbonate skeletons form extensive structures that are home to millions of species, making coral reefs one of the most diverse ecosystems of our planet. However, our understanding of how reef-building corals have evolved the ability to calcify and become the ecosystem builders they are today is hampered by uncertain relationships within their subclass Hexacorallia. Corallimorpharians have been proposed to originate from a complex scleractinian ancestor that lost the ability to calcify in response to increasing ocean acidification, suggesting that corals may lose and regain the ability to calcify over their evolutionary history. Here we employed a phylogenomic approach using whole-genome data from six hexacorallian species to resolve the evolutionary relationship between reef-building corals and their non-calcifying relatives. Phylogenetic analysis based on 1,421 single-copy orthologs, as well as gene presence/absence and synteny information, converged on the same topologies, showing strong support for scleractinian monophyly and a corallimorpharian sister clade. Our broad phylogenomic approach using sequence-based and sequence-independent analyses provides unambiguous evidence for the monophyly of scleractinian corals and the rejection of corallimorpharians as descendants of a complex coral ancestor.

  20. Genome based analyses of six hexacorallian species reject the “naked coral” hypothesis

    KAUST Repository

    Wang, Xin; Drillon, Guénola; Ryu, Taewoo; Voolstra, Christian R.; Aranda, Manuel

    2017-01-01

    Scleractinian corals are the foundation species of the coral-reef ecosystem. Their calcium carbonate skeletons form extensive structures that are home to millions of species, making coral reefs one of the most diverse ecosystems of our planet. However, our understanding of how reef-building corals have evolved the ability to calcify and become the ecosystem builders they are today is hampered by uncertain relationships within their subclass Hexacorallia. Corallimorpharians have been proposed to originate from a complex scleractinian ancestor that lost the ability to calcify in response to increasing ocean acidification, suggesting that corals may lose and regain the ability to calcify over their evolutionary history. Here we employed a phylogenomic approach using whole-genome data from six hexacorallian species to resolve the evolutionary relationship between reef-building corals and their non-calcifying relatives. Phylogenetic analysis based on 1,421 single-copy orthologs, as well as gene presence/absence and synteny information, converged on the same topologies, showing strong support for scleractinian monophyly and a corallimorpharian sister clade. Our broad phylogenomic approach using sequence-based and sequence-independent analyses provides unambiguous evidence for the monophyly of scleractinian corals and the rejection of corallimorpharians as descendants of a complex coral ancestor.

  1. Quantitative Prediction of Coalbed Gas Content Based on Seismic Multiple-Attribute Analyses

    Directory of Open Access Journals (Sweden)

    Renfang Pan

    2015-09-01

    Accurate prediction of gas planar distribution is crucial to selection and development of new CBM exploration areas. Based on seismic attributes, well logging and testing data, we found that seismic absorption attenuation, after eliminating the effects of burial depth, shows an evident correlation with CBM gas content; positive structure curvature has a negative correlation with gas content; and density has a negative correlation with gas content. It is feasible to use the hydrocarbon index (P*G) and pseudo-Poisson ratio attributes for detection of gas enrichment zones. Based on seismic multiple-attribute analyses, a multiple linear regression equation was established between the seismic attributes and gas content at the drilling wells. Application of this equation to the seismic attributes at locations other than the drilling wells yielded a quantitative prediction of planar gas distribution. Prediction calculations were performed for two different models, one using pre-stack inversion and the other one disregarding pre-stack inversion. A comparison of the results indicates that both models predicted a similar trend for gas content distribution, except that the model using pre-stack inversion yielded a prediction result with considerably higher precision than the other model.
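The multiple linear regression step described above (fit attributes to gas content at the wells, then apply the equation elsewhere) can be sketched with a least-squares fit. The attribute values and the linear law here are synthetic, purely to show the mechanics; the paper's actual attributes and coefficients are not given in the abstract.

```python
import numpy as np

# Synthetic stand-in data: 8 "wells", 5 seismic attributes per location
# (e.g. absorption attenuation, curvature, density, hydrocarbon index
# P*G, pseudo-Poisson ratio). Gas content is generated from a known
# linear law so the recovered coefficients can be checked.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))
true_w = np.array([2.0, 5.0, -1.0, -3.0, 1.5, 4.0])  # intercept + slopes
A = np.column_stack([np.ones(len(X)), X])            # add intercept column
y = A @ true_w                                       # "measured" gas content

coef, *_ = np.linalg.lstsq(A, y, rcond=None)         # multiple linear fit

def predict(x):
    """Apply the regression equation at a location away from the wells."""
    return coef[0] + x @ coef[1:]
```

With more wells than coefficients and noise-free synthetic data, the fit recovers the generating coefficients exactly; with real well data the residuals quantify how well the attribute set explains gas content.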

  2. Risk-based analyses in support of California hazardous site remediation

    International Nuclear Information System (INIS)

    Ringland, J.T.

    1995-08-01

    The California Environmental Enterprise (CEE) is a joint program of the Department of Energy (DOE), Lawrence Livermore National Laboratory, Lawrence Berkeley Laboratory, and Sandia National Laboratories. Its goal is to make DOE laboratory expertise accessible to hazardous site cleanups in the state. This support might involve working directly with parties responsible for individual cleanups or it might involve working with the California Environmental Protection Agency to develop tools that would be applicable across a broad range of sites. As part of its initial year's activities, the CEE supported a review to examine where laboratory risk and risk-based systems analysis capabilities might be most effectively applied. To this end, this study draws the following observations. The labs have a clear role in analyses supporting the demonstration and transfer of laboratory characterization or remediation technologies. The labs may have opportunities in developing broadly applicable analysis tools and computer codes for problems such as site characterization or efficient management of resources. Analysis at individual sites, separate from supporting lab technologies or prototyping general tools, may be appropriate only in limited circumstances. In any of these roles, the labs' capabilities extend beyond health risk assessment to the broader areas of risk management and risk-based systems analysis.

  3. Analyses of microstructural and elastic properties of porous SOFC cathodes based on focused ion beam tomography

    Science.gov (United States)

    Chen, Zhangwei; Wang, Xin; Giuliani, Finn; Atkinson, Alan

    2015-01-01

    Mechanical properties of porous SOFC electrodes are largely determined by their microstructures. Measurements of the elastic properties and microstructural parameters can be achieved by modelling of the digitally reconstructed 3D volumes based on the real electrode microstructures. However, the reliability of such measurements is greatly dependent on the processing of raw images acquired for reconstruction. In this work, the actual microstructures of La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) cathodes sintered at an elevated temperature were reconstructed based on dual-beam FIB/SEM tomography. Key microstructural and elastic parameters were estimated and correlated. Analyses of their sensitivity to the grayscale threshold value applied in the image segmentation were performed. The important microstructural parameters included porosity, tortuosity, specific surface area, particle and pore size distributions, and inter-particle neck size distribution, which may have varying extents of effect on the elastic properties simulated from the microstructures using FEM. Results showed that different threshold value ranges would result in different degrees of sensitivity for a specific parameter. The estimated porosity and tortuosity were more sensitive than the surface area to volume ratio. Pore and neck sizes were found to be less sensitive than particle size. Results also showed that the modulus was essentially sensitive to the porosity, which was largely controlled by the threshold value.
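The threshold-sensitivity analysis described above can be illustrated on a toy volume: segment the voxels as pore or solid at a range of grayscale thresholds and watch the porosity estimate move. The uniform-random volume below is a synthetic stand-in for a FIB/SEM reconstruction, used only to show the mechanics of the sweep.

```python
import numpy as np

def porosity(volume, threshold):
    """Fraction of voxels segmented as pore (grayscale below threshold)."""
    return float(np.mean(volume < threshold))

# Synthetic grayscale volume standing in for a FIB/SEM reconstruction.
rng = np.random.default_rng(1)
vol = rng.uniform(0.0, 1.0, size=(32, 32, 32))

# Sweep the segmentation threshold to probe the sensitivity of the
# porosity estimate to the chosen value.
thresholds = np.linspace(0.3, 0.5, 5)
por = [porosity(vol, t) for t in thresholds]
```

Porosity is monotone in the threshold by construction, which is why a poorly chosen threshold range propagates directly into the FEM modulus, as the abstract notes.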

  4. Theoretical Bases of the Model of Interaction of the Government and Local Government Creation

    Directory of Open Access Journals (Sweden)

    Nikolay I. Churinov

    2015-09-01

    The article is devoted to the theoretical understanding of systems of interaction between government bodies at different levels. The author investigates the historical basis of the subject through a study of foreign and domestic scholarship in the theory of state and law. Much attention is paid to the scientific aspect of the question. An empirical interpretation of the theory of interaction between public authorities and local government is given, together with the author's own assessment.

  5. Statistical parameters of random heterogeneity estimated by analysing coda waves based on finite difference method

    Science.gov (United States)

    Emoto, K.; Saito, T.; Shiomi, K.

    2017-12-01

    Short-period (<2 s) seismograms are strongly affected by scattering due to small-scale heterogeneities; here we analyse longer-period (>2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, body or surface waves and scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε²a³/(1 + a²m²)², where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.
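The estimated spectrum P(m) = 8πε²a³/(1 + a²m²)² with ε = 0.05 and a = 3.1 km (the 3-D power spectral density corresponding to an exponential autocorrelation function) is easy to evaluate directly; a small sketch:

```python
from math import pi

def psdf(m, eps=0.05, a=3.1):
    """Power spectral density of the random fractional fluctuation as
    reported in the abstract: P(m) = 8*pi*eps^2*a^3 / (1 + a^2*m^2)^2,
    with wavenumber m in 1/km and correlation distance a in km."""
    return 8.0 * pi * eps**2 * a**3 / (1.0 + (a * m) ** 2) ** 2

flat_level = psdf(0.0)   # the spectrum is flat well below the corner
corner = 1.0 / 3.1       # corner wavenumber ~ 1/a, in 1/km
# at m = 1/a the spectrum has dropped to one quarter of its flat level
```

The corner at m ≈ 1/a is exactly the feature the authors could resolve with long-period data and that earlier high-frequency studies missed.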

  6. Comprehensive logic based analyses of Toll-like receptor 4 signal transduction pathway.

    Directory of Open Access Journals (Sweden)

    Mahesh Kumar Padwal

    Among the 13 TLRs in the vertebrate systems, only TLR4 utilizes both Myeloid differentiation factor 88 (MyD88) and Toll/Interleukin-1 receptor (TIR)-domain-containing adapter interferon-β-inducing Factor (TRIF) adaptors to transduce signals triggering host-protective immune responses. Earlier studies on the pathway combined various experimental data in the form of one comprehensive map of TLR signaling. However, in the absence of adequate kinetic parameters, quantitative mathematical models that reveal emerging systems-level properties and dynamic inter-regulation among the kinases/phosphatases of the TLR4 network are not yet available. So, here we used reaction stoichiometry-based and parameter-independent logical modeling formalism to build the TLR4 signaling network model that captured the feedback regulations, interdependencies between signaling kinases and phosphatases and the outcome of simulated infections. The analyses of the TLR4 signaling network revealed 360 feedback loops, 157 negative and 203 positive; of which, 334 loops had the phosphatase PP1 as an essential component. The network elements' interdependency (positive or negative dependencies) in perturbation conditions such as the phosphatase knockout conditions revealed interdependencies between the dual-specific phosphatases MKP-1 and MKP-3 and the kinases in MAPK modules and the role of PP2A in the auto-regulation of Calmodulin kinase-II. Our simulations under the specific kinase or phosphatase gene-deficiency or inhibition conditions corroborated with several previously reported experimental data. The simulations to mimic Yersinia pestis and E. coli infections identified the key perturbation in the network and potential drug targets. Thus, our analyses of TLR4 signaling highlights the role of phosphatases as key regulatory factors in determining the global interdependencies among the network elements; uncovers novel signaling connections; identifies potential drug targets.

  7. Developing theoretically based and culturally appropriate interventions to promote hepatitis B testing in 4 Asian American populations, 2006-2011.

    Science.gov (United States)

    Maxwell, Annette E; Bastani, Roshan; Glenn, Beth A; Taylor, Victoria M; Nguyen, Tung T; Stewart, Susan L; Burke, Nancy J; Chen, Moon S

    2014-05-01

    Hepatitis B infection is 5 to 12 times more common among Asian Americans than in the general US population and is the leading cause of liver disease and liver cancer among Asians. The purpose of this article is to describe the step-by-step approach that we followed in community-based participatory research projects in 4 Asian American groups, conducted from 2006 through 2011 in California and Washington state to develop theoretically based and culturally appropriate interventions to promote hepatitis B testing. We provide examples to illustrate how intervention messages addressing identical theoretical constructs of the Health Behavior Framework were modified to be culturally appropriate for each community. Intervention approaches included mass media in the Vietnamese community, small-group educational sessions at churches in the Korean community, and home visits by lay health workers in the Hmong and Cambodian communities. Use of the Health Behavior Framework allowed a systematic approach to intervention development across populations, resulting in 4 different culturally appropriate interventions that addressed the same set of theoretical constructs. The development of theory-based health promotion interventions for different populations will advance our understanding of which constructs are critical to modify specific health behaviors.

  8. An Electrically Actuated Microbeam-Based MEMS Device: Experimental and Theoretical Investigation

    KAUST Repository

    Ruzziconi, Laura

    2017-11-03

    The present paper deals with the dynamic behavior of a microelectromechanical system (MEMS). The device consists of a clamped-clamped microbeam electrostatically and electrodynamically actuated. Our objective is to develop a theoretical analysis, which is able to describe and predict all the main relevant aspects of the experimental response. In the first part of the paper an extensive experimental investigation is conducted. The microbeam is perfectly straight. The first three experimental natural frequencies are identified and the nonlinear dynamics are explored at increasing values of electrodynamic excitation. Several backward and forward frequency sweeps are acquired. The nonlinear behavior is highlighted. The experimental data show the coexistence of the nonresonant and the resonant branch, which perform a bending toward higher frequency values before undergoing jump or pull-in dynamics. This kind of bending is not particularly common in MEMS. In the second part of the paper, a theoretical single degree-of-freedom model is derived. The unknown parameters are extracted and settled via parametric identification. A single mode reduced-order model is considered, which is obtained via the Galerkin technique. To enhance the computational efficiency, the contribution of the electric force term is computed in advance and stored in a table. Extensive numerical simulations are performed at increasing values of electrodynamic excitation. They are observed to properly predict all the main nonlinear features arising in the device response. This occurs not only at low values of electrodynamic excitation, but also at higher ones.
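The table-lookup trick mentioned above (precompute the electric force term once, then evaluate it cheaply inside the time-stepping loop) can be sketched as follows. The parallel-plate form 1/(1 - w)², with midpoint deflection w normalized by the gap, is a hypothetical stand-in; the paper's actual term is an integral over the beam shape.

```python
import numpy as np

# Sample the force term once over the expected deflection range and
# store it; per-step evaluation then reduces to an interpolation.
w_grid = np.linspace(0.0, 0.9, 2001)
f_grid = 1.0 / (1.0 - w_grid) ** 2       # precomputed once, stored

def electric_force(w):
    """Cheap per-step lookup replacing the direct evaluation."""
    return float(np.interp(w, w_grid, f_grid))
```

With a grid this dense the interpolation error is far below the model's other uncertainties, so the lookup buys speed at essentially no accuracy cost over the tabulated range.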

  9. Voxel-based morphometry analyses of in-vivo MRI in the aging mouse lemur primate

    Directory of Open Access Journals (Sweden)

    Stephen John Sawiak

    2014-05-01

    Cerebral atrophy is among the brain alterations most widely associated with aging. A clear relationship has been established between age-associated cognitive impairments and cerebral atrophy. The mouse lemur (Microcebus murinus) is a small primate used as a model of age-related neurodegenerative processes. It is the first nonhuman primate in which cerebral atrophy has been correlated with cognitive deficits. Previous studies of cerebral atrophy in this model were based on time-consuming manual delineation or measurement of selected brain regions from magnetic resonance images (MRI). These measures could not be used to analyse regions that cannot be easily outlined such as the nucleus basalis of Meynert or the subiculum. In humans, morphometric assessment of structural changes with age is generally performed with automated procedures such as voxel-based morphometry (VBM). The objective of our work was to perform user-independent assessment of age-related morphological changes in the whole brain of large mouse lemur populations thanks to VBM. The study was based on the SPMMouse toolbox of SPM 8 and involved thirty mouse lemurs aged from 1.9 to 11.3 years. The automatic method revealed for the first time atrophy in regions where manual delineation is prohibitive (nucleus basalis of Meynert, subiculum, prepiriform cortex, Brodmann areas 13-16, hypothalamus, putamen, thalamus, corpus callosum). Some of these regions are described as particularly sensitive to age-associated alterations in humans. The method revealed also age-associated atrophy in cortical regions (cingulate, occipital, parietal) as well as in the nucleus septalis and the caudate. Manual measures performed in some of these regions were in good agreement with results from automatic measures. The templates generated in this study as well as the toolbox for SPM8 can be downloaded. These tools will be valuable for future evaluation of various treatments that are tested to modulate cerebral aging in lemurs.

  10. Authenticity through VR-based documentation of cultural heritage. A theoretical approach based on conservation and documentation practices

    Directory of Open Access Journals (Sweden)

    Jesús Muñoz Morcillo

    2017-05-01

    The visualization of 3D reconstructed artifacts often requires significant computing resources. The implementation of an object in a virtual reality (VR) application even necessitates the reduction of the polygonal mesh. Consequently, the communication and dissemination of "authentic" 3D reconstructions via immersive VR technologies has been a nearly impossible feat for many researchers. However, is the issue really computing resources, or is it rather the notion of authenticity in an "auratic" sense, i.e., an excessive focus on physical evidence and survey data? In the present paper, we will discuss the authenticity requirements for virtual archaeology as set by the Seville Principles (2011), and we will analyse some limitations related to the current approaches. Furthermore, we will propose a pluralistic notion based on the contextualization of 3D objects in VR environments with synesthetic (i.e. multisensory) information. This new notion of authenticity relies on conservation meanings rather than physical features. In line with this approach, two case studies will be discussed: the multimodal 3D documentation of the Jupiter Column (2 AD) in Ladenburg, and the VR-based re-enactment of a modern work of art, the audio-kinetic sculpture Kaleidophonic Dog (1967) by Stephan von Huene. These two projects provide valuable data for a revision of the notion of authenticity in both virtual archaeology and art conservation.

  11. A Theoretical Framework for Soft-Information-Based Synchronization in Iterative (Turbo) Receivers

    Directory of Open Access Journals (Sweden)

    Lottici Vincenzo

    2005-01-01

    Full Text Available This contribution considers turbo synchronization, that is to say, the use of soft data information to estimate parameters like carrier phase, frequency, or timing offsets of a modulated signal within an iterative data demodulator. In turbo synchronization, the receiver exploits the soft decisions computed at each turbo decoding iteration to provide a reliable estimate of some signal parameters. The aim of our paper is to show that such a "turbo-estimation" approach can be regarded as a special case of the expectation-maximization (EM) algorithm. This leads to a general theoretical framework for turbo synchronization that allows one to derive parameter estimation procedures for carrier phase and frequency offset, as well as for timing offset and signal amplitude. The proposed mathematical framework is illustrated by simulation results reported for the particular case of carrier phase and frequency offset estimation of a turbo-coded 16-QAM signal.

  12. Theoretical approaches to creation of robotic coal mines based on the synthesis of simulation technologies

    Science.gov (United States)

    Fryanov, V. N.; Pavlova, L. D.; Temlyantsev, M. V.

    2017-09-01

    Methodological approaches to theoretical substantiation of the structure and parameters of robotic coal mines are outlined. The results of mathematical and numerical modeling revealed the features of manifestation of geomechanical and gas dynamic processes in the conditions of robotic mines. Technological solutions for the design and manufacture of technical means for a robotic mine are adopted using the method of economic and mathematical modeling and in accordance with the current regulatory documents. For a comparative performance evaluation of technological schemes of traditional and robotic mines, methods of cognitive modeling and matrix search for subsystem elements in the synthesis of a complex geotechnological system are applied. It is substantiated that the process of technical re-equipment of a traditional mine with a phased transition to a robotic mine will reduce unit costs by almost 1.5 times, with a significant social effect due to a reduction in the number of personnel engaged in hazardous work.

  13. Outcome (competency) based education: an exploration of its origins, theoretical basis, and empirical evidence

    DEFF Research Database (Denmark)

    Mørcke, Anne Mette; Dornan, Tim; Eika, Berit

    2013-01-01

    Our aim was to explore the underpinnings of OBE: its historical origins, theoretical basis, and empirical evidence of its effects in order to answer the question: How can predetermined learning outcomes influence undergraduate medical education? This literature review had three components: a review of historical landmarks in the evolution of OBE; a review of conceptual frameworks and theories; and a systematic review of empirical publications from 1999 to 2010 that reported data concerning the effects of learning outcomes on undergraduate medical education. OBE had its origins … and professional attributes as ‘‘competencies’’. OBE has been adopted by consensus in the face of weak empirical evidence. OBE, which has been advocated for over 50 years, can contribute usefully to defining requisite knowledge and skills, and blueprinting assessments. Its applicability to more complex aspects … greatest benefits.

  14. Ecology of Subglacial Lake Vostok (Antarctica), Based on Metagenomic/Metatranscriptomic Analyses of Accretion Ice

    Directory of Open Access Journals (Sweden)

    Tom D'Elia

    2013-03-01

    Full Text Available Lake Vostok is the largest of the nearly 400 subglacial Antarctic lakes and has been continuously buried by glacial ice for 15 million years. Extreme cold, heat (from possible hydrothermal activity), pressure (from the overriding glacier) and dissolved oxygen (delivered by melting meteoric ice), in addition to limited nutrients and complete darkness, combine to produce one of the most extreme environments on Earth. Metagenomic/metatranscriptomic analyses of ice that accreted over a shallow embayment and over the southern main lake basin indicate the presence of thousands of species of organisms (94% Bacteria, 6% Eukarya, and two Archaea). The predominant bacterial sequences were closest to those from species of Firmicutes, Proteobacteria and Actinobacteria, while the predominant eukaryotic sequences were most similar to those from species of ascomycetous and basidiomycetous Fungi. Based on the sequence data, the lake appears to contain a mixture of autotrophs and heterotrophs capable of performing nitrogen fixation, nitrogen cycling, carbon fixation and nutrient recycling. Sequences closest to those of psychrophiles and thermophiles indicate a cold lake with possible hydrothermal activity. Sequences most similar to those from marine and aquatic species suggest the presence of marine and freshwater regions.

  15. Loss of Flow Accident (LOFA) analyses using LabView-based NRR simulator

    Energy Technology Data Exchange (ETDEWEB)

    Arafa, Amany Abdel Aziz; Saleh, Hassan Ibrahim [Atomic Energy Authority, Cairo (Egypt). Radiation Engineering Dept.; Ashoub, Nagieb [Atomic Energy Authority, Cairo (Egypt). Reactor Physics Dept.

    2016-12-15

    This paper presents a generic Loss of Flow Accident (LOFA) scenario module which is integrated into the LabView-based simulator to imitate Nuclear Research Reactor (NRR) behavior for different user-defined LOFA scenarios. It also provides analyses of a LOFA of a single fuel channel and its impact on operational transients and on the behavior of the reactor. The generic LOFA scenario module includes the graphs needed to clarify the effects of the LOFA under study. Furthermore, the percentage of the loss of mass flow rate, the mode of flow reduction and the start time and transient time of the LOFA are user defined, to add flexibility to the LOFA scenarios. The objective of integrating such a generic LOFA module is to be able to deal with such incidents and avoid their significant effects. It is also useful in developing expertise in this area and in reducing operator training and simulation costs. The results of the implemented generic LOFA module agree well with those of the COBRA-IIIC code and the earlier guidebook for this series of transients.

  16. TAXONOMY AND GENETIC RELATIONSHIPS OF PANGASIIDAE, ASIAN CATFISHES, BASED ON MORPHOLOGICAL AND MOLECULAR ANALYSES

    Directory of Open Access Journals (Sweden)

    Rudhy Gustiano

    2007-12-01

    Full Text Available Pangasiids are economically important riverine catfishes generally residing in freshwater from the Indian subcontinent to the Indonesian Archipelago. The systematics of this family are still poorly known. Consequently, the lack of such basic information impedes understanding of the biology of the Pangasiids, the study of their aquaculture potential, and improvement of seed production and growth performance. The objectives of the present study are to clarify the phylogeny of this family based on biometric analysis and molecular evidence using 12S ribosomal mtDNA, on a total of 1070 specimens. The study revealed that 28 species are recognised as valid in Pangasiidae. Four genera are also recognized, Helicophagus Bleeker 1858, Pangasianodon Chevey 1930, Pteropangasius Fowler 1937, and Pangasius Valenciennes 1840, instead of two as reported by previous workers. The phylogenetic analysis supported the recognised genera and the genetic relationships among taxa. Overall, trees from the different analyses show similar topologies and confirm the hypothesis derived from geological history, palaeontology, and similar models in other taxa of fishes from the same area. The oldest genus may already have existed when the Asian mainland was still connected to the islands in the southern part, about 20 million years ago.

  17. Historical Weathering Based on Chemical Analyses of Two Spodosols in Southern Sweden

    International Nuclear Information System (INIS)

    Melkerud, Per-Arne; Bain, Derek C.; Olsson, Mats T.

    2003-01-01

    Chemical weathering losses were calculated for two conifer stands in relation to ongoing studies on liming effects and ash amendments on chemical status, soil solution chemistry and soil genesis. Weathering losses were based on elemental depletion trends in soil profiles since deglaciation and exposure to the weathering environment. Gradients in total geochemical composition were assumed to reflect alteration over time. Study sites were Horroed and Hassloev in southern Sweden. The Horroed and Hassloev sites are located on sandy loamy Weichselian till at altitudes of 85 and 190 m a.s.l., respectively. Aliquots from volume-determined samples from a number of soil levels were fused with lithium metaborate, dissolved in HNO₃, and analysed by ICP-AES. Results indicated the highest cumulative weathering losses at Hassloev. The weathering losses for the elements are in the following order: Si > Al > K > Na > Ca > Mg. Total annual losses for Ca+Mg+K+Na, expressed in mmol_c m⁻² yr⁻¹, amounted to c. 28 and 58 at Horroed and Hassloev, respectively. Variations between study sites could not be explained by differences in bulk density, geochemistry or mineralogy. The accumulated weathering losses since deglaciation were larger in the uppermost 15 cm than in deeper B horizons for most elements studied

  18. Aleatoric and epistemic uncertainties in sampling based nuclear data uncertainty and sensitivity analyses

    International Nuclear Information System (INIS)

    Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.

    2012-01-01

    Sampling based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of the uncertainty has been minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible compared to the impact from sampling the epistemic uncertainties. Obviously, this process may incur high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with a much smaller number of Monte Carlo histories each. The method is applied along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va to various critical assemblies and a full scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with reductions in computing time by factors on the order of 100. (authors)
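The two-series idea can be sketched numerically: feed the same epistemic samples to two independent short-history Monte Carlo runs, so that the aleatoric noise is independent between the runs and cancels in their cross-covariance, leaving an estimate of the epistemic variance. The following minimal Python sketch uses invented numbers (response model, noise levels) purely for illustration; it does not reproduce any XSUSA/KENO-Va detail, and the cross-covariance estimator is one plausible reading of "analyzing two appropriately generated series of samples", not necessarily the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200

# Epistemic uncertainty: sampled input parameters (hypothetical numbers),
# e.g. a response whose spread comes from nuclear-data variations.
theta = rng.normal(1.0, 0.02, n_samples)

# Aleatoric uncertainty: two independent Monte Carlo series with few
# histories each, modelled here as additive statistical noise.
mc_sigma = 0.05
series_a = theta + rng.normal(0.0, mc_sigma, n_samples)
series_b = theta + rng.normal(0.0, mc_sigma, n_samples)

# Aleatoric noise is independent between the two series, so it cancels in
# the cross-covariance; what remains estimates the epistemic variance.
var_epistemic = np.cov(series_a, series_b)[0, 1]
var_total = 0.5 * (series_a.var(ddof=1) + series_b.var(ddof=1))

print(f"epistemic variance estimate: {var_epistemic:.2e}")
print(f"total sampled variance:      {var_total:.2e}")
```

With these settings the cross-covariance recovers the epistemic variance (here 4e-4) while the per-series total variance is dominated by the much larger Monte Carlo noise, which is exactly why short histories suffice.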

  19. DERIVATIVE OF SET MEASURE FUNCTIONS AND ITS APPLICATION (THEORETICAL BASES OF INVESTMENT OBJECTIVES)

    Directory of Open Access Journals (Sweden)

    A. A. Bosov

    2014-04-01

    Full Text Available Purpose. To develop the theoretical foundations for solving investment problems presented in the form of set functions, as vector optimization tasks or constrained-extremum tasks. Methodology. Set functions and their derivatives with respect to a measure are used to study investment problems. A necessary condition for a minimum of a set function is proved. In constrained-extremum tasks the method of Lagrange is used, and it is shown that this method can also be applied to set functions. For the proofs, a measure generalizing the Lebesgue measure is used, and the concept of the limit of a sequence of sets is introduced. It is noted that the introduced limit over a measure coincides with the classical Borel limit and can be used to prove the existence of the derivative of a set function over a measure on convergent sequences of sets. Findings. An algorithm for solving the constrained-extremum investment problem was offered. Originality. The scientific novelty lies in the fact that in multivariate constrained-extremum problems one can avoid exhaustive enumeration of options. The proposed algorithm of constructing (selecting) options allows building a convex linear envelope of Pareto solutions. This envelope lets the decision maker (DM) select those options that are "better" from the DM's standpoint, and take into account criteria which are difficult to formalize or cannot be described in mathematical terms. Practical value. The results of the study provide the necessary theoretical substantiation of decision-making in investment problems when there is a significant number of investment objects and exhaustive enumeration of options is prohibitively time-consuming even for modern computing techniques.

  20. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Science.gov (United States)

    Rallapalli, Varsha H.

    2016-01-01

    Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRENV) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRENV has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRENV. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRENV computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRENV in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.
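The core quantity above can be illustrated with a toy computation: take the envelope of a noisy amplitude-modulated signal, measure its power in one modulation band, and form the envelope SNR as (P_{S+N} - P_N)/P_N. This is a hedged sketch of the sEPSM-style SNRenv idea only, not the paper's auditory-nerve spike-train pipeline (no shuffled correlograms, no modulation filter bank); the stimulus, band edges and noise levels are invented for illustration.

```python
import numpy as np

fs, dur = 10000, 4.0
t = np.arange(0.0, dur, 1.0 / fs)

def analytic_envelope(x):
    """Hilbert envelope via the FFT construction of the analytic signal."""
    X = np.fft.fft(x)
    h = np.zeros(x.size)
    h[0] = 1.0
    h[1:(x.size + 1) // 2] = 2.0
    if x.size % 2 == 0:
        h[x.size // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

def band_env_power(x, f_lo, f_hi):
    """Envelope power of x within one modulation band [f_lo, f_hi) Hz."""
    env = analytic_envelope(x)
    env = env - env.mean()                  # drop the DC of the envelope
    spec = np.abs(np.fft.rfft(env)) ** 2 / env.size
    freqs = np.fft.rfftfreq(env.size, 1.0 / fs)
    return spec[(freqs >= f_lo) & (freqs < f_hi)].sum()

def snr_env(speech_plus_noise, noise, f_lo=2.0, f_hi=8.0):
    """sEPSM-style envelope SNR: (P_S+N - P_N) / P_N in one band."""
    p_sn = band_env_power(speech_plus_noise, f_lo, f_hi)
    p_n = band_env_power(noise, f_lo, f_hi)
    return max(p_sn - p_n, 0.0) / p_n

# "Speech" stand-in: a 4-Hz amplitude-modulated 1-kHz carrier.
speech = (1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)

rng = np.random.default_rng(1)
results = {}
for sigma in (0.3, 1.0):                    # two acoustic noise levels
    noise = rng.normal(0.0, sigma, t.size)
    results[sigma] = snr_env(speech + noise, noise)
    print(f"noise sd {sigma}: SNRenv = {results[sigma]:.1f}")
```

As expected from the model, the envelope SNR in the 2-8 Hz band shrinks as the acoustic noise grows, because the noise both masks the speech-driven modulation and raises the intrinsic envelope-fluctuation floor.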

  1. Analysing the operative experience of basic surgical trainees in Ireland using a web-based logbook

    LENUS (Irish Health Repository)

    Lonergan, Peter E

    2011-09-25

    Abstract Background There is concern about the adequacy of operative exposure in surgical training programmes, in the context of changing work practices. We aimed to quantify the operative exposure of all trainees on the National Basic Surgical Training (BST) programme in Ireland and compare the results with arbitrary training targets. Methods Retrospective analysis of data obtained from a web-based logbook (http://www.elogbook.org) for all general surgery and orthopaedic training posts between July 2007 and June 2009. Results 104 trainees recorded 23,918 operations across two 6-month general surgery posts. The most common general surgery operation performed was simple skin excision, with trainees performing an average of 19.7 (± 9.9) over the 2-year training programme. Trainees most frequently assisted with cholecystectomy, with an average of 16.0 (± 11.0) per trainee. Comparison of trainee operative experience to arbitrary training targets found that 2-38% of trainees achieved the targets for 9 emergency index operations and 24-90% of trainees achieved the targets for 8 index elective operations. 72 trainees also completed a 6-month post in orthopaedics and recorded 7,551 operations. The most common orthopaedic operation that trainees performed was removal of metal, with an average of 2.90 (± 3.27) per trainee. The most common orthopaedic operation that trainees assisted with was total hip replacement, with an average of 10.46 (± 6.21) per trainee. Conclusions A centralised web-based logbook provides valuable data to analyse training programme performance. Analysis of logbooks raises concerns about operative experience at junior trainee level. The provision of adequate operative exposure for trainees should be a key performance indicator for training programmes.

  2. [Research on fast classification based on LIBS technology and principal component analyses].

    Science.gov (United States)

    Yu, Qi; Ma, Xiao-Hong; Wang, Rui; Zhao, Hua-Feng

    2014-11-01

    Laser-induced breakdown spectroscopy (LIBS) and principal component analysis (PCA) were combined to study aluminum alloy classification in the present article. Classification experiments were done on thirteen different standard samples of aluminum alloy belonging to 4 different types, and the results suggested that the LIBS-PCA method can be used for fast aluminum alloy classification. PCA was used to analyze the spectral data from the LIBS experiments: the three principal components contributing most were identified, the principal component scores of the spectra were calculated, and the scores were plotted in three-dimensional coordinates. The spectrum sample points showed clear clustering according to the type of aluminum alloy they belong to. This result established the three principal components and a preliminary zoning of the aluminum alloy types. To verify its accuracy, 20 different aluminum alloy samples were used in the same experiments to test the type zoning. The spectrum sample points all fell within the corresponding area of their aluminum alloy type, which proved the correctness of the zoning method derived from the standard samples. On this basis, identification of an unknown type of aluminum alloy can be performed. All the experimental results showed that the accuracy of the principal component analysis method based on laser-induced breakdown spectroscopy is more than 97.14%, and that it can classify the different types effectively. Compared to commonly used chemical methods, laser-induced breakdown spectroscopy can analyze samples in situ and quickly, with little sample preparation; therefore, using the combination of LIBS and PCA in areas such as quality testing and on-line industrial control can save considerable time and cost, and greatly improve the efficiency of detection.
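The LIBS-PCA workflow described above, projecting each spectrum onto the three strongest principal components and then classifying in the 3-D score space, can be sketched as follows. The spectra here are synthetic stand-ins (each alloy "type" gets a characteristic set of emission-line channels), not real LIBS data, and nearest-centroid classification is an assumption for illustration; the paper does not specify its classifier.

```python
import numpy as np

rng = np.random.default_rng(42)
n_channels, n_types, per_type = 500, 4, 30

# Synthetic stand-in for LIBS spectra: each alloy type has its own set of
# strong emission-line channels, plus broadband measurement noise.
spectra, labels = [], []
for t in range(n_types):
    lines = rng.choice(n_channels, size=8, replace=False)
    for _ in range(per_type):
        s = rng.normal(0.0, 0.05, n_channels)
        s[lines] += rng.normal(1.0, 0.1, lines.size)
        spectra.append(s)
        labels.append(t)
X, y = np.array(spectra), np.array(labels)

# PCA via SVD: project every spectrum onto the three strongest components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T                      # principal-component scores

# Nearest-centroid classification in the 3-D score space
# (train on even-indexed spectra, test on odd-indexed ones).
train, test = slice(0, None, 2), slice(1, None, 2)
centroids = np.array([scores[train][y[train] == t].mean(axis=0)
                      for t in range(n_types)])
dists = np.linalg.norm(scores[test][:, None, :] - centroids, axis=2)
pred = np.argmin(dists, axis=1)
accuracy = (pred == y[test]).mean()
print(f"classification accuracy: {accuracy:.2f}")
```

Because the between-type differences dominate the spectral variance, the top three components capture the class structure and the score-space clusters mirror the "type zoning" the abstract describes.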

  3. DFT theoretical investigations of π-conjugated molecules based on thienopyrazine and different acceptor moieties for organic photovoltaic cells

    Directory of Open Access Journals (Sweden)

    Mohammed Bourass

    2016-09-01

    Full Text Available In this work, a theoretical study using the DFT method on eleven conjugated compounds based on thienopyrazine is reported. Different electron side groups were introduced to investigate their effects on the electronic structure; the HOMO, LUMO and gap energies of these compounds have been calculated and are reported in this paper. To our knowledge, a systematic theoretical study of such compounds has not been reported. Thus, our aim is, first, to explore their electronic and spectroscopic properties on the basis of DFT quantum chemical calculations. Second, we are interested in elucidating the parameters that influence photovoltaic efficiency, toward a better understanding of the structure-property relationships. The study of the structural, electronic and optical properties of these compounds could help in designing more efficient functional photovoltaic organic materials.

  4. Association of Trans-theoretical Model (TTM) based Exercise Behavior Change with Body Image Evaluation among Female Iranian Students

    Directory of Open Access Journals (Sweden)

    Sahar Rostami

    2017-03-01

    Full Text Available BackgroundBody image is a determinant of individual attractiveness and physical activity among the young people. This study was aimed to assess the association of Trans-theoretical model based exercise behavior change with body image evaluation among the female Iranian students.Materials and MethodsThis cross-sectional study was conducted in Sanandaj city, Iran in 2016. Using multistage sampling method, a total of 816 high school female students were included in the study. They completed a three-section questionnaire, including demographic information, Trans-theoretical model constructs and body image evaluation. The obtained data were fed into SPSS version 21.0.  ResultsThe results showed more than 60% of participants were in the pre-contemplation and contemplation stages of exercise behavior. The means of perceived self-efficacy, barriers and benefits were found to have a statistically significant difference during the stages of exercise behavior change (P

  5. Airway management education: simulation based training versus non-simulation based training-A systematic review and meta-analyses.

    Science.gov (United States)

    Sun, Yanxia; Pan, Chuxiong; Li, Tianzuo; Gan, Tong J

    2017-02-01

    Simulation-based training (SBT) has become a standard for medical education. However, the efficacy of simulation-based training in airway management education remains unclear. The aim of this study was to evaluate all published evidence comparing the effectiveness of SBT for airway management versus non-simulation-based training (NSBT) on learner and patient outcomes. A systematic review with meta-analyses was used. Data were derived from PubMed, EMBASE, CINAHL, Scopus, the Cochrane Controlled Trials Register and the Cochrane Database of Systematic Reviews from inception to May 2016. Published comparative trials that evaluated the effect of SBT on airway management training compared with NSBT were considered. Effect sizes with 95% confidence intervals (CI) were calculated for outcome measures. Seventeen eligible studies were included. SBT was associated with improved behavior performance [standardized mean difference (SMD): 0.30, 95% CI: 0.06 to 0.54] in comparison with NSBT. However, the benefits of SBT were not seen in time-skill (SMD: -0.13, 95% CI: -0.82 to 0.52), written examination score (SMD: 0.39, 95% CI: -0.09 to 0.86) or success rate of procedure completion on patients [relative risk (RR): 1.26, 95% CI: 0.96 to 1.66]. SBT may not be superior to NSBT for airway management training.
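The pooled effect sizes quoted above are standardized mean differences with 95% confidence intervals; the per-study computation can be sketched as below. The group means, standard deviations and sample sizes are hypothetical, and Hedges' small-sample correction with the usual large-sample variance approximation is one common choice, not necessarily what the authors used.

```python
import math

def smd_with_ci(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) with a 95% CI, using the
    pooled SD and the standard large-sample variance approximation."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                        # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)           # small-sample correction
    g = j * d
    se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Hypothetical single study: SBT vs NSBT group scores (invented numbers).
g, (lo, hi) = smd_with_ci(82.0, 10.0, 40, 79.0, 10.0, 40)
print(f"SMD = {g:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

With these invented numbers the point estimate is close to the paper's behavior-performance SMD of 0.30, but the single-study CI crosses zero, which is why pooling several studies (as the meta-analysis does) is needed to narrow the interval.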

  6. Vigas mistas de madeira de reflorestamento e bambu laminado colado: análise teórica e experimental Composite beams of reforestation wood and glue-laminated bamboo: theoretical and experimental analyses

    Directory of Open Access Journals (Sweden)

    Humberto C. Lima Júnior

    2001-12-01

    Full Text Available This paper presents and discusses the experimental and theoretical analyses of layered composite beams of wood strengthened with bamboo. The mechanical behavior of five composite beams and two wood beams was studied. The core of the composite beams was composed of Pinus elliottii and the reinforcement of bamboo Dendrocalamus giganteus; the bamboo reinforcement was placed on the top and at the bottom of the beams, and different thicknesses of reinforcement were investigated. Curves of load vs. displacement and load vs. strain are presented. An increase of about 50% in stiffness was observed when the composite beams are compared with the wood ones. Finally, a numerical study was carried out applying the finite element method, and good agreement was observed between the theoretical and experimental values.

  7. Theoretical orientation and therapists' attitudes to important components of therapy: a study based on the valuable elements in psychotherapy questionnaire.

    Science.gov (United States)

    Larsson, Billy P M; Kaldo, Viktor; Broberg, Anders G

    2010-01-01

    The authors describe the inception and subsequent testing of a questionnaire on attitudes regarding how psychotherapy ought to be pursued: the Valuable Elements in Psychotherapy Questionnaire (VEP-Q). A sample of 416 Swedish therapists (161 psychodynamic, 93 cognitive, 95 cognitive behavioral, and 67 integrative/eclectic) responded to the 17-item VEP-Q. A factor analysis of these items resulted in three subscales: PDT, CBT, and Common Factor, as validated by analyses of covariance. The internal consistency and test-retest reliability of the scales were excellent. In addition to theoretical orientation, variables such as gender and basic professional training influenced how respondents answered the VEP-Q. The authors conclude that the VEP-Q seems to be an appropriate instrument for describing similarities as well as differences among practitioners of various schools of psychotherapy.

  8. Interprofessional collaboration - a matter of differentiation and integration? Theoretical reflections based in the context of Norwegian childcare.

    Science.gov (United States)

    Willumsen, Elisabeth

    2008-08-01

    This paper presents a selection of theoretical approaches illuminating some aspects of interprofessional collaboration, which will be related to theory of contingency as well as to the concepts of differentiation and integration. Theories that describe collaboration on an interpersonal as well as inter-organizational level are outlined and related to dynamic and contextual factors. Implications for the organization of welfare services are elucidated and a categorization of internal and external collaborative forms is proposed. A reflection model is presented in order to analyse the degree of integration in collaborative work and may serve as an analytical tool for addressing the linkage between different levels of collaboration and identifying opportunities and limitations. Some implications related to the legal mandate(s) given to childcare agencies are discussed in relation to the context of childcare in Norway.

  9. Comparison based on energy and exergy analyses of the potential cogeneration efficiencies for fuel cells and other electricity generation devices

    Energy Technology Data Exchange (ETDEWEB)

    Rosen, M A [Ryerson Polytechnical Inst., Toronto, (CA). Dept. of Mechanical Engineering

    1990-01-01

    Comparisons of the potential cogeneration efficiencies are made, based on energy and exergy analyses, for several devices for electricity generation. The investigation considers several types of fuel cell system (Phosphoric Acid, Alkaline, Solid Polymer Electrolyte, Molten Carbonate and Solid Oxide), and several fossil-fuel and nuclear cogeneration systems based on steam power plants. In the analysis, each system is modelled as a device into which fuel and air enter, and from which electrical- and thermal-energy products and material and thermal-energy wastes exit. The results for all systems considered indicate that exergy analyses should be used when analysing the cogeneration potential of systems for electricity generation, because they weigh the usefulness of heat and electricity on an equivalent basis. Energy analyses tend to present overly optimistic views of performance. These findings are particularly significant when large fractions of the heat output from a system are utilized for cogeneration. (author).
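The central point above, that energy analysis weighs heat and work equally while exergy analysis discounts heat by a Carnot factor, can be shown with a back-of-envelope calculation. All numbers below are hypothetical, and fuel exergy is taken equal to fuel energy, a common approximation for fossil fuels rather than a value from the paper:

```python
# Cogeneration device producing electricity W and useful heat Q at
# temperature T from fuel input F (all values hypothetical).
T0, T = 298.15, 423.15   # dead-state and heat-delivery temperatures (K)
F = 100.0                # fuel energy ~ fuel exergy input (arbitrary units)
W, Q = 35.0, 45.0        # electricity and useful heat outputs

# Energy efficiency counts a joule of heat the same as a joule of work.
eta_energy = (W + Q) / F

# Exergy efficiency discounts heat by the Carnot factor (1 - T0/T),
# i.e. by the work it could at best deliver.
eta_exergy = (W + Q * (1 - T0 / T)) / F

print(f"energy efficiency: {eta_energy:.2f}")
print(f"exergy efficiency: {eta_exergy:.2f}")
```

The energy figure (0.80 here) looks far better than the exergy figure (about 0.48) because low-temperature heat carries much less work potential than its energy content, which is exactly the "overly optimistic" bias of energy analysis that the abstract highlights.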

  10. Theoretical predictions for hexagonal BN based nanomaterials as electrocatalysts for the oxygen reduction reaction.

    Science.gov (United States)

    Lyalin, Andrey; Nakayama, Akira; Uosaki, Kohei; Taketsugu, Tetsuya

    2013-02-28

    The catalytic activity for the oxygen reduction reaction (ORR) of both the pristine and defect-possessing hexagonal boron nitride (h-BN) monolayer and H-terminated nanoribbon have been studied theoretically using density functional theory. It is demonstrated that an inert h-BN monolayer can be functionalized and become catalytically active by nitrogen doping. It is shown that the energetics of adsorption of O₂, O, OH, OOH, and H₂O on N-atom impurities in the h-BN monolayer (N_B@h-BN) is quite similar to that known for a Pt(111) surface. The specific mechanism of destructive and cooperative adsorption of ORR intermediates on the surface point defects is discussed. It is demonstrated that accounting for entropy and zero-point energy (ZPE) corrections results in destabilization of the ORR intermediates adsorbed on N_B@h-BN, while solvent effects lead to their stabilization. Therefore, entropy, ZPE and solvent effects partly cancel each other and have to be taken into account simultaneously. Analysis of the free energy changes along the ORR pathway allows us to suggest that a N-doped h-BN monolayer can demonstrate catalytic properties for the ORR under the condition that electron transport to the catalytically active center is provided.

  11. Physical Protection System Design Analysis against Insider Threat based on Game Theoretic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyo-Nam; Suh, Young-A; Yim, Man-Sung [KAIST, Daejeon (Korea, Republic of); Schneider, Erich [The University of Texas, Austin (United States)

    2015-05-15

    This study explores the use of game-theoretic modeling of physical protection analysis by incorporating the implications of an insider threat. The defender-adversary interaction along with the inclusion of an insider is demonstrated using a simplified test case problem at an experimental fast reactor system. Non-detection probability and travel time are used as a baseline of physical protection parameters in this model. As one of the key features of the model is its ability to choose among security upgrades given the constraints of a budget, the study also performed cost benefit analysis for security upgrades options. In this study, we analyzed the expected adversarial path and security upgrades with a limited budget with insider threat modeled as increasing the non-detection probability. Our test case problem categorized three types of adversary paths assisted by the insider and derived the largest insider threat in terms of the budget for security upgrades. More work needs to be done to incorporate complex dimensions of insider threats, which include but are not limited to: a more realistic mapping of insider threat, accounting for information asymmetry between the adversary, insider, and defenders, and assignment of more pragmatic parameter values.
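The defender-adversary interaction described above can be caricatured in a few lines: the adversary picks the path with the highest overall non-detection probability, the insider is modeled (as in the paper) by raising segment non-detection probabilities, and the defender picks the affordable upgrade that most degrades the adversary's best path. The facility layout, probabilities, upgrade costs and budget below are all invented for illustration and are not the paper's test-case values.

```python
from math import prod

# Hypothetical facility: three adversary paths, each a list of per-segment
# non-detection probabilities (illustrative numbers only).
paths = {
    "gate-corridor-core": [0.3, 0.5, 0.2],
    "fence-hall-core":    [0.4, 0.4, 0.2],
    "dock-tunnel-core":   [0.2, 0.7, 0.3],
}

def adversary_best(paths, insider_boost=1.0):
    """Path maximizing overall non-detection probability; an insider is
    modeled by scaling (and capping) segment non-detection probabilities."""
    score = {k: prod(min(p * insider_boost, 1.0) for p in v)
             for k, v in paths.items()}
    best = max(score, key=score.get)
    return best, score[best]

def defender_upgrade(paths, upgrades, budget, insider_boost):
    """Choose the affordable upgrade that minimizes the adversary's best
    path. upgrades: name -> (path, segment index, new probability, cost)."""
    choices = [(None, adversary_best(paths, insider_boost)[1])]
    for name, (path, i, new_p, cost) in upgrades.items():
        if cost > budget:
            continue
        trial = {k: list(v) for k, v in paths.items()}
        trial[path][i] = new_p
        choices.append((name, adversary_best(trial, insider_boost)[1]))
    return min(choices, key=lambda c: c[1])

upgrades = {"tunnel camera": ("dock-tunnel-core", 1, 0.2, 80),
            "dock sensor":   ("dock-tunnel-core", 0, 0.1, 120)}
print(adversary_best(paths))                                  # no insider
print(adversary_best(paths, insider_boost=1.5))               # with insider
print(defender_upgrade(paths, upgrades, budget=100, insider_boost=1.5))
```

Even in this toy version the paper's qualitative points appear: the insider shifts which segments matter (a boosted segment can saturate at certain detection evasion), and the budget constraint forces the defender to settle for the cheaper upgrade even when a costlier one would cut the adversary's best path further.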

  12. Theoretical investigation of dielectric corona pre-ionization TEA nitrogen laser based on transmission line method

    Science.gov (United States)

    Bahrampour, Alireza; Fallah, Robabeh; Ganjovi, Alireza A.; Bahrampour, Abolfazl

    2007-07-01

    This paper models the dielectric corona pre-ionization, capacitor-transfer type of flat-plane transmission line travelling-wave transverse-excited atmospheric pressure (TEA) nitrogen laser with a non-linear lumped RLC electric circuit. The flat-plane transmission line and the pre-ionizer dielectric are modeled by a lumped linear RLC circuit and a time-dependent non-linear RC circuit, respectively. The main discharge region is treated as a time-dependent non-linear RLC circuit whose resistance also depends on the radiated pre-ionization ultraviolet (UV) intensity; the UV radiation is emitted by the surface plasma on the pre-ionizer dielectric. The electric circuit equations (including the ionization rate equations), the population density equations for the laser levels, and the propagation equation for the laser intensity are solved numerically. The theoretical predictions are in very good agreement with the experimental observations. As a result, the effects of the pre-ionizer dielectric parameters on the electrical behavior and the output laser intensity are obtained.
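
    As a hedged illustration of the lumped-circuit approach (not the paper's actual model, which couples non-linear, UV-dependent elements), a single linear series RLC discharge loop can be stepped explicitly in time; the component values below are invented.

```python
# A minimal sketch, not the paper's model: one linear series RLC loop
# discharging a capacitor, stepped with semi-implicit Euler. Values invented.

def simulate_rlc(R, L, C, V0, dt=1e-9, steps=2000):
    """Return the loop-current trace of a series RLC discharge from V0."""
    q = C * V0        # initial capacitor charge
    i = 0.0           # initial loop current
    trace = []
    for _ in range(steps):
        di = (-R * i - q / C) / L   # Kirchhoff voltage law around the loop
        i += di * dt
        q += i * dt                 # update charge with the new current
        trace.append(i)
    return trace

trace = simulate_rlc(R=0.5, L=100e-9, C=10e-9, V0=20e3)  # invented values
peak_current = max(abs(x) for x in trace)
```

    The semi-implicit update (charge advanced with the already-updated current) keeps the oscillation energy bounded, which a naive forward-Euler step would not.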

  13. Physical Protection System Design Analysis against Insider Threat based on Game Theoretic Modeling

    International Nuclear Information System (INIS)

    Kim, Kyo-Nam; Suh, Young-A; Yim, Man-Sung; Schneider, Erich

    2015-01-01

    This study explores the use of game-theoretic modeling of physical protection analysis by incorporating the implications of an insider threat. The defender-adversary interaction along with the inclusion of an insider is demonstrated using a simplified test case problem at an experimental fast reactor system. Non-detection probability and travel time are used as a baseline of physical protection parameters in this model. As one of the key features of the model is its ability to choose among security upgrades given the constraints of a budget, the study also performed cost benefit analysis for security upgrades options. In this study, we analyzed the expected adversarial path and security upgrades with a limited budget with insider threat modeled as increasing the non-detection probability. Our test case problem categorized three types of adversary paths assisted by the insider and derived the largest insider threat in terms of the budget for security upgrades. More work needs to be done to incorporate complex dimensions of insider threats, which include but are not limited to: a more realistic mapping of insider threat, accounting for information asymmetry between the adversary, insider, and defenders, and assignment of more pragmatic parameter values.

  14. Teleseism-based Relative Time Corrections for Modern Analyses of Digitized Analog Seismograms

    Science.gov (United States)

    Lee, T. A.; Ishii, M.

    2017-12-01

    With modern-day instruments and seismic networks timed by GPS systems, synchronization of data streams is all but a forgone conclusion. However, during the analog era, when each station had its own clock, comparing data timing from different stations was a far more daunting prospect. Today, with recently developed methods by which analog data can be digitized, having the ability to accurately reconcile the timings of two separate stations would open decades worth of data to modern analyses. For example, one possible and exciting application would be using noise interferometry with digitized analog data in order to investigate changing structural features (on a volcano for example) over a much longer timescale than was previously possible. With this in mind, we introduce a new approach to sync time between stations based on teleseismic arrivals. P-wave arrivals are identified at stations for pairs of earthquakes from the digital and analog eras that have nearly identical distances, locations, and depths. Assuming accurate timing of the modern data, relative time corrections between a pair of stations can then be inferred for the analog data. This method for time correction depends upon the analog stations having modern equivalents, and both having sufficiently long durations of operation to allow for recording of usable teleseismic events. The Hawaii Volcano Observatory (HVO) network is an especially ideal environment for this, as it not only has a large and well-preserved collection of analog seismograms, but also has a long operating history (1912-present) with many of the older stations having modern equivalents. As such, the scope of this project is to calculate and apply relative time corrections to analog data from two HVO stations, HILB (1919-present) and UWE (1928-present) (HILB now part of the Pacific Tsunami network). Further application of this method could be for investigation of the effects of relative clock-drift, that is, the determining factor for how
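
    The core timing idea can be sketched in a few lines. Everything below is illustrative: the pick times are invented, and a real analysis would average over many matched event pairs.

```python
# Hypothetical sketch of the relative time correction: for a matched pair of
# earthquakes (near-identical distance, location, depth), the inter-station
# P-arrival difference should agree between eras; any analog-era discrepancy
# is the relative clock error between the analog station clocks. Picks invented.

def relative_clock_error(analog_picks, modern_picks, sta_a, sta_b):
    """Analog-era clock offset of sta_b relative to sta_a, in seconds."""
    analog_diff = analog_picks[sta_b] - analog_picks[sta_a]
    modern_diff = modern_picks[sta_b] - modern_picks[sta_a]  # GPS-timed reference
    return analog_diff - modern_diff

analog = {"HILB": 12.40, "UWE": 15.10}  # invented P picks, seconds past the minute
modern = {"HILB": 12.40, "UWE": 14.85}  # matched modern event, GPS timing

offset = relative_clock_error(analog, modern, "HILB", "UWE")
```

    A positive offset here would mean the second station's analog clock reads late relative to the first; tracking the offset across event pairs through time is what reveals clock drift.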

  15. Molecular Characterization of Five Potyviruses Infecting Korean Sweet Potatoes Based on Analyses of Complete Genome Sequences

    Directory of Open Access Journals (Sweden)

    Hae-Ryun Kwak

    2015-12-01

    Full Text Available Sweet potatoes (Ipomoea batatas L.) are grown extensively in tropical and temperate regions and are important food crops worldwide. In Korea, potyviruses, including Sweet potato feathery mottle virus (SPFMV), Sweet potato virus C (SPVC), Sweet potato virus G (SPVG), Sweet potato virus 2 (SPV2), and Sweet potato latent virus (SPLV), have been detected in sweet potato fields at a high (~95%) incidence. In the present work, complete genome sequences of 18 isolates, representing the five potyviruses mentioned above, were compared with previously reported genome sequences. The complete genomes consisted of 10,081 to 10,830 nucleotides, excluding the poly-A tails. Their genomic organizations were typical of the genus Potyvirus, with a single large open reading frame coding for a putative polyprotein. Based on phylogenetic analyses and sequence comparisons, the Korean SPFMV isolates belonged to the strains RC and O with >98% nucleotide sequence identity. Korean SPVC isolates had 99% identity to the Japanese isolate SPVC-Bungo and 70% identity to the SPFMV isolates. The Korean SPVG isolates showed 99% identity to the three previously reported SPVG isolates. Korean SPV2 isolates had 97% identity to the SPV2 GWB-2 isolate from the USA. Korean SPLV isolates had a relatively low (88%) nucleotide sequence identity with the Taiwanese SPLV-TW isolates, and they were phylogenetically distantly related to the SPFMV isolates. Recombination analysis revealed possible recombination events in the P1, HC-Pro and NIa-NIb regions of SPFMV and SPLV isolates, and these regions were identified as hotspots for recombination in the sweet potato potyviruses.
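
    The pairwise identity figures reported above reduce, in essence, to counting matching positions between aligned sequences. A minimal sketch with toy sequences (not real viral data; real comparisons run over full aligned genomes):

```python
# Minimal sketch of pairwise nucleotide identity between two sequences that
# are already aligned to the same length. Toy sequences, invented for
# illustration; gaps and ambiguity codes are ignored here.

def percent_identity(seq1, seq2):
    """Percent of identical positions between two equal-length sequences."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return 100.0 * matches / len(seq1)

a = "ATGGCGTACGTTAGCCTAGA"
b = "ATGGCGTACGATAGCCTAGA"  # one substitution out of 20 positions
identity = percent_identity(a, b)  # 19/20 matches -> 95.0
```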

  16. Choosing where to work at work - towards a theoretical model of benefits and risks of activity-based flexible offices.

    Science.gov (United States)

    Wohlers, Christina; Hertel, Guido

    2017-04-01

    Although there is a trend in today's organisations to implement activity-based flexible offices (A-FOs), only a few studies examine consequences of this new office type. Moreover, the underlying mechanisms why A-FOs might lead to different consequences as compared to cellular and open-plan offices are still unclear. This paper introduces a theoretical framework explaining benefits and risks of A-FOs based on theories from work and organisational psychology. After deriving working conditions specific for A-FOs (territoriality, autonomy, privacy, proximity and visibility), differences in working conditions between A-FOs and alternative office types are proposed. Further, we suggest how these differences in working conditions might affect work-related consequences such as well-being, satisfaction, motivation and performance on the individual, the team and the organisational level. Finally, we consider task-related (e.g. task variety), person-related (e.g. personality) and organisational (e.g. leadership) moderators. Based on this model, future research directions as well as practical implications are discussed. Practitioner Summary: Activity-based flexible offices (A-FOs) are popular in today's organisations. This article presents a theoretical model explaining why and when working in an A-FO evokes benefits and risks for individuals, teams and organisations. According to the model, A-FOs are beneficial when management encourages employees to use the environment appropriately and supports teams.

  17. Optimization of a Centrifugal Boiler Circulating Pump's Casing Based on CFD and FEM Analyses

    Directory of Open Access Journals (Sweden)

    Zhigang Zuo

    2014-04-01

    Full Text Available From the manufacturer's point of view, it is important to evaluate the economic efficiency of boiler circulating pumps in the manufacturing process. The possibility of optimizing the pump casing with respect to structural pressure integrity and hydraulic performance was discussed. CFD analyses of pump models with different casing sizes were first carried out to evaluate hydraulic performance, and the effects of the working temperature and the sealing ring on the hydraulic efficiency were discussed. A model with a casing diameter of 0.875D40 was selected for further analyses. FEM analyses were then carried out on different combinations of casing size, casing wall thickness and material, to evaluate pressure-integrity safety with respect to both static and fatigue strength. Two models, with forging and cast materials, were selected as final results.

  18. Development and Validation of a Theoretically Based, Multidimensional Questionnaire of Student Evaluation of University Teaching

    Science.gov (United States)

    Lemos, M. S.; Queiros, C.; Teixeira, P. M.; Menezes, I.

    2011-01-01

    The authors describe the development and validation of a multidimensional instrument of students' evaluation of university teaching (the Pedagogical Questionnaire of the University of Porto). The goal was to develop an instrument based on a sound psychometric analysis and simultaneously supported by the learning theory. Based on the data from 4875…

  19. Consumers’ Acceptance and Use of Information and Communications Technology: A UTAUT and Flow Based Theoretical Model

    Directory of Open Access Journals (Sweden)

    Saleh Alwahaishi

    2013-03-01

    Full Text Available The world has changed a lot in the past years. Rapid advances in technology and changing communication channels have changed the way people work and, for many, where they work from. The Internet and mobile technology, the two most dynamic technological forces in modern information and communications technology (ICT), are converging into one ubiquitous mobile Internet service, which will change the way we do business as well as how we deal with our daily routine activities. As the use of ICT expands globally, there is a need for further research into the cultural aspects and implications of ICT. The acceptance of information technology (IT) has become a fundamental part of the research plan for most organizations (Igbaria, 1993). In IT research, numerous theories are used to understand users' adoption of new technologies. Various models have been developed, including the Technology Acceptance Model, the Theory of Reasoned Action, the Theory of Planned Behavior and, recently, the Unified Theory of Acceptance and Use of Technology (UTAUT). Each of these models seeks to identify the factors that influence a citizen's intention or actual use of information technology. Drawing on the UTAUT model and Flow Theory, this research composes a new hybrid theoretical framework to identify the factors affecting the acceptance and use of the mobile Internet, as an ICT application, in a consumer context. The proposed model incorporates eight constructs: Performance Expectancy, Effort Expectancy, Facilitating Conditions, Social Influences, Perceived Value, Perceived Playfulness, Attention Focus, and Behavioral Intention. Data collected online from 238 respondents in Saudi Arabia were tested against the research model using the structural equation modeling approach. The proposed model was mostly supported by the empirical data. The findings of this study provide several crucial implications for ICT and, in particular, mobile Internet service practitioners and researchers.

  20. A theoretical investigation of spectra utilization for a CMOS based indirect detector for dual energy applications

    International Nuclear Information System (INIS)

    Kalyvas, N; Michail, C; Valais, I; Kandarakis, I; Fountos, G; Martini, N; Koukou, V; Sotiropoulou, P

    2015-01-01

    Dual energy imaging is a promising method for visualizing masses and microcalcifications in digital mammography, and currently available commercial detectors may be suitable for dual energy mammographic applications. The scope of this work was to theoretically examine the performance of the RadEye CMOS digital indirect detector under three low- and high-energy spectral pairs. The detector was modeled through linear system theory. The pixel size was 22.5 μm and the phosphor material of the detector was a 33.9 mg/cm2 Gd2O2S:Tb phosphor screen. The examined spectral pairs were (i) 40 kV W/Ag (0.01 cm) and 70 kV W/Cu (0.1 cm) target/filter combinations, (ii) 40 kV W/Cd (0.013 cm) and 70 kV W/Cu (0.1 cm) target/filter combinations, and (iii) 40 kV W/Pd (0.008 cm) and 70 kV W/Cu (0.1 cm) target/filter combinations. For each combination, the detective quantum efficiency (DQE), showing the signal-to-noise ratio transfer, the detector optical gain (DOG), showing the sensitivity of the detector, and the coefficient of variation (CV) of the detector output signal were calculated. The second combination exhibited slightly higher DOG (326 photons per X-ray) and lower CV (0.755%) values. In terms of electron output from the RadEye CMOS, the first two combinations demonstrated comparable DQE values; however, the second combination provided an increase of 6.5% in the electron output. (paper)
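
    Of the three figures of merit, DOG and CV are simple scalar summaries and can be sketched directly (the full DQE calculation needs the linear-systems cascade and is omitted). The numbers below are invented placeholders, not the RadEye results.

```python
# Hedged sketch of two of the figures of merit named above; all values are
# invented placeholders. DOG = mean optical photons emitted per absorbed
# X-ray; CV = relative spread of the detector output signal.

import statistics

def coefficient_of_variation(signal):
    """CV (%) = standard deviation over mean of the output signal."""
    return 100.0 * statistics.pstdev(signal) / statistics.mean(signal)

def detector_optical_gain(optical_photons_out, xrays_absorbed):
    """Mean optical photons emitted per absorbed X-ray."""
    return optical_photons_out / xrays_absorbed

signal = [1010, 990, 1005, 995, 1000]       # hypothetical pixel outputs
cv = coefficient_of_variation(signal)       # ~0.707 %
dog = detector_optical_gain(3.26e6, 1.0e4)  # hypothetical photon counts
```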

  1. Theoretical Interpretation of Modular Artistic Forms Based on the Example of Contemporary Lithuanian Architecture

    Directory of Open Access Journals (Sweden)

    Aušra Černauskienė

    2015-05-01

    Full Text Available The article analyses modular artistic forms that emerge in all scale structures of contemporary architecture. The module, as a standard unit of measure, has been in use since antiquity, and it gained even more significance amid the innovative building and computing technologies of the 20th and 21st centuries. Static and fixed perceptions of the module have been supplemented with concepts of dynamic and adaptable modular units, such as fractals, parameters and algorithms. Various expressions and trends of modular design appear in the contemporary architecture of Lithuania, where modular forms consist of repetitive spatial and planar elements. Spatial modules, such as blocks or flats, and planar modular wall elements are a characteristic expression of contemporary architecture in Lithuania.

  2. N,N′-Dipyridoxyl(1,8-diamino-3,6-dioxaoctane) Schiff-base: Synthesis, experimental and theoretical identification

    Directory of Open Access Journals (Sweden)

    M.J. Khoshkholgh

    2013-05-01

    Full Text Available The N,N′-dipyridoxyl(1,8-diamino-3,6-dioxaoctane) (=H2L) Schiff-base has been synthesized and characterized by IR, 1H NMR, mass spectrometry and elemental analysis. Its optimized geometry and theoretical vibrational frequencies have been computed using the density functional theory (DFT) method via the B3LYP functional. Its 1H and 13C NMR chemical shifts have also been calculated at the same computational level. In the optimized geometry of H2L, the two pyridine rings are perpendicular to each other, and the phenolic hydrogens are engaged in intramolecular hydrogen bonds with the azomethine nitrogens.

  3. Simultaneous acquisition of dual analyser-based phase contrast X-ray images for small animal imaging

    International Nuclear Information System (INIS)

    Kitchen, Marcus J.; Pavlov, Konstantin M.; Hooper, Stuart B.; Vine, David J.; Siu, Karen K.W.; Wallace, Megan J.; Siew, Melissa L.L.; Yagi, Naoto; Uesugi, Kentaro; Lewis, Rob A.

    2008-01-01

    Analyser-based phase contrast X-ray imaging can provide high-contrast images of biological tissues with exquisite sensitivity to the boundaries between tissues. The phase and absorption information can be extracted by processing multiple images acquired at different analyser orientations. Recording both the transmitted and diffracted beams from a thin Laue analyser crystal can make phase retrieval possible for dynamic systems by allowing full field imaging. This technique was used to image the thorax of a mechanically ventilated newborn rabbit pup using a 25 keV beam from the SPring-8 synchrotron radiation facility. The diffracted image was produced from the (1 1 1) planes of a 50 mm x 40 mm, 100 μm thick Si analyser crystal in the Laue geometry. The beam and analyser were large enough to image the entire chest, making it possible to observe changes in anatomy with high contrast and spatial resolution

  4. Simultaneous acquisition of dual analyser-based phase contrast X-ray images for small animal imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kitchen, Marcus J. [School of Physics, Monash University, Victoria 3800 (Australia)], E-mail: Marcus.Kitchen@sci.monash.edu.au; Pavlov, Konstantin M. [School of Physics, Monash University, Victoria 3800 (Australia); Monash Centre for Synchrotron Science, Monash University, Victoria 3800 (Australia); Physics and Electronics, School of Science and Technology, University of New England, NSW 2351 (Australia)], E-mail: Konstantin.Pavlov@sci.monash.edu.au; Hooper, Stuart B. [Department of Physiology, Monash University, Victoria 3800 (Australia)], E-mail: Stuart.Hooper@med.monash.edu.au; Vine, David J. [School of Physics, Monash University, Victoria 3800 (Australia)], E-mail: David.Vine@sci.monash.edu.au; Siu, Karen K.W. [School of Physics, Monash University, Victoria 3800 (Australia); Monash Centre for Synchrotron Science, Monash University, Victoria 3800 (Australia)], E-mail: Karen.Siu@sci.monash.edu.au; Wallace, Megan J. [Department of Physiology, Monash University, Victoria 3800 (Australia)], E-mail: Megan.Wallace@med.monash.edu.au; Siew, Melissa L.L. [Department of Physiology, Monash University, Victoria 3800 (Australia)], E-mail: Melissa.Siew@med.monash.edu.au; Yagi, Naoto [SPring-8/JASRI, Sayo (Japan)], E-mail: yagi@spring8.or.jp; Uesugi, Kentaro [SPring-8/JASRI, Sayo (Japan)], E-mail: ueken@spring8.or.jp; Lewis, Rob A. [School of Physics, Monash University, Victoria 3800 (Australia); Monash Centre for Synchrotron Science, Monash University, Victoria 3800 (Australia)], E-mail: Rob.Lewis@sync.monash.edu.au

    2008-12-15

    Analyser-based phase contrast X-ray imaging can provide high-contrast images of biological tissues with exquisite sensitivity to the boundaries between tissues. The phase and absorption information can be extracted by processing multiple images acquired at different analyser orientations. Recording both the transmitted and diffracted beams from a thin Laue analyser crystal can make phase retrieval possible for dynamic systems by allowing full field imaging. This technique was used to image the thorax of a mechanically ventilated newborn rabbit pup using a 25 keV beam from the SPring-8 synchrotron radiation facility. The diffracted image was produced from the (1 1 1) planes of a 50 mm x 40 mm, 100 {mu}m thick Si analyser crystal in the Laue geometry. The beam and analyser were large enough to image the entire chest, making it possible to observe changes in anatomy with high contrast and spatial resolution.

  5. DNA bases assembled on the Au(110)/electrolyte interface: A combined experimental and theoretical study

    DEFF Research Database (Denmark)

    Salvatore, Princia; Nazmutdinov, Renat R.; Ulstrup, Jens

    2015-01-01

    Among the low-index single-crystal gold surfaces, the Au(110) surface is the most active toward molecular adsorption and the one with the fewest electrochemical adsorption data reported. Cyclic voltammetry (CV), electrochemically controlled scanning tunneling microscopy (EC-STM), and density functional theory (DFT) …, accompanied by a pair of strong voltammetry peaks in the double-layer region in acid solutions. Adsorption of the DNA bases gives featureless voltammograms with lower double-layer capacitance, suggesting that all the bases are chemisorbed on the Au(110) surface. Further investigation of the surface structures of the adlayers of the four DNA bases by EC-STM disclosed lifting of the Au(110) reconstruction, specific molecular packing in dense monolayers, and pH dependence of the A and G adsorption. DFT computations based on a cluster model for the Au(110) surface were performed to investigate the adsorption energy …

  6. CONVEC: a computer program for transient incompressible fluid flow based on quadratic finite elements. Part 1: theoretical aspects

    International Nuclear Information System (INIS)

    Laval, H.

    1981-01-01

    This report describes the theoretical and numerical aspects of the finite element computer code CONVEC, designed for the transient analysis of two-dimensional plane or three-dimensional axisymmetric incompressible flows including the effects of heat transfer. The governing equations for this class of problems are the time-dependent incompressible Navier-Stokes equations and the thermal energy equation. The general class of flow problems analysed by CONVEC is discussed and the equations for the initial-boundary value problem are presented. A brief description of the finite element method and the weighted residual formulation is given. The numerical solution of the incompressible equations is achieved by a fractional step method. The mass lumping process associated with an explicit time integration scheme is described. The time integration is analysed and the stability conditions are derived. Numerical applications are presented: standard problems of natural and forced convection are solved and the solutions obtained are compared with other numerical solutions published in the literature.
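
    The stability constraint that comes with an explicit time integration scheme can be illustrated on the simplest related problem, 1-D diffusion. This is only a sketch of the general issue, not CONVEC's fractional step scheme; grid and values are invented.

```python
# Hedged 1-D illustration of explicit-scheme stability (not CONVEC itself):
# forward-Euler diffusion is stable only when dt <= dx**2 / (2 * nu).
# One step of the update with a safe dt; all values invented.

def explicit_diffusion_step(u, nu, dx, dt):
    """One forward-Euler step of du/dt = nu * d2u/dx2, fixed endpoints."""
    assert dt <= dx * dx / (2.0 * nu), "explicit stability limit violated"
    return [u[0]] + [
        u[i] + nu * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

u0 = [0.0, 0.0, 1.0, 0.0, 0.0]  # initial hot spot on a 5-point grid
u1 = explicit_diffusion_step(u0, nu=1.0, dx=0.1, dt=0.004)
```

    With dt above the limit the same update amplifies grid-scale oscillations instead of smoothing them, which is why explicit schemes pair a stability analysis with the time-step choice.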

  7. A communication theoretical analysis of FRET-based mobile ad hoc molecular nanonetworks.

    Science.gov (United States)

    Kuscu, Murat; Akan, Ozgur B

    2014-09-01

    Nanonetworks refer to groups of nanosized machines with very basic operational capabilities that communicate with each other in order to accomplish more complex tasks, such as in-body drug delivery or chemical defense. Realizing reliable and high-rate communication between these nanomachines is a fundamental problem for the practicality of such nanonetworks. Recently, we proposed a molecular communication method based on Förster resonance energy transfer (FRET), a nonradiative excited-state energy transfer phenomenon observed among fluorescent molecules, i.e., fluorophores. We modeled the FRET-based communication channel, considering the fluorophores as single-molecule immobile nanomachines, and showed its reliability at high rates and its practicality at the current stage of nanotechnology. In this study, for the first time in the literature, we investigate networks of mobile nanomachines communicating through FRET. We introduce two novel mobile molecular nanonetworks: the FRET-based mobile molecular sensor/actor nanonetwork (FRET-MSAN), a distributed system of mobile fluorophores acting as sensor or actor nodes, and the FRET-based mobile ad hoc molecular nanonetwork (FRET-MAMNET), which consists of fluorophore-based nanotransmitters, nanoreceivers and nanorelays. We model the propagation of a single message as a birth-death process with continuous-time Markov chains. We evaluate the performance of FRET-MSAN and FRET-MAMNET in terms of successful transmission probability and mean extinction time of the messages, system throughput, channel capacity and achievable communication rates.
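
    A toy version of the birth-death propagation idea can be simulated with a Gillespie-style loop. All rates and the relay count below are invented; the sketch only illustrates how successful transmission probability emerges from the race between hopping and decay.

```python
# Toy Gillespie-style sketch (rates and relay count invented, not from the
# paper): an excitation either hops one node closer to the receiver (rate
# k_hop) or decays spontaneously and the message goes extinct (rate k_decay).

import random

def simulate_message(n_relays, k_hop, k_decay, rng):
    """Return (delivered, elapsed_time) for one message propagation."""
    t, hops = 0.0, 0
    while hops <= n_relays:               # needs n_relays + 1 hops to arrive
        total = k_hop + k_decay
        t += rng.expovariate(total)       # waiting time to the next event
        if rng.random() < k_hop / total:  # the hop wins the race
            hops += 1
        else:                             # decay wins: extinction
            return False, t
    return True, t

rng = random.Random(42)
trials = [simulate_message(3, k_hop=5.0, k_decay=1.0, rng=rng)
          for _ in range(5000)]
p_success = sum(ok for ok, _ in trials) / len(trials)
# Analytically, p = (k_hop / (k_hop + k_decay)) ** (n_relays + 1)
#               = (5/6) ** 4, about 0.48 for these invented rates.
```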

  8. SeeSway - A free web-based system for analysing and exploring standing balance data.

    Science.gov (United States)

    Clark, Ross A; Pua, Yong-Hao

    2018-06-01

    Computerised posturography can be used to assess standing balance and can predict poor functional outcomes in many clinical populations. A key limitation is the disparate signal filtering and analysis techniques, with many methods requiring custom computer programs. This paper discusses the creation of a freely available web-based software program, SeeSway (www.rehabtools.org/seesway), which was designed to provide powerful tools for pre-processing, analysing and visualising standing balance data in an easy-to-use, platform-independent website. SeeSway links an interactive web platform with file upload capability to software systems including LabVIEW, Matlab, Python and R to perform the data filtering, analysis and visualisation of standing balance data. Input data can consist of any signal that comprises an anterior-posterior and medial-lateral coordinate trace, such as center of pressure or mass displacement. This allows it to be used with systems including criterion-reference commercial force platforms and three-dimensional motion analysis, smartphones, accelerometers and low-cost technology such as the Nintendo Wii Balance Board and Microsoft Kinect. Filtering options include Butterworth, weighted and unweighted moving average, and discrete wavelet transforms. Analysis methods include standard techniques such as path length, amplitude and root mean square, in addition to less common but potentially promising methods such as sample entropy, detrended fluctuation analysis and multiresolution wavelet analysis. These data are visualised using scalograms, which chart the change in frequency content over time, scatterplots and standard line charts. This provides the user with a detailed understanding of their results and of how their different pre-processing and analysis method selections affect their findings. An example of the data analysis techniques is provided in the paper, with graphical representation of how advanced analysis methods can better discriminate
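
    Two of the standard measures named above, path length and root-mean-square amplitude, can be sketched directly on a centre-of-pressure (COP) trace. The coordinates below are invented toy samples, not output from SeeSway.

```python
# Minimal sketch of two standard posturography measures on a toy COP trace
# (invented coordinates, metres): total sway path length and RMS amplitude.

import math

def path_length(ap, ml):
    """Total distance travelled by the COP trajectory."""
    return sum(math.hypot(ap[i + 1] - ap[i], ml[i + 1] - ml[i])
               for i in range(len(ap) - 1))

def rms_amplitude(x):
    """Root-mean-square displacement about the mean position."""
    mean = sum(x) / len(x)
    return math.sqrt(sum((v - mean) ** 2 for v in x) / len(x))

ap = [0.00, 0.01, 0.03, 0.02, 0.00]   # anterior-posterior samples
ml = [0.00, 0.00, -0.01, 0.01, 0.00]  # medial-lateral samples
sway_path = path_length(ap, ml)
ap_rms = rms_amplitude(ap)
```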

  9. New insights into survival trend analyses in cancer population-based studies: the SUDCAN methodology.

    Science.gov (United States)

    Uhry, Zoé; Bossard, Nadine; Remontet, Laurent; Iwaz, Jean; Roche, Laurent

    2017-01-01

    The main objective of the SUDCAN study was to compare, for 15 cancer sites, the trends in net survival and excess mortality rates from cancer 5 years after diagnosis between six European Latin countries (Belgium, France, Italy, Portugal, Spain and Switzerland). The data were extracted from the EUROCARE-5 database. The study period ranged from 6 (Portugal, 2000-2005) to 18 years (Switzerland, 1989-2007). Trend analyses were carried out separately for each country and cancer site; the number of cases ranged from 1500 to 104 000 cases. We developed an original flexible excess rate modelling strategy that accounts for the continuous effects of age, year of diagnosis, time since diagnosis and their interactions. Nineteen models were constructed; they differed in the modelling of the effect of the year of diagnosis in terms of linearity, proportionality and interaction with age. The final model was chosen according to the Akaike Information Criterion. The fit was assessed graphically by comparing model estimates versus nonparametric (Pohar-Perme) net survival estimates. Out of the 90 analyses carried out, the effect of the year of diagnosis on the excess mortality rate depended on age in 61 and was nonproportional in 64; it was nonlinear in 27 out of the 75 analyses where this effect was considered. The model fit was overall satisfactory. We analysed successfully 15 cancer sites in six countries. The refined methodology proved necessary for detailed trend analyses. It is hoped that three-dimensional parametric modelling will be used more widely in net survival trend studies as it has major advantages over stratified analyses.
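
    The model-selection step described above can be sketched independently of the survival modelling itself: compute AIC = 2k − 2·logL for each candidate fit and keep the smallest. The candidate names and log-likelihoods below are invented.

```python
# Hedged sketch of AIC-based selection among candidate fits; the candidate
# names, log-likelihoods and parameter counts are invented placeholders,
# not the 19 SUDCAN model specifications.

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2 * maximised log-likelihood."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits: (name, maximised log-likelihood, number of parameters)
candidates = [
    ("linear, proportional",       -1520.4,  8),
    ("nonlinear, proportional",    -1516.9, 11),
    ("nonlinear, nonproportional", -1514.2, 15),
]

best = min(candidates, key=lambda m: aic(m[1], m[2]))
```

    Note how the richest model does not automatically win: its likelihood gain must outweigh the 2-per-parameter penalty.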

  10. EVIDENCE-BASED ASSESSMENT OF VOICE DISORDERS: A THEORETICAL OVERVIEW AND MODEL

    Directory of Open Access Journals (Sweden)

    Dobrinka GEORGIEVA

    2011-04-01

    Full Text Available This article deals with the current paradigm of evidence-based practice in speech therapy (speech-language pathology), especially evidence-based assessment of voice disorders. One of the main goals of the article is to define voice disorders according to the World Health Organization's multidimensional ICF concept. Using a comparative method, the study attempts to show that, traditionally, the assessment of voice disorders in speech therapy has been based largely on the speech therapist's point of view and never on the client's position. The article argues for establishing and adopting definitive gold standards for voice assessment and therapy in Bulgaria.

  11. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    KAUST Repository

    Pan, B.; Wang, Bo; Lubineau, Gilles

    2016-01-01

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances

  12. Cross Cultural Differences in Managers’ Support for Home-based Telework : A Theoretical Elaboration

    NARCIS (Netherlands)

    Peters, Pascale; Dulk, Laura den

    2003-01-01

    Home-based telework is one of the arrangements organizations can introduce to facilitate a better balance between employees’ professional and private lives. This article focuses on the question of under what conditions managers grant a subordinate’s request to telework and what role national

  13. Theoretical treatment of molecular photoionization based on the R-matrix method

    International Nuclear Information System (INIS)

    Tashiro, Motomichi

    2012-01-01

    The R-matrix method was implemented to treat the molecular photoionization problem, based on the UK R-matrix codes. Although the method was formulated long ago, its application has been mostly limited to the photoionization of atoms. Application of the method to valence as well as inner-shell photoionization processes will be presented.

  14. Development of a Theoretically Based Treatment for Sentence Comprehension Deficits in Individuals with Aphasia

    Science.gov (United States)

    Kiran, Swathi; Caplan, David; Sandberg, Chaleece; Levy, Joshua; Berardino, Alex; Ascenso, Elsa; Villard, Sarah; Tripodis, Yorghos

    2012-01-01

    Purpose: Two new treatments, 1 based on sentence to picture matching (SPM) and the other on object manipulation (OM), that train participants on the thematic roles of sentences using pictures or by manipulating objects were piloted. Method: Using a single-subject multiple-baseline design, sentence comprehension was trained on the affected sentence…

  15. Community-Based Urban Teacher Education: Theoretical Frameworks and Practical Considerations for Developing Promising Practices

    Science.gov (United States)

    Noel, Jana

    2016-01-01

    Traditional campus-based teacher education programs, located on college or university campuses, have been criticized for being removed from the "real world" of community life, and a number of programs have moved directly into urban communities in order for preservice teachers to become immersed in the life of the community. This article…

  16. Theoretical evaluation on selective adsorption characteristics of alkali metal-based sorbents for gaseous oxidized mercury.

    Science.gov (United States)

    Tang, Hongjian; Duan, Yufeng; Zhu, Chun; Cai, Tianyi; Li, Chunfeng; Cai, Liang

    2017-10-01

    Alkali metal-based sorbents are promising for selective adsorption of oxidized mercury (Hg2+) but show hardly any effect on elemental mercury (Hg0) in flue gas. Density functional theory (DFT) was employed to investigate the Hg0 and HgCl2 adsorption mechanisms over alkali metal-based sorbents, including calcium oxide (CaO), magnesium oxide (MgO), potassium chloride (KCl) and sodium chloride (NaCl). Hg0 was found to interact only weakly with the CaO(001), MgO(001), KCl(001) and NaCl(001) surfaces, while HgCl2 was effectively adsorbed on top-O and top-Cl sites. Charge transfer and bond populations were calculated to discuss the covalency and ionicity of HgCl2 bonding with the adsorption sites. The partial density of states (PDOS) analysis shows that HgCl2 interacts strongly with the surface sites through orbital hybridization between Hg and the top O or Cl. Frontier molecular orbital (FMO) energy and Mulliken electronegativity are introduced as quantitative criteria to evaluate the reactivity of the mercury species and the alkali metal-based sorbents. HgCl2 is identified as a Lewis acid and is more reactive than Hg0. The Lewis basicity of the four alkali metal-based sorbents is predicted to increase in the order NaCl < MgO < KCl < CaO, consistent with the trend of the HgCl2 adsorption energies. Copyright © 2017 Elsevier Ltd. All rights reserved.
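
The basicity ordering above is tied to adsorption energies of the form E_ads = E(surface+adsorbate) − E(surface) − E(adsorbate). A minimal sketch of that bookkeeping, with all total energies as illustrative placeholders rather than the paper's DFT values:

```python
# Hypothetical sketch: ranking sorbents by HgCl2 adsorption energy. All
# energies below are made-up placeholders, not results from the paper; only
# the E_ads formula and the sorting logic are illustrated.

def adsorption_energy(e_complex, e_surface, e_adsorbate):
    """E_ads = E(surface+adsorbate) - E(surface) - E(adsorbate); more negative = stronger binding."""
    return e_complex - e_surface - e_adsorbate

E_HGCL2 = -0.2  # placeholder total energy of an isolated HgCl2 molecule (eV)

# placeholder slab and slab+HgCl2 total energies (eV)
cases = {
    "NaCl": adsorption_energy(-1000.5, -1000.0, E_HGCL2),
    "MgO":  adsorption_energy(-1000.8, -1000.0, E_HGCL2),
    "KCl":  adsorption_energy(-1001.0, -1000.0, E_HGCL2),
    "CaO":  adsorption_energy(-1001.4, -1000.0, E_HGCL2),
}

# most negative adsorption energy (strongest binding) first
ranking = sorted(cases, key=cases.get)
```

With these placeholder numbers the ranking reproduces the abstract's qualitative order, CaO binding HgCl2 most strongly and NaCl least.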

  17. Theoretical analysis of an iron mineral-based magnetoreceptor model in birds

    DEFF Research Database (Denmark)

    Solov'yov, Ilia; Greiner, Walter

    2007-01-01

    Sensing the magnetic field has been established as an essential part of navigation and orientation of various animals for many years. Only recently has the first detailed receptor concept for magnetoreception been published based on histological and physical results. The considered mechanism...

  18. A Review of Sparsity-Based Methods for Analysing Radar Returns from Helicopter Rotor Blades

    Science.gov (United States)

    2016-09-01

    performance study of these algorithms in the particular problem of analysing backscatter signals from rotating blades. The report is organised as follows...provide further insight into the behaviour of the techniques. Here, the algorithms for MP, OMP, CGP, gOMP and ROMP terminate when 10 atoms are

  19. Graphene-Based FET Detector for E. coli K12 Real-Time Monitoring and Its Theoretical Analysis

    Directory of Open Access Journals (Sweden)

    Jieyi Zhu

    2016-01-01

    This paper presents a theoretical analysis of a graphene-based FET real-time detector for the target bacterium E. coli K12. The motivation for this study is to design a sensor for detecting bacteria in food and water in order to guarantee food safety. Graphene, which has outstanding electrical, physical, and optical properties, is chosen as the sensor material. In our sensor structure, a graphene-based solution-gate field effect transistor (FET) is the device model; the fabrication and functionalization protocol are also presented in this paper. In addition, a real-time signal display system accompanies the designed biosensor device. In this system, the sensor bias current Ids changes markedly when the target bacteria attach to the sensor surface, and Ids increases as the E. coli concentration increases. In the latter part, a theoretical interpretation of the sensor signal explains the increase in the bias current Ids after E. coli K12 attachment.

  20. High-pressure behaviour of selenium-based spinels and related structures - an experimental and theoretical study

    International Nuclear Information System (INIS)

    Waskowska, A; Gerward, L; Olsen, J Staun; Feliz, M; Llusar, R; Gracia, L; Marques, M; Recio, J M

    2004-01-01

    The high-pressure structural behaviour of the cubic spinel CdCr2Se4 (space group Fd-3m) and tetragonal CdGa2Se4 (I-4) has been investigated experimentally and theoretically in order to understand the large difference in compressibility between the two selenides. The experimental values of the bulk modulus for these compounds are 101(2) and 48(2) GPa, respectively. These values compare well with 92 and 44 GPa obtained from first-principles calculations based on the density functional theory formalism. The observed difference in compressibility between the cubic and tetragonal structures can be understood in terms of polyhedral analysis. In a hypothetical cubic spinel structure (Fd-3m), the calculated bulk modulus for CdGa2Se4 is 85 GPa. This value, together with the experimental and theoretical results for CdCr2Se4, suggests that the selenium-based cubic spinels should have a bulk modulus of about 100 GPa, which is half the value found for the oxide spinels.
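
A bulk modulus of the kind quoted above can be estimated from a computed energy-volume curve via B = V0 · E''(V0) at the equilibrium volume. The sketch below uses synthetic harmonic E(V) data, not the paper's first-principles curves; real studies typically fit a Birch-Murnaghan equation of state instead of a plain quadratic:

```python
import numpy as np

# Illustrative sketch (synthetic data, not the paper's calculations): estimate
# a bulk modulus from total-energy-vs-volume points via B = V0 * E''(V0),
# using a quadratic fit around the minimum.

def bulk_modulus(volumes, energies):
    c2, c1, _ = np.polyfit(volumes, energies, 2)  # E ~ c2*V^2 + c1*V + c0
    v0 = -c1 / (2.0 * c2)                         # equilibrium volume
    return v0 * 2.0 * c2, v0                      # B = V0 * E''(V0)

# synthetic harmonic E(V): curvature 0.01 eV/A^6, minimum at V0 = 100 A^3
v = np.linspace(95.0, 105.0, 11)
e = 0.005 * (v - 100.0) ** 2
b, v0 = bulk_modulus(v, e)  # b in eV/A^3; 1 eV/A^3 is roughly 160.2 GPa
```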

  1. Power allocation for target detection in radar networks based on low probability of intercept: A cooperative game theoretical strategy

    Science.gov (United States)

    Shi, Chenguang; Salous, Sana; Wang, Fei; Zhou, Jianjiang

    2017-08-01

    Distributed radar network systems have been shown to have many unique features. Due to their advantage of signal and spatial diversities, radar networks are attractive for target detection. In practice, the netted radars in radar networks are supposed to maximize their transmit power to achieve better detection performance, which may conflict with low probability of intercept (LPI) requirements. Therefore, this paper investigates the problem of adaptive power allocation for radar networks in a cooperative game-theoretic framework such that the LPI performance can be improved. Taking into consideration both the transmit power constraints and the minimum signal to interference plus noise ratio (SINR) requirement of each radar, a cooperative Nash bargaining power allocation game based on LPI is formulated, whose objective is to minimize the total transmit power by optimizing the power allocation in radar networks. First, a novel SINR-based network utility function is defined and utilized as a metric to evaluate power allocation. Then, with the well-designed network utility function, the existence and uniqueness of the Nash bargaining solution are proved analytically. Finally, an iterative Nash bargaining algorithm is developed that converges quickly to a Pareto optimal equilibrium for the cooperative game. Numerical simulations and theoretical analysis are provided to evaluate the effectiveness of the proposed algorithm.
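
The flavor of such SINR-constrained power minimization can be sketched with a classic power-control iteration, where each radar repeatedly lowers its power to the minimum that still meets its SINR target. This is an illustrative stand-in, not the paper's Nash bargaining algorithm, and the gains, noise levels and targets are made-up placeholders:

```python
import numpy as np

# Illustrative sketch of SINR-constrained power minimization (not the paper's
# cooperative Nash bargaining scheme). All channel gains, noise powers and
# SINR targets below are hypothetical.

def allocate_power(gain, cross, noise, sinr_min, p_max, iters=200):
    """gain[i]: own-channel gain; cross[i][j]: interference gain from j to i (zero diagonal)."""
    p = np.full(len(gain), p_max)          # start from maximum transmit power
    for _ in range(iters):
        interference = cross @ p + noise   # interference-plus-noise at each radar
        p = np.minimum(p_max, sinr_min * interference / gain)
    return p

gain = np.array([1.0, 1.0])
cross = np.array([[0.0, 0.1], [0.1, 0.0]])
noise = np.array([0.1, 0.1])
p = allocate_power(gain, cross, noise, sinr_min=2.0, p_max=5.0)
sinr = gain * p / (cross @ p + noise)  # each radar meets its target at far less than p_max
```

At the fixed point every radar transmits just enough to meet its SINR constraint, which is the LPI-friendly outcome the abstract describes.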

  2. Mean field based calculations with the Gogny force: Some theoretical tools to explore the nuclear structure

    Energy Technology Data Exchange (ETDEWEB)

    Peru, S. [CEA, DAM, DIF, Arpajon (France); Martini, M. [Ghent University, Department of Physics and Astronomy, Gent (Belgium); CEA, DAM, DIF, Arpajon (France); Universite Libre de Bruxelles, Institut d' Astronomie et d' Astrophysique, CP-226, Brussels (Belgium)

    2014-05-15

    We present a review of several works using the finite-range Gogny interaction in mean field approaches and beyond to explore the most striking nuclear structure features. Shell evolution along the N = 16, 20, 28, 40 isotopic chains is investigated. The static deformations obtained in the mean field description are shown to be often in disagreement with those experimentally determined. Dynamics is addressed in a GCM-like method including rotational degrees of freedom, namely the five-dimensional collective Hamiltonian (5DCH). This framework allows the description of low-energy collective excitations. Nevertheless, some data cannot be reproduced with the collective Hamiltonian approach. The QRPA formalism is therefore introduced and used to simultaneously describe high- and low-energy spectroscopy as well as collective and individual excitations. After the description of giant resonances in doubly magic exotic nuclei, the role of intrinsic deformation in giant resonances is presented. The appearance of low-energy dipole resonances in light nuclei is also discussed; in particular, the isoscalar or isovector nature of pygmy states is debated. Then the first microscopic fully coherent description of the multipole spectrum of the heavy deformed nucleus 238U is presented. Finally, a comparison of the low-energy spectra obtained within the two extensions of the static mean field, namely QRPA and 5DCH, is performed for 2+ states in N = 16 isotones and in nickel and tin isotopes. For the first time, the different static and dynamic factors involved in the generation of the 2+ states in the nickel isotopic chain, from drip line to drip line, can be analysed in a single set of coherent approaches, free of adjustable parameters, using the same two-body interaction D1S and the resulting HFB mean field. (orig.)

  3. Theoretical training bases for young athletes in aquatic sports on the natural environment: Bodyboard.

    Directory of Open Access Journals (Sweden)

    Marcos Mecías Calvo

    2015-09-01

    The bodyboard is a surfing discipline that has grown considerably since the 1960s, making it one of the fastest-growing aquatic sports in the world. Despite this, scientific research on the discipline is poorly represented compared with other sports. As in any other sport, the bodyboarder requires specific physical and physiological conditions to practise the sport effectively, yet bodyboarders typically do not follow specific training or conditioning programmes. Therefore, this article aims to provide a basis for determining the most appropriate training, based on the study of bodyboard objectives and actions, to improve the physical, technical and psychological condition of bodyboarders, taking into account the particularities of the sport and the athlete as well as the scientific studies in the field.

  4. The New Community Policing: Developing a Partnership-Based Theoretical Foundation

    Directory of Open Access Journals (Sweden)

    Adam J McKee

    2016-10-01

    This paper presents a Partnership Model of Community Policing based on partnership concepts developed by Riane Eisler and undergirded by Cultural Transformation Theory as a guiding principle (1987, 2010, 2013). This model is more reflective of the daily lived experiences of community police officers. It is culturally relevant and based on the whole of the police officer's relationship with the community within the context in which the interactions occur. This "New Community Policing" is an extension of Riane Eisler's Cultural Transformation Theory and an attempt to answer her call for a movement towards a partnership model of social organization. Ultimately, "8 Pillars of the New Community Policing" are developed to aid in defining and implementing community policing.

  5. Basic Concept and Theoretical Study of Condition-based Maintenance for Power Transmission System

    Institute of Scientific and Technical Information of China (English)

    LI Ming; HAN Xueshan; YANG Ming; GUO Zhihong

    2011-01-01

    Under a condition-based maintenance regime, the appropriate maintenance time for a single piece of equipment can be found easily and efficiently. However, from the perspective of the whole power system, discrepancies between individual equipment and the system as a whole can appear. If these discrepancies are not coordinated, they will cause contradictions and conflicts between individual equipment and the whole system and reduce overall efficiency. Resolving these contradictions and conflicts is therefore of significant importance.

  6. Theoretical investigation of tautomeric equilibrium in ortho-hydroxy phenyl Schiff bases

    Science.gov (United States)

    Kluba, M.; Lipkowski, P.; Filarowski, A.

    2008-10-01

    This Letter presents a study of the tautomeric equilibrium in ortho-hydroxy phenyl Schiff bases. The influence of substitution and solvent (simulated by the self-consistent reaction field model, SCRF) on the energy barrier of the transition state and on proton transfer is investigated. Dependencies of the HOMA and HOSE aromaticity indices on the molecular, transition state, and proton transfer forms were obtained. The state of chelate chain and phenyl ring aromaticity depending on the tautomeric equilibrium is studied.

  7. Sampling in interview-based qualitative research: A theoretical and practical guide

    OpenAIRE

    Robinson, Oliver

    2014-01-01

    Sampling is central to the practice of qualitative methods, but compared with data collection and analysis, its processes are discussed relatively little. A four-point approach to sampling in qualitative interview-based research is presented and critically discussed in this article, which integrates theory and process for the following: (1) Defining a sample universe, by way of specifying inclusion and exclusion criteria for potential participation; (2) Deciding upon a sample size, through th...

  8. Theoretical study of a novel solar trigeneration system based on metal hydrides

    International Nuclear Information System (INIS)

    Meng, Xiangyu; Yang, Fusheng; Bao, Zewei; Deng, Jianqiang; Serge, Nyallang N.; Zhang, Zaoxiao

    2010-01-01

    In order to utilize low-grade heat energy efficiently, a preliminary scheme for a metal hydride-based Combined Cooling, Heating and Power (CCHP) system driven by solar energy and industrial waste heat was proposed, in which both refrigeration and power generation are achieved. Following a step-by-step procedure recently developed by the authors, two pairs of metal hydrides were selected for the CCHP system. The working principle of the system was discussed in detail and the configuration for CCHP was further designed. Based on the cycle mentioned above, models for energy conversion and exergy analysis were set up. A multi-element valued method was used to assess the performance of the CCHP system as a whole, so that the influence of various factors on system performance could be analysed. The typical climate conditions of Xi'an in 2005 were taken for discussion, and the results showed that system performance is mainly affected by the quantity of solar radiation energy. The objective of the system's optimization is to increase the exergy efficiency of the metal hydride heat pump for a given quantity of solar radiation energy. Comparison with two traditional types of CCHP system showed that the novel CCHP system is superior in terms of integrated performance.

  9. Theoretical Maxwell's Equations, Gauge Field and Their Universality Based on One Conservation Law

    Institute of Scientific and Technical Information of China (English)

    Liu Changmao

    2005-01-01

    The notion of the inner product of vectors is extended to tensors of different orders, which may usually replace the vector product. The essence of the differential and codifferential forms is pointed out: they represent the tangent-surface and normal-surface fluxes of a tensor, respectively. Definitions of the divergence and curl of a 2D surface flux of a tensor are obtained. Maxwell's equations, namely the construction law of the field, were usually established on the basis of two conservation laws, those of electric charge and of an imaginary magnetic charge; the author derives them using only one conservation law (of mass, fluid flux quantity, and so on) and the feature of a central field (or a composition of central fields). By this feature, the curl of the 2D flux is zero. Both the universality of the gauge field and the difficulty of magnetic monopole theory are presented (a magnetic monopole has no effect on an electric current, just as a couple has no effect on the sum of forces): a magnetic monopole does not have the features of a magnet. Finally, it is pointed out that the basis of the relation between mass and energy is already contained in Maxwell's equations.

  10. Investigation of attractive and repulsive interactions associated with ketones in supercritical CO2, based on Raman spectroscopy and theoretical calculations.

    Science.gov (United States)

    Kajiya, Daisuke; Saitow, Ken-ichi

    2013-08-07

    Carbonyl compounds are solutes that are highly soluble in supercritical CO2 (scCO2). Their solubility governs the efficiency of chemical reactions and is significantly increased by changing a chromophore. To use scCO2 effectively as a solvent, it is crucial to understand the high solubility of carbonyl compounds, the solvation structure, and the solute-solvent intermolecular interactions. We report Raman spectroscopic data for three prototypical ketones dissolved in scCO2, together with four theoretical analyses. The vibrational Raman spectra of the C=O stretching modes of the ketones (acetone, acetophenone, and benzophenone) were measured in scCO2 along the reduced temperature Tr = T/Tc = 1.02 isotherm as a function of the reduced density ρr = ρ/ρc in the range 0.05-1.5. The peak frequencies of the C=O stretching modes shifted toward lower energies as the fluid density increased. The density dependence was analyzed using perturbed hard-sphere theory, and the shift was decomposed into attractive and repulsive energy components. The attractive energy between the ketones and CO2 was up to nine times higher than the repulsive energy, and its magnitude increased in the following order: acetone attractive energy and optimized the relative configuration between each solute and CO2. According to theoretical calculations of the dispersion energy, the dipole-induced-dipole interaction energy, and the frequency shift due to these interactions, the experimentally determined differences in attractive energy among the three solutes were attributed to dispersion energies that depended on the chromophore attached to the carbonyl groups. It was found that the major intermolecular interaction producing the attractive shift varied from dipole-induced dipole to dispersion, depending on the chromophore of the ketone in scCO2. As the common conclusion of the Raman spectral measurements and the four theoretical calculations, solute polarizability, modified by the chromophore, was at the core of

  11. Theoretical and historical bases of formation of social and economic nature of consumer cooperation

    Directory of Open Access Journals (Sweden)

    Алла Іванівна Мілька

    2015-09-01

    The article describes the essence of the concepts of cooperation and of a cooperative, together with approaches to defining them, and the importance of the social and economic components at different stages of the development of the cooperative movement and of the theory of cooperation. The opinions of scientists on the non-profit nature of the cooperative are considered and examined. Based on this study, the main conceptual provisions of cooperation are highlighted and the author's definition of consumer cooperation is proposed.

  12. Theoretical foundation for jung's “Mandala Symbolism” based on discrete chaotic dynamics of interacting neurons

    Directory of Open Access Journals (Sweden)

    V. Gontar

    2000-01-01

    Different patterns in the form of mandalas have been generated using discrete chaotic dynamics algorithms. This makes it possible to link the mechanism of the biochemical reaction dynamics taking place in the brain to the brain's creative process expressed in the form of mandalas. The obtained patterns can be related to spatially distributed chemicals according to an extended principle of maximum entropy, consideration of the information exchange during biochemical transformations, the law of mass conservation, and the principles of discrete chaotic dynamics.

  13. Randomised controlled trial of a theoretically grounded tailored intervention to diffuse evidence-based public health practice [ISRCTN23257060

    Directory of Open Access Journals (Sweden)

    Nordheim Lena

    2003-03-01

    Abstract. Background: Previous studies have shown that Norwegian public health physicians do not systematically and explicitly use scientific evidence in their practice. They work in an environment that does not encourage the integration of this information in decision-making. In this study we investigate whether a theoretically grounded tailored intervention to diffuse evidence-based public health practice increases the physicians' use of research information. Methods: 148 self-selected public health physicians were randomised to an intervention group (n = 73) and a control group (n = 75). The intervention group received a multifaceted intervention while the control group received a letter declaring that they had access to library services. Baseline assessments before the intervention and post-testing immediately at the end of a 1.5-year intervention period were conducted. The intervention was theoretically based and consisted of a workshop in evidence-based public health, a newsletter, and access to a specially designed information service, to relevant databases, and to an electronic discussion list. The main outcome measure was behaviour as measured by the use of research in different documents. Results: The intervention did not demonstrate any evidence of effects on the objective behaviour outcomes. We found, however, a statistically significant difference between the two groups for both knowledge scores: a mean difference of 0.4 (95% CI: 0.2–0.6) in the score for knowledge about EBM resources and a mean difference of 0.2 (95% CI: 0.0–0.3) in the score for conceptual knowledge of importance for critical appraisal. There were no statistically significant differences in attitude, self-efficacy, decision-to-adopt or job-satisfaction scales. There were no significant differences in Cochrane Library searching after controlling for baseline values and characteristics. Conclusion: Though demonstrating an effect on knowledge, the study failed to provide support for
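
The headline numbers in such trials are differences in group means with 95% confidence intervals. A minimal sketch of that computation, using a normal approximation with unequal variances and made-up score lists (not the trial's data):

```python
import math

# Minimal sketch of a two-group comparison like the one reported: mean
# difference in knowledge scores with a 95% CI. The score lists below are
# hypothetical; only the formula is illustrated.

def mean_diff_ci(a, b, z=1.96):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))          # SE of the difference
    d = ma - mb
    return d, (d - z * se, d + z * se)

diff, (lo, hi) = mean_diff_ci([2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
```

A CI straddling zero, as in the trial's attitude and self-efficacy scales, means the difference is not statistically significant at the 5% level.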

  14. Theoretical studies of the work functions of Pd-based bimetallic surfaces

    International Nuclear Information System (INIS)

    Ding, Zhao-Bin; Wu, Feng; Wang, Yue-Chao; Jiang, Hong

    2015-01-01

    Work functions of Pd-based bimetallic surfaces, including mainly M/Pd(111), Pd/M, and Pd/M/Pd(111) (M = 4d transition metals, Cu, Au, and Pt), are studied using density functional theory. We find that the work function of these bimetallic surfaces is significantly different from that of parent metals. Careful analysis based on Bader charges and electron density difference indicates that the variation of the work function in bimetallic surfaces can be mainly attributed to two factors: (1) charge transfer between the two different metals as a result of their different intrinsic electronegativity, and (2) the charge redistribution induced by chemical bonding between the top two layers. The first factor can be related to the contact potential, i.e., the work function difference between two metals in direct contact, and the second factor can be well characterized by the change in the charge spilling out into vacuum. We also find that the variation in the work functions of Pd/M/Pd(111) surfaces correlates very well with the variation of the d-band center of the surface Pd atom. The findings in this work can be used to provide general guidelines to design new bimetallic surfaces with desired electronic properties
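
The quantities the abstract manipulates reduce to two simple definitions: the work function as the energy to move an electron from the Fermi level to the vacuum level, and the contact potential as the work-function difference of two metals in contact. A sketch with hypothetical numbers (the placeholder Fermi levels are illustrative, not the paper's DFT values):

```python
# Hypothetical sketch of the definitions used in the abstract. The numeric
# Fermi levels below are placeholders for illustration only.

def work_function(e_vacuum, e_fermi):
    """W = E_vacuum - E_Fermi, in eV."""
    return e_vacuum - e_fermi

def contact_potential_difference(w_a, w_b):
    """Work-function difference of two metals in contact (volts for eV inputs)."""
    return w_a - w_b

w_pd = work_function(0.0, -5.6)  # placeholder Fermi level relative to vacuum
w_cu = work_function(0.0, -4.7)
cpd = contact_potential_difference(w_pd, w_cu)
```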

  15. Comparison study on transformation of iron oxyhydroxides: Based on theoretical and experimental data

    International Nuclear Information System (INIS)

    Lu Bin; Guo Hui; Li Ping; Liu Hui; Wei Yu; Hou Denglu

    2011-01-01

    We have investigated the catalytic transformation of ferrihydrite, feroxyhyte, and lepidocrocite in the presence of Fe(II). In this paper, the transformation from akaganeite and goethite to hematite in the presence of trace Fe(II) was studied in detail. The result indicates that trace Fe(II) can accelerate the transformation of akaganeite and goethite. Compared with the transformation of other iron oxyhydroxides (e.g., ferrihydrite, feroxyhyte, lepidocrocite, and akaganeite), a complete transformation from goethite to hematite was not observed in the presence of Fe(II). On the basis of our earlier and present experimental results, the transformation of various iron oxyhydroxides was compared based on their thermodynamic stability, crystalline structure, transformation mechanism, and transformation time. - Graphical abstract: The transformation of various iron oxyhydroxides in the presence of trace Fe(II) was compared based on experimental results, thermodynamic stability, crystalline structure, and transformation mechanism. Highlights: → Fe(II) can accelerate the transformation from akaganeite to hematite. → Small particles of goethite can transform to hematite in the presence of Fe(II). → Some hematite particles were found to be embedded within the crystal of goethite. → The relationship between structure and transformation mechanism was revealed.

  16. Theoretical assumptions of Maffesoli's sensitivity and Problem-Based Learning in Nursing Education

    Directory of Open Access Journals (Sweden)

    María-Aurora Rodríguez-Borrego

    2014-06-01

    OBJECTIVE: to understand the everyday life and the imaginary of nursing students in their knowledge socialization process through the Problem-Based Learning (PBL) strategy. METHOD: action research involving 86 students from the second year of an undergraduate nursing program in Spain. A critical incident questionnaire and group interviews were used, with thematic/categorical analysis and triangulation of researchers, subjects and techniques. RESULTS: the students signal the need to have a view from within, reinforcing the criticism of schematic dualism; PBL allows one to learn how to be with the other, with their mechanical and organic solidarity; and feeling together, with its emphasis on learning to work in a group and wanting to be close to the person receiving care. CONCLUSIONS: the great contradictions experienced by the protagonists of the process, that is, the students, seem to express that group learning is not seen as a way of gaining knowledge, as it makes them lose time to study. Everyday life, execution time and the imaginary of how learning should be do not seem to intersect in the use of Problem-Based Learning. The importance of focusing on everyday life and the imaginary should be reinforced when we consider nursing education.

  17. Theoretical assumptions of Maffesoli's sensitivity and Problem-Based Learning in Nursing Education1

    Science.gov (United States)

    Rodríguez-Borrego, María-Aurora; Nitschke, Rosane Gonçalves; do Prado, Marta Lenise; Martini, Jussara Gue; Guerra-Martín, María-Dolores; González-Galán, Carmen

    2014-01-01

    Objective: to understand the everyday life and the imaginary of nursing students in their knowledge socialization process through the Problem-Based Learning (PBL) strategy. Method: action research involving 86 students from the second year of an undergraduate nursing program in Spain. A critical incident questionnaire and group interviews were used, with thematic/categorical analysis and triangulation of researchers, subjects and techniques. Results: the students signal the need to have a view from within, reinforcing the criticism of schematic dualism; PBL allows one to learn how to be with the other, with their mechanical and organic solidarity; and feeling together, with its emphasis on learning to work in a group and wanting to be close to the person receiving care. Conclusions: the great contradictions experienced by the protagonists of the process, that is, the students, seem to express that group learning is not seen as a way of gaining knowledge, as it makes them lose time to study. Everyday life, execution time and the imaginary of how learning should be do not seem to intersect in the use of Problem-Based Learning. The importance of focusing on everyday life and the imaginary should be reinforced when we consider nursing education. PMID:25029064

  18. Molecular systematics of Indian Alysicarpus (Fabaceae) based on analyses of nuclear ribosomal DNA sequences.

    Science.gov (United States)

    Gholami, Akram; Subramaniam, Shweta; Geeta, R; Pandey, Arun K

    2017-06-01

    Alysicarpus Necker ex Desvaux (Fabaceae, Desmodieae) consists of ~30 species that are distributed in tropical and subtropical regions of the world. In India, the genus is represented by ca. 18 species, of which seven are endemic. Sequences of the nuclear internal transcribed spacer (ITS) from 38 accessions representing 16 Indian species were subjected to phylogenetic analyses. The ITS sequence data strongly support the monophyly of the genus Alysicarpus. Analyses revealed four major well-supported clades within Alysicarpus. Ancestral state reconstructions were done for two morphological characters, namely calyx length in relation to pod (macrocalyx and microcalyx) and pod surface ornamentation (transversely rugose and nonrugose). The present study is the first report on molecular systematics of Indian Alysicarpus.

  19. XML-based formulation of field theoretical models. A proposal for a future standard and data base for model storage, exchange and cross-checking of results

    International Nuclear Information System (INIS)

    Demichev, A.; Kryukov, A.; Rodionov, A.

    2002-01-01

    We propose an XML-based standard for the formulation of field theoretical models. The goal of such a standard is to provide a way to unambiguously exchange and cross-check the results of computer calculations in high energy physics. At the moment, the suggested standard assumes that the models under consideration are of the SM or MSSM type (i.e., they are the SM or MSSM themselves, their submodels, smooth modifications or straightforward generalizations). (author)

  20. Performance analyses of naval ships based on engineering level of simulation at the initial design stage

    Directory of Open Access Journals (Sweden)

    Dong-Hoon Jeong

    2017-07-01

    Naval ships are assigned many and varied missions. Their performance is critical for mission success and depends on the specifications of their components, which is why performance analyses of naval ships are required at the initial design stage. Since the design and construction of naval ships take a very long time and incur a huge cost, Modeling and Simulation (M&S) is an effective method for performance analyses. Thus, in this study a simulation core is proposed to analyze the performance of naval ships considering their specifications. This simulation core can perform engineering-level simulations using mathematical models of naval ships such as maneuvering equations and passive sonar equations. The simulation models of the core follow the Discrete EVent system Specification (DEVS) and Discrete Time System Specification (DTSS) formalisms, so that simulations can progress over discrete events and discrete times; applying these formalisms also makes the structure of the simulation models flexible and reusable. To verify its applicability, the simulation core was applied to simulations for the performance analysis of a submarine in an Anti-SUrface Warfare (ASUW) mission. These simulations comprised two scenarios. The first scenario, submarine diving, carried out a maneuvering performance analysis by analyzing the pitch angle and depth variations of the submarine over time. The second scenario, submarine detection, carried out a detection performance analysis by analyzing how well the sonar of the submarine resolves adjacent targets. The results of these simulations confirm that the simulation core of this study can be applied to performance analyses of naval ships considering their specifications.
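
The core idea of a discrete-event engine of the kind the abstract describes is a time-ordered event queue. A toy sketch in that spirit (not the paper's simulation core; the event names are hypothetical):

```python
import heapq

# Toy discrete-event sketch in the spirit of a DEVS-style engine: events are
# (time, sequence, name) tuples popped strictly in time order.

class EventSim:
    def __init__(self):
        self.t = 0.0
        self._queue = []
        self._seq = 0      # tie-breaker for events scheduled at equal times
        self.log = []

    def schedule(self, delay, name):
        heapq.heappush(self._queue, (self.t + delay, self._seq, name))
        self._seq += 1

    def run(self, until):
        # advance the clock event by event up to the stop time
        while self._queue and self._queue[0][0] <= until:
            self.t, _, name = heapq.heappop(self._queue)
            self.log.append((self.t, name))

sim = EventSim()
sim.schedule(1.0, "dive")        # hypothetical submarine events
sim.schedule(0.5, "sonar_ping")
sim.run(until=2.0)
```

A full DEVS implementation adds per-model state transitions and output functions on top of exactly this kind of time-ordered scheduling loop.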

  1. Regional analyses of labor markets and demography: a model based Norwegian example.

    Science.gov (United States)

    Stambol, L S; Stolen, N M; Avitsland, T

    1998-01-01

    The authors discuss the regional REGARD model, developed by Statistics Norway to analyze the regional implications of macroeconomic development of employment, labor force, and unemployment. "In building the model, empirical analyses of regional producer behavior in manufacturing industries have been performed, and the relation between labor market development and regional migration has been investigated. Apart from providing a short description of the REGARD model, this article demonstrates the functioning of the model, and presents some results of an application." excerpt

  2. Optimization of a Centrifugal Boiler Circulating Pump's Casing Based on CFD and FEM Analyses

    OpenAIRE

    Zhigang Zuo; Shuhong Liu; Yizhang Fan; Yulin Wu

    2014-01-01

    It is important to evaluate the economic efficiency of boiler circulating pumps in the manufacturing process from the manufacturers' point of view. The possibility of optimizing the pump casing with respect to structural pressure integrity and hydraulic performance was discussed. CFD analyses of pump models with different pump casing sizes were firstly carried out for the hydraulic performance evaluation. The effects of the working temperature and the sealing ring on the hydraulic efficiency were...

  3. Improving control room design and operations based on human factors analyses, or how much human factors upgrade is enough?

    Energy Technology Data Exchange (ETDEWEB)

    HIGGINS,J.C.; OHARA,J.M.; ALMEIDA,P.

    2002-09-19

    The Jose Cabrera nuclear power plant is a one-loop Westinghouse pressurized water reactor. In the control room, the displays and controls used by operators for the emergency operating procedures are distributed on front and back panels. This configuration contributed to risk in the probabilistic safety assessment where important operator actions are required. This study was undertaken to evaluate the impact of the design on crew performance and plant safety and to develop design improvements. Five potential effects were identified. Then NUREG-0711 [1] programmatic human factors analyses were conducted to systematically evaluate the control room layout to determine if there was evidence of the potential effects. These analyses included operating experience review, PSA review, task analyses, and walkthrough simulations. Based on the results of these analyses, a variety of control room modifications were identified. From the alternatives, a selection was made that provided a reasonable balance between performance, risk, and economics, and modifications were made to the plant.

  4. Theoretical studies on the intermolecular interactions of potentially primordial base-pair analogues

    Czech Academy of Sciences Publication Activity Database

    Šponer, Judit E.; Vázquez-Mayagoitia, Á.; Sumpter, B.G.; Leszczynski, J.; Šponer, Jiří; Otyepka, M.; Banáš, P.; Fuentes-Cabrera, M.

    2010-01-01

    Vol. 16, No. 10 (2010), pp. 3057-3065 ISSN 0947-6539 R&D Projects: GA MŠk(CZ) LC06030; GA AV ČR(CZ) 1QS500040581; GA AV ČR(CZ) IAA400040802; GA ČR(CZ) GA203/09/1476 Grant - others:GA MŠk(CZ) LC512; GA AV ČR(CZ) IAA400550701; GA ČR(CZ) GD203/09/H046 Program:LC; IA; GD Institutional research plan: CEZ:AV0Z50040507; CEZ:AV0Z50040702 Keywords: quantum chemistry * base pairing * origin of life Subject RIV: BO - Biophysics Impact factor: 5.476, year: 2010

  5. Decomposition of atmospheric water content into cluster contributions based on theoretical association equilibrium constants

    International Nuclear Information System (INIS)

    Slanina, Z.

    1987-01-01

    Water vapor is treated as an equilibrium mixture of water clusters (H2O)i using quantum-chemical evaluation of the equilibrium constants of water association. The model is adapted to the conditions of atmospheric humidity, and a decomposition algorithm is suggested that uses the temperature and mass concentration of water as input information; it is used to demonstrate the evaluation of water oligomer populations in the Earth's atmosphere. An upper limit on the populations is set based on the water content of saturated aqueous vapor. It is shown that the cluster populations, both in saturated water vapor and in the Earth's atmosphere for a typical temperature/humidity profile, increase with increasing temperature.
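
    The decomposition the abstract describes amounts to solving a mass balance for the monomer concentration once the association constants are known: each i-mer concentration is K_i times the monomer concentration to the i-th power, and the weighted sum must equal the total water content. A minimal sketch, with made-up association constants and concentrations in arbitrary units:

```python
def cluster_populations(K, total):
    """Given association constants K[i] such that the i-mer concentration is
    K[i] * m**i for monomer concentration m (with K[1] = 1), find m so that
    the water balance sum_i i * K[i] * m**i = total, then return all cluster
    concentrations. Bisection works because the balance grows monotonically
    with m."""
    def balance(m):
        return sum(i * Ki * m ** i for i, Ki in K.items())
    lo, hi = 0.0, total  # balance(total) >= total, since the i = 1 term alone equals total
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if balance(mid) < total:
            lo = mid
        else:
            hi = mid
    m = 0.5 * (lo + hi)
    return {i: Ki * m ** i for i, Ki in K.items()}

# hypothetical constants: dimers and trimers far rarer than monomers
pops = cluster_populations({1: 1.0, 2: 0.05, 3: 0.001}, total=1.0)
```

    With realistic quantum-chemical K_i and temperature-dependent totals, the same mass-balance inversion yields the oligomer populations discussed in the record.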

  6. Theoretical model for thin ferroelectric films and the multilayer structures based on them

    International Nuclear Information System (INIS)

    Starkov, A. S.; Pakhomov, O. V.; Starkov, I. A.

    2013-01-01

    A modified Weiss mean-field theory is used to study the dependence of the properties of a thin ferroelectric film on its thickness. The possibility of introducing gradient terms into the thermodynamic potential is analyzed using the calculus of variations. An integral equation is introduced to generalize the well-known Langevin equation to the case of the boundaries of a ferroelectric. An analysis of this equation leads to the existence of a transition layer at the interface between ferroelectrics or a ferroelectric and a dielectric. The permittivity of this layer is shown to depend on the electric field direction even if the ferroelectrics in contact are homogeneous. The results obtained in terms of the Weiss model are compared with the results of the models based on the correlation effect and the presence of a dielectric layer at the boundary of a ferroelectric and with experimental data

  9. Technology-mediated collaborative learning: theoretical contributions based on analysis of educational practice

    Directory of Open Access Journals (Sweden)

    Sonia CASILLAS MARTÍN

    2017-12-01

    Full Text Available Collaborative learning has been a subject of great interest in educational research, giving rise to many studies emphasizing the potential of the collaboration process for student learning, knowledge building, the development of diverse abilities, and improved academic performance. Based on a conceptual review and thorough reflection on this topic, this article presents the results of a case study carried out in different schools in the Autonomous Community of Castile y Leon (Spain) in an attempt to identify patterns of common action through the implementation of collaborative learning methods mediated by information and communication technologies (ICT). Among the many outcomes of this study, we conclude by highlighting the need to plan collaborative work very carefully, taking advantage of the opportunities offered by ICT as communicative environments where it is possible to construct joint and shared learning.

  10. Theoretical study and simulation for a nanometer laser based on Gauss–Hermite source expansion

    International Nuclear Information System (INIS)

    Gu, Xiaowei

    2013-01-01

    Recently there has been worldwide interest in constructing a new generation of continuously tunable nanometer lasers for a wide range of scientific applications, including femtosecond science, biological molecules, nanoscience research fields, etc. The high brightness electron beam required by a short wavelength self-amplified spontaneous emission FEL can be reached only with accurate control of the beam dynamics in the facility. Numerical simulation codes are basic tools for designing new nanometer laser devices. We have developed a MATLAB quasi-one-dimensional code based on a reduced model for the FEL. The model uses an envelope description of the transverse dynamics of the laser beam and full longitudinal particle motion. We have optimized the LCLS facility parameters, then given the characteristics of the nanometer laser. (letter)

  12. Theoretical and experimental investigation of wear characteristics of aluminum based metal matrix composites using RSM

    International Nuclear Information System (INIS)

    Selvi, S.; Rajasekar, E.

    2015-01-01

    The tribological properties, such as wear rate and hardness, of the aluminum-fly ash composite synthesized by stir casting were investigated by varying the weight % of fly ash from 5 to 20 with constant weight % of zinc and magnesium metal powder. A mathematical model was developed to predict the wear rate of the aluminum metal matrix composites, and the adequacy of the model was verified using analysis of variance. Scanning electron microscopy was used for the microstructure analysis, which showed a uniform distribution of fly ash in the metal matrix. Energy-dispersive X-ray spectroscopy was used for the elemental analysis or chemical characterization of a sample. The results showed that the addition of fly ash to the aluminum-based metal matrix improved both the mechanical and tribological properties of the composites. The fly ash particles improved the wear resistance of the metal matrix composites because the hardness of the samples increased as the fly ash content was increased.

  13. A theoretical study of blue phosphorene nanoribbons based on first-principles calculations

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Jiafeng; Si, M. S., E-mail: sims@lzu.edu.cn; Yang, D. Z.; Zhang, Z. Y.; Xue, D. S. [Key Laboratory for Magnetism and Magnetic Materials of the Ministry of Education, Lanzhou University, Lanzhou 730000 (China)

    2014-08-21

    Based on first-principles calculations, we present a quantum confinement mechanism for the band gaps of blue phosphorene nanoribbons (BPNRs) as a function of their widths. The BPNRs considered have either armchair or zigzag shaped edges on both sides with hydrogen saturation. Both the two types of nanoribbons are shown to be indirect semiconductors. An enhanced energy gap of around 1 eV can be realized when the ribbon's width decreases to ∼10 Å. The underlying physics is ascribed to the quantum confinement effect. More importantly, the parameters to describe quantum confinement are obtained by fitting the calculated band gaps with respect to their widths. The results show that the quantum confinement in armchair nanoribbons is stronger than that in zigzag ones. This study provides an efficient approach to tune the band gap in BPNRs.
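
    The fitting procedure described above (band gap versus ribbon width) can be illustrated with the generic confinement form E_g(w) = E_bulk + C / w^alpha; the widths and gaps below are hypothetical numbers chosen for the sketch, not the paper's data:

```python
import math

def confinement_params(w1, e1, w2, e2, e_bulk):
    """Solve E_g(w) = E_bulk + C / w**alpha exactly through two
    (width, gap) points; a real fit would use many calculated gaps."""
    alpha = math.log((e1 - e_bulk) / (e2 - e_bulk)) / math.log(w2 / w1)
    c = (e1 - e_bulk) * w1 ** alpha
    return c, alpha

# hypothetical data: gap 3.0 eV at 10 A width, 2.5 eV at 20 A, 2.0 eV in bulk
c, alpha = confinement_params(10.0, 3.0, 20.0, 2.5, e_bulk=2.0)
band_gap = lambda w: 2.0 + c / w ** alpha
```

    The exponent alpha quantifies the strength of the confinement, which is the quantity the authors compare between armchair and zigzag ribbons.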

  14. Processing considerations with plasma-based ion implantation of polymers: theoretical aspects, limitations, and experimental results

    International Nuclear Information System (INIS)

    Lacoste, A.; Pelletier, J.

    2003-01-01

    Processing of polymers using plasma-based ion implantation techniques (PBII) has general implications in terms of plasma specifications and pulse characteristics. In particular, the different aspects of the processing of polymer layers are discussed as functions of plasma density, pulse duration, and layer characteristics (thickness and permittivity). Clearly, severe limitations (true implantation energy, arcing) may appear for high-density plasmas as well as for long pulse durations, when processing polymer layers with thickness in the mm range. A review of the experimental results of ion implantation in polymeric materials via PBII processing is presented. The experimental results demonstrate the possibility of processing polymer layers with the PBII technique, but with severe limitations resulting from the process itself

  15. Molecular design and theoretical characterization of benzodithiophene based organic photovoltaic materials

    Science.gov (United States)

    Bhattacharya, Labanya; Sahu, Sridhar

    2018-05-01

    Two different oligomers, containing methyl-substituted benzodithiophene (BDT) as the donor unit, fluorinated thiophene as the π-bridge unit, and two different kinds of acceptors based on fluorinated benzothiadiazole and fluorinated benzoselenadiazole units, are designed for bulk heterojunction (BHJ) organic solar cells (OSC). The ground- and excited-state properties of these donor-π-acceptor-π-donor (D-π-A-π-D) oligomeric configurations are characterized via density functional theory (DFT) and time-dependent density functional theory (TD-DFT). Parameters such as the dipole moment (ρ), chemical potential (µ), electronegativity (χ), frontier molecular orbital (FMO) analysis, HOMO-LUMO gap, open-circuit voltage (Voc), and driving force (ΔE) are calculated to analyze the geometrical, electronic-structural, quantum chemical, and photovoltaic properties of the compounds. In addition, optical absorption spectra are presented for the optical characterization of the compounds.

  16. Experimental and theoretical investigation of semiconductor optical amplifier (SOA) based all-optical switches

    DEFF Research Database (Denmark)

    Nielsen, Mads Lønstrup

    2004-01-01

    This thesis analyzes semiconductor optical amplifier (SOA) based all-optical switches experimentally and through numerical simulations. These devices are candidates for optical signal processing functionalities such as wavelength conversion, regeneration, and logic processing in future transparent......, consisting of an SOA and an asymmetric MZI filter, is analyzed in the small-signal regime, and the obtainable modulation bandwidth is expressed analytically. A new optical spectrum approach to small signal analysis is introduced, and is used to assess the bandwidth enhancing effect of different optical...... filters, as well as the impact of the filter phase response. Experiments at 40 Gb/s verify the predictions of the small-signal analysis. Wavelength conversion is demonstrated experimentally at 40 Gb/s using a simple filtering-assisted scheme with an ultra-low optical switching energy, and up to 80 Gb...

  17. Experimental and theoretical studies of the thermal behavior of titanium dioxide-SnO2 based composites.

    Science.gov (United States)

    Voga, G P; Coelho, M G; de Lima, G M; Belchior, J C

    2011-04-07

    In this paper we report experimental and theoretical studies concerning the thermal behavior of some organotin-Ti(IV) oxides employed as precursors for TiO2/SnO2 semiconducting composites with photocatalytic properties. The organotin-TiO2 supported materials were obtained by chemical reactions of SnBu3Cl (Bu = butyl) and TiCl4 with NH4OH in ethanol, in order to impregnate organotin oxide in a TiO2 matrix. A theoretical model was developed to support the experimental procedures. The kinetic parameters, namely the frequency factor (A), the activation energy, and the reaction order (n), can be estimated through artificial intelligence methods: genetic algorithms, fuzzy logic, and Petri neural nets were used to determine the kinetic parameters as a function of temperature. With this in mind, three precursors were prepared in order to obtain composites with Sn/TiO2 ratios of 0% (1), 15% (2), and 30% (3) by weight, respectively. The thermal behavior of products (1-3) was studied by thermogravimetric experiments in oxygen.
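
    The paper estimates the kinetic parameters with genetic algorithms, fuzzy logic, and Petri neural nets. As a deterministic baseline for the same quantities, the frequency factor and activation energy can be recovered from rate data by the classical Arrhenius linearization; all numbers below are synthetic, invented only to exercise the fit:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(temps, rates):
    """Least-squares fit of ln k = ln A - (Ea/R) * (1/T): a straight line
    in (1/T, ln k), so ordinary linear regression recovers A and Ea."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - slope * xbar), -slope * R  # (A, Ea)

# exact synthetic rate constants from known (hypothetical) parameters
A_true, Ea_true = 1.0e6, 50_000.0
temps = [500.0, 550.0, 600.0, 650.0]
rates = [A_true * math.exp(-Ea_true / (R * t)) for t in temps]
A_fit, Ea_fit = fit_arrhenius(temps, rates)
```

    The AI methods in the paper address the harder case where the parameters vary with temperature and the data are noisy; on clean data the linearized fit recovers the generating parameters exactly.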

  18. A theoretical-spectroscopy, ab initio-based study of the electronic ground state of 121SbH3

    International Nuclear Information System (INIS)

    Yurchenko, Sergei N.; Carvajal, Miguel; Yachmenev, Andrey; Thiel, Walter; Jensen, Per

    2010-01-01

    For the stibine isotopologue 121SbH3, we report improved theoretical calculations of the vibrational energies below 8000 cm⁻¹ and simulations of the rovibrational spectrum in the 0-8000 cm⁻¹ region. The calculations are based on a refined ab initio potential energy surface and on a new dipole moment surface obtained at the coupled cluster CCSD(T) level. The theoretical results are compared with the available experimental data in order to validate the ab initio surfaces and the TROVE computational method [Yurchenko SN, Thiel W, Jensen P. J Mol Spectrosc 2007;245:126-40] for calculating rovibrational energies and simulating rovibrational spectra of arbitrary molecules in isolated electronic states. A number of predicted vibrational energies of 121SbH3 are provided in order to stimulate new experimental investigations of stibine. The local-mode character of the vibrations in stibine is demonstrated through an analysis of the results in terms of local-mode theory.

  19. Crowdsourcing-Based Geoinformation, Disadvantaged Urbanisation Challenges, Subsaharan Africa: Theoretical Perspectives and Notes

    Directory of Open Access Journals (Sweden)

    Ingwe Richard

    2017-03-01

    Full Text Available Scholars and practitioners concerned with geoinformation, cyber-cartography, development studies, and other subjects increasingly explore crowdsourcing and its huge advantages for development. Some have advocated its adoption and promotion by government as a means of citizen engagement. The objective of this article is to increase appreciation of the contribution that crowdsourcing can make towards resolving challenges associated with disadvantaged urbanisation in sub-Saharan Africa (SSA). We review the urban challenges of SSA and three practices of crowdsourcing: Volunteered Geographic Information (VGI), Citizen Science (CS), and Participatory Mapping (PM). Then we examine problems associated with the advocacy for government adoption of those practices in SSA. We argue that civil society collaboration with an international governmental organisation (IGO), instead of government, promises a better way of adopting and promoting them. This suggestion is based on the fact that work related to this strategy is carried out by a global coalition of civil society, the UN-NGLS. This strategy promises a more rapid way of fast-tracking public engagement in the economic region, SSA.

  20. Theoretical studies on membrane-based gas separation using computational fluid dynamics (CFD) of mass transfer

    International Nuclear Information System (INIS)

    Sohrabi, M.R.; Marjani, A.; Davallo, M.; Moradi, S.; Shirazian, S.

    2011-01-01

    A 2D mass transfer model was developed to study carbon dioxide removal by absorption in membrane contactors. The model predicts the steady-state absorbent and carbon dioxide concentrations in the membrane by solving the conservation equations. The continuity equations for the three sub-domains of the membrane contactor, i.e. the tube, the membrane, and the shell, were obtained and solved by the finite element method (FEM). The model was based on the 'non-wetted mode', in which the gas phase fills the membrane pores. A laminar parabolic velocity profile was used for the liquid flow in the tube side, whereas the gas flow in the shell side was characterized by Happel's free surface model. Axial and radial diffusion transport inside the shell, through the membrane, and within the tube side of the contactor was considered in the mass transfer model. The predictions of percent CO2 removal obtained by the model were compared with experimental values from the literature, namely experimental results for CO2 removal from a CO2/N2 gas mixture with aqueous amine solutions as the liquid solvent in a polypropylene membrane contactor. The modeling predictions were in good agreement with the experimental values for different gas and liquid flow rates. (author)
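
    The continuity equations the abstract mentions reduce, in the simplest 1-D steady-diffusion limit with no reaction, to a tridiagonal linear system. The sketch below solves that limit with the Thomas algorithm as a toy stand-in for the paper's 2-D FEM model (the grid size and boundary concentrations are arbitrary):

```python
def steady_diffusion_profile(n_interior, c_left, c_right):
    """Solve d2c/dx2 = 0 on a uniform grid with fixed end concentrations.
    The discrete Laplacian gives a tridiagonal system, solved here by the
    Thomas algorithm (forward elimination + back substitution)."""
    a = [1.0] * n_interior   # sub-diagonal
    b = [-2.0] * n_interior  # main diagonal
    cc = [1.0] * n_interior  # super-diagonal
    d = [0.0] * n_interior   # right-hand side
    d[0] -= c_left           # boundary values move to the RHS
    d[-1] -= c_right
    for i in range(1, n_interior):
        w = a[i] / b[i - 1]
        b[i] -= w * cc[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n_interior
    x[-1] = d[-1] / b[-1]
    for i in range(n_interior - 2, -1, -1):
        x[i] = (d[i] - cc[i] * x[i + 1]) / b[i]
    return [c_left] + x + [c_right]

# dimensionless concentration across a membrane: 1 at the gas side, 0 at the liquid side
profile = steady_diffusion_profile(9, 1.0, 0.0)
```

    With no source term the exact profile is a straight line between the boundary values, which makes the solver easy to verify before adding advection or reaction terms.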

  1. Concurrent Transmission Based on Channel Quality in Ad Hoc Networks: A Game Theoretic Approach

    Science.gov (United States)

    Chen, Chen; Gao, Xinbo; Li, Xiaoji; Pei, Qingqi

    In this paper, a decentralized concurrent transmission strategy for a shared channel in Ad Hoc networks is proposed based on game theory. Firstly, a static concurrent transmission game is used to determine the candidates for transmitting by a channel quality threshold and to maximize the overall throughput with consideration of channel quality variation. To achieve the NES (Nash Equilibrium Solution), the selfish behavior of nodes attempting to improve their channel gain unilaterally is evaluated. Therefore, this game allows each node to be distributed and to decide whether to transmit concurrently with others or not depending on the NES. Secondly, as there are always some nodes with lower channel gain than the NES, which are defined as hunger nodes in this paper, a hunger suppression scheme is proposed by adjusting the price function with interference reservation and forward relay, to fairly give hunger nodes transmission opportunities. Finally, inspired by stock trading, a dynamic concurrent transmission threshold determination scheme is implemented to make the static game practical. Numerical results show that the proposed scheme is feasible for increasing concurrent transmission opportunities for active nodes, and at the same time, the number of hunger nodes is greatly reduced with the least increase of threshold by interference reservation. The good performance of the proposed model on network goodput can also be seen from the results.
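
    The threshold-based selection of concurrent transmitters can be caricatured as a best-response iteration: each node transmits only if its own gain beats a threshold times the interference contributed by the other current transmitters. The rule, the gains, and the threshold below are a toy stand-in for the paper's game, not its actual utility and price functions:

```python
def equilibrium_transmitters(gains, threshold, max_rounds=100):
    """Best-response iteration for a toy threshold game: node i transmits
    iff gains[i] >= threshold * (total gain of the other transmitters).
    Iterate until the transmitting set stops changing (or give up after
    max_rounds, since best-response dynamics need not always converge)."""
    tx = set(range(len(gains)))
    for _ in range(max_rounds):
        new_tx = {i for i in range(len(gains))
                  if gains[i] >= threshold * sum(gains[j] for j in tx if j != i)}
        if new_tx == tx:
            break
        tx = new_tx
    return sorted(tx)

winners = equilibrium_transmitters([1.0, 0.5, 0.2], threshold=0.5)
```

    In this example the weakest node is excluded at the fixed point; it corresponds to the "hunger nodes" whose treatment motivates the paper's pricing adjustments.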

  2. Straight talk: HIV prevention for African-American heterosexual men: theoretical bases and intervention design.

    Science.gov (United States)

    Frye, Victoria; Bonner, Sebastian; Williams, Kim; Henny, Kirk; Bond, Keosha; Lucy, Debbie; Cupid, Malik; Smith, Stephen; Koblin, Beryl A

    2012-10-01

    In the United States, racial disparities in HIV/AIDS are stark. Although African Americans comprise an estimated 14% of the U.S. population, they made up 52% of new HIV cases among adults and adolescents diagnosed in 2009. Heterosexual transmission is now the second leading cause of HIV in the United States. African Americans made up a full two-thirds of all heterosexually acquired HIV/AIDS cases between 2005 and 2008. Few demonstrated efficacious HIV prevention interventions designed specifically for adult, African-American heterosexual men exist. Here, we describe the process used to design a theory-based HIV prevention intervention to increase condom use, reduce concurrent partnering, and increase HIV testing among heterosexually active African-American men living in high HIV prevalence areas of New York City. The intervention integrated empowerment, social identity, and rational choices theories and focused on four major content areas: HIV/AIDS testing and education; condom skills training; key relational and behavioral turning points; and masculinity and fatherhood.

  3. Triphenylamine based organic dyes for dye sensitized solar cells: A theoretical approach

    Energy Technology Data Exchange (ETDEWEB)

    Mohankumar, V.; Pandian, Muthu Senthil; Ramasamy, P., E-mail: ramasamyp@ssn.edu.in [SSN Research Centre, SSN College of Engineering, Chennai-603110, Tamilnadu (India)

    2016-05-23

    The geometry, electronic structure, and absorption spectra of newly designed triphenylamine-based organic dyes were investigated by density functional theory (DFT) and time-dependent density functional theory (TD-DFT) with the Becke 3-parameter Lee-Yang-Parr (B3LYP) functional and the 6-31G(d,p) basis set. All calculations were performed using the Gaussian 09 software package. The calculated HOMO and LUMO energies show that charge transfer occurs in the molecule. The ultraviolet-visible (UV-vis) spectrum was simulated by TD-DFT in the gas phase. The calculations show that all of the dyes can potentially be good sensitizers for DSSCs: the LUMOs lie just above the conduction band of TiO2 and the HOMOs lie below the reduction potential energy of the electrolyte (I−/I3−), which facilitates electron transfer from the excited dye to TiO2 and the charge regeneration process after photo-oxidation, respectively. The simulated absorption spectra of the dyes match the solar spectrum. Frontier molecular orbital results show that, among the three dyes, "dye 3" can be used as a potential sensitizer for DSSC.
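
    The two level-alignment conditions stated in the abstract (LUMO above the TiO2 conduction band for electron injection, HOMO below the I−/I3− redox level for dye regeneration) can be checked mechanically. The default band-edge values below are rough literature numbers versus vacuum, used here only for illustration and not taken from this paper:

```python
def viable_sensitizer(homo_ev, lumo_ev, tio2_cb_ev=-4.0, redox_ev=-4.8):
    """Return True iff both DSSC alignment conditions hold:
    LUMO above the TiO2 conduction band (electron injection is downhill)
    and HOMO below the electrolyte redox level (regeneration is downhill)."""
    return lumo_ev > tio2_cb_ev and homo_ev < redox_ev

# hypothetical dye levels in eV vs. vacuum
ok = viable_sensitizer(homo_ev=-5.2, lumo_ev=-2.8)
```

    A screening loop over candidate dyes would apply this check to the DFT-computed HOMO/LUMO energies before any more expensive analysis.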

  4. The Determination of Physical Activity among Girl Adolescents based on Trans-theoretical model (TTM)

    Directory of Open Access Journals (Sweden)

    Masoumeh Alidosti

    2017-07-01

    Full Text Available Background: It has been proved that physical activity has positive effects for all people; however, low activity is common among adolescents, especially girls. The present study aimed to determine the state of physical activity among girl adolescents using the stages of change derived from the Transtheoretical Model (TTM). Materials and Methods: This descriptive-analytical investigation was done (in the 2016-2017 school year) in first-round girls' high schools among 324 students studying in state schools of Shahrekord city (Western Iran). They were selected through a clustering method. The data were collected by researcher-made questionnaires including demographic characteristics, a knowledge construct, and the stages of change (pre-contemplation, contemplation, preparation, action, and maintenance). The collected data were analyzed by SPSS (version 18.0). Results: The mean age of participants was 13.69 ± 1.95 years. The mean score of the girls' knowledge about types of physical activity was 53.18 ± 21.82 (out of a total of 100 scores), which represents an average level of knowledge. The study of physical activity in students based on the stages of change showed that 165 (43%), 102 (26.6%), and only 22 (5.7%) of the studied students were in the pre-contemplation, contemplation, and maintenance stages, respectively. There was a significant relation between students' knowledge level and their mothers' age (P

  5. Analysis of earth albedo effect on sun sensor measurements based on theoretical model and mission experience

    Science.gov (United States)

    Brasoveanu, Dan; Sedlak, Joseph

    1998-01-01

    Analysis of flight data from previous missions indicates that anomalous Sun sensor readings could be caused by Earth albedo interference. A previous Sun sensor study presented a detailed mathematical model of this effect. The model can be used to study the effect of both diffuse and specular reflections and to improve Sun angle determination based on perturbed Sun sensor measurements, satellite position, and an approximate knowledge of attitude. The model predicts that diffusely reflected light can cause errors of up to 10 degrees in Coarse Sun Sensor (CSS) measurements and 5 to 10 arc sec in Fine Sun Sensor (FSS) measurements, depending on spacecraft orbit and attitude. The accuracy of these sensors is affected as long as part of the illuminated Earth surface is present in the sensor field of view. Digital Sun Sensors (DSS) respond in a different manner to Earth albedo interference: most of the time DSS measurements are not affected, but for brief periods the Earth albedo can cause errors which are a multiple of the sensor's least significant bit and may exceed one degree. This paper compares model predictions with Tropical Rainfall Measuring Mission (TRMM) CSS measurements in order to validate and refine the model. Methods of reducing and mitigating the impact of Earth albedo are discussed. The CSS errors are roughly proportional to the Earth albedo coefficient. Photocells that are sensitive only to ultraviolet emissions would reduce the effective Earth albedo by up to a thousand times, virtually eliminating all errors caused by Earth albedo interference.
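
    The diffuse-albedo error mechanism for a cosine-law coarse sun sensor can be sketched as follows; the sensor model and all numbers are hypothetical, chosen only to reproduce an error of the few-degree magnitude the study reports:

```python
import math

def css_angle_error(sun_angle_deg, earth_angle_deg, albedo_signal):
    """Toy cosine-law CSS: the output current is cos(sun angle) plus an
    Earth-shine term proportional to cos(Earth angle) while the lit Earth
    is in the field of view; inverting the cosine law on the perturbed
    output gives the apparent sun angle, and the return value is the
    albedo-induced error in degrees."""
    true_output = math.cos(math.radians(sun_angle_deg))
    measured = true_output + albedo_signal * max(0.0, math.cos(math.radians(earth_angle_deg)))
    measured = min(1.0, measured)  # clamp to the physical range of the cosine law
    apparent_deg = math.degrees(math.acos(measured))
    return sun_angle_deg - apparent_deg

err = css_angle_error(45.0, 30.0, 0.1)  # a few degrees of error
```

    Because the albedo term simply adds to the photocurrent, the resulting angle error scales roughly with the albedo coefficient, matching the proportionality noted in the abstract.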

  6. The Spiritually-Based Organization: A Theoretical Review and its Potential Role in the Third Millennium

    Directory of Open Access Journals (Sweden)

    Anselmo Ferreira Vasconcelos

    Full Text Available This paper examines whether the spiritually-based organization (SBO) can be regarded as an imperative for the third millennium. It draws on the literature on organizational spirituality, psychology of religion, positive psychology, and spiritual leadership theory in order to support its conclusions, and offers some research propositions. Overall, the evidence gathered throughout this paper suggests that the spiritual paradigm is starting to play a key role alongside the concept of SBOs, and it concludes that these topics can be regarded as authentic imperatives for this millennium. Nonetheless, it argues that it is likely to take some time for the spirituality topic to mold, so to speak, organizations' character, even though the spiritual theme is becoming noteworthy. Furthermore, it argues that the logic that has prevailed in business enterprises has been largely economic, apart from some honorable initiatives. The findings also indicate that the material paradigm is not suited to dealing with the germane problems that shape today's world. Finally, it suggests that the concept of the SBO embraces positive changes and, as such, may be conducive to improving people's lives and the planet's health and equilibrium.

  7. Elastic full waveform inversion based on the homogenization method: theoretical framework and 2-D numerical illustrations

    Science.gov (United States)

    Capdeville, Yann; Métivier, Ludovic

    2018-05-01

    Seismic imaging is an efficient tool to investigate the Earth interior. Many of the different imaging techniques currently used, including the so-called full waveform inversion (FWI), are based on limited frequency band data. Such data are not sensitive to the true earth model, but to a smooth version of it. This smooth version can be related to the true model by the homogenization technique. Homogenization for wave propagation in deterministic media with no scale separation, such as geological media, has been recently developed. With such an asymptotic theory, it is possible to compute an effective medium valid for a given frequency band such that effective waveforms and true waveforms are the same up to a controlled error. In this work we make the link between limited frequency band inversion, mainly FWI, and homogenization. We establish the relation between a true model and an FWI result model. This relation is important for a proper interpretation of FWI images. We numerically illustrate, in the 2-D case, that an FWI result is at best the homogenized version of the true model. Moreover, it appears that the homogenized FWI model is quite independent of the FWI parametrization, as long as it has enough degrees of freedom. In particular, inverting for the full elastic tensor is, in each of our tests, always a good choice. We show how the homogenization can help to understand FWI behaviour and help to improve its robustness and convergence by efficiently constraining the solution space of the inverse problem.
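
    The claim above, that band-limited data constrain only a smooth effective version of the true model, can be illustrated with a toy 1-D sketch. A plain running mean stands in for the actual nonperiodic homogenization operator, which is far more elaborate; the layered velocity values are made up for illustration.

```python
def running_mean(model, half_width):
    """Smooth a 1-D model with a centred moving average (edges truncated)."""
    n = len(model)
    out = []
    for i in range(n):
        lo = max(0, i - half_width)
        hi = min(n, i + half_width + 1)
        window = model[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A fine-scale 1-D velocity model (km/s) with layering below the data's
# resolving power (hypothetical values).
true_model = [3.0 if (i // 2) % 2 == 0 else 4.0 for i in range(40)]
effective_model = running_mean(true_model, half_width=4)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rough = variance(true_model)    # fine layering survives in the true model
smooth = variance(effective_model)  # largely averaged out in the effective one
```

    The smoothed model preserves the long-wavelength trend while suppressing the fine layering, which is the intuition behind interpreting an FWI result as (at best) a homogenized model rather than the true one.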

  8. Prediction of stress- and strain-based forming limits of automotive thin sheets by numerical, theoretical and experimental methods

    Science.gov (United States)

    Béres, Gábor; Weltsch, Zoltán; Lukács, Zsolt; Tisza, Miklós

    2018-05-01

    Forming limit is a complex concept of limit values related to the onset of local necking in the sheet metal. In cold sheet metal forming, major and minor limit strains are influenced by the sheet thickness, strain path (deformation history) as well as material parameters and microstructure. Forming Limit Curves are plotted in the ɛ1 - ɛ2 coordinate system, providing the classic strain-based Forming Limit Diagram (FLD). Using an appropriate constitutive model, the limit strains can be converted into the stress-based Forming Limit Diagram (SFLD), irrespective of the strain path. This study concerns the effect of the hardening model parameters on the determination of limit stress values during Nakazima tests for automotive dual phase (DP) steels. Five limit strain pairs were specified experimentally by loading five different sheet geometries, which produced different strain paths from pure shear (-2ɛ2=ɛ1) up to biaxial stretching (ɛ2=ɛ1). The earlier works of Hill, Levy-Tyne and Keeler-Brazier also made possible a theoretical strain determination. This was followed by stress calculation based on the experimental and theoretical strain data. Since the n exponent in the Nádai expression varies with strain for some DP steels, we applied the least-squares method to fit other hardening model parameters (Ludwik, Voce, Hockett-Sherby) to calculate the stress fields belonging to each limit strain. The results showed that each model's parameters could produce some discrepancies between the limit stress states in the range of equivalent strains higher than uniaxial stretching. The fitted hardening models were imported into FE code to extend and validate the results by numerical simulations.
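
    The hardening laws named above can be written in their commonly used forms as short flow-stress functions. All parameter values below are illustrative placeholders, not the fitted DP-steel parameters from the paper.

```python
import math

def nadai(eps, K, n):
    """Nadai/Hollomon power law: sigma = K * eps^n."""
    return K * eps ** n

def ludwik(eps, s0, K, n):
    """Ludwik law: sigma = sigma0 + K * eps^n."""
    return s0 + K * eps ** n

def voce(eps, A, B, C):
    """Voce law: sigma = A - B*exp(-C*eps); saturates at A."""
    return A - B * math.exp(-C * eps)

def hockett_sherby(eps, ss, si, N, p):
    """Hockett-Sherby law: saturates at sigma_s from sigma_i upward."""
    return ss - (ss - si) * math.exp(-N * eps ** p)

# Evaluate two of the laws over a range of plastic strains (made-up parameters).
strains = [0.01 * i for i in range(1, 31)]
voce_curve = [voce(e, A=800.0, B=450.0, C=12.0) for e in strains]
ludwik_curve = [ludwik(e, s0=350.0, K=900.0, n=0.5) for e in strains]
```

    The key practical difference the study exploits is visible here: Voce and Hockett-Sherby saturate at large strains, while power-law forms keep hardening, so extrapolation beyond uniaxial stretching diverges between models.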

  9. Theoretical investigation of the mechanism of tritiated methane dehydrogenation reaction using nickel-based catalysts

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Liang; Li, Jiamao; Deng, Bing; Yang, Yong; Wang, Heyi [Institute of Nuclear Physics and Chemistry, China Academy of Engineering Physics, Mianyang 621900 (China); Li, Weiyi [School of Physics and Chemistry, Xihua University, Chengdu 610065 (China); Li, Shuo, E-mail: lishuo@cqut.edu.cn [School of Chemical Engineering, Chongqing University of Technology, Chongqing 400054 (China); Tan, Zhaoyi, E-mail: tanzhaoyi@caep.cn [Institute of Nuclear Physics and Chemistry, China Academy of Engineering Physics, Mianyang 621900 (China)

    2015-06-15

    Graphical abstract: - Highlights: • Four-step dehydrogenation of CT{sub 4} catalyzed by Ni to form Ni–C by releasing T{sub 2}. • The process of Ni + CT{sub 4} → NiCT{sub 2} + T{sub 2} is more achievable than that of NiCT{sub 2} → NiC + T{sub 2}. • TNiCT → T{sub 2}NiC step is the RDS with the rate constant of k = 2.8 × 10{sup 13} exp(−313,136/RT). • The hydrogen isotope effect value of k{sub H}/k{sub T} is 2.94, and k{sub D}/k{sub T} is 1.39. • CH{sub 4} and CD{sub 4} dehydrogenations are likely to occur, accompanied by the CT{sub 4} cracking. - Abstract: The mechanism of the tritiated methane dehydrogenation reaction catalyzed by a nickel-based catalyst was investigated in detail by density functional theory (DFT) at the B3LYP/[6-311++G(d, p), SDD] level. The computational results indicated that the dehydrogenation of tritiated methane is endothermic. Tritiated methane decomposes over Ni to form Ni-based carbon (Ni–C) through a four-step dehydrogenation accompanied by the release of tritium. After the first and second dehydrogenation steps, Ni + CT{sub 4} formed NiCT{sub 2}. After the third and fourth dehydrogenation steps, NiCT{sub 2} formed NiC. The first and second steps of dehydrogenation occurred on both the singlet and triplet states, and the lowest energy route is Ni + CT{sub 4} → {sup 1}COM → {sup 1}TS1 → {sup 3}IM1 → {sup 3}TS2 → {sup 3}IM2. The third and fourth steps of dehydrogenation occurred on both the singlet and quintet states, and the minimum energy reaction pathway appeared to be IM3 → {sup 1}TS4 → {sup 5}IM4 → {sup 5}TS5 → {sup 5}IM5 → {sup 5}pro + T{sub 2}. The fourth step of dehydrogenation TNiCT → T{sub 2}NiC was the rate-determining step of the entire reaction with the rate constant of k{sub 2} = 2.8 × 10{sup 13} exp(−313,136/RT) (in cm{sup 3} mol{sup −1} s{sup −1}), and its activation energy barrier was calculated to be 51.8 kcal/mol. The Ni-catalyzed CH{sub 4} and CD{sub 4} cracking
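
    The reported rate constant has the standard Arrhenius form and can be evaluated directly. The activation energy in the printed expression (313,136) is assumed here to be in J/mol, which the abstract does not state explicitly; the temperatures are arbitrary illustration points.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T)) for pre-factor A and Ea in J/mol."""
    return A * math.exp(-Ea / (R * T))

# Rate-determining-step constant from the abstract, at two sample temperatures.
k_800 = arrhenius(2.8e13, 313136.0, 800.0)
k_1200 = arrhenius(2.8e13, 313136.0, 1200.0)
```

    As expected for a strongly activated step, the rate constant grows steeply with temperature.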

  10. Theoretical realization of cluster-assembled hydrogen storage materials based on terminated carbon atomic chains.

    Science.gov (United States)

    Liu, Chun-Sheng; An, Hui; Guo, Ling-Ju; Zeng, Zhi; Ju, Xin

    2011-01-14

    The capacity of carbon atomic chains with different terminations for hydrogen storage is studied using first-principles density functional theory calculations. Unlike the physisorption of H(2) on the H-terminated chain, we show that two Li (Na) atoms each capping one end of the odd- or even-numbered carbon chain can hold ten H(2) molecules with optimal binding energies for room temperature storage. The hybridization of the Li 2p states with the H(2)σ orbitals contributes to the H(2) adsorption. However, the binding mechanism of the H(2) molecules on Na arises only from the polarization interaction between the charged Na atom and the H(2). Interestingly, additional H(2) molecules can be bound to the carbon atoms at the chain ends due to the charge transfer between Li 2s2p (Na 3s) and C 2p states. More importantly, dimerization of these isolated metal-capped chains does not affect the hydrogen binding energy significantly. In addition, a single chain can be stabilized effectively by the C(60) fullerenes termination. With a hydrogen uptake of ∼10 wt.% on Li-coated C(60)-C(n)-C(60) (n = 5, 8), the Li(12)C(60)-C(n)-Li(12)C(60) complex, keeping the number of adsorbed H(2) molecules per Li and stabilizing the dispersion of individual Li atoms, can serve as better building blocks of polymers than the (Li(12)C(60))(2) dimer. These findings suggest a new route to design cluster-assembled hydrogen storage materials based on terminated sp carbon chains.
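
    The gravimetric capacities quoted above follow from a simple mass ratio. The sketch below assumes the simplest composition the abstract mentions, a Li-capped C5 chain holding ten H2 molecules (Li2C5 + 10 H2); note the paper's ~10 wt.% figure refers to the larger C60-terminated complexes, so this bare-chain number is only illustrative.

```python
# Standard atomic masses (u).
M_H, M_LI, M_C = 1.008, 6.94, 12.011

def h2_weight_percent(n_h2, host_mass):
    """Gravimetric H2 capacity: mass of adsorbed H2 over total system mass."""
    m_h2 = n_h2 * 2 * M_H
    return 100.0 * m_h2 / (host_mass + m_h2)

host = 2 * M_LI + 5 * M_C          # bare Li2C5 chain (assumed composition)
wt10 = h2_weight_percent(10, host)  # ten H2 as in the abstract
wt5 = h2_weight_percent(5, host)    # half coverage, for comparison
```

    Heavier terminations (such as C60 cages) raise the host mass and pull the weight percentage down, which is why cluster-assembled designs trade capacity for stability.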

  11. Theoretical and methodological bases of studying the symbolization of social and political reality in transit societies

    Directory of Open Access Journals (Sweden)

    O. V. Slavina

    2014-10-01

    Full Text Available This article attempts to form a methodological foundation for exploring the process of symbolic construction of reality in political systems in a state of democratic transition. From the author's point of view, such transit systems are distinguished by characteristic features of a transitional sign-symbolic context, the most significant being the confrontation between symbols of the old and the new, and the rise of public anxiety due to the violation of established values (significant symbols). The result of these processes is the emergence of conditions for an increased capacity to perceive new symbols (re-symbolization), the transmigration of symbolic forms, and the appearance of spontaneous symbolic interactions in the community in the form of political protests, rallies, and panic. In this regard, it is necessary to understand the possibilities of productively managing the collective consciousness in the transit period to achieve the mental solidarity of a concrete society with democratic values. To perform this task, the author develops appropriate tools based on phenomenological theory, Schutz's theory of the constitution of multiple realities, the philosophy of symbolic forms of E. Cassirer, the theory of social construction of P. Berger and T. Luckmann, as well as Lotman's semiotic concept. It is concluded that in the collision of alternative symbolic projects of social order it is advisable to resort to controlled symbolization (the production of special symbolic codes of political legitimation). At the same time it is important to understand the mechanisms of auto-symbolization of the society (the changing of mass consciousness by virtue of the progressive development of the political culture of the people). Careless use of these technologies in countries with non-consolidated democracy may become a factor of destabilization and create conditions for an authoritarian rollback.

  12. Atmospheric radiation environment analyses based-on CCD camera at various mountain altitudes and underground sites

    Directory of Open Access Journals (Sweden)

    Li Cavoli Pierre

    2016-01-01

    Full Text Available The purpose of this paper is to discriminate secondary atmospheric particles and identify muons by measuring the natural radiative environment in atmospheric and underground locations. A CCD camera has been used as a cosmic ray sensor. The Low Noise Underground Laboratory of Rustrel (LSBB, France) gives access to a unique low-noise scientific environment deep enough to ensure screening from the neutron and proton radiative components. Analyses of the charge levels in pixels of the CCD camera induced by radiation events and cartographies of the charge events versus the hit pixel are proposed.
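
    One basic step in the per-pixel charge analysis described above is counting distinct radiation events in a frame. A minimal sketch: treat each 4-connected cluster of pixels whose charge exceeds a threshold as one event. The frame values and threshold below are made up.

```python
from collections import deque

def count_events(frame, threshold):
    """Count 4-connected clusters of above-threshold pixels in a 2-D frame."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    events = 0
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] > threshold and not seen[r][c]:
                events += 1                      # new cluster found
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                     # flood-fill the whole cluster
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return events

# Hypothetical charge frame: one 3-pixel cluster, one 2-pixel cluster, one lone hit.
frame = [
    [1, 1, 9, 9, 1],
    [1, 1, 9, 1, 1],
    [1, 1, 1, 1, 8],
    [7, 1, 1, 1, 8],
]
n_events = count_events(frame, threshold=5)
```

    Real discrimination of muons from other secondaries would additionally look at cluster shape (tracks vs. point-like deposits), not just cluster count.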

  13. Simple Crosscutting Concerns Are Not So Simple : Analysing Variability in Large-Scale Idioms-Based Implementations

    NARCIS (Netherlands)

    Bruntink, M.; Van Deursen, A.; d’Hondt, M.; Tourwé, T.

    2007-01-01

    This paper describes a method for studying idioms-based implementations of crosscutting concerns, and our experiences with it in the context of a real-world, large-scale embedded software system. In particular, we analyse a seemingly simple concern, tracing, and show that it exhibits significant

  14. Systematics of Plant-Pathogenic and Related Streptomyces Species Based on Phylogenetic Analyses of Multiple Gene Loci

    Science.gov (United States)

    The 10 species of Streptomyces implicated as the etiological agents in scab disease of potatoes or soft rot disease of sweet potatoes are distributed among 7 different phylogenetic clades in analyses based on 16S rRNA gene sequences, but high sequence similarity of this gene among Streptomyces speci...

  15. Identification among morphologically similar Argyreia (Convolvulaceae) based on leaf anatomy and phenetic analyses.

    Science.gov (United States)

    Traiperm, Paweena; Chow, Janene; Nopun, Possathorn; Staples, G; Swangpol, Sasivimon C

    2017-12-01

    The genus Argyreia Lour. is one of the species-rich Asian genera in the family Convolvulaceae. Several species complexes were recognized in which taxon delimitation was imprecise, especially when examining herbarium materials without fully developed open flowers. The main goal of this study is to investigate and describe leaf anatomy for some morphologically similar Argyreia using epidermal peeling, leaf and petiole transverse sections, and scanning electron microscopy. Phenetic analyses including cluster analysis and principal component analysis were used to investigate the similarity of these morpho-types. Anatomical differences observed between the morpho-types include epidermal cell walls and the trichome types on the leaf epidermis. Additional differences in the leaf and petiole transverse sections include the epidermal cell shape of the adaxial leaf blade, the leaf margins, and the petiole transverse sectional outline. The phenogram from cluster analysis using the UPGMA method represented four groups with an R value of 0.87. Moreover, the important quantitative and qualitative leaf anatomical traits of the four groups were confirmed by the principal component analysis of the first two components. The results from phenetic analyses confirmed the anatomical differentiation between the morpho-types. Leaf anatomical features regarded as particularly informative for morpho-type differentiation can be used to supplement macro morphological identification.

  16. [Theoretical and methodological bases for formation of future drivers 'readiness to application of physical-rehabilitation technologies].

    Science.gov (United States)

    Yemets, Anatoliy V; Donchenko, Viktoriya I; Scrinick, Eugenia O

    2018-01-01

    Introduction: The experimental work is aimed at introducing theoretical and methodological foundations into the professional training of future physicians. The aim: To identify the dynamics of quantitative and qualitative indicators of the readiness of a specialist in medicine. Materials and methods: The article presents the course and results of experimental work on the conditions for forming the readiness of future specialists in medicine. Results: Methodical bases for studying the disciplines of the general-practice and specialized professional stages of the experimental training of future physicians have been worked out. Conclusions: The training was developed taking into account the peculiarities of future physicians' preparation, with materials for the various stages of experimental implementation in the educational process of higher medical educational institutions.

  17. Photon path distribution and optical responses of turbid media: theoretical analysis based on the microscopic Beer-Lambert law.

    Science.gov (United States)

    Tsuchiya, Y

    2001-08-01

    A concise theoretical treatment has been developed to describe the optical responses of a highly scattering inhomogeneous medium using functions of the photon path distribution (PPD). The treatment is based on the microscopic Beer-Lambert law and has been found to yield a complete set of optical responses by time- and frequency-domain measurements. The PPD is defined for possible photons having a total zigzag pathlength of l between the points of light input and detection. Such a distribution is independent of the absorption properties of the medium and can be uniquely determined for the medium under quantification. Therefore, the PPD can be calculated with an imaginary reference medium having the same optical properties as the medium under quantification except for the absence of absorption. One of the advantages of this method is that the optical responses, the total attenuation, the mean pathlength, etc are expressed by functions of the PPD and the absorption distribution.
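
    The PPD formalism above has a transparent discrete version: for a path-length distribution P(l) computed in the non-absorbing reference medium, the detected intensity is I(mu_a) = sum over l of P(l) exp(-mu_a l), and the mean pathlength equals the derivative of the attenuation A = -ln(I/I0) with respect to mu_a. The sketch below checks that identity numerically; the PPD values are made up.

```python
import math

# Discrete PPD: (total zigzag pathlength l, probability P(l)), summing to 1.
paths = [(10.0, 0.2), (20.0, 0.5), (40.0, 0.3)]

def intensity(mu_a):
    """Detected intensity per the microscopic Beer-Lambert law."""
    return sum(p * math.exp(-mu_a * l) for l, p in paths)

def attenuation(mu_a):
    """Total attenuation A = -ln(I/I0), with I0 the zero-absorption intensity."""
    return -math.log(intensity(mu_a) / intensity(0.0))

def mean_pathlength(mu_a):
    """Absorption-weighted mean pathlength of the detected photons."""
    weighted = [(l, p * math.exp(-mu_a * l)) for l, p in paths]
    total = sum(w for _, w in weighted)
    return sum(l * w for l, w in weighted) / total

mu = 0.05
# Central finite difference of the attenuation should match the mean pathlength.
h = 1e-6
dA = (attenuation(mu + h) - attenuation(mu - h)) / (2 * h)
```

    The attenuation and mean pathlength are exactly the time-domain observables the treatment expresses as functions of the PPD, which is why one absorption-free reference distribution suffices for a whole family of absorption values.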

  18. Evaluation of information-theoretic similarity measures for content-based retrieval and detection of masses in mammograms

    International Nuclear Information System (INIS)

    Tourassi, Georgia D.; Harrawood, Brian; Singh, Swatee; Lo, Joseph Y.; Floyd, Carey E.

    2007-01-01

    The purpose of this study was to evaluate image similarity measures employed in an information-theoretic computer-assisted detection (IT-CAD) scheme. The scheme was developed for content-based retrieval and detection of masses in screening mammograms. The study is aimed toward an interactive clinical paradigm where physicians query the proposed IT-CAD scheme on mammographic locations that are either visually suspicious or indicated as suspicious by other cuing CAD systems. The IT-CAD scheme provides an evidence-based, second opinion for query mammographic locations using a knowledge database of mass and normal cases. In this study, eight entropy-based similarity measures were compared with respect to retrieval precision and detection accuracy using a database of 1820 mammographic regions of interest. The IT-CAD scheme was then validated on a separate database for false positive reduction of progressively more challenging visual cues generated by an existing, in-house mass detection system. The study showed that the image similarity measures fall into one of two categories; one category is better suited to the retrieval of semantically similar cases while the second is more effective with knowledge-based decisions regarding the presence of a true mass in the query location. In addition, the IT-CAD scheme yielded a substantial reduction in false-positive detections while maintaining high detection rate for malignant masses
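
    A minimal sketch of one entropy-based similarity measure of the general kind such schemes employ: mutual information between the grey-level distributions of two images, computed from their joint histogram. The tiny 3x3 "images" are made up; a real scheme works on full mammographic regions of interest and compares several entropy-based variants, as the study describes.

```python
import math

def mutual_information(img_a, img_b):
    """Mutual information (in nats) between two equally sized grey-level images."""
    pix_a = [p for row in img_a for p in row]
    pix_b = [p for row in img_b for p in row]
    n = len(pix_a)
    joint, ca, cb = {}, {}, {}
    for a, b in zip(pix_a, pix_b):          # build joint and marginal histograms
        joint[(a, b)] = joint.get((a, b), 0) + 1
        ca[a] = ca.get(a, 0) + 1
        cb[b] = cb.get(b, 0) + 1
    mi = 0.0
    for (a, b), c in joint.items():         # sum p(a,b) * log(p(a,b)/(p(a)p(b)))
        mi += (c / n) * math.log(c * n / (ca[a] * cb[b]))
    return mi

query = [[0, 0, 1], [1, 2, 2], [0, 1, 2]]
match = [[0, 0, 1], [1, 2, 2], [0, 1, 1]]   # nearly identical to the query
other = [[2, 1, 0], [0, 0, 1], [2, 2, 0]]   # unrelated pattern
mi_match = mutual_information(query, match)
mi_other = mutual_information(query, other)
```

    A retrieval scheme ranks knowledge-base cases by such a score; the study's point is that the best-ranking measure for semantic retrieval need not be the best for the final mass/no-mass decision.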

  19. Activity Based Learning in a Freshman Global Business Course: Analyses of Preferences and Demographic Differences

    Science.gov (United States)

    Levine, Mark F.; Guy, Paul W.

    2007-01-01

    The present study investigates pre-business students' reaction to Activity Based Learning in a lower division core required course entitled Introduction to Global Business in the business curriculum at California State University Chico. The study investigates students' preference for Activity Based Learning in comparison to a more traditional…

  20. Variability Abstractions: Trading Precision for Speed in Family-Based Analyses

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar; Brabrand, Claus; Wasowski, Andrzej

    2015-01-01

    Family-based (lifted) data-flow analysis for Software Product Lines (SPLs) is capable of analyzing all valid products (variants) without generating any of them explicitly. It takes as input only the common code base, which encodes all variants of a SPL, and produces analysis results corresponding...

  1. Novel citation-based search method for scientific literature: application to meta-analyses

    NARCIS (Netherlands)

    Janssens, A.C.J.W.; Gwinn, M.

    2015-01-01

    Background: Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of

  2. Theoretical determination of gamma spectrometry systems efficiency based on probability functions. Application to self-attenuation correction factors

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, Manuel, E-mail: manuel.barrera@uca.es [Escuela Superior de Ingeniería, University of Cadiz, Avda, Universidad de Cadiz 10, 11519 Puerto Real, Cadiz (Spain); Suarez-Llorens, Alfonso [Facultad de Ciencias, University of Cadiz, Avda, Rep. Saharaui s/n, 11510 Puerto Real, Cadiz (Spain); Casas-Ruiz, Melquiades; Alonso, José J.; Vidal, Juan [CEIMAR, University of Cadiz, Avda, Rep. Saharaui s/n, 11510 Puerto Real, Cádiz (Spain)

    2017-05-11

    A generic theoretical methodology for the calculation of the efficiency of gamma spectrometry systems is introduced in this work. The procedure is valid for any type of source and detector and can be applied to determine the full energy peak and the total efficiency of any source-detector system. The methodology is based on the idea of an underlying probability of detection, which describes the physical model for the detection of the gamma radiation in the particular situation studied. This probability depends explicitly on the direction of the gamma radiation, and this dependence allows the development of more realistic and complex models than the traditional models based on point-source integration. The probability function to be employed in practice must reproduce the relevant characteristics of the detection process occurring in the particular situation studied. Once the probability is defined, the efficiency calculations can in general be performed using numerical methods. A Monte Carlo integration procedure is especially useful when complex probability functions are used. The methodology can be used for the direct determination of the efficiency and also for the calculation of corrections that require this determination, as is the case for coincidence summing, geometric or self-attenuation corrections. In particular, we have applied the procedure to obtain some of the classical self-attenuation correction factors usually employed to correct for the sample attenuation of cylindrical geometry sources. The methodology clarifies the theoretical basis and approximations associated with each factor, by making explicit the probability which is generally hidden and implicit in each model. It has been shown that most of these self-attenuation correction factors can be derived by using a common underlying probability, with this probability having a growing level of complexity as it reproduces more precisely
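
    The Monte Carlo integration step can be illustrated with the simplest possible direction-dependent detection probability: 1 when an isotropically emitted photon direction falls inside a cone of half-angle theta0 toward the detector, 0 otherwise. The estimate must then converge to the fractional solid angle (1 - cos(theta0)) / 2. Everything here (the cone model, the angle, the sample size) is a made-up illustration, not the paper's probability functions.

```python
import math
import random

rng = random.Random(42)                 # fixed seed for reproducibility
theta0 = math.radians(30.0)             # hypothetical detector half-angle
cos_cut = math.cos(theta0)

def detected(cos_theta):
    """Toy detection probability: inside the cone toward the detector or not."""
    return cos_theta > cos_cut

# For isotropic emission, cos(theta) is uniform on [-1, 1] (azimuth irrelevant).
n = 200_000
hits = sum(1 for _ in range(n) if detected(rng.uniform(-1.0, 1.0)))
efficiency_mc = hits / n
efficiency_exact = (1.0 - cos_cut) / 2.0
```

    With a realistic probability function (direction-dependent attenuation, extended sources) the analytic value is unavailable, which is precisely when the Monte Carlo estimate earns its keep.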

  3. Analyses of integrated aircraft cabin contaminant monitoring network based on Kalman consensus filter.

    Science.gov (United States)

    Wang, Rui; Li, Yanxiao; Sun, Hui; Chen, Zengqiang

    2017-11-01

    Modern civil aircraft use pressurized, air-ventilated cabins owing to the limited space. In order to monitor multiple contaminants and overcome the hypersensitivity of a single sensor, the paper constructs an output-correction integrated sensor configuration, using sensors with different measurement principles, after comparing it with two other configurations. This proposed configuration works as a node in the distributed wireless sensor network for contaminant monitoring. The corresponding measurement error models of the integrated sensors are also proposed, using the Kalman consensus filter to estimate states and conduct data fusion in order to regulate the single-sensor measurement results. The paper develops a sufficient proof of the stability of the Kalman consensus filter when considering the system and observation noises, and compares the mean estimation and mean consensus errors between the Kalman consensus filter and the local Kalman filter. Numerical examples show the effectiveness of the algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
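
    A highly simplified scalar sketch of the consensus idea: each network node runs a Kalman filter on its own noisy reading of a constant contaminant level and additionally pulls its estimate toward the other nodes' estimates. The gains, noise levels, and all-to-all topology below are made up; the paper's filter handles full state vectors and proves stability conditions this toy ignores.

```python
import random

rng = random.Random(7)                 # fixed seed for reproducibility
TRUE_LEVEL = 5.0                       # hypothetical constant contaminant level
R_MEAS = 0.25                          # measurement noise variance
EPS = 0.05                             # consensus gain

n_nodes = 3
x_hat = [0.0] * n_nodes                # per-node estimates
P = [10.0] * n_nodes                   # per-node estimate variances

for _ in range(100):
    meas = [TRUE_LEVEL + rng.gauss(0.0, R_MEAS ** 0.5) for _ in range(n_nodes)]
    new = []
    for i in range(n_nodes):
        K = P[i] / (P[i] + R_MEAS)                     # scalar Kalman gain
        xi = x_hat[i] + K * (meas[i] - x_hat[i])       # local measurement update
        xi += EPS * sum(x_hat[j] - x_hat[i]            # consensus term
                        for j in range(n_nodes) if j != i)
        new.append(xi)
        P[i] = (1.0 - K) * P[i]        # constant state: no process noise
    x_hat = new

spread = max(x_hat) - min(x_hat)       # disagreement among nodes
```

    The consensus term is what distinguishes this from three independent local Kalman filters: it drives the node estimates together, which is the mean-consensus-error improvement the paper quantifies.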

  4. Toxicity testing and chemical analyses of recycled fibre-based paper for food contact

    DEFF Research Database (Denmark)

    Binderup, Mona-Lise; Pedersen, Gitte Alsing; Vinggaard, Anne

    2002-01-01

    of different qualities as food-contact materials and to perform a preliminary evaluation of their suitability from a safety point of view, and, second, to evaluate the use of different in vitro toxicity tests for screening of paper and board. Paper produced from three different categories of recycled fibres (B...... of the paper products were extracted with either 99% ethanol or water. Potential migrants in the extracts were identified and semiquantified by GC-1R-MS or GC-HRMS. In parallel to the chemical analyses, a battery of four different in vitro toxicity tests with different endpoints were applied to the same...... was less cytotoxic than the extracts prepared from paper made from recycled fibres, and extracts prepared from C was the most cytotoxic. None of the extracts showed mutagenic activity. No conclusion about the oestrogenic activity could be made, because all extracts were cytotoxic to the test organism (yeast...

  5. Aroma profile of Garnacha Tintorera-based sweet wines by chromatographic and sensorial analyses.

    Science.gov (United States)

    Noguerol-Pato, R; González-Álvarez, M; González-Barreiro, C; Cancho-Grande, B; Simal-Gándara, J

    2012-10-15

    The aroma profiles of three Garnacha Tintorera-based wines were studied: a base wine, a naturally sweet wine, and a mixture of the naturally sweet wine with another sweet wine obtained by fortification with spirits. The aroma fingerprint was traced by GC-MS analysis of volatile compounds and by sensorial analysis of odours and tastes. Within the volatile compounds, sotolon (73 μg/L) and acetoin (122 μg/L) were the two main compounds found in the naturally sweet wine. With regard to the odorant series, those most dominant for the Garnacha Tintorera base wine were floral, fruity and spicy. The odorant series most affected by off-vine drying of the grapes were floral, caramelized and vegetal-wood. Finally, the odorant series affected by switching off the alcoholic fermentation with ethanol 96% (v/v) fit for human consumption, followed by oak barrel aging, were caramelized and vegetal-wood. A partial least squares test (PLS-2) was used to detect correlations between sets of sensory data (those obtained with mouth and nose) with the ultimate aim of improving our current understanding of the flavour of Garnacha Tintorera red wines, both base and sweet. Based on the sensory dataset analysis, the descriptors with the highest weight for separating base and sweet wines from Garnacha Tintorera were sweetness, dried fruit and caramel (for sweet wines) vs. bitterness, astringency and geranium (for base wines). Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Evaluating the Accuracy of Results for Teacher Implemented Trial-Based Functional Analyses.

    Science.gov (United States)

    Rispoli, Mandy; Ninci, Jennifer; Burke, Mack D; Zaini, Samar; Hatton, Heather; Sanchez, Lisa

    2015-09-01

    Trial-based functional analysis (TBFA) allows for the systematic and experimental assessment of challenging behavior in applied settings. The purposes of this study were to evaluate a professional development package focused on training three Head Start teachers to conduct TBFAs with fidelity during ongoing classroom routines. To assess the accuracy of the TBFA results, the effects of a function-based intervention derived from the TBFA were compared with the effects of a non-function-based intervention. Data were collected on child challenging behavior and appropriate communication. An A-B-A-C-D design was utilized in which A represented baseline, and B and C consisted of either function-based or non-function-based interventions counterbalanced across participants, and D represented teacher implementation of the most effective intervention. Results showed that the function-based intervention produced greater decreases in challenging behavior and greater increases in appropriate communication than the non-function-based intervention for all three children. © The Author(s) 2015.

  7. Population decay time and distribution of exciton states analyzed by rate equations based on theoretical phononic and electron-collisional rate coefficients

    Science.gov (United States)

    Oki, Kensuke; Ma, Bei; Ishitani, Yoshihiro

    2017-11-01

    Population distributions and transition fluxes of the A exciton in bulk GaN are theoretically analyzed using rate equations of states of the principal quantum number n up to 5 and the continuum. These rate equations consist of the terms of radiative, electron-collisional, and phononic processes. The dependence of the rate coefficients on temperature is revealed on the basis of the collisional-radiative model of hydrogen plasma for the electron-collisional processes and theoretical formulation using Fermi's "golden rule" for the phononic processes. The respective effects of the variations in electron, exciton, and lattice temperatures are exhibited. This analysis is a base of the discussion on nonthermal equilibrium states of carrier-exciton-phonon dynamics. It is found that the exciton dissociation is enhanced even below 150 K mainly by the increase in the lattice temperature. When the thermal-equilibrium temperature increases, the population fluxes between the states of n >1 and the continuum become more dominant. Below 20 K, the severe deviation from the Saha-Boltzmann distribution occurs owing to the interband excitation flux being higher than the excitation flux from the 1 S state. The population decay time of the 1 S state at 300 K is more than ten times longer than the recombination lifetime of excitons with kinetic energy but without the upper levels (n >1 and the continuum). This phenomenon is caused by a shift of population distribution to the upper levels. This phonon-exciton-radiation model gives insights into the limitations of conventional analyses such as the ABC model, the Arrhenius plot, the two-level model (n =1 and the continuum), and the neglect of the upper levels.

  8. A Game Theoretical Approach Based Bidding Strategy Optimization for Power Producers in Power Markets with Renewable Electricity

    Directory of Open Access Journals (Sweden)

    Yi Tang

    2017-05-01

    Full Text Available In a competitive electricity market with substantial involvement of renewable electricity, maximizing profits by optimizing bidding strategies is crucial for different power producers, including conventional power plants and renewable ones. This paper proposes a game-theoretic bidding optimization method based on bi-level programming, where power producers are at the upper level and utility companies are at the lower level. The competition among the multiple power producers is formulated as a non-cooperative game in which bidding curves are their strategies, while uniform clearing pricing is considered for utility companies represented by an independent system operator. Consequently, based on the formulated game model, the bidding strategies for power producers are optimized for the day-ahead market and the intraday market, considering the properties of renewable energy; and the clearing pricing for the utility companies, with respect to the power quantity from different power producers, is optimized simultaneously. Furthermore, a distributed algorithm is provided to search for the solution of the generalized Nash equilibrium. Finally, simulations were performed and the results discussed to verify the feasibility and effectiveness of the proposed non-cooperative game-based bi-level optimization approach.
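
    The lower-level uniform clearing can be illustrated with a toy market: each producer bids a linear marginal-cost curve p = a + b*q, and the clearing price is found (by bisection) where total offered quantity meets a fixed demand. All bid parameters and the demand value are made up, and the full bi-level game around this step is omitted.

```python
# One (a, b) pair per producer: offered price p = a + b*q for quantity q.
bids = [(10.0, 0.5), (15.0, 0.2), (8.0, 1.0)]
DEMAND = 60.0

def supply_at(price):
    """Total quantity offered at a uniform price (no negative output)."""
    return sum(max(0.0, (price - a) / b) for a, b in bids)

def clearing_price(demand, lo=0.0, hi=1000.0, iters=100):
    """Bisect for the price where aggregate supply meets demand."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if supply_at(mid) < demand:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

price = clearing_price(DEMAND)
quantities = [max(0.0, (price - a) / b) for a, b in bids]
```

    In the game itself, each producer anticipates this clearing rule when choosing its bid curve, which is what makes the problem bi-level rather than a single market-clearing computation.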

  9. Analysing Test-Takers’ Views on a Computer-Based Speaking Test

    Directory of Open Access Journals (Sweden)

    Marian Amengual-Pizarro

    2017-11-01

    Full Text Available This study examines test-takers’ views on a computer-delivered speaking test in order to investigate the aspects they consider most relevant in technology-based oral assessment, and to explore the main advantages and disadvantages computer-based tests may offer as compared to face-to-face speaking tests. A small-scale open questionnaire was administered to 80 test-takers who took the APTIS speaking test at the Universidad de Alcalá in April 2016. Results reveal that examinees believe computer-based tests provide a valid measure of oral competence in English and are considered to be an adequate method for the assessment of speaking. Interestingly, the data suggest that personal characteristics of test-takers seem to play a key role in deciding upon the most suitable and reliable delivery mode.

  10. Phylogenetic tree based on complete genomes using fractal and correlation analyses without sequence alignment

    Directory of Open Access Journals (Sweden)

    Zu-Guo Yu

    2006-06-01

    Full Text Available The complete genomes of living organisms have provided much information on their phylogenetic relationships. Similarly, the complete genomes of chloroplasts have helped resolve the evolution of this organelle in photosynthetic eukaryotes. In this review, we describe two algorithms to construct phylogenetic trees based on the theories of fractals and dynamic language using complete genomes. These algorithms were developed by our research group in the past few years. Our distance-based phylogenetic tree of 109 prokaryotes and eukaryotes agrees with the biologists' "tree of life" based on the 16S-like rRNA genes in a majority of basic branchings and most lower taxa. Our phylogenetic analysis also shows that the chloroplast genomes are separated into two major clades corresponding to chlorophytes s.l. and rhodophytes s.l. The interrelationships among the chloroplasts are largely in agreement with the current understanding on chloroplast evolution.
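Alignment-free methods like the one described typically reduce each genome to a numerical vector and feed pairwise distances into a distance-based tree method such as neighbour joining. The fractal and dynamic-language measures used by the authors are more elaborate, but a minimal k-mer-frequency sketch illustrates the idea (illustrative toy sequences, not the authors' data):

```python
from itertools import product

def kmer_freqs(seq, k=2):
    """Relative frequencies of all k-mers over the DNA alphabet."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {km: 0 for km in kmers}
    total = len(seq) - k + 1
    for i in range(total):
        counts[seq[i:i + k]] += 1
    return [counts[km] / total for km in kmers]

def euclidean(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# Pairwise distances like these feed directly into a distance-based
# tree-building method; similar sequences end up closer together.
d_close = euclidean(kmer_freqs("ACGTACGTACGT"), kmer_freqs("ACGTACGAACGT"))
d_far   = euclidean(kmer_freqs("ACGTACGTACGT"), kmer_freqs("GGGGCCCCGGGG"))
assert d_close < d_far
```

No alignment is ever computed, which is what lets such methods scale to complete genomes.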

  11. Numerical Analyses of Subsoil-structure Interaction in Original Non-commercial Software based on FEM

    Science.gov (United States)

    Cajka, R.; Vaskova, J.; Vasek, J.

    2018-04-01

    For decades, attention has been paid to the interaction of foundation structures and subsoil and to the development of interaction models. Since analytical solutions of subsoil-structure interaction can be deduced only for some simple load shapes, analytical solutions are increasingly being replaced by numerical ones (e.g. FEM, the finite element method). Numerical analysis offers greater possibilities for taking into account the real factors involved in subsoil-structure interaction, and it was also used in this article. This makes it possible to design foundation structures more efficiently while remaining reliable and safe. Several software packages can currently deal with the interaction of foundations and subsoil. It has been demonstrated that the non-commercial software MKPINTER (created by Cajka) provides results appropriately close to actual measured values. In MKPINTER, stress-strain analysis of an elastic half-space is performed by means of Gauss numerical integration and the Jacobian of the transformation. Input data for the numerical analysis were obtained from an experimental loading test of a concrete slab. The loading was performed using unique experimental equipment constructed at the Faculty of Civil Engineering, VŠB-TU Ostrava. The purpose of this paper is to compare the resulting deformation of the slab with the values observed during the experimental loading test.
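The Gauss numerical integration mentioned above is the standard workhorse of FEM codes: element integrals are mapped to a reference interval and evaluated at a few optimally placed points, with the Jacobian of the transformation accounting for the change of variables. A minimal one-dimensional sketch (not MKPINTER's actual implementation):

```python
import math

# 2-point Gauss-Legendre rule on [-1, 1]: nodes at ±1/sqrt(3), weights 1.
GAUSS_2PT = [(-1 / math.sqrt(3), 1.0), (1 / math.sqrt(3), 1.0)]

def gauss_integrate(f, a, b):
    """Integrate f over [a, b] by mapping to the reference interval.
    The factor (b - a) / 2 is the Jacobian of the linear map."""
    jac = (b - a) / 2.0
    mid = (a + b) / 2.0
    return jac * sum(w * f(jac * xi + mid) for xi, w in GAUSS_2PT)

# The 2-point rule is exact for polynomials up to degree 3:
approx = gauss_integrate(lambda x: x**3 + 2 * x, 0.0, 2.0)
exact = 2.0**4 / 4 + 2.0**2  # = 8.0
assert abs(approx - exact) < 1e-12
```

In 2-D and 3-D elements the same rule is applied per coordinate direction, and the Jacobian becomes the determinant of the element mapping.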

  12. Sediment Characteristics of Mergui Basin, Andaman Sea based on Multi-proxy Analyses

    Directory of Open Access Journals (Sweden)

    Rina Zuraida

    2018-02-01

    Full Text Available This paper presents the characteristics of sediment from core BS-36 (6°55.85’ S and 96°7.48’ E, 1147.1 m water depth) that was acquired in the Mergui Basin, Andaman Sea. The analyses involved megascopic description, core scanning by a multi-sensor core logger, and carbonate content measurement. The purpose of this study is to determine the physical and chemical characteristics of the sediment to infer the depositional environment. The results show that the core can be divided into 5 lithologic units representing various environmental conditions. The bottom part, Units V and IV, is inferred to have been deposited under suboxic to anoxic bottom conditions combined with high productivity and low precipitation. Unit III was deposited during high precipitation and oxic conditions due to ocean ventilation. The upper part, Units II and I, was deposited during periods of higher precipitation, higher carbonate production, and suboxic to anoxic conditions. Keywords: sediment characteristics, Mergui Basin, Andaman Sea, suboxic, anoxic, oxic, carbonate content

  13. Revised age of deglaciation of Lake Emma based on new radiocarbon and macrofossil analyses

    Science.gov (United States)

    Elias, S.A.; Carrara, P.E.; Toolin, L.J.; Jull, A.J.T.

    1991-01-01

    Previous radiocarbon ages of detrital moss fragments in basal organic sediments of Lake Emma indicated that extensive deglaciation of the San Juan Mountains occurred prior to 14,900 yr B.P. (Carrara et al., 1984). Paleoecological analyses of insect and plant macrofossils from these basal sediments cast doubt on the reliability of the radiocarbon ages. Subsequent accelerator radiocarbon dates of insect fossils and wood fragments indicate an early Holocene age, rather than a late Pleistocene age, for the basal sediments of Lake Emma. These new radiocarbon ages suggest that by at least 10,000 yr B.P. deglaciation of the San Juan Mountains was complete. The insect and plant macrofossils from the basal organic sediments indicate a higher-than-present treeline during the early Holocene. The insect assemblages consisted of about 30% bark beetles, which contrasts markedly with the composition of insects from modern lake sediments and modern specimens collected in the Lake Emma cirque, in which bark beetles comprise only about 3% of the assemblages. In addition, the fossil assemblages contained a number of flightless insect species (not subject to upslope transport by wind) indicative of coniferous forest environments. These insects were likewise absent in the modern assemblage. © 1991.

  14. Is autoimmunology a discipline of its own? A big data-based bibliometric and scientometric analyses.

    Science.gov (United States)

    Watad, Abdulla; Bragazzi, Nicola Luigi; Adawi, Mohammad; Amital, Howard; Kivity, Shaye; Mahroum, Naim; Blank, Miri; Shoenfeld, Yehuda

    2017-06-01

    Autoimmunology is a super-specialty of immunology specifically dealing with autoimmune disorders. To assess the extant literature concerning autoimmune disorders, bibliometric and scientometric analyses (namely, research topic/keyword co-occurrence, journal co-citation, citations, scientific output trends both crude and normalized, author networks, and analyses of leading authors, countries, and organizations) were carried out using open-source software, namely VOSviewer and SciCurve. A corpus of 169,519 articles containing the keyword "autoimmunity" was utilized, with PubMed/MEDLINE as the bibliographic thesaurus. Six journals are specifically devoted to autoimmune disorders, covering approximately 4.15% of the entire scientific production. Compared with the corpus as a whole (which extends back to 1946), these specialized journals were established relatively recently. Top countries were the United States, Japan, Germany, the United Kingdom, Italy, China, France, Canada, Australia, and Israel. Trending topics are the role of microRNAs (miRNAs) in the etiopathogenesis of autoimmune disorders, the contributions of genetics and of epigenetic modifications, the role of vitamins, management during pregnancy, and the impact of gender. New subsets of immune cells have been extensively investigated, with a focus on interleukin production and release and on Th17 cells. Autoimmunology is emerging as a new discipline within immunology, with its own bibliometric properties, an identified scientific community, and specifically devoted journals.
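The keyword co-occurrence analysis mentioned above starts from a simple count of how often two terms appear in the same record; tools such as VOSviewer build their maps from this matrix. A hypothetical sketch (the keyword lists below are invented for illustration):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(docs):
    """Count keyword pairs appearing together in the same record --
    the raw matrix behind co-occurrence maps. Pairs are stored with
    their two terms in sorted order so (a, b) and (b, a) collapse."""
    pairs = Counter()
    for keywords in docs:
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

# Invented keyword lists standing in for indexed article records:
records = [["autoimmunity", "miRNA", "Th17"],
           ["autoimmunity", "miRNA"],
           ["autoimmunity", "vitamin D"]]
pairs = cooccurrence(records)
assert pairs[("autoimmunity", "miRNA")] == 2
```

Edge weights in the resulting keyword network are exactly these counts (often normalized by term frequency before clustering).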

  15. Shielding analysis method applied to nuclear ship 'MUTSU' and its evaluation based on experimental analyses

    International Nuclear Information System (INIS)

    Yamaji, Akio; Miyakoshi, Jun-ichi; Iwao, Yoshiaki; Tsubosaka, Akira; Saito, Tetsuo; Fujii, Takayoshi; Okumura, Yoshihiro; Suzuoki, Zenro; Kawakita, Takashi.

    1984-01-01

    The procedures of shielding analysis used for the shielding modification design of the nuclear ship "MUTSU" are described. The calculations of the radiation distribution on board were made using the Sn codes ANISN and TWOTRAN, the point kernel code QAD, and the Monte Carlo code MORSE. The accuracies of these calculations were investigated through the analysis of various shielding experiments: the shield tank experiment of the nuclear ship "Otto Hahn", the shielding mock-up experiment for "MUTSU" performed in JRR-4, the shielding benchmark experiment using the ¹⁶N radiation facility of AERE Harwell, and the shielding effect experiment on ship structure performed in the training ship "Shintoku-Maru". The values calculated by ANISN agree with the data measured on "Otto Hahn" within a factor of 2 for fast neutrons and within a factor of 3 for epithermal and thermal neutrons. The γ-ray dose rates calculated by QAD agree with the measured values within 30% for the analysis of the experiment in JRR-4. The design values for "MUTSU" were determined on the basis of these experimental analyses. (author)

  16. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    Full Text Available In this study, we used topic modeling to uncover the structure of research topics in the field of Informetrics, to explore the distribution of the topics over the years, and to compare the core journals. To infer the structure of the topics in the field, the papers published in the Journal of Informetrics and Scientometrics from 2007 to 2013 were retrieved from the Web of Science database as input to the topic modeling. The results show that the topic model has the smallest perplexity when the number of topics is set to 10. Although the data scope and analysis methods differ from those of previous studies, the topics generated in this study are consistent with the results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analytic period, and the field showed increasing stability. Both core journals paid broad attention to all of the topics in the field of Informetrics; the Journal of Informetrics put particular emphasis on the construction and application of bibliometric indicators, while Scientometrics focused on the evaluation and the productivity factors of countries, institutions, domains, and journals.
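Perplexity, the criterion used above to choose the number of topics, is the exponentiated negative average log-likelihood of held-out text: a model that predicts unseen documents well has low perplexity. The sketch below computes it for a Laplace-smoothed unigram model rather than a full topic model, purely to illustrate the measure (the token lists are invented):

```python
import math
from collections import Counter

def unigram_perplexity(train_tokens, test_tokens, alpha=1.0):
    """Perplexity of held-out text under a Laplace-smoothed unigram model.
    Lower perplexity = the model predicts the held-out text better; topic
    models are compared the same way, using per-topic word distributions."""
    counts = Counter(train_tokens)
    vocab = set(train_tokens) | set(test_tokens)
    total = len(train_tokens) + alpha * len(vocab)
    log_lik = sum(math.log((counts[w] + alpha) / total) for w in test_tokens)
    return math.exp(-log_lik / len(test_tokens))

train = "citation analysis of journal impact and citation networks".split()
ppl_similar = unigram_perplexity(train, "citation networks and impact".split())
ppl_distant = unigram_perplexity(train, "protein folding energy landscape".split())
assert ppl_similar < ppl_distant
```

Model selection then amounts to picking the candidate (here, the number of topics) whose held-out perplexity is smallest.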

  17. The theoretical study of passive and active optical devices via planewave based transfer (scattering) matrix method and other approaches

    Energy Technology Data Exchange (ETDEWEB)

    Zhuo, Ye [Iowa State Univ., Ames, IA (United States)

    2011-01-01

    In this thesis, we theoretically study electromagnetic wave propagation in several passive and active optical components and devices, including 2-D photonic crystals, straight and curved waveguides, and organic light emitting diodes (OLEDs). Several optical designs are also presented, such as organic photovoltaic (OPV) cells and solar concentrators. The first part of the thesis focuses on theoretical investigation. First, the plane-wave-based transfer (scattering) matrix method (TMM) is briefly described, with a short review of photonic crystals and of other numerical methods used to study them (Chapters 1 and 2). Next, the numerical method itself is investigated in detail and extended to deal with more complex optical systems. In Chapter 3, TMM is extended to curvilinear coordinates to study curved nanoribbon waveguides: the problem of a curved structure is transformed into an equivalent one for a straight structure with spatially dependent tensors of dielectric constant and magnetic permeability. In Chapter 4, a new set of localized basis orbitals is introduced to locally represent the electromagnetic field in photonic crystals as an alternative to the plane-wave basis. The second part of the thesis focuses on the design of optical devices. Two examples of TMM applications are given first: the design of metal grating structures as replacements for ITO to enhance optical absorption in OPV cells (Chapter 6), and the design of the same structures to enhance light extraction from OLEDs (Chapter 7). Next, two design examples using the ray tracing method are given: applying a microlens array to enhance light extraction from OLEDs (Chapter 5) and an all-angle, wide-wavelength design of a solar concentrator (Chapter 8). In summary, this dissertation has extended TMM to make it capable of treating complex optical systems. Several optical designs obtained with TMM and the ray tracing method are also given as a complement.
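The transfer (scattering) matrix method at the heart of this thesis propagates field amplitudes through a structure by multiplying a 2×2 matrix per layer. A minimal single-layer, normal-incidence sketch using the textbook characteristic-matrix formulation (not the plane-wave-based 3-D TMM developed in the thesis):

```python
import cmath
import math

def slab_RT(n_layer, d, wavelength, n_in=1.0, n_out=1.0):
    """Reflectance and transmittance of a lossless dielectric slab at
    normal incidence via the 2x2 characteristic (transfer) matrix."""
    delta = 2 * math.pi * n_layer * d / wavelength   # phase thickness
    m11 = cmath.cos(delta)
    m12 = 1j * cmath.sin(delta) / n_layer
    m21 = 1j * n_layer * cmath.sin(delta)
    m22 = cmath.cos(delta)
    denom = n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22
    r = (n_in * m11 + n_in * n_out * m12 - m21 - n_out * m22) / denom
    t = 2 * n_in / denom
    R = abs(r) ** 2
    T = (n_out / n_in) * abs(t) ** 2
    return R, T

# A half-wave layer (n*d = lambda/2) is transparent, and energy is conserved:
R, T = slab_RT(n_layer=1.5, d=500 / 1.5 / 2, wavelength=500)
assert abs(T - 1.0) < 1e-9 and abs(R + T - 1.0) < 1e-9
```

Multilayer stacks are handled by multiplying the per-layer matrices in order; the plane-wave-based TMM generalizes this idea to full 3-D periodic structures.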

  18. Data analyses and modelling for risk based monitoring of mycotoxins in animal feed

    NARCIS (Netherlands)

    Ine van der Fels-Klerx, H.J.; Adamse, Paulien; Punt, Ans; Asselt, van Esther D.

    2018-01-01

    Following legislation, European Member States should have multi-annual control programs for contaminants, such as mycotoxins, in feed and food. These programs need to be risk-based, implying that the checks are regular and proportional to the estimated risk to animal and human health. This study

  19. Handbook of methods for risk-based analyses of technical specifications

    International Nuclear Information System (INIS)

    Samanta, P.K.; Kim, I.S.; Mankamo, T.; Vesely, W.E.

    1994-12-01

    Technical Specifications (TS) requirements for nuclear power plants define the Limiting Conditions for Operation (LCOs) and Surveillance Requirements (SRs) to assure safety during operation. In general, these requirements are based on deterministic analysis and engineering judgments. Experiences with plant operation indicate that some elements of the requirements are unnecessarily restrictive, while a few may not be conducive to safety. The US Nuclear Regulatory Commission (USNRC) Office of Research has sponsored research to develop systematic risk-based methods to improve various aspects of TS requirements. This handbook summarizes these risk-based methods. The scope of the handbook includes reliability and risk-based methods for evaluating allowed outage times (AOTs), scheduled or preventive maintenance, action statements requiring shutdown where shutdown risk may be substantial, surveillance test intervals (STIs), and management of plant configurations resulting from outages of systems or components. For each topic, the handbook summarizes analytic methods with data needs, outlines the insights to be gained, lists additional references, and gives examples of evaluations.

  20. Analysing a Web-Based E-Commerce Learning Community: A Case Study in Brazil.

    Science.gov (United States)

    Joia, Luiz Antonio

    2002-01-01

    Demonstrates the use of a Web-based participative virtual learning environment for graduate students in Brazil enrolled in an electronic commerce course in a Masters in Business Administration program. Discusses learning communities; computer-supported collaborative work and collaborative learning; influences on student participation; the role of…

  1. Handbook of methods for risk-based analyses of technical specifications

    Energy Technology Data Exchange (ETDEWEB)

    Samanta, P.K.; Kim, I.S. [Brookhaven National Lab., Upton, NY (United States); Mankamo, T. [Avaplan Oy, Espoo (Finland); Vesely, W.E. [Science Applications International Corp., Dublin, OH (United States)

    1994-12-01

    Technical Specifications (TS) requirements for nuclear power plants define the Limiting Conditions for Operation (LCOs) and Surveillance Requirements (SRs) to assure safety during operation. In general, these requirements are based on deterministic analysis and engineering judgments. Experiences with plant operation indicate that some elements of the requirements are unnecessarily restrictive, while a few may not be conducive to safety. The US Nuclear Regulatory Commission (USNRC) Office of Research has sponsored research to develop systematic risk-based methods to improve various aspects of TS requirements. This handbook summarizes these risk-based methods. The scope of the handbook includes reliability and risk-based methods for evaluating allowed outage times (AOTs), scheduled or preventive maintenance, action statements requiring shutdown where shutdown risk may be substantial, surveillance test intervals (STIs), and management of plant configurations resulting from outages of systems, or components. For each topic, the handbook summarizes analytic methods with data needs, outlines the insights to be gained, lists additional references, and gives examples of evaluations.

  2. Group analyses of connectivity-based cortical parcellation using repeated k-means clustering

    NARCIS (Netherlands)

    Nanetti, Luca; Cerliani, Leonardo; Gazzola, Valeria; Renken, Remco; Keysers, Christian

    2009-01-01

    K-means clustering has become a popular tool for connectivity-based cortical segmentation using Diffusion Weighted Imaging (DWI) data. A sometimes ignored issue is, however, that the output of the algorithm depends on the initial placement of starting points, and that different sets of starting
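The initialization dependence this abstract refers to is easy to reproduce: Lloyd's algorithm only finds a local optimum of the within-cluster sum of squares, so different starting centers can yield different partitions, which is why repeated runs (compared by their inertia) are needed. A minimal 1-D sketch with invented data:

```python
def kmeans_1d(points, centers, iters=20):
    """Plain Lloyd's algorithm on 1-D data; returns (centers, inertia)."""
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: (p - centers[i]) ** 2)
            clusters[idx].append(p)
        # Move each center to the mean of its cluster (keep empty ones).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    inertia = sum(min((p - c) ** 2 for c in centers) for p in points)
    return centers, inertia

data = [0, 1, 10, 11, 20, 21]
_, bad = kmeans_1d(data, centers=[0.0, 0.5, 1.0])     # all starts in one clump
_, good = kmeans_1d(data, centers=[0.5, 10.5, 20.5])  # spread-out starts
assert good < bad  # different starting points -> different local optima
```

Repeating the run with many random initializations and keeping the lowest-inertia solution (or, as in the paper, analyzing the stability of solutions across repeats) mitigates this dependence.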

  3. Effects of Computer-Based Training on Procedural Modifications to Standard Functional Analyses

    Science.gov (United States)

    Schnell, Lauren K.; Sidener, Tina M.; DeBar, Ruth M.; Vladescu, Jason C.; Kahng, SungWoo

    2018-01-01

    Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to…

  4. Analyses of Receptive and Productive Korean EFL Vocabulary: Computer-Based Vocabulary Learning Program

    Science.gov (United States)

    Kim, Scott Sungki

    2013-01-01

    The present research study investigated the effects of 8 versions of a computer-based vocabulary learning program on receptive and productive knowledge levels of college students. The participants were 106 male and 103 female Korean EFL students from Kyungsung University and Kwandong University in Korea. Students who participated in versions of…

  5. Statistical analyses of incidents on onshore gas transmission pipelines based on PHMSA database

    International Nuclear Information System (INIS)

    Lam, Chio; Zhou, Wenxing

    2016-01-01

    This article reports statistical analyses of the mileage and pipe-related incidents data corresponding to the onshore gas transmission pipelines in the US between 2002 and 2013 collected by the Pipeline Hazardous Material Safety Administration of the US Department of Transportation. The analysis indicates that there are approximately 480,000 km of gas transmission pipelines in the US, approximately 60% of them more than 45 years old as of 2013. Eighty percent of the pipelines are Class 1 pipelines, and about 20% of the pipelines are Classes 2 and 3 pipelines. It is found that third-party excavation, external corrosion, material failure and internal corrosion are the four leading failure causes, responsible for more than 75% of the total incidents. The 12-year average rate of rupture equals 3.1 × 10⁻⁵ per km-year due to all failure causes combined. External corrosion is the leading cause of ruptures: the 12-year average rupture rate due to external corrosion equals 1.0 × 10⁻⁵ per km-year and is twice the rupture rate due to third-party excavation or material failure. The study provides insights into the current state of gas transmission pipelines in the US and baseline failure statistics for the quantitative risk assessments of such pipelines. - Highlights: • Analyze PHMSA pipeline mileage and incident data between 2002 and 2013. • Focus on gas transmission pipelines. • Leading causes for pipeline failures are identified. • Provide baseline failure statistics for risk assessments of gas transmission pipelines.
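Rates like those quoted are simple exposure-normalized counts: incidents divided by pipeline length times observation period. The incident count below is back-calculated for illustration only, not taken from the PHMSA data:

```python
def failure_rate(n_incidents, length_km, years):
    """Average failure rate per km-year over the observation window."""
    return n_incidents / (length_km * years)

# Hypothetical count chosen to reproduce the order of magnitude quoted
# above for the 12-year all-cause rupture rate (~3.1e-5 per km-year):
rate = failure_rate(n_incidents=179, length_km=480_000, years=12)
assert 3.0e-5 < rate < 3.3e-5
```

Quantitative risk assessments then treat such a rate as the intensity of a Poisson process, so the expected rupture count for a given segment is rate × length × duration.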

  6. White matter disruption in moderate/severe pediatric traumatic brain injury: Advanced tract-based analyses

    Directory of Open Access Journals (Sweden)

    Emily L. Dennis

    2015-01-01

    Full Text Available Traumatic brain injury (TBI is the leading cause of death and disability in children and can lead to a wide range of impairments. Brain imaging methods such as DTI (diffusion tensor imaging are uniquely sensitive to the white matter (WM damage that is common in TBI. However, higher-level analyses using tractography are complicated by the damage and the decreased FA (fractional anisotropy characteristic of TBI, which can result in premature tract endings. We used the newly developed autoMATE (automated multi-atlas tract extraction method to identify differences in WM integrity. 63 pediatric patients aged 8–19 years with moderate/severe TBI were examined with cross-sectional scanning at one or two time points after injury: a post-acute assessment 1–5 months post-injury and a chronic assessment 13–19 months post-injury. A battery of cognitive function tests was performed in the same time periods. 56 children were examined in the first phase, 28 TBI patients and 28 healthy controls. In the second phase 34 children were studied, 17 TBI patients and 17 controls (27 participants completed both post-acute and chronic phases. We did not find any significant group differences in the post-acute phase. Chronically, we found extensive group differences, mainly for mean and radial diffusivity (MD and RD. In the chronic phase, we found higher MD and RD across a wide range of WM. Additionally, we found correlations between these WM integrity measures and cognitive deficits. This suggests a distributed pattern of WM disruption that continues over the first year following a TBI in children.

  7. Geology of Southern Guinevere Planitia, Venus, based on analyses of Goldstone radar data

    International Nuclear Information System (INIS)

    Arvidson, R.E.; Plaut, J.J.; Jurgens, R.F.; Saunders, R.S.; Slade, M.A.

    1989-01-01

    The ensemble of 41 backscatter images of Venus acquired by the S-band (12.6 cm) Goldstone radar system covers approx. 35 million km² and includes the equatorial portion of Guinevere Planitia, Navka Planitia, Heng-O Chasma, and Tinatin Planitia, and parts of Devana Chasma and Phoebe Regio. The images and associated altimetry data combine relatively high spatial resolution (1 to 10 km) with small incidence angles (less than 10 deg) for regions not covered by either Venera Orbiter or Arecibo radar data. Systematic analyses of the Goldstone data show that: (1) Volcanic plains dominate, including groups of small volcanic constructs, radar-bright flows on a NW-SE arm of Phoebe Regio and on Ushas Mons, and circular volcano-tectonic depressions; (2) Some of the regions imaged by Goldstone have high radar cross sections, including the flows on Ushas Mons and the NW-SE arm of Phoebe Regio, and several other unnamed hills, ridged terrains, and plains areas; (3) A 1000 km diameter multiringed structure is observed and appears to have a morphology not observed in Venera data (the northern section corresponds to Heng-O Chasma); (4) A 150 km wide, 2 km deep, 1400 km long rift valley with upturned flanks is located on the western flank of Phoebe Regio and extends into Devana Chasma; (5) A number of structures can be discerned in the Goldstone data, mainly trending NW-SE and NE-SW, directions similar to those discerned in Pioneer-Venus topography throughout the equatorial region; and (6) The abundance of circular and impact features is similar to the plains global average defined from Venera and Arecibo data, implying that the terrain imaged by Goldstone has typical crater retention ages, measured in hundreds of millions of years. The rate of resurfacing is less than or equal to 4 km/Ga

  8. Intra-specific genetic relationship analyses of Elaeagnus angustifolia based on RP-HPLC biochemical markers

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Elaeagnus angustifolia Linn. has various ecological, medicinal and economic uses. An approach was established using RP-HPLC (reversed-phase high-performance liquid chromatography) to classify and analyse the intra-specific genetic relationships of seventeen populations of E. angustifolia collected from the Xinjiang areas of China. Chromatograms of alcohol-soluble proteins produced by the seventeen populations of E. angustifolia were compared. Each chromatogram of alcohol-soluble proteins came from a single seed of one wild plant only. The results showed that when using a Waters Delta Pak C18, 5 μm particle size reversed-phase column (150 mm × 3.9 mm), a linear gradient of 25%~60% solvent B with a flow rate of 1 ml/min and a run time of 67 min, the chromatography yielded optimum separation of E. angustifolia alcohol-soluble proteins. Representative peaks in each population were chosen according to peak area and occurrence in every seed. The converted data on the elution peaks of each population were different and could be used to represent those populations. GSC (genetic similarity coefficients) of 41% to 62% showed a medium degree of genetic diversity among the populations in these eco-areas. Cluster analysis showed that the seventeen populations of E. angustifolia could be divided into six clusters at the GSC=0.535 level and indicated the general and unique biochemical markers of these clusters. We suggest that the E. angustifolia distribution in these eco-areas could be classified into six variable species. RP-HPLC was shown to be a rapid, repeatable and reliable method for E. angustifolia classification and identification and for analysis of genetic diversity.
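Cutting a dendrogram at a fixed similarity level, as done here at GSC = 0.535, is equivalent (for single linkage) to merging every pair of populations whose similarity meets the threshold. A sketch with invented GSC values for four hypothetical populations:

```python
def cluster_at_similarity(names, sim, cutoff):
    """Single-linkage grouping via union-find: merge any two populations
    whose pairwise genetic similarity coefficient meets the cutoff."""
    parent = {n: n for n in names}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for (a, b), s in sim.items():
        if s >= cutoff:
            parent[find(a)] = find(b)
    groups = {}
    for n in names:
        groups.setdefault(find(n), []).append(n)
    return list(groups.values())

# Hypothetical GSC values (the paper's 17x17 matrix is not reproduced):
sim = {("P1", "P2"): 0.61, ("P1", "P3"): 0.45, ("P2", "P3"): 0.48,
       ("P1", "P4"): 0.42, ("P2", "P4"): 0.44, ("P3", "P4"): 0.58}
clusters = cluster_at_similarity(["P1", "P2", "P3", "P4"], sim, cutoff=0.535)
assert sorted(sorted(c) for c in clusters) == [["P1", "P2"], ["P3", "P4"]]
```

Raising the cutoff splits groups apart; lowering it merges them, which is how the six-cluster solution emerges at one particular GSC level.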

  9. Improving the safety of a body composition analyser based on the PGNAA method

    Energy Technology Data Exchange (ETDEWEB)

    Miri-Hakimabad, Hashem; Izadi-Najafabadi, Reza; Vejdani-Noghreiyan, Alireza; Panjeh, Hamed [FUM Radiation Detection And Measurement Laboratory, Ferdowsi University of Mashhad (Iran, Islamic Republic of)

    2007-12-15

    The ²⁵²Cf radioisotope and ²⁴¹Am-Be are intense neutron emitters that are readily encapsulated in compact, portable and sealed sources. Features such as a high neutron emission flux and a reliable neutron spectrum make these sources suitable for the prompt gamma neutron activation analysis (PGNAA) method. The PGNAA method can be used in medicine for neutron radiography and body chemical composition analysis. However, ²⁵²Cf and ²⁴¹Am-Be sources not only generate neutrons but are also intense gamma emitters. Furthermore, the sample in medical applications is a human body, so it may be exposed to this gamma-ray bombardment. Moreover, accumulations of these high-rate gamma-rays in the detector volume cause simultaneous pulses that can pile up and distort the spectra in the region of interest (ROI). In order to remove these disadvantages in a practical way without being concerned about losing thermal neutron flux, a gamma-ray filter made of Pb must be employed. The paper suggests a relatively safe body chemical composition analyser (BCCA) machine that uses a spherical Pb shield enclosing the neutron source. The gamma-ray shielding effects and the optimum radius of the spherical Pb shield have been investigated using the MCNP-4C code and compared with the unfiltered case, the bare source. Finally, experimental results demonstrate that an optimised gamma-ray shield for the neutron source in a BCCA can effectively reduce the risk of exposure to the ²⁵²Cf and ²⁴¹Am-Be sources.

  10. Ecogeographical associations between climate and human body composition: analyses based on anthropometry and skinfolds.

    Science.gov (United States)

    Wells, Jonathan C K

    2012-02-01

    In the 19th century, two "ecogeographical rules" were proposed hypothesizing associations of climate with mammalian body size and proportions. Data on human body weight and relative leg length support these rules; however, it is unknown whether such associations are attributable to lean tissue (the heat-producing component) or fat (energy stores). Data on weight, height, and two skinfold thicknesses were obtained from the literature for 137 nonindustrialized populations, providing 145 male and 115 female individual samples. A variety of indices of adiposity and lean mass were analyzed. Preliminary analyses indicated secular increases in skinfolds in men but not women, and associations of age and height with lean mass in both sexes. Decreasing annual temperature was associated with increasing body mass index (BMI), and increasing triceps but not subscapular skinfold. After adjusting for skinfolds, decreasing temperature remained associated with increasing BMI. These results indicate that colder environments favor both greater peripheral energy stores and greater lean mass. Contrasting results for triceps and subscapular skinfolds might be due to adaptive strategies either constraining central adiposity in cold environments to reduce cardiovascular risk, or favoring central adiposity in warmer environments to maintain energetic support of the immune system. Polynesian populations were analyzed separately and contradicted all of the climate trends, supporting the hypothesis that they are cold-adapted despite occupying a tropical region. It is unclear whether such associations emerge through natural selection or through trans-generational and life-course plasticity. These findings nevertheless aid understanding of the wide variability in human physique and adiposity. Copyright © 2011 Wiley Periodicals, Inc.

  11. Quantification of uncertainties in turbulence modeling: A comparison of physics-based and random matrix theoretic approaches

    International Nuclear Information System (INIS)

    Wang, Jian-Xun; Sun, Rui; Xiao, Heng

    2016-01-01

    Highlights: • Compared physics-based and random matrix methods to quantify RANS model uncertainty. • Demonstrated applications of both methods in channel flow over periodic hills. • Examined the amount of information introduced in the physics-based approach. • Discussed implications for modeling turbulence in both near-wall and separated regions. - Abstract: Numerical models based on Reynolds-Averaged Navier-Stokes (RANS) equations are widely used in engineering turbulence modeling. However, the RANS predictions have large model-form uncertainties for many complex flows, e.g., those with non-parallel shear layers or strong mean flow curvature. Quantification of these large uncertainties originating from the modeled Reynolds stresses has attracted attention in the turbulence modeling community. Recently, a physics-based Bayesian framework for quantifying model-form uncertainties has been proposed, with successful applications to several flows. Nonetheless, how to specify proper priors without introducing unwarranted, artificial information remains challenging for the current form of the physics-based approach. Another recently proposed method based on random matrix theory provides prior distributions with maximum entropy, which is an alternative for model-form uncertainty quantification in RANS simulations. This method has greater mathematical rigor and provides the most non-committal prior distributions without introducing artificial constraints. On the other hand, the physics-based approach has the advantage of being more flexible in incorporating available physical insights. In this work, we compare and discuss the advantages and disadvantages of the two approaches to model-form uncertainty quantification. In addition, we utilize the random matrix theoretic approach to assess and possibly improve the specification of priors used in the physics-based approach. The comparison is conducted through a test case using a canonical flow, the flow past

  12. A bead-based western for high-throughput cellular signal transduction analyses

    Science.gov (United States)

    Treindl, Fridolin; Ruprecht, Benjamin; Beiter, Yvonne; Schultz, Silke; Döttinger, Anette; Staebler, Annette; Joos, Thomas O.; Kling, Simon; Poetz, Oliver; Fehm, Tanja; Neubauer, Hans; Kuster, Bernhard; Templin, Markus F.

    2016-01-01

    Dissecting cellular signalling requires the analysis of a large number of proteins. The DigiWest approach we describe here transfers the western blot to a bead-based microarray platform. By combining gel-based protein separation with immobilization on microspheres, hundreds of replicas of the initial blot are created, thus enabling the comprehensive analysis of limited material, such as cells collected by laser capture microdissection, and extending traditional western blotting to reach proteomic scales. The combination of molecular weight resolution, sensitivity and signal linearity on an automated platform enables the rapid quantification of hundreds of specific proteins and protein modifications in complex samples. This high-throughput western blot approach allowed us to identify and characterize alterations in cellular signal transduction that occur during the development of resistance to the kinase inhibitor Lapatinib, revealing major changes in the activation state of Ephrin-mediated signalling and a central role for p53-controlled processes. PMID:27659302

  13. Critical experiments analyses by using 70 energy group library based on ENDF/B-VI

    Energy Technology Data Exchange (ETDEWEB)

    Tahara, Yoshihisa; Matsumoto, Hideki [Mitsubishi Heavy Industries Ltd., Yokohama (Japan). Nuclear Energy Systems Engineering Center; Huria, H.C.; Ouisloumen, M.

    1998-03-01

    The newly developed 70-group library has been validated by comparing k-infinity values from the continuous-energy Monte Carlo code MCNP and the two-dimensional spectrum calculation code PHOENIX-CP, which employs the Discrete Angular Flux Method based on collision probability. The library has also been validated against a large number of critical experiments and numerical benchmarks for assemblies with MOX and Gd fuels. (author)
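Library-validation discrepancies like the k-infinity comparison above are commonly reported as a reactivity difference in pcm. A minimal sketch of that conversion; the multiplication factors below are invented for illustration and are not results from the study:

```python
# Illustrative only: express the difference between two multiplication
# factors as a reactivity difference in pcm (per cent mille), a common
# way to report library-validation discrepancies.

def reactivity_diff_pcm(k_ref: float, k_test: float) -> float:
    """Reactivity difference (test minus reference) in pcm: 1e5 * (1/k_ref - 1/k_test)."""
    return 1e5 * (1.0 / k_ref - 1.0 / k_test)

k_mcnp = 1.18250     # hypothetical continuous-energy Monte Carlo reference
k_phoenix = 1.18105  # hypothetical 70-group deterministic result

print(f"{reactivity_diff_pcm(k_mcnp, k_phoenix):+.1f} pcm")
```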

  14. Data Analyses and Modelling for Risk Based Monitoring of Mycotoxins in Animal Feed

    Directory of Open Access Journals (Sweden)

    H.J. (Ine van der Fels-Klerx

    2018-01-01

    Following legislation, European Member States should have multi-annual control programs for contaminants, such as mycotoxins, in feed and food. These programs need to be risk based, implying the checks are regular and proportional to the estimated risk for animal and human health. This study aimed to prioritize feed products in the Netherlands for deoxynivalenol and aflatoxin B1 monitoring. Historical mycotoxin monitoring results from the period 2007–2016 were combined with data from other sources. Based on occurrence, groundnuts had high priority for aflatoxin B1 monitoring; some feed materials (maize and maize products and several oil seed products) and complete/complementary feed excluding dairy cattle and young animals had medium priority; and all other animal feeds and feed materials had low priority. For deoxynivalenol, maize by-products had a high priority, complete and complementary feed for pigs had a medium priority, and all other feed and feed materials a low priority. Also including health consequence estimations showed that the feed materials that ranked highest for aflatoxin B1 included sunflower seed and palmkernel expeller/extracts and maize. For deoxynivalenol, maize products were ranked highest, followed by various small grain cereals (products); all other feed materials were of lower concern. Results of this study have proven to be useful in setting up the annual risk-based control program for mycotoxins in animal feed and feed materials.
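The prioritization described above can be sketched as a simple scoring scheme. The occurrence rates, consequence weights, and thresholds below are all invented for illustration; they are not the study's actual model or data:

```python
# Hypothetical sketch of risk-based prioritization: each feed product
# gets a score from an assumed contamination occurrence rate and a
# health-consequence weight; thresholds split products into
# high/medium/low monitoring priority. All numbers are invented.

def priority(occurrence_rate: float, consequence_weight: float) -> str:
    score = occurrence_rate * consequence_weight
    if score >= 0.10:
        return "high"
    if score >= 0.02:
        return "medium"
    return "low"

feeds = {
    "groundnuts":        (0.25, 0.9),  # frequent findings, high consequence
    "maize by-products": (0.10, 0.5),
    "grass products":    (0.01, 0.2),
}

ranking = {name: priority(*vals) for name, vals in feeds.items()}
print(ranking)
```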

  15. Failure probability analyses for PWSCC in Ni-based alloy welds

    International Nuclear Information System (INIS)

    Udagawa, Makoto; Katsuyama, Jinya; Onizawa, Kunio; Li, Yinsheng

    2015-01-01

    A number of cracks due to primary water stress corrosion cracking (PWSCC) in pressurized water reactors and Ni-based alloy stress corrosion cracking (NiSCC) in boiling water reactors have been detected around Ni-based alloy welds. The causes of crack initiation and growth due to stress corrosion cracking include weld residual stress, operating stress, the materials, and the environment. We have developed the analysis code PASCAL-NP for calculating the failure probability and assessing the structural integrity of cracked components on the basis of probabilistic fracture mechanics (PFM), considering PWSCC and NiSCC. This PFM analysis code has functions for calculating the incubation time of PWSCC and NiSCC crack initiation, evaluating crack growth behavior considering certain crack location and orientation patterns, and evaluating failure behavior near Ni-based alloy welds due to PWSCC and NiSCC in a probabilistic manner. Herein, actual plants affected by PWSCC have been analyzed using PASCAL-NP. Failure probabilities calculated by PASCAL-NP are in reasonable agreement with the detection data. Furthermore, useful knowledge related to leakage due to PWSCC was obtained through parametric studies using this code.

  16. Financial and Performance Analyses of Microcontroller Based Solar-Powered Autorickshaw for a Developing Country

    Directory of Open Access Journals (Sweden)

    Abu Raihan Mohammad Siddique

    2016-01-01

    This paper presents a case study examining the economic viability and performance of a microcontroller-based solar-powered battery-operated autorickshaw (m-SBAR) for developing countries, which is compared with different types of rickshaws available in Bangladesh, such as the pedal rickshaw (PR), battery-operated autorickshaw (BAR), and solar-powered battery-operated autorickshaw (SBAR). The BAR consists of a rickshaw structure, a battery bank, a battery charge controller, a DC motor driver, and a DC motor, whereas the proposed m-SBAR contains additional components such as a solar panel and a microcontroller-based DC motor driver. The complete design considered the local radiation data and the load profile of the proposed m-SBAR. The Levelized Cost of Energy (LCOE) analysis, Net Present Worth, payback period, and Benefit-to-Cost Ratio methods have been used to evaluate the financial feasibility and sensitivity of the m-SBAR, grid-powered BAR, and PR. The numerical analysis reveals that the LCOE and Benefit-to-Cost Ratio of the proposed m-SBAR are lower compared to the grid-powered BAR. It has also been found that the microcontroller-based DC motor control circuit reduces the battery discharge rate, improves battery life, and controls motor speed efficiently.
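The LCOE and benefit-to-cost metrics named above reduce to discounted cash-flow arithmetic. A minimal sketch using a standard formulation; all cost, energy, and rate figures are hypothetical placeholders, not the study's numbers:

```python
# Minimal discounted cash-flow sketch of two of the financial metrics
# mentioned above: levelized cost of energy (LCOE) and the
# benefit-to-cost ratio (BCR). All figures are invented.

def npv(cashflows, rate):
    """Net present value of year-indexed cash flows (index 0 = today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def lcoe(capital, annual_om, annual_energy_kwh, rate, years):
    """Levelized cost of energy: discounted costs / discounted energy."""
    costs = npv([capital] + [annual_om] * years, rate)
    energy = npv([0] + [annual_energy_kwh] * years, rate)
    return costs / energy

def benefit_cost_ratio(annual_benefit, capital, annual_om, rate, years):
    benefits = npv([0] + [annual_benefit] * years, rate)
    costs = npv([capital] + [annual_om] * years, rate)
    return benefits / costs

# Hypothetical solar-assisted rickshaw numbers: $1200 capital, $50/yr
# O&M, 1500 kWh/yr delivered, $400/yr fare benefit, 8% discount, 10 yr.
print(f"LCOE: {lcoe(1200, 50, 1500, 0.08, 10):.3f} $/kWh")
print(f"BCR : {benefit_cost_ratio(400, 1200, 50, 0.08, 10):.2f}")
```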

  17. Data Analyses and Modelling for Risk Based Monitoring of Mycotoxins in Animal Feed

    Science.gov (United States)

    van der Fels-Klerx, H.J. (Ine); Adamse, Paulien; Punt, Ans; van Asselt, Esther D.

    2018-01-01

    Following legislation, European Member States should have multi-annual control programs for contaminants, such as for mycotoxins, in feed and food. These programs need to be risk based implying the checks are regular and proportional to the estimated risk for animal and human health. This study aimed to prioritize feed products in the Netherlands for deoxynivalenol and aflatoxin B1 monitoring. Historical mycotoxin monitoring results from the period 2007–2016 were combined with data from other sources. Based on occurrence, groundnuts had high priority for aflatoxin B1 monitoring; some feed materials (maize and maize products and several oil seed products) and complete/complementary feed excluding dairy cattle and young animals had medium priority; and all other animal feeds and feed materials had low priority. For deoxynivalenol, maize by-products had a high priority, complete and complementary feed for pigs had a medium priority and all other feed and feed materials a low priority. Also including health consequence estimations showed that feed materials that ranked highest for aflatoxin B1 included sunflower seed and palmkernel expeller/extracts and maize. For deoxynivalenol, maize products were ranked highest, followed by various small grain cereals (products); all other feed materials were of lower concern. Results of this study have proven to be useful in setting up the annual risk based control program for mycotoxins in animal feed and feed materials. PMID:29373559

  18. Detection System of HTTP DDoS Attacks in a Cloud Environment Based on Information Theoretic Entropy and Random Forest

    Directory of Open Access Journals (Sweden)

    Mohamed Idhammad

    2018-01-01

    Cloud Computing services are often delivered through the HTTP protocol. This facilitates access to services and reduces costs for both providers and end-users. However, it also increases the vulnerability of Cloud services to HTTP DDoS attacks. HTTP request methods are often used to exploit web servers' vulnerabilities and create multiple HTTP DDoS attack scenarios, such as Low-and-Slow or Flooding attacks. Existing HTTP DDoS detection systems are challenged by the large volumes of network traffic generated by these attacks, low detection accuracy, and high false positive rates. In this paper we present a detection system for HTTP DDoS attacks in a Cloud environment based on Information Theoretic Entropy and the Random Forest ensemble learning algorithm. A time-based sliding window algorithm is used to estimate the entropy of the network header features of the incoming network traffic. When the estimated entropy exceeds its normal range, the preprocessing and classification tasks are triggered. To assess the proposed approach, various experiments were performed on the CIDDS-001 public dataset. The proposed approach achieves satisfactory results with an accuracy of 99.54%, a FPR of 0.4%, and a running time of 18.5 s.
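The entropy-triggering step described above can be sketched directly: slide a window over one header feature, compute its Shannon entropy, and flag windows outside a normal range. The window size, bounds, and traffic below are invented for illustration, not taken from the paper:

```python
# Sketch of entropy-triggered detection: a sliding window over a header
# feature (here, source IP strings) whose Shannon entropy is compared
# against an assumed normal range. A flood from a single source
# collapses the window's entropy and raises a flag.
import math
from collections import Counter, deque

def shannon_entropy(values) -> float:
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_alerts(feature_stream, window=8, low=1.5, high=2.9):
    """Yield (stream_index, entropy, out_of_range) per full window."""
    win = deque(maxlen=window)
    for i, v in enumerate(feature_stream):
        win.append(v)
        if len(win) == window:
            h = shannon_entropy(win)
            yield i, h, not (low <= h <= high)

normal = ["10.0.0.%d" % (i % 6) for i in range(8)]  # mixed sources
attack = ["10.9.9.9"] * 8                           # single-source flood
flags = [flag for _, _, flag in entropy_alerts(normal + attack)]
print(flags[0], flags[-1])  # first window in range, last window flagged
```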

  19. A theoretical investigation on the neutral Cu(I) phosphorescent complexes with azole-based and phosphine mixed ligand

    Science.gov (United States)

    Ding, Xiao-Li; Shen, Lu; Zou, Lu-Yi; Ma, Ming-Shuo; Ren, Ai-Min

    2018-04-01

    A theoretical study on a series of neutral heteroleptic Cu(I) complexes with different azole-pyridine-based N^N ligands has been presented to gain insight into the effect of the various nitrogen atoms in the azole ring on photophysical properties. The results reveal that the highest occupied molecular orbital (HOMO) levels and the emission wavelengths of these complexes are mainly governed by the number of nitrogen atoms in the azole ring. With an increasing number of nitrogen atoms, the electron density distribution of the HOMO gradually extends from the N^N ligand to the whole molecule; meanwhile, the improved contribution from Cu(d) orbitals to the HOMO results in an effective mixing of various charge transfer modes, and hence a fast radiative decay rate (kr) and a slow non-radiative decay rate (knr) are achieved. The photoluminescence quantum yield (PLQY) shows an apparent dependence on the number of nitrogen atoms in the five-membered nitrogen heterocycles. However, an increasing number of nitrogen atoms is not necessarily sufficient for increasing the PLQY. Complex 3, with a 1,2,4-triazole-pyridine-based N^N ligand, is considered to be a potential emitter with high phosphorescence efficiency. Finally, we hope that our investigations will contribute to a systematic understanding of, and guidance for, molecular engineering of such materials.

  20. Theoretical Design and First Test in Laboratory of a Composite Visual Servo-Based Target Spray Robotic System

    Directory of Open Access Journals (Sweden)

    Dongjie Zhao

    2016-01-01

    In order to spray onto the canopy of an interval-planted crop, an approach using a target spray robot with a composite visual servo system, based on monocular scene vision and monocular eye-in-hand vision, was proposed. The scene camera was used to roughly locate the target crop, and image-processing methods for background segmentation, crop canopy centroid extraction, and 3D positioning were studied. The eye-in-hand camera was used to precisely determine the spray position of each crop. Based on the center and area of the 2D minimum enclosing circle (MEC) of the crop canopy, a method to calculate the spray position and spray time was determined. In addition, the locating algorithm for the MEC center in the nozzle reference frame and the hand-eye calibration matrix were studied. The process of the mechanical arm guiding the nozzle to spray was divided into three stages: reset, alignment, and hovering spray, and the servo method of each stage was investigated. For preliminary verification of the theoretical studies, a simplified experimental prototype containing one spray mechanical arm was built and performance tests were carried out in a controlled laboratory environment. The results showed that the prototype could achieve the effect of "spraying while moving and accurately spraying on target."
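The 2D minimum enclosing circle used above to derive the spray point can be computed with Welzl's randomized algorithm. A self-contained sketch (the abstract does not say which MEC algorithm the authors used, and the canopy points below are invented):

```python
# Minimum enclosing circle (MEC) of 2D canopy points via the standard
# incremental Welzl algorithm; the circle center is the spray point and
# the radius indicates coverage. Canopy coordinates are hypothetical.
import math, random

def _circle_two(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2, math.dist(a, b) / 2)

def _circle_three(a, b, c):
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:  # collinear: circle on the widest pair
        return max((_circle_two(p, q) for p, q in ((a, b), (b, c), (a, c))),
                   key=lambda circ: circ[2])
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy, math.dist((ux, uy), a))

def _inside(c, p, eps=1e-9):
    return math.dist((c[0], c[1]), p) <= c[2] + eps

def min_enclosing_circle(points):
    pts = list(points)
    random.shuffle(pts)  # expected linear time
    c = (pts[0][0], pts[0][1], 0.0)
    for i, p in enumerate(pts):
        if _inside(c, p):
            continue
        c = (p[0], p[1], 0.0)          # p must lie on the boundary
        for j, q in enumerate(pts[:i]):
            if _inside(c, q):
                continue
            c = _circle_two(p, q)      # p and q on the boundary
            for r in pts[:j]:
                if not _inside(c, r):
                    c = _circle_three(p, q, r)
    return c  # (center_x, center_y, radius)

canopy = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.5), (1.0, 0.4)]
cx, cy, r = min_enclosing_circle(canopy)
print(f"spray at ({cx:.2f}, {cy:.2f}), radius {r:.2f}")
```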

  1. Assessing an organizational culture instrument based on the Competing Values Framework: Exploratory and confirmatory factor analyses

    Science.gov (United States)

    Helfrich, Christian D; Li, Yu-Fang; Mohr, David C; Meterko, Mark; Sales, Anne E

    2007-01-01

    Background The Competing Values Framework (CVF) has been widely used in health services research to assess organizational culture as a predictor of quality improvement implementation, employee and patient satisfaction, and team functioning, among other outcomes. CVF instruments generally are presented as well-validated with reliable aggregated subscales. However, only one study in the health sector has been conducted for the express purpose of validation, and that study population was limited to hospital managers from a single geographic locale. Methods We used exploratory and confirmatory factor analyses to examine the underlying structure of data from a CVF instrument. We analyzed cross-sectional data from a work environment survey conducted in the Veterans Health Administration (VHA). The study population comprised all staff in non-supervisory positions. The survey included 14 items adapted from a popular CVF instrument, which measures organizational culture according to four subscales: hierarchical, entrepreneurial, team, and rational. Results Data from 71,776 non-supervisory employees (approximate response rate 51%) from 168 VHA facilities were used in this analysis. Internal consistency of the subscales was moderate to strong (α = 0.68 to 0.85). However, the entrepreneurial, team, and rational subscales had higher correlations across subscales than within, indicating poor divergent properties. Exploratory factor analysis revealed two factors, comprising the ten items from the entrepreneurial, team, and rational subscales loading on the first factor, and two items from the hierarchical subscale loading on the second factor, along with one item from the rational subscale that cross-loaded on both factors. Results from confirmatory factor analysis suggested that the two-subscale solution provides a more parsimonious fit to the data as compared to the original four-subscale model. Conclusion This study suggests that there may be problems applying conventional
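The internal-consistency figures quoted above (α = 0.68 to 0.85) are Cronbach's alpha values. A minimal computation on made-up item responses (rows are respondents, columns are the items of one subscale):

```python
# Cronbach's alpha: (k / (k-1)) * (1 - sum(item variances) / total variance),
# where k is the number of items. Responses below are invented 5-point
# ratings, not VHA survey data.

def cronbach_alpha(items):
    """items: list of respondent rows, each a list of item scores."""
    k = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [  # hypothetical 4 respondents x 3 items of one subscale
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```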

  2. Analyses of Crime Patterns in NIBRS Data Based on a Novel Graph Theory Clustering Method: Virginia as a Case Study

    Directory of Open Access Journals (Sweden)

    Peixin Zhao

    2014-01-01

    This paper suggests a novel clustering method for analyzing the National Incident-Based Reporting System (NIBRS) data, which includes the determination of correlations between different crime types, the development of a likelihood index for crimes to occur in a jurisdiction, and the clustering of jurisdictions based on crime type. The method was tested using the 2005 assault data from 121 jurisdictions in Virginia as a test case. The analyses of these data show that some different crime types are correlated and that different crime parameters are correlated with different crime types. The analyses also show that certain jurisdictions within Virginia share certain crime patterns. This information assists with constructing a pattern for a specific crime type and can be used to determine whether a jurisdiction may be more likely to see this type of crime occur in its area.

  3. Eye gaze in intelligent user interfaces gaze-based analyses, models and applications

    CERN Document Server

    Nakano, Yukiko I; Bader, Thomas

    2013-01-01

    Remarkable progress in eye-tracking technologies has opened the way to designing novel attention-based intelligent user interfaces and has highlighted the importance of a better understanding of eye gaze in human-computer interaction and human-human communication. For instance, a user's focus of attention is useful in interpreting the user's intentions, their understanding of the conversation, and their attitude towards the conversation. In human face-to-face communication, eye gaze plays an important role in floor management, grounding, and engagement in conversation. Eye Gaze in Intelligent User Interfaces

  4. Seismic fragility analyses of nuclear power plant structures based on the recorded earthquake data in Korea

    International Nuclear Information System (INIS)

    Joe, Yang Hee; Cho, Sung Gook

    2003-01-01

    This paper briefly introduces an improved method for evaluating seismic fragilities of components of nuclear power plants in Korea. Engineering characteristics of small magnitude earthquake spectra recorded in the Korean peninsula during the last several years are also discussed in this paper. For the purpose of evaluating the effects of the recorded earthquake on the seismic fragilities of Korean nuclear power plant structures, several cases of comparative studies have been performed. The study results show that seismic fragility analysis based on the Newmark's spectra in Korea might over-estimate the seismic capacities of Korean facilities. (author)

  5. Augmentation of French grunt diet description using combined visual and DNA-based analyses

    Science.gov (United States)

    Hargrove, John S.; Parkyn, Daryl C.; Murie, Debra J.; Demopoulos, Amanda W.J.; Austin, James D.

    2012-01-01

    Trophic linkages within a coral-reef ecosystem may be difficult to discern in fish species that reside on, but do not forage on, coral reefs. Furthermore, dietary analysis of fish can be difficult in situations where prey is thoroughly macerated, resulting in many visually unrecognisable food items. The present study examined whether the inclusion of a DNA-based method could improve the identification of prey consumed by French grunt, Haemulon flavolineatum, a reef fish that possesses pharyngeal teeth and forages on soft-bodied prey items. Visual analysis indicated that crustaceans were most abundant numerically (38.9%), followed by sipunculans (31.0%) and polychaete worms (5.2%), with a substantial number of unidentified prey (12.7%). For the subset of prey with both visual and molecular data, there was a marked reduction in the number of unidentified sipunculans (visual – 31.1%, combined – 4.4%), unidentified crustaceans (visual – 15.6%, combined – 6.7%), and unidentified taxa (visual – 11.1%, combined – 0.0%). Utilising results from both methodologies resulted in an increased number of prey placed at the family level (visual – 6, combined – 33) and species level (visual – 0, combined – 4). Although more costly than visual analysis alone, our study demonstrated the feasibility of DNA-based identification of visually unidentifiable prey in the stomach contents of fish.

  6. VALUE-BASED MEDICINE AND OPHTHALMOLOGY: AN APPRAISAL OF COST-UTILITY ANALYSES

    Science.gov (United States)

    Brown, Gary C; Brown, Melissa M; Sharma, Sanjay; Brown, Heidi; Smithen, Lindsay; Leeser, David B; Beauchamp, George

    2004-01-01

    ABSTRACT Purpose To ascertain the extent to which ophthalmologic interventions have been evaluated in value-based medicine format. Methods Retrospective literature review. Papers in the healthcare literature utilizing cost-utility analysis were reviewed by researchers at the Center for Value-Based Medicine, Flourtown, Pennsylvania. A literature review of papers addressing the cost-utility analysis of ophthalmologic procedures in the United States over a 12-year period from 1992 to 2003 was undertaken using the National Library of Medicine and EMBASE databases. The cost-utility of ophthalmologic interventions in inflation-adjusted (real) year 2003 US dollars expended per quality-adjusted life-year ($/QALY) was ascertained in all instances. Results A total of 19 papers were found, including a total of 25 interventions. The median cost-utility of ophthalmologic interventions was $5,219/QALY, with a range from $746/QALY to $6.5 million/QALY. Conclusions The majority of ophthalmologic interventions are especially cost-effective by conventional standards. This is because of the substantial value that ophthalmologic interventions confer to patients with eye diseases for the resources expended. PMID:15747756

  7. Group analyses of connectivity-based cortical parcellation using repeated k-means clustering.

    Science.gov (United States)

    Nanetti, Luca; Cerliani, Leonardo; Gazzola, Valeria; Renken, Remco; Keysers, Christian

    2009-10-01

    K-means clustering has become a popular tool for connectivity-based cortical segmentation using Diffusion Weighted Imaging (DWI) data. A sometimes ignored issue is, however, that the output of the algorithm depends on the initial placement of starting points, and that different sets of starting points therefore could lead to different solutions. In this study we explore this issue. We apply k-means clustering a thousand times to the same DWI dataset collected in 10 individuals to segment two brain regions: the SMA-preSMA on the medial wall, and the insula. At the level of single subjects, we found that in both brain regions, repeatedly applying k-means indeed often leads to a variety of rather different cortical parcellations. By assessing the similarity and frequency of these different solutions, we show that approximately 256 k-means repetitions are needed to accurately estimate the distribution of possible solutions. Using nonparametric group statistics, we then propose a method to employ the variability of clustering solutions to assess the reliability with which certain voxels can be attributed to a particular cluster. In addition, we show that the proportion of voxels that can be attributed significantly to either cluster in the SMA and preSMA is relatively higher than in the insula and discuss how this difference may relate to differences in the anatomy of these regions.
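The instability-and-stability idea described above can be sketched with a toy experiment: run k-means many times with random starting points and measure, per point, how consistently it co-clusters with a reference point. The data, repetition count, and stability measure below are simplified stand-ins for the study's DWI connectivity profiles and statistics:

```python
# Toy repeated k-means: two well-separated 2D blobs plus one ambiguous
# midpoint. Over many random initializations, blob members co-cluster
# with the first point (almost) always or never, while the midpoint's
# assignment varies - the variability the study exploits.
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, rng=random):
    centers = rng.sample(points, k)  # random starting points
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(p, centers[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:  # keep old center if a cluster empties out
                centers[c] = tuple(sum(x) / len(members)
                                   for x in zip(*members))
    return labels

rng = random.Random(0)
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (5, 5)]

repeats, k = 200, 2
with_first = [0] * len(points)  # co-clustering counts with points[0]
for _ in range(repeats):
    labels = kmeans(points, k, rng=rng)
    with_first = [w + (l == labels[0]) for w, l in zip(with_first, labels)]

stability = [w / repeats for w in with_first]
print(stability)  # blob mates near 1.0, far blob near 0.0, midpoint varies
```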

  8. A MULTI-AGENT BASED SOCIAL CRM FRAMEWORK FOR EXTRACTING AND ANALYSING OPINIONS

    Directory of Open Access Journals (Sweden)

    ABDELAZIZ EL FAZZIKI

    2017-08-01

    Social media provide a wide space for people from around the world to communicate, share knowledge, and share personal experiences. They are increasingly becoming an important data source for opinion mining and sentiment analysis, thanks to shared comments and reviews about products and services, and companies are showing a growing interest in harnessing their potential to support the design of marketing strategies. Despite the importance of sentiment analysis in decision making, there is a lack of social intelligence integration at the level of customer relationship management systems. Thus, social customer relationship management (SCRM) systems have become an interesting research area. However, they need deep analytic techniques to transform the large amounts of data ("Big Data") into actionable insights. Such systems also require advanced modelling and data processing methods, and must consider the emerging paradigm of proactive systems. In this paper, we propose an agent-based social framework that extracts and consolidates the reviews expressed via social media, in order to help enterprises learn more about customers' opinions toward a particular product or service. To illustrate our approach, we present a case study of Twitter reviews that we use to extract opinions and sentiment about a set of products using the SentiGem API. Data extraction, analysis, and storage are performed using a framework based on Hadoop MapReduce and HBase.

  9. Are decisions using cost-utility analyses robust to choice of SF-36/SF-12 preference-based algorithm?

    Directory of Open Access Journals (Sweden)

    Walton Surrey M

    2005-03-01

    Abstract Background Cost-utility analysis (CUA) using SF-36/SF-12 data has been facilitated by the development of several preference-based algorithms. The purpose of this study was to illustrate how decision-making could be affected by the choice of preference-based algorithms for the SF-36 and SF-12, and to provide some guidance on selecting an appropriate algorithm. Methods Two sets of data were used: (1) a clinical trial of adult asthma patients; and (2) a longitudinal study of post-stroke patients. Incremental costs were assumed to be $2000 per year over standard treatment, and QALY gains realized over a 1-year period. Ten published algorithms were identified, denoted by first author: Brazier (SF-36), Brazier (SF-12), Shmueli, Fryback, Lundberg, Nichol, Franks (3 algorithms), and Lawrence. Incremental cost-utility ratios (ICURs) for each algorithm, stated in dollars per quality-adjusted life year ($/QALY), were ranked and compared between datasets. Results In the asthma patients, estimated ICURs ranged from Lawrence's SF-12 algorithm at $30,769/QALY (95% CI: 26,316 to 36,697) to Brazier's SF-36 algorithm at $63,492/QALY (95% CI: 48,780 to 83,333). ICURs for the stroke cohort varied slightly more dramatically. The MEPS-based algorithm by Franks et al. provided the lowest ICUR at $27,972/QALY (95% CI: 20,942 to 41,667). The Fryback and Shmueli algorithms provided ICURs that were greater than $50,000/QALY and did not have confidence intervals that overlapped with most of the other algorithms. The ICUR-based ranking of algorithms was strongly correlated between the asthma and stroke datasets (r = 0.60). Conclusion SF-36/SF-12 preference-based algorithms produced a wide range of ICURs that could potentially lead to different reimbursement decisions. Brazier's SF-36 and SF-12 algorithms have a strong methodological and theoretical basis and tended to generate relatively higher ICUR estimates, considerations that support a preference for these algorithms over the
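The ICUR arithmetic above is a simple ratio of incremental cost to QALY gain. A sketch using the study's stated $2000 incremental cost over one year; the per-algorithm QALY gains below are back-calculated from the quoted ICURs purely for illustration:

```python
# ICUR = incremental cost / QALY gain, in $/QALY. QALY gains below are
# illustrative values reverse-engineered from the ICURs quoted in the
# abstract; they are not reported study data.

def icur(incremental_cost: float, qaly_gain: float) -> float:
    """Incremental cost-utility ratio in $/QALY."""
    return incremental_cost / qaly_gain

algorithms = {  # algorithm -> assumed QALY gain over one year
    "Lawrence SF-12": 0.065,
    "Brazier SF-36":  0.0315,
    "Franks MEPS":    0.0715,
}

ranked = sorted(algorithms, key=lambda a: icur(2000, algorithms[a]))
for name in ranked:
    print(f"{name}: ${icur(2000, algorithms[name]):,.0f}/QALY")
```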

  10. Molecular genotyping of Colletotrichum species based on arbitrarily primed PCR, A + T-Rich DNA, and nuclear DNA analyses

    Science.gov (United States)

    Freeman, S.; Pham, M.; Rodriguez, R.J.

    1993-01-01

    Molecular genotyping of Colletotrichum species based on arbitrarily primed PCR, A + T-rich DNA, and nuclear DNA analyses. Experimental Mycology 17, 309-322. Isolates of Colletotrichum were grouped into 10 separate species based on arbitrarily primed PCR (ap-PCR), A + T-rich DNA (AT-DNA), and nuclear DNA banding patterns. In general, the grouping of Colletotrichum isolates by these molecular approaches corresponded to that obtained by classical taxonomic identification; however, some exceptions were observed. PCR amplification of genomic DNA using four different primers allowed for reliable differentiation between isolates of the 10 species. HaeIII digestion patterns of AT-DNA also distinguished between species of Colletotrichum by generating species-specific band patterns. In addition, hybridization of the repetitive DNA element (GcpR1) to genomic DNA identified a unique set of PstI-digested nuclear DNA fragments in each of the 10 species of Colletotrichum tested. Multiple isolates of C. acutatum, C. coccodes, C. fragariae, C. lindemuthianum, C. magna, C. orbiculare, C. graminicola from maize, and C. graminicola from sorghum showed 86-100% intraspecies similarity based on ap-PCR and AT-DNA analyses. Interspecies similarity determined by ap-PCR and AT-DNA analyses varied between 0 and 33%. Three distinct banding patterns were detected in isolates of C. gloeosporioides from strawberry. Similarly, three different banding patterns were observed among isolates of C. musae from diseased banana.
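Similarity percentages between band patterns, like those reported above, can be computed in several ways; one simple option (not necessarily the coefficient used in the study) is Jaccard similarity over the sets of band positions. The fragment sizes below are invented:

```python
# Jaccard band-pattern similarity: shared bands / all observed bands,
# as a percentage. Fragment sizes (bp) are hypothetical, chosen so that
# the within-species pair lands in the 86-100% range reported above
# and the between-species pair near 0%.

def band_similarity(bands_a: set, bands_b: set) -> float:
    if not bands_a and not bands_b:
        return 100.0
    return 100.0 * len(bands_a & bands_b) / len(bands_a | bands_b)

isolate1 = {120, 180, 240, 310, 400}       # hypothetical fingerprint
isolate2 = {120, 180, 240, 310, 400, 450}  # same species, one extra band
isolate3 = {95, 150, 500}                  # different species

print(f"{band_similarity(isolate1, isolate2):.0f}%")  # within species
print(f"{band_similarity(isolate1, isolate3):.0f}%")  # between species
```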

  11. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    Energy Technology Data Exchange (ETDEWEB)

    Milani, Gabriele, E-mail: milani@stru.polimi.it; Valente, Marco, E-mail: milani@stru.polimi.it [Department of Architecture, Built Environment and Construction Engineering (ABC), Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan (Italy)

    2014-10-06

    This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models, and two different horizontal load distributions; (b) FE kinematic limit analyses performed using non-commercial software based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of pre-assigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is considerably lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial for a considerable reduction of the seismic vulnerability of such historical structures.

  12. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    International Nuclear Information System (INIS)

    Milani, Gabriele; Valente, Marco

    2014-01-01

    This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models, and two different horizontal load distributions; (b) FE kinematic limit analyses performed using non-commercial software based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of pre-assigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is considerably lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial for a considerable reduction of the seismic vulnerability of such historical structures.

  13. Non-localization and localization ROC analyses using clinically based scoring

    Science.gov (United States)

    Paquerault, Sophie; Samuelson, Frank W.; Myers, Kyle J.; Smith, Robert C.

    2009-02-01

    We are investigating the potential for differences in study conclusions when assessing the estimated impact of a computer-aided detection (CAD) system on readers' performance. The data utilized in this investigation were derived from a multi-reader multi-case observer study involving one hundred mammographic background images to which fixed-size and fixed-intensity Gaussian signals were added, generating low- and high-intensity signal sets. The study setting allowed CAD assessment in two situations: when CAD sensitivity was (1) superior or (2) inferior to that of the average reader. Seven readers were asked to review each set in the unaided and CAD-aided reading modes and to mark and rate their findings. Using these data, we studied the effect of three clinically based receiver operating characteristic (ROC) scoring definitions on the study conclusion. These scoring definitions included both location-specific and non-location-specific rules. The results showed agreement in the estimated impact of CAD on overall reader performance. In the study setting where CAD sensitivity is superior to the average reader, the mean difference in AUC between the CAD-aided and unaided reads was 0.049 (95% CIs: -0.027; 0.130) for the image scoring definition based on non-location-specific rules, and 0.104 (95% CIs: 0.036; 0.174) and 0.090 (95% CIs: 0.031; 0.155) for the image scoring definitions based on location-specific rules. The increases in AUC were statistically significant for the location-specific scoring definitions. It was further observed that the variance of these estimates was smaller for the location-specific scoring definitions than for the non-location-specific one. In the study setting where CAD sensitivity is equivalent or inferior to the average reader, the mean differences in AUC are slightly above 0.01 for all image scoring definitions. These increases in AUC were not statistically significant for any of the image scoring definitions.
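As a rough sketch of the distinction between the two families of scoring rules, the toy function below scores a single reader mark under a location-specific rule (a true positive only if the mark falls near the inserted Gaussian signal) and a non-location-specific rule (any mark on a signal image counts). The acceptance radius and rule names are assumptions for illustration, not the study's definitions.

```python
import math

def score_mark(mark_xy, signal_xy, is_signal_image, radius=10.0):
    # Returns the mark's label under a location-specific rule (TP only
    # if within `radius` of the signal centre) and a non-location-
    # specific rule (any mark on a signal image is a TP).
    if not is_signal_image:
        return {"location": "FP", "non_location": "FP"}
    near = math.hypot(mark_xy[0] - signal_xy[0],
                      mark_xy[1] - signal_xy[1]) <= radius
    return {"location": "TP" if near else "FP", "non_location": "TP"}
```

A mark that lands on a signal image but far from the signal is a TP under the lenient rule and an FP under the location-specific one, which is exactly where the two AUC estimates diverge.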

  14. Drive-based recording analyses at >800 Gfc/in2 using shingled recording

    International Nuclear Information System (INIS)

    William Cross, R.; Montemorra, Michael

    2012-01-01

    Since the introduction of perpendicular recording, conventional perpendicular scaling has enabled the hard disk drive industry to deliver products ranging from ∼130 to well over 500 Gb/in² in a little over 4 years. The incredible areal density growth spurt enabled by perpendicular recording is now endangered by an inability to effectively balance writeability with erasure effects at the system level. Shingled magnetic recording (SMR) offers an effective means to continue perpendicular areal density growth using conventional heads and tuned media designs. The use of specially designed edge-write head structures (also known as 'corner writers') should further increase the areal density gain potential of shingled recording. In this paper, we demonstrate the drive-based recording performance characteristics of a shingled recording system at areal densities in excess of 800 Gb/in² using a conventional head. Using a production drive base, developmental heads/media and a number of sophisticated analytical routines, we have studied the recording performance of a shingled magnetic recording subsystem. Our observations confirm excellent writeability in excess of 400 ktpi and a perpendicular system with acceptable noise balance, especially at extreme ID and OD skews where the benefits of SMR are quite pronounced. We believe that this demonstration illustrates that SMR is not only capable of productization, but is likely the path of least resistance toward production drive areal densities closer to 1 Tb/in² and beyond. - Research highlights: → Drive-based recording at 805 Gfc/in² has been demonstrated using both 95 and 65 mm drive platforms at roughly 430 ktpi and 1.87 Mfci. → Limiting factors for shingled recording include side reading, which is dominated by the reader crosstrack skirt profile, MT10 being a representative metric. → Media jitter and associated DC media SNR further limit areal density, dominated by crosstrack transition curvature, downtrack

  15. Identification of provenance rocks based on EPMA analyses of heavy minerals

    Science.gov (United States)

    Shimizu, M.; Sano, N.; Ueki, T.; Yonaga, Y.; Yasue, K. I.; Masakazu, N.

    2017-12-01

    Information on mountain building is significant in the field of geological disposal of high-level radioactive waste, because it affects the long-term stability of the groundwater flow system. Provenance analysis is one of the effective approaches for understanding the building process of mountains. Chemical compositions of heavy minerals, as well as their chronological data, can be an index for the identification of provenance rocks. Accurate identification requires the measurement of as many grains as possible. In order to achieve an efficient provenance analysis, we developed a method for quick identification of heavy minerals using an Electron Probe Micro Analyzer (EPMA). In this method, heavy mineral grains extracted from a sample were aligned on a glass slide and mounted in a resin. Concentrations of 28 elements were measured for 300-500 grains per sample using the EPMA. To measure as many grains as possible, we prioritized swiftness of measurement over precision, configuring a measurement time of about 3.5 minutes for each grain. Identification of heavy minerals was based on their chemical composition: we developed a Microsoft® Excel® spreadsheet that applies identification criteria based on the typical range of chemical compositions of each mineral. The grains of 110 wt.% total were rejected. The criteria of mineral identification were revised through comparison between mineral identification by optical microscopy and the chemical compositions of grains classified as "unknown minerals". Provenance rocks can be identified based on the abundance ratios of the identified minerals. If no significant difference in abundance ratio was found among source rocks, the chemical composition of specific minerals was used as an additional index. This method was applied to sediments of some regions in Japan where the provenance rocks had lithological variations but similar formation ages. Consequently, the provenance rocks were identified based on the chemical compositions of heavy minerals resistant to
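The spreadsheet-based identification step can be sketched as a lookup of each grain's measured composition against per-mineral concentration windows. The minerals and windows below are invented for illustration and do not reproduce the study's criteria.

```python
# Invented composition windows (wt.%) for two minerals; the real
# criteria cover many more elements and minerals.
CRITERIA = {
    "zircon":  {"ZrO2": (55.0, 70.0), "SiO2": (25.0, 35.0)},
    "apatite": {"CaO":  (50.0, 58.0), "P2O5": (38.0, 45.0)},
}

def identify(grain):
    # Return the first mineral whose every concentration window
    # contains the grain's measured value, else "unknown".
    for mineral, windows in CRITERIA.items():
        if all(lo <= grain.get(oxide, 0.0) <= hi
               for oxide, (lo, hi) in windows.items()):
            return mineral
    return "unknown"
```

Grains landing in "unknown" are exactly the ones the authors cross-checked against optical microscopy to refine the windows.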

  16. Fuel assemblies mechanical behaviour improvements based on design changes and loading patterns computational analyses

    International Nuclear Information System (INIS)

    Marin, J.; Aullo, M.; Gutierrez, E.

    2001-01-01

    In the past few years, incomplete RCCA insertion (IRI) events have occurred at some nuclear plants. Large guide thimble distortion caused by high compressive loads, together with irradiation-induced material creep and growth, is considered the primary cause of these events. This phenomenon is worsened when some fuel assemblies are deformed to the extent that they push against neighbouring fuel assemblies and the distortion is transmitted along the core. In order to better understand this mechanism, ENUSA has developed a methodology based on finite element core simulation to enable assessments of the propensity of a given core loading pattern to propagate distortion along the core. At the same time, the core loading pattern can be decided in interaction with nuclear design to obtain the optimum response from both the nuclear and mechanical points of view, with the objective of progressively attenuating core distortion. (author)

  17. [The genotype-based haplotype relative risk and transmission disequilibrium test analyses of familial febrile convulsions].

    Science.gov (United States)

    Qi, Y; Wu, X; Guo, Z; Zhang, J; Pan, H; Li, M; Bao, X; Peng, J; Zou, L; Lin, Q

    1999-10-01

    To confirm the linkage of familial febrile convulsions to the short arm of chromosome 6 (6p) or the long arm of chromosome 8 (8q), the authors genotyped the Pst I locus in the coding region of heat shock protein (HSP) 70, the 5' untranslated region of HSP70-1, the 3' untranslated region of HSP70-2, D8S84 and D8S85. The data were processed by the genotype-based haplotype relative risk (GHRR) and transmission disequilibrium test (TDT) methods in PPAP. Some signs of association and disequilibrium between D8S85 and FC were shown by GHRR and TDT. A suspected linkage of familial febrile convulsions to the long arm of chromosome 8 is proposed.
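For reference, the TDT statistic used in such family-based analyses is a McNemar-type test on transmissions from heterozygous parents; a minimal sketch (standard textbook form, not the PPAP implementation) is:

```python
def tdt_chi2(transmitted, not_transmitted):
    # McNemar-type TDT statistic: counts of transmissions (b) and
    # non-transmissions (c) of the candidate allele from heterozygous
    # parents to affected children.  Under the null of no linkage or
    # association, (b - c)**2 / (b + c) is asymptotically chi-square
    # with 1 degree of freedom.
    b, c = transmitted, not_transmitted
    return (b - c) ** 2 / (b + c)
```

A statistic well above 3.84 (the 5% critical value for 1 df) would indicate transmission disequilibrium at the marker.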

  18. Scenario-based analyses of energy system development and its environmental implications in Thailand

    International Nuclear Information System (INIS)

    Shrestha, Ram M.; Malla, Sunil; Liyanage, Migara H.

    2007-01-01

    Thailand is one of the fastest growing energy-intensive economies in Southeast Asia. To formulate sound energy policies in the country, it is important to understand the impact of energy use on the environment over the long term. This study examines energy system development and its associated greenhouse gas and local air pollutant emissions in Thailand under four scenarios through the year 2050. The four scenarios involve different growth paths for the economy, population, energy efficiency and penetration of renewable energy technologies. The paper assesses the changes in primary energy supply mix, sector-wise final energy demand, energy import dependency and CO2, SO2 and NOx emissions under the four scenarios using the end-use based Asia-Pacific Integrated Assessment Model (AIM/Enduse) for Thailand. (author)

  19. Dugong: a Docker image, based on Ubuntu Linux, focused on reproducibility and replicability for bioinformatics analyses.

    Science.gov (United States)

    Menegidio, Fabiano B; Jabes, Daniela L; Costa de Oliveira, Regina; Nunes, Luiz R

    2018-02-01

    This manuscript introduces and describes Dugong, a Docker image based on Ubuntu 16.04, which automates installation of more than 3500 bioinformatics tools (along with their respective libraries and dependencies), in alternative computational environments. The software operates through a user-friendly XFCE4 graphic interface that allows software management and installation by users not fully familiarized with the Linux command line and provides the Jupyter Notebook to assist in the delivery and exchange of consistent and reproducible protocols and results across laboratories, assisting in the development of open science projects. Source code and instructions for local installation are available at https://github.com/DugongBioinformatics, under the MIT open source license. Luiz.nunes@ufabc.edu.br. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  20. Space nuclear-power reactor design based on combined neutronic and thermal-fluid analyses

    International Nuclear Information System (INIS)

    Koenig, D.R.; Gido, R.G.; Brandon, D.I.

    1985-01-01

    The design and performance analysis of a space nuclear-power system requires sophisticated analytical capabilities such as those developed during the nuclear rocket propulsion (Rover) program. In particular, optimizing the size of a space nuclear reactor for a given power level requires satisfying the conflicting requirements of nuclear criticality and heat removal. The optimization involves the determination of the coolant void (volume) fraction for which the reactor diameter is a minimum and temperature and structural limits are satisfied. A minimum exists because the critical diameter increases with increasing void fraction, whereas the reactor diameter needed to remove a specified power decreases with void fraction. The purpose of this presentation is to describe and demonstrate our analytical capability for the determination of minimum reactor size. The analysis is based on combining neutronic criticality calculations with OPTION-code thermal-fluid calculations
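The optimization described above can be caricatured with two monotone curves: the feasible reactor diameter at a given coolant void fraction is the larger of the criticality-limited and heat-removal-limited diameters, and the optimum void fraction minimizes that maximum. The functional forms and constants below are illustrative stand-ins for the neutronic and thermal-fluid calculations, not real design data.

```python
# Illustrative monotone trends only; in practice both curves come from
# criticality and thermal-fluid codes.
def critical_diameter(void):
    # critical diameter grows as coolant volume displaces fuel
    return 40.0 / (1.0 - void)

def heat_removal_diameter(void):
    # diameter needed to remove the power shrinks as flow area grows
    return 25.0 / void

def design_diameter(void):
    # the feasible reactor must satisfy both constraints
    return max(critical_diameter(void), heat_removal_diameter(void))

voids = [i / 100 for i in range(5, 96)]
best_void = min(voids, key=design_diameter)  # the sought optimum
```

The minimum sits near the crossing of the two curves, which is the qualitative picture the abstract describes.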

  1. Quasi-static earthquake cycle simulation based on nonlinear viscoelastic finite element analyses

    Science.gov (United States)

    Agata, R.; Ichimura, T.; Hyodo, M.; Barbot, S.; Hori, T.

    2017-12-01

    To explain earthquake generation processes, earthquake cycle simulation methods have been studied. For such simulations, the combination of the rate- and state-dependent friction law at the fault plane and the boundary integral method based on Green's function in an elastic half-space is widely used (e.g. Hori 2009; Barbot et al. 2012). In this approach, the stress change around the fault plane due to crustal deformation can be computed analytically, while the effects of complex physics such as mantle rheology and gravity are generally not taken into account. To consider such effects, we seek to develop an earthquake cycle simulation combining crustal deformation computation based on the finite element (FE) method with the rate- and state-dependent friction law. Since the drawback of this approach is the computational cost of obtaining numerical solutions, we adopt a recently developed fast and scalable FE solver (Ichimura et al. 2016), which assumes the use of supercomputers, to solve the problem in a realistic time. As in the previous approach, we solve the governing equations consisting of the rate- and state-dependent friction law. In solving the equations, we compute stress changes along the fault plane due to crustal deformation using FE simulation, instead of computing them by superimposing slip response functions as in the previous approach. In the stress change computation, we take into account nonlinear viscoelastic deformation in the asthenosphere. In the presentation, we will show simulation results for a normative three-dimensional problem, where a circular velocity-weakening area is set in a square fault plane. The results with and without nonlinear viscosity in the asthenosphere will be compared. We also plan to apply the developed code to simulate the post-earthquake deformation of a megathrust earthquake, such as the 2011 Tohoku earthquake.
Acknowledgment: The results were obtained using the K computer at the RIKEN (Proposal number
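A minimal sketch of the rate- and state-dependent friction law referred to above (standard aging-law form; the parameter values are illustrative, not those of the simulation):

```python
import math

# Aging-law rate-and-state friction; b > a gives velocity weakening,
# the condition assumed inside the circular patch.
a, b, mu0, v0, dc = 0.003, 0.009, 0.6, 1e-6, 0.01

def friction(v, theta):
    # friction coefficient as a function of slip rate v and state theta
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def dtheta_dt(v, theta):
    # aging law for the evolution of the state variable
    return 1.0 - v * theta / dc

# At steady state theta = dc / v, so steady-state friction decreases
# with sliding velocity when b > a:
v = 2e-6
mu_ss = friction(v, dc / v)
```

The simulation couples these equations to FE-computed stress changes instead of the boundary-integral Green's functions.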

  2. Actual situation analyses of rat-run traffic on community streets based on car probe data

    Science.gov (United States)

    Sakuragi, Yuki; Matsuo, Kojiro; Sugiki, Nao

    2017-10-01

    Reducing so-called "rat-run" traffic on community streets has been one of the significant challenges in improving the living environment of neighbourhoods. However, it has been difficult to quantitatively grasp the actual situation of rat-run traffic with traditional surveys such as point observations. This study aims to develop a method for extracting rat-run traffic from car probe data. In addition, based on the rat-run traffic extracted for Toyohashi city, Japan, we analyze its actual situation, such as the time and location distribution of the rat-run traffic. As a result, in Toyohashi city, the rate of rat-run route use increases during peak time periods. Focusing on the location distribution, rat-run traffic passes through a wide variety of community streets, and there is no great inter-district bias in the routes frequently used as rat-runs. Next, we focused on trips passing through a heavily used rat-run route. We found that these trips had some commonalities, suggesting habitual use of the route as a rat-run: drivers tend to use the rat-run route because it is shorter than the alternative highway route, and their travel speeds were faster than when using the alternative highway route. In conclusion, we confirmed that the proposed method can quantitatively grasp the actual situation and the phenomenal tendencies of rat-run traffic.
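One simple way to operationalize rat-run extraction from probe data, assuming each trip is reduced to an ordered sequence of road links labeled by class, is to flag trips that traverse community streets between two highway links. This is a simplified stand-in for the paper's extraction rules, not the authors' actual algorithm.

```python
def is_rat_run(trip_links, link_class):
    # Flag a probe trip as rat-run if it traverses a community street
    # between two highway links, i.e. uses local streets as a through
    # route.  `trip_links` is the trip's ordered link-id sequence and
    # `link_class` maps link id -> "highway" or "community".
    classes = [link_class[link] for link in trip_links]
    for i, cls in enumerate(classes):
        if (cls == "community"
                and "highway" in classes[:i]
                and "highway" in classes[i + 1:]):
            return True
    return False
```

Aggregating the flagged trips by time of day and by link then yields the time and location distributions analyzed in the study.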

  3. Pattern Analyses Reveal Separate Experience-Based Fear Memories in the Human Right Amygdala.

    Science.gov (United States)

    Braem, Senne; De Houwer, Jan; Demanet, Jelle; Yuen, Kenneth S L; Kalisch, Raffael; Brass, Marcel

    2017-08-23

    Learning fear via the experience of contingencies between a conditioned stimulus (CS) and an aversive unconditioned stimulus (US) is often assumed to be fundamentally different from learning fear via instructions. An open question is whether fear-related brain areas respond differently to experienced CS-US contingencies than to merely instructed CS-US contingencies. Here, we contrasted two experimental conditions where subjects were instructed to expect the same CS-US contingencies while only one condition was characterized by prior experience with the CS-US contingency. Using multivoxel pattern analysis of fMRI data, we found CS-related neural activation patterns in the right amygdala (but not in other fear-related regions) that dissociated between whether a CS-US contingency had been instructed and experienced versus merely instructed. A second experiment further corroborated this finding by showing a category-independent neural response to instructed and experienced, but not merely instructed, CS presentations in the human right amygdala. Together, these findings are in line with previous studies showing that verbal fear instructions have a strong impact on both brain and behavior. However, even in the face of fear instructions, the human right amygdala still shows a separable neural pattern response to experience-based fear contingencies. SIGNIFICANCE STATEMENT In our study, we addressed a fundamental problem of the science of human fear learning and memory, namely whether fear learning via experience in humans relies on a neural pathway that can be separated from fear learning via verbal information. Using two new procedures and recent advances in the analysis of brain imaging data, we localized purely experience-based fear processing and memory in the right amygdala, thereby making a direct link between human and animal research. Copyright © 2017 the authors 0270-6474/17/378116-15$15.00/0.
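Multivoxel pattern analysis ultimately asks whether voxel activation vectors for two conditions are separable. A toy nearest-centroid version (far simpler than the classifiers typically applied to fMRI data, with made-up numbers) looks like this:

```python
def centroid(patterns):
    # element-wise mean of a list of equal-length activation vectors
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def classify(pattern, centroids):
    # assign the pattern to the condition with the nearest centroid
    def dist2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    return min(centroids, key=lambda cond: dist2(pattern, centroids[cond]))

# Toy voxel activation vectors for two conditions (invented numbers):
train = {
    "instructed+experienced": [[1.0, 0.2, 0.1], [0.9, 0.3, 0.0]],
    "instructed_only":        [[0.1, 0.8, 0.9], [0.2, 0.9, 1.0]],
}
cents = {cond: centroid(ps) for cond, ps in train.items()}
label = classify([0.8, 0.25, 0.1], cents)
```

Above-chance classification of held-out patterns is what licenses the claim that the right amygdala carries separable experience-based information.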

  4. [Sustainable Implementation of Evidence-Based Programmes in Health Promotion: A Theoretical Framework and Concept of Interactive Knowledge to Action].

    Science.gov (United States)

    Rütten, A; Wolff, A; Streber, A

    2016-03-01

    This article discusses 2 current issues in the field of public health research: (i) transfer of scientific knowledge into practice and (ii) sustainable implementation of good practice projects. It also supports integration of scientific and practice-based evidence production. Furthermore, it supports utilisation of interactive models that transcend deductive approaches to the process of knowledge transfer. Existing theoretical approaches, pilot studies and thoughtful conceptual considerations are incorporated into a framework showing the interplay of science, politics and prevention practice, which fosters a more sustainable implementation of health promotion programmes. The framework depicts 4 key processes of interaction between science and prevention practice: interactive knowledge to action, capacity building, programme adaptation and adaptation of the implementation context. Ensuring sustainability of health promotion programmes requires a concentrated process of integrating scientific and practice-based evidence production in the context of implementation. Central to the integration process is the approach of interactive knowledge to action, which especially benefits from capacity building processes that facilitate participation and systematic interaction between relevant stakeholders. Intense cooperation also induces a dynamic interaction between multiple actors and components such as health promotion programmes, target groups, relevant organisations and social, cultural and political contexts. The reciprocal adaptation of programmes and key components of the implementation context can foster effectiveness and sustainability of programmes. Sustainable implementation of evidence-based health promotion programmes requires alternatives to recent deductive models of knowledge transfer. Interactive approaches prove to be promising alternatives. Simultaneously, they change the responsibilities of science, policy and public health practice. Existing boundaries

  5. Model-based analyses to compare health and economic outcomes of cancer control: inclusion of disparities.

    Science.gov (United States)

    Goldie, Sue J; Daniels, Norman

    2011-09-21

    Disease simulation models of the health and economic consequences of different prevention and treatment strategies can guide policy decisions about cancer control. However, models that also consider health disparities can identify strategies that improve both population health and its equitable distribution. We devised a typology of cancer disparities that considers types of inequalities among black, white, and Hispanic populations across different cancers and characteristics important for near-term policy discussions. We illustrated the typology in the specific example of cervical cancer using an existing disease simulation model calibrated to clinical, epidemiological, and cost data for the United States. We calculated average reduction in cancer incidence overall and for black, white, and Hispanic women under five different prevention strategies (Strategies A1, A2, A3, B, and C) and estimated average costs and life expectancy per woman, and the cost-effectiveness ratio for each strategy. Strategies that may provide greater aggregate health benefit than existing options may also exacerbate disparities. Combining human papillomavirus vaccination (Strategy A2) with current cervical cancer screening patterns (Strategy A1) resulted in an average reduction of 69% in cancer incidence overall but a 71.6% reduction for white women, 68.3% for black women, and 63.9% for Hispanic women. Other strategies targeting risk-based screening to racial and ethnic minorities reduced disparities among racial subgroups and resulted in more equitable distribution of benefits among subgroups (reduction in cervical cancer incidence, white vs. Hispanic women, 69.7% vs. 70.1%). Strategies that employ targeted risk-based screening and new screening algorithms, with or without vaccination (Strategies B and C), provide excellent value. The most effective strategy (Strategy C) had a cost-effectiveness ratio of $28,200 per year of life saved when compared with the same strategy without
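The cost-effectiveness ratios quoted above are incremental ratios: extra cost divided by extra life expectancy relative to a comparator strategy. A minimal sketch with hypothetical per-woman numbers (not the study's data):

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    # Incremental cost-effectiveness ratio: extra cost per additional
    # unit of effect (here, life-years) of the more effective strategy.
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical per-woman costs ($) and life expectancies (years):
ratio = icer(cost_new=1500.0, effect_new=28.05,
             cost_old=1200.0, effect_old=28.04)
```

A figure like the study's $28,200 per year of life saved is this ratio computed between the candidate strategy and its next-best alternative.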

  6. Fatigue Crack Propagation Under Variable Amplitude Loading Analyses Based on Plastic Energy Approach

    Directory of Open Access Journals (Sweden)

    Sofiane Maachou

    2014-04-01

    Plasticity effects at the crack tip have been recognized as the "motor" of crack propagation: the growth of cracks is related to the existence of a crack-tip plastic zone, whose formation and intensification is accompanied by energy dissipation. In the current state of knowledge, fatigue crack propagation is modeled using the crack closure concept. The fatigue crack growth behavior of the aluminum alloy 2024 T351 under constant amplitude and variable amplitude loading is analyzed in terms of energy parameters. In the case of VAL (variable amplitude loading) tests, the evolution of the hysteretic energy dissipated per block is similar to that observed under constant amplitude loading. A linear relationship between the crack growth rate and the hysteretic energy dissipated per block is obtained at high growth rates. For lower growth rates, the relationship between crack growth rate and hysteretic energy dissipated per block can be represented by a power law. In this paper, an analysis of fatigue crack propagation under variable amplitude loading based on this energy approach is proposed.
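The two regimes described above can both be written as a power law da/dN = A * Q**m in the hysteretic energy Q dissipated per block, with m near 1 recovering the linear relation seen at high growth rates; A and m below are illustrative fitting constants, not values from the paper.

```python
def growth_rate_per_block(Q, A=1e-6, m=1.5):
    # da/dN = A * Q**m; m -> 1 recovers the linear relation observed at
    # high growth rates, m > 1 the power law at lower rates.
    return A * Q ** m
```

Fitting A and m to measured (Q, da/dN) pairs on log-log axes is the usual way such an energy-based crack growth law is calibrated.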

  7. A method of mounting multiple otoliths for beam-based microchemical analyses

    Science.gov (United States)

    Donohoe, C.J.; Zimmerman, C.E.

    2010-01-01

    Beam-based analytical methods are widely used to measure the concentrations of elements and isotopes in otoliths. These methods usually require that otoliths be individually mounted and prepared to properly expose the desired growth region to the analytical beam. Most analytical instruments, such as LA-ICPMS and ion and electron microprobes, have sample holders that accept only one to six slides or mounts at a time. We describe a method of mounting otoliths that allows easy transfer of many otoliths to a single mount after they have been prepared. Such an approach increases the number of otoliths that can be analyzed in a single session by reducing the need to open the sample chamber to exchange slides, a particularly time-consuming step on instruments that operate under vacuum. For ion and electron microprobes, the method also greatly reduces the number of slides that must be coated with an electrical conductor prior to analysis. In this method, a narrow strip of cover glass is first glued at one end to a standard microscope slide. The otolith is then mounted in thermoplastic resin on the opposite, free end of the strip. The otolith can then be ground and flipped, if needed, by reheating the mounting medium. After otolith preparation is complete, the cover glass is cut with a scribe to free the otolith, and up to 20 small otoliths can be arranged on a single petrographic slide. © 2010 The Author(s).

  8. Beam transient analyses of Accelerator Driven Subcritical Reactors based on neutron transport method

    Energy Technology Data Exchange (ETDEWEB)

    He, Mingtao; Wu, Hongchun [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China); Zheng, Youqi, E-mail: yqzheng@mail.xjtu.edu.cn [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China); Wang, Kunpeng [Nuclear and Radiation Safety Center, PO Box 8088, Beijing 100082 (China); Li, Xunzhao; Zhou, Shengcheng [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China)

    2015-12-15

    Highlights: • A transport-based kinetics code for Accelerator Driven Subcritical Reactors is developed. • The performance of different kinetics methods adapted to the ADSR is investigated. • The impacts of neutronic parameters deteriorating with fuel depletion are investigated. - Abstract: The Accelerator Driven Subcritical Reactor (ADSR) is almost entirely external-source dominated, since there is no additional reactivity control mechanism in most designs. This paper focuses on beam-induced transients, analyzed with an in-house dynamic analysis code. The performance of different kinetics methods adapted to the ADSR is investigated, including the point kinetics approximation and space–time kinetics methods. The transient responses to beam trip and beam overpower are then calculated and analyzed for an ADSR design dedicated to minor actinide transmutation. The impacts of some safety-related neutronics parameters deteriorating with fuel depletion are also investigated. The results show that the power distribution varying with burnup leads to large differences in temperature responses during transients, while the impacts of kinetic parameters and feedback coefficients are not very pronounced. Classification: Core physics.
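For a subcritical, source-driven core, the point kinetics approximation mentioned above reduces to a small ODE system in power and delayed-neutron precursors plus an external source term. The sketch below integrates a beam trip (source switched off) with forward Euler, using one delayed group and illustrative parameters, not the paper's design data.

```python
# One-group point kinetics with an external neutron source,
# integrated with forward Euler.  All parameter values are illustrative.
beta, lam, Lam = 0.0065, 0.08, 1e-5   # delayed fraction, decay const, generation time
rho = -0.03                           # subcritical reactivity

def step(P, C, S, dt):
    # P: power, C: precursor concentration, S: external source strength
    dP = ((rho - beta) / Lam) * P + lam * C + S
    dC = (beta / Lam) * P - lam * C
    return P + dt * dP, C + dt * dC

# Equilibrium with the beam on (P = 1 by construction):
S_on = -rho / Lam
P, C = 1.0, beta / (lam * Lam)

# Beam trip: the source vanishes and power drops promptly.
for _ in range(200):
    P, C = step(P, C, 0.0, 1e-6)
```

The prompt power drop after a trip, followed by a slower decay governed by the precursors, is the behavior the space-time methods refine with spatially varying flux shapes.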

  9. Three-dimensional finite element model for flexible pavement analyses based on field modulus measurements

    International Nuclear Information System (INIS)

    Lacey, G.; Thenoux, G.; Rodriguez-Roa, F.

    2008-01-01

    In accordance with the present development of empirical-mechanistic tools, this paper presents an alternative to traditional analysis methods for flexible pavements, using a three-dimensional finite element formulation based on a linear-elastic perfectly-plastic Drucker-Prager model for the granular soil layers and a linear-elastic stress-strain law for the asphalt layer. From the sensitivity analysis performed, it was found that variations of ±4° in the internal friction angle of the granular soil layers did not significantly affect the analyzed pavement response. On the other hand, a null dilation angle is conservatively proposed for design purposes. The use of a Light Falling Weight Deflectometer is also proposed as an effective and practical tool for on-site elastic modulus determination of granular soil layers. However, the stiffness value obtained from the tested layer should be corrected when the measured peak deflection and the peak force do not occur at the same time. In addition, some practical observations are given to achieve successful field measurements. The importance of using a 3D FE analysis to predict the maximum tensile strain at the bottom of the asphalt layer (related to pavement fatigue) and the maximum vertical compressive strain transmitted to the top of the granular soil layers (related to rutting) is also shown. (author)
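The Drucker-Prager model used for the granular layers has the yield function f = alpha*I1 + sqrt(J2) - k, where a stress state with f < 0 is elastic. A sketch in principal stresses (compression negative; alpha and k are illustrative constants, not the calibrated values):

```python
import math

def drucker_prager_f(principal, alpha=0.2, k=50.0):
    # Yield function f = alpha*I1 + sqrt(J2) - k for a principal stress
    # state (s1, s2, s3), compression negative; f < 0 means the stress
    # state is inside the yield surface (elastic).
    s1, s2, s3 = principal
    I1 = s1 + s2 + s3
    mean = I1 / 3.0
    J2 = ((s1 - mean) ** 2 + (s2 - mean) ** 2 + (s3 - mean) ** 2) / 2.0
    return alpha * I1 + math.sqrt(J2) - k
```

With compression negative, confining pressure makes I1 more negative and so raises the deviatoric stress needed to yield, which is the pressure-dependent strength that distinguishes this model from von Mises plasticity.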

  10. [Health risks in different living circumstances of mothers. Analyses based on a population study].

    Science.gov (United States)

    Sperlich, Stefanie

    2014-12-01

    The objective of this study was to determine the living circumstances ('Lebenslagen') of mothers which are associated with elevated health risks. Data were derived from a cross-sectional population-based sample of German women (n = 3129) with underage children. By means of a two-step cluster analysis, ten different maternal living circumstances were identified which proved to be distinct with respect to indicators of socioeconomic position, employment status and family-related factors. Of the ten living circumstances, one could be attributed to higher socioeconomic status (SES), while five were assigned to a middle SES and four to a lower SES. In line with previous findings, mothers with a high SES predominantly showed the best health, while mothers with a low SES tended to be at higher health risk with respect to subjective health, mental health (anxiety and depression), obesity and smoking. However, there were important health differences between the living circumstances within the middle and lower SES. In addition, varying health risks were found among different living circumstances of single mothers, pointing to the significance of family- and job-related living conditions in establishing health risks. With this exploratory analysis strategy, small-scale living conditions could be detected which were associated with specific health risks. This approach seems particularly suitable for providing a more precise definition of target groups for health promotion. The findings encourage a more extensive application of the concept of living conditions in medical sociology research as well as health monitoring.

  11. Pareto frontier analyses based decision making tool for transportation of hazardous waste

    International Nuclear Information System (INIS)

    Das, Arup; Mazumder, T.N.; Gupta, A.K.

    2012-01-01

    Highlights: ► Posteriori method using a multi-objective approach to solve a bi-objective routing problem. ► System optimization (with multiple source–destination pairs) in a capacity-constrained network using non-dominated sorting. ► Tools like cost elasticity and angle-based focus used to analyse the Pareto frontier and aid stakeholders in making informed decisions. ► A real-life case study of the Kolkata Metropolitan Area to explain the workability of the model. - Abstract: Transportation of hazardous wastes through a region poses an immense threat to development along its road network. The risk to the population exposed to such activities has been documented in the past. However, a comprehensive framework for routing hazardous wastes has often been overlooked. A regional hazardous waste management scheme should incorporate a comprehensive framework for hazardous waste transportation that accounts for the various stakeholders involved in decision making. Hence, a multi-objective approach is required to safeguard the interests of all the concerned stakeholders. The objective of this study is to design a methodology for routing hazardous wastes between generating units and disposal facilities through a capacity-constrained network. The proposed methodology uses a posteriori method with a multi-objective approach to find non-dominated solutions for a system consisting of multiple origins and destinations. A case study of the transportation of hazardous wastes in the Kolkata Metropolitan Area is also provided to elucidate the methodology.
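Non-dominated sorting for a bi-objective (cost, risk) routing problem can be sketched directly from the definition of dominance; the route names and numbers below are invented for illustration.

```python
def pareto_frontier(routes):
    # Return routes not dominated in (cost, risk).  A route dominates
    # another if it is no worse in both objectives and strictly better
    # in at least one.  `routes` is a list of (name, cost, risk) tuples.
    def dominates(a, b):
        return (a[1] <= b[1] and a[2] <= b[2]) and (a[1] < b[1] or a[2] < b[2])
    return [r for r in routes
            if not any(dominates(o, r) for o in routes if o is not r)]

routes = [("A", 10.0, 0.9), ("B", 12.0, 0.5), ("C", 15.0, 0.6), ("D", 11.0, 0.7)]
front = pareto_frontier(routes)  # C is dominated by B and drops out
```

Tools such as cost elasticity then help stakeholders pick a single compromise route from this frontier.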

  12. UniPrimer: A Web-Based Primer Design Tool for Comparative Analyses of Primate Genomes

    Directory of Open Access Journals (Sweden)

    Nomin Batnyam

    2012-01-01

Full Text Available Whole genome sequences of various primates have been released thanks to advanced DNA-sequencing technology. A combination of computational data mining and the polymerase chain reaction (PCR) assay to validate the data is an excellent method for conducting comparative genomics; thus, designing primers for PCR is an essential procedure for comparative analyses of primate genomes. Here, we developed and introduce UniPrimer for use in such studies. UniPrimer is a web-based tool that designs PCR- and DNA-sequencing primers. It compares sequences from six different primates (human, chimpanzee, gorilla, orangutan, gibbon, and rhesus macaque) and designs primers on regions conserved across species. UniPrimer is linked to the RepeatMasker, Primer3Plus, and OligoCalc software to produce primers with high accuracy, and to UCSC In-Silico PCR to confirm whether the designed primers work. To test the performance of UniPrimer, we designed primers on sample sequences using UniPrimer and manually designed primers for the same sequences. The comparison of the two processes showed that UniPrimer was more effective than manual work in terms of saving time and reducing errors.
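The core step of placing primers on regions conserved across species can be illustrated with a naive scan for windows that are identical in every aligned sequence. The toy sequences are hypothetical; the real tool works on whole-genome alignments and adds repeat masking, melting-temperature checks, and in-silico PCR validation:

```python
def conserved_windows(aligned_seqs, window=8):
    """Find windows identical across all aligned sequences.

    Assumes equal-length, gap-free alignments -- a strong
    simplification of UniPrimer's multi-species comparison.
    Returns (start_position, window_sequence) tuples.
    """
    length = len(aligned_seqs[0])
    hits = []
    for start in range(length - window + 1):
        slices = {seq[start:start + window] for seq in aligned_seqs}
        if len(slices) == 1:  # identical in every species
            hits.append((start, aligned_seqs[0][start:start + window]))
    return hits

# Toy 3-species "alignment" differing only at position 8:
seqs = ["ACGTACGTTTGACCA",
        "ACGTACGTATGACCA",
        "ACGTACGTCTGACCA"]
print(conserved_windows(seqs, window=8))  # [(0, 'ACGTACGT')]
```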

  13. Analyses of Large Coal-Based SOFCs for High Power Stack Block Development

    Energy Technology Data Exchange (ETDEWEB)

    Recknagle, Kurtis P; Koeppel, Brian J

    2010-10-01

    This report summarizes the numerical modeling and analytical efforts for SOFC stack development performed for the coal-based SOFC program. The stack modeling activities began in 2004, but this report focuses on the most relevant results obtained since August 2008. This includes the latter half of Phase-I and all of Phase-II activities under technical guidance of VPS and FCE. The models developed to predict the thermal-flow-electrochemical behaviors and thermal-mechanical responses of generic planar stacks and towers are described. The effects of cell geometry, fuel gas composition, on-cell reforming, operating conditions, cell performance, seal leak, voltage degradation, boundary conditions, and stack height are studied. The modeling activities to evaluate and achieve technical targets for large stack blocks are described, and results from the latest thermal-fluid-electrochemical and structural models are summarized. Modeling results for stack modifications such as scale-up and component thickness reduction to realize cost reduction are presented. Supporting modeling activities in the areas of cell fabrication and loss of contact are also described.

  14. Metabolonote: A wiki-based database for managing hierarchical metadata of metabolome analyses

    Directory of Open Access Journals (Sweden)

    Takeshi eAra

    2015-04-01

Full Text Available Metabolomics—technology for comprehensive detection of small molecules in an organism—lags behind the other omics in terms of publication and dissemination of experimental data. Among the reasons for this are the difficulty of precisely recording information about complicated analytical experiments (metadata), the existence of various databases with their own metadata descriptions, and the low reusability of published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called TogoMD, with an ID system that is required for unique access to each level of the tree-structured metadata, such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data, and permission to attach related information to the metadata, provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers' understanding and use of data, but also submitters' motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitates the construction of novel databases by database developers. A permission system that allows publication of immature metadata, together with feedback from readers, also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata records for analyzed data obtained from 35 biological species are currently published.
Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/.
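The idea of an ID system giving unique access to each level of tree-structured metadata can be sketched as follows. The dotted ID format, level names and attribute fields are illustrative only, not TogoMD's actual specification:

```python
class MetadataNode:
    """One level of a tree-structured metadata record (study, sample,
    analytical method, data analysis), addressable by a unique ID."""

    def __init__(self, node_id, level, attrs=None):
        self.node_id, self.level = node_id, level
        self.attrs = attrs or {}
        self.children = []

    def add_child(self, level, attrs=None):
        # Child IDs extend the parent ID, so every level stays unique.
        child = MetadataNode(f"{self.node_id}.{len(self.children) + 1}",
                             level, attrs)
        self.children.append(child)
        return child

    def find(self, node_id):
        """Resolve an ID anywhere in the subtree (None if absent)."""
        if self.node_id == node_id:
            return self
        for c in self.children:
            hit = c.find(node_id)
            if hit:
                return hit
        return None

study = MetadataNode("SE1", "study", {"purpose": "flavonoid profiling"})
sample = study.add_child("sample", {"species": "Arabidopsis thaliana"})
method = sample.add_child("analytical_method", {"platform": "LC-MS"})
print(method.node_id)  # SE1.1.1 -- unique address of this metadata level
```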

  15. Population genomic analyses based on 1 million SNPs in commercial egg layers.

    Directory of Open Access Journals (Sweden)

    Mahmood Gholami

Full Text Available Identifying signatures of selection can provide valuable insight into the genes or genomic regions that are or have been under selective pressure, which can lead to a better understanding of genotype-phenotype relationships. A common strategy for selection signature detection is to compare samples from several populations and search for genomic regions with outstanding genetic differentiation. Wright's fixation index, FST, is a useful index for evaluating genetic differentiation between populations. The aim of this study was to detect selection signatures between different chicken groups based on SNP-wise FST calculation. A total of 96 individuals of three commercial layer breeds and 14 non-commercial fancy breeds were genotyped with three different 600K SNP-chips. After filtering, a total of 1 million SNPs were available for FST calculation. Averages of FST values were calculated for overlapping windows, and these were then compared between commercial egg layers and non-commercial fancy breeds, as well as between white egg layers and brown egg layers. Comparing non-commercial and commercial breeds resulted in the detection of 630 selection signatures, while 656 selection signatures were detected in the comparison between the commercial egg-layer breeds. Annotation of selection signature regions revealed various genes corresponding to production traits for which layer breeds were selected. Among them were NCOA1, SREBF2 and RALGAPA1, associated with reproductive traits, broodiness and egg production. Furthermore, several of the detected genes were associated with growth and carcass traits, including POMC, PRKAB2, SPP1, IGF2, CAPN1, TGFb2 and IGFBP2. Our approach demonstrates that including different populations with a specific breeding history can provide a unique opportunity for a better understanding of farm animal selection.
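The SNP-wise FST plus window-averaging workflow can be sketched as below, using Wright's definition with equal population weights. This is a simplification: studies of this kind typically use sample-size-corrected estimators such as Weir & Cockerham's, and the allele frequencies here are fabricated:

```python
def snp_fst(p1, p2):
    """Wright's FST for one biallelic SNP from the allele frequencies
    in two populations (equal weights): FST = (HT - HS) / HT."""
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)                # pooled expected heterozygosity
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-population
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t

def windowed_fst(fst_values, window, step):
    """Average per-SNP FST over overlapping windows along a chromosome."""
    return [sum(fst_values[i:i + window]) / window
            for i in range(0, len(fst_values) - window + 1, step)]

# Three consecutive SNPs with fabricated frequencies in two breeds:
fsts = [snp_fst(0.9, 0.1), snp_fst(0.5, 0.5), snp_fst(0.8, 0.2)]
print(windowed_fst(fsts, window=2, step=1))
```

Windows with outlying averages are then taken as candidate selection signatures and annotated for nearby genes.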

  16. Pseudopotential-based electron quantum transport: Theoretical formulation and application to nanometer-scale silicon nanowire transistors

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Jingtian, E-mail: jingtian.fang@utdallas.edu; Vandenberghe, William G.; Fu, Bo; Fischetti, Massimo V. [Department of Materials Science and Engineering, The University of Texas at Dallas, Richardson, Texas 75080 (United States)

    2016-01-21

    We present a formalism to treat quantum electronic transport at the nanometer scale based on empirical pseudopotentials. This formalism offers explicit atomistic wavefunctions and an accurate band structure, enabling a detailed study of the characteristics of devices with a nanometer-scale channel and body. Assuming externally applied potentials that change slowly along the electron-transport direction, we invoke the envelope-wavefunction approximation to apply the open boundary conditions and to develop the transport equations. We construct the full-band open boundary conditions (self-energies of device contacts) from the complex band structure of the contacts. We solve the transport equations and present the expressions required to calculate the device characteristics, such as device current and charge density. We apply this formalism to study ballistic transport in a gate-all-around (GAA) silicon nanowire field-effect transistor with a body-size of 0.39 nm, a gate length of 6.52 nm, and an effective oxide thickness of 0.43 nm. Simulation results show that this device exhibits a subthreshold slope (SS) of ∼66 mV/decade and a drain-induced barrier-lowering of ∼2.5 mV/V. Our theoretical calculations predict that low-dimensionality channels in a 3D GAA architecture are able to meet the performance requirements of future devices in terms of SS swing and electrostatic control.
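The full-band, pseudopotential-based formalism is well beyond a toy example, but the role played by the contact self-energies can be illustrated with the textbook single-band 1D chain, where the lead's surface Green's function (and hence its self-energy) follows analytically from its complex band structure. Everything below is a standard didactic model, not the paper's method:

```python
import cmath

def surface_g(E, t=1.0, eta=1e-9):
    """Surface Green's function of a semi-infinite 1D chain
    (on-site energy 0, hopping t); retarded branch with Im g < 0."""
    z = E + 1j * eta
    root = cmath.sqrt(z * z - 4 * t * t)
    g = (z - root) / (2 * t * t)
    if g.imag > 0:                      # pick the physical branch
        g = (z + root) / (2 * t * t)
    return g

def transmission(E, eps=0.0, t=1.0):
    """Landauer transmission through a single-site device coupled to
    two identical 1D leads: T = Gamma_L |G|^2 Gamma_R."""
    sigma = t * t * surface_g(E, t)     # contact self-energy (each lead)
    G = 1.0 / (E - eps - 2 * sigma)     # retarded device Green's function
    gamma = -2 * sigma.imag             # broadening from one lead
    return gamma * abs(G) ** 2 * gamma

print(transmission(0.0))  # ~1.0: perfect transmission inside the band
```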

  17. An empirical evaluation of two theoretically-based hypotheses on the directional association between self-worth and hope.

    Science.gov (United States)

    McDavid, Lindley; McDonough, Meghan H; Smith, Alan L

    2015-06-01

Fostering self-worth and hope are important goals of positive youth development (PYD) efforts, yet intervention design is complicated by contrasting theoretical hypotheses regarding the directional association between these constructs. Therefore, within a longitudinal design we tested: (1) that self-worth predicts changes in hope (self theory; Harter, 1999), and (2) that hope predicts changes in self-worth (hope theory; Snyder, 2002) over time. Youth (N = 321; mean age = 10.33 years) in a physical activity-based PYD program completed surveys 37-45 days prior to and on the second day and third-to-last day of the program. A latent variable panel model that included autoregressive and cross-lagged paths indicated that self-worth was a significant predictor of change in hope, but hope did not predict change in self-worth. Therefore, the directional association between self-worth and hope is better explained by self-theory, and PYD programs should aim to enhance perceptions of self-worth to build perceptions of hope. Copyright © 2015 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  18. Risk-Sensitive Multiagent Decision-Theoretic Planning Based on MDP and One-Switch Utility Functions

    Directory of Open Access Journals (Sweden)

    Wei Zeng

    2014-01-01

Full Text Available In high-stakes situations, decision-makers are often risk-averse, and decision-making processes often take place in group settings. This paper studies multiagent decision-theoretic planning under the Markov decision process (MDP) framework, considering the change in an agent's risk attitude as his wealth level varies. Based on a one-switch utility function that describes how an agent's risk attitude changes with his wealth level, we give additive and multiplicative aggregation models of group utility and adopt maximization of expected group utility as the planning objective. When the wealth level approaches infinity, the characteristics of the optimal policy are analyzed for the additive and multiplicative aggregation models, respectively. A backward-induction method is then proposed to divide the wealth-level interval from negative infinity to the initial wealth level into subintervals and to determine the optimal policy over states and subintervals. The proposed method is illustrated by numerical examples, and the influences of the agents' risk-aversion parameters and weights on group decision-making are also analyzed.
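A common one-switch family is the linear-plus-exponential utility u(w) = w − b·c^w (b > 0, 0 < c < 1), whose risk aversion fades as wealth grows; under additive aggregation the group utility is simply a weighted sum of member utilities. The parameter values, weights and wealth levels below are illustrative only:

```python
def one_switch_utility(w, b=2.0, c=0.7):
    """Linear-plus-exponential one-switch utility u(w) = w - b*c**w
    (b > 0, 0 < c < 1): risk-averse at low wealth, approaching
    risk-neutral (linear) behavior as wealth grows."""
    return w - b * c ** w

def additive_group_utility(wealths, weights):
    """Additive aggregation of member utilities into a group utility."""
    return sum(k * one_switch_utility(w) for k, w in zip(weights, wealths))

# Two agents with equal weights (fabricated wealth levels):
print(additive_group_utility([5.0, 10.0], [0.5, 0.5]))
```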

  19. How and for whom does web-based acceptance and commitment therapy work? Mediation and moderation analyses of web-based ACT for depressive symptoms.

    Science.gov (United States)

    Pots, Wendy T M; Trompetter, Hester R; Schreurs, Karlein M G; Bohlmeijer, Ernst T

    2016-05-23

Acceptance and Commitment Therapy (ACT) has been demonstrated to be effective in reducing depressive symptoms. However, little is known about how and for whom therapeutic change occurs, specifically in web-based interventions. This study focuses on the mediators, moderators and predictors of change during a web-based ACT intervention. Data from 236 adults from the general population with mild to moderate depressive symptoms, randomized to either web-based ACT (n = 82) or one of two control conditions (web-based Expressive Writing (EW; n = 67) and a waiting list (n = 87)), were analysed. Single and multiple mediation analyses and exploratory linear regression analyses were performed using PROCESS to examine mediators, moderators and predictors of pre- to post-treatment and follow-up change in depressive symptoms. The treatment effect of ACT versus the waiting list was mediated by psychological flexibility and two mindfulness facets. The treatment effect of ACT versus EW was not significantly mediated. The moderator analyses demonstrated that the effects of web-based ACT did not vary according to baseline patient characteristics when compared to both control groups. However, higher baseline depressive symptoms and positive mental health and lower baseline anxiety were identified as predictors of outcome across all conditions. Similar results were found for follow-up. The findings of this study corroborate the evidence that psychological flexibility and mindfulness are distinct process mechanisms that mediate the effects of web-based ACT interventions. The results indicate that there are no restrictions on the allocation of a web-based ACT intervention and that web-based ACT can work for different subpopulations. Netherlands Trial Register NTR2736. Registered 6 February 2011.
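The product-of-coefficients logic behind a single-mediator analysis (the simplest PROCESS model) can be sketched in pure Python: a is the effect of X on M, b the effect of M on Y controlling for X, and a·b the indirect effect. The data are fabricated so that X affects Y entirely through M; real analyses would add bootstrap confidence intervals:

```python
def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    """Sample covariance (n - 1 denominator); cov(u, u) is the variance."""
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)

def simple_mediation(x, m, y):
    """Indirect effect a*b of X on Y through a single mediator M.

    a: slope of M ~ X.  b: partial slope of M in Y ~ X + M, obtained
    from the closed-form two-predictor least-squares solution.
    """
    a = cov(x, m) / cov(x, x)
    det = cov(x, x) * cov(m, m) - cov(x, m) ** 2
    b = (cov(m, y) * cov(x, x) - cov(x, y) * cov(x, m)) / det
    return a * b

# Fabricated data: Y depends on M only (so the effect is fully mediated).
x = [0, 1, 2, 3, 4, 5]
m = [1, 2, 2, 4, 5, 4]
y = [3 * mi + 1 for mi in m]
print(simple_mediation(x, m, y))
```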

  20. Analysing Amazonian forest productivity using a new individual and trait-based model (TFS v.1)

    Science.gov (United States)

    Fyllas, N. M.; Gloor, E.; Mercado, L. M.; Sitch, S.; Quesada, C. A.; Domingues, T. F.; Galbraith, D. R.; Torre-Lezama, A.; Vilanova, E.; Ramírez-Angulo, H.; Higuchi, N.; Neill, D. A.; Silveira, M.; Ferreira, L.; Aymard C., G. A.; Malhi, Y.; Phillips, O. L.; Lloyd, J.

    2014-07-01

Repeated long-term censuses have revealed large-scale spatial patterns in Amazon basin forest structure and dynamism, with some forests in the west of the basin having up to twice as high a rate of aboveground biomass production and tree recruitment as forests in the east. Possible causes for this variation could be the climatic and edaphic gradients across the basin and/or the spatial distribution of tree species composition. To help understand the causes of this variation, a new individual-based model of tropical forest growth, designed to take full advantage of the forest census data available from the Amazonian Forest Inventory Network (RAINFOR), has been developed. The model allows for within-stand variations in tree size distribution and key functional traits and for between-stand differences in climate and soil physical and chemical properties. It runs at the stand level with four functional traits - leaf dry mass per area (Ma), leaf nitrogen (NL) and phosphorus (PL) content and wood density (DW) - varying from tree to tree, in a way that replicates the observed continua found within each stand. We first applied the model to validate canopy-level water fluxes at three eddy covariance flux measurement sites. For all three sites the canopy-level water fluxes were adequately simulated. We then applied the model at seven plots where intensive measurements of carbon allocation are available. Tree-by-tree multi-annual growth rates generally agreed well with observations for small trees, but deviations were identified for larger trees. At the stand level, simulations at 40 plots were used to explore the influence of climate and soil nutrient availability on the gross (ΠG) and net (ΠN) primary production rates as well as the carbon use efficiency (CU). Simulated ΠG, ΠN and CU were not associated with temperature. On the other hand, all three measures of stand-level productivity were positively related to both mean annual precipitation and soil nutrient status.

  1. Investigation of a wet ethanol operated HCCI engine based on first and second law analyses

    International Nuclear Information System (INIS)

    Khaliq, Abdul; Trivedi, Shailesh K.; Dincer, Ibrahim

    2011-01-01

    are in the HCCI engine (around 89%) followed by fuel vaporizer (4.9%) and catalytic converter (4.5%). → Based on simulation results, it is found that second law efficiency of wet ethanol operated HCCI engine is higher than the pure ethanol fuelled HCCI engine.

  2. Performance Analyses of Counter-Flow Closed Wet Cooling Towers Based on a Simplified Calculation Method

    Directory of Open Access Journals (Sweden)

    Xiaoqing Wei

    2017-02-01

Full Text Available As one of the most widely used units in water cooling systems, closed wet cooling towers (CWCTs) have two typical counter-flow constructions, in which the spray water flows from the top to the bottom, and the moist air and cooling water flow in the opposite direction vertically (parallel) or horizontally (cross), respectively. This study aims to present a simplified calculation method for conveniently and accurately analyzing the thermal performance of the two types of counter-flow CWCTs, viz. the parallel counter-flow CWCT (PCFCWCT) and the cross counter-flow CWCT (CCFCWCT). A simplified cooling capacity model that includes just two characteristic parameters is developed. The Levenberg–Marquardt method is employed to determine the model parameters by curve fitting of experimental data. Based on the proposed model, the predicted outlet temperatures of the process water are compared with the measurements of a PCFCWCT and a CCFCWCT, respectively, reported in the literature. The results indicate that the predicted values agree well with the experimental data in previous studies. The maximum absolute errors in predicting the process water outlet temperatures are 0.20 and 0.24 °C for the PCFCWCT and CCFCWCT, respectively. These results indicate that the simplified method is reliable for performance prediction of counter-flow CWCTs. Although the flow patterns of the two towers are different, the variation trends of thermal performance are similar to each other under various operating conditions. The inlet air wet-bulb temperature, inlet cooling water temperature, air flow rate, and cooling water flow rate are crucial for determining the cooling capacity of a counter-flow CWCT, while the cooling tower effectiveness is mainly determined by the flow rates of air and cooling water. Compared with the CCFCWCT, the PCFCWCT is much more applicable in a large-scale cooling water system, and the superiority would be amplified when the scale of water
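The idea of identifying a two-parameter capacity model from measured data can be sketched as below. The effectiveness correlation eps = 1 − exp(−c·r^n) is a hypothetical stand-in for the paper's actual model, and a coarse grid search stands in for the Levenberg–Marquardt optimizer:

```python
import math

def effectiveness(r, c, n):
    """Hypothetical two-parameter model: cooling effectiveness as a
    function of the air-to-water mass flow ratio r."""
    return 1 - math.exp(-c * r ** n)

def fit(data, c_grid, n_grid):
    """Least-squares estimate of (c, n) by exhaustive grid search --
    a simple stand-in for the Levenberg-Marquardt step."""
    best = None
    for c in c_grid:
        for n in n_grid:
            sse = sum((eps - effectiveness(r, c, n)) ** 2 for r, eps in data)
            if best is None or sse < best[0]:
                best = (sse, c, n)
    return best[1], best[2]

# Synthetic "measurements" generated from c = 0.8, n = 0.6:
data = [(r, effectiveness(r, 0.8, 0.6)) for r in (0.5, 1.0, 1.5, 2.0)]
c_fit, n_fit = fit(data,
                   c_grid=[0.6, 0.7, 0.8, 0.9],
                   n_grid=[0.4, 0.5, 0.6, 0.7])
print(c_fit, n_fit)  # recovers 0.8 0.6 on this noise-free data
```

With parameters identified once from experiments, the model can then predict outlet water temperatures across operating conditions.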

  3. Structural, Spectroscopic (FT-IR, Raman and NMR), Non-linear Optical (NLO), HOMO-LUMO and Theoretical (DFT/CAM-B3LYP) Analyses of N-Benzyloxycarbonyloxy-5-Norbornene-2,3-Dicarboximide Molecule

    Directory of Open Access Journals (Sweden)

    Nuri ÖZTÜRK

    2018-02-01

Full Text Available The experimental spectroscopic investigation of the N-benzyloxycarbonyloxy-5-norbornene-2,3-dicarboximide (C17H15NO5) molecule has been carried out using 1H and 13C NMR chemical shifts and FT-IR and Raman spectroscopies. Conformational forms have been determined depending on the orientation of the N-benzyloxycarbonyloxy and 5-norbornene-2,3-dicarboximide (NDI) groups of the title compound. The structural geometry optimizations, vibrational wavenumbers, NMR chemical shifts (in vacuum and chloroform) and HOMO-LUMO analyses for all conformers of the title molecule have been performed with the DFT/CAM-B3LYP method at the 6-311++G(d,p) basis set. Additionally, based on the calculated HOMO and LUMO energy values, molecular properties such as ionization potential (I), electron affinity (A), electronegativity (χ), chemical hardness (h), chemical softness (z), chemical potential (μ) and electrophilicity index (w) are determined for all conformers. The non-linear optical (NLO) properties have also been studied for the title molecule. The experimental spectral data are in accordance with the calculated values.
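The descriptors listed above follow from the frontier orbital energies via the standard Koopmans-theorem relations, e.g. χ = (I + A)/2 and hardness = (I − A)/2. A minimal sketch (the orbital energies used below are illustrative, not values from the paper):

```python
def reactivity_descriptors(e_homo, e_lumo):
    """Global reactivity descriptors from frontier orbital energies (eV),
    using the standard Koopmans-theorem approximations."""
    I = -e_homo                  # ionization potential
    A = -e_lumo                  # electron affinity
    chi = (I + A) / 2            # electronegativity
    eta = (I - A) / 2            # chemical hardness
    softness = 1 / (2 * eta)     # chemical softness
    mu = -chi                    # chemical potential
    omega = mu ** 2 / (2 * eta)  # electrophilicity index
    return {"I": I, "A": A, "chi": chi, "eta": eta,
            "S": softness, "mu": mu, "omega": omega}

# Illustrative orbital energies in eV (hypothetical conformer):
print(reactivity_descriptors(-7.2, -1.1))
```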

  4. Quantitative X-ray Map Analyser (Q-XRMA): A new GIS-based statistical approach to Mineral Image Analysis

    Science.gov (United States)

    Ortolano, Gaetano; Visalli, Roberto; Godard, Gaston; Cirrincione, Rosolino

    2018-06-01

    We present a new ArcGIS®-based tool developed in the Python programming language for calibrating EDS/WDS X-ray element maps, with the aim of acquiring quantitative information of petrological interest. The calibration procedure is based on a multiple linear regression technique that takes into account interdependence among elements and is constrained by the stoichiometry of minerals. The procedure requires an appropriate number of spot analyses for use as internal standards and provides several test indexes for a rapid check of calibration accuracy. The code is based on an earlier image-processing tool designed primarily for classifying minerals in X-ray element maps; the original Python code has now been enhanced to yield calibrated maps of mineral end-members or the chemical parameters of each classified mineral. The semi-automated procedure can be used to extract a dataset that is automatically stored within queryable tables. As a case study, the software was applied to an amphibolite-facies garnet-bearing micaschist. The calibrated images obtained for both anhydrous (i.e., garnet and plagioclase) and hydrous (i.e., biotite) phases show a good fit with corresponding electron microprobe analyses. This new GIS-based tool package can thus find useful application in petrology and materials science research. Moreover, the huge quantity of data extracted opens new opportunities for the development of a thin-section microchemical database that, using a GIS platform, can be linked with other major global geoscience databases.
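The calibration step — a multiple linear regression from raw X-ray counts to concentrations, anchored by spot analyses used as internal standards — can be sketched for the two-predictor case. All count and wt% values are fabricated, and real calibrations handle many elements and stoichiometric constraints:

```python
def solve2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

def calibrate(counts, wt):
    """Least-squares coefficients (k1, k2) of wt% = k1*c1 + k2*c2,
    where c1 are counts of the calibrated element and c2 counts of an
    interfering element -- a pared-down version of the multi-element
    regression run against internal-standard spot analyses."""
    s11 = sum(c1 * c1 for c1, c2 in counts)
    s12 = sum(c1 * c2 for c1, c2 in counts)
    s22 = sum(c2 * c2 for c1, c2 in counts)
    b1 = sum(c1 * w for (c1, c2), w in zip(counts, wt))
    b2 = sum(c2 * w for (c1, c2), w in zip(counts, wt))
    return solve2(s11, s12, s12, s22, b1, b2)

# Fabricated spot analyses: (element counts, interferent counts) -> wt%
spots = [(100, 10), (200, 30), (150, 20), (300, 40)]
wt = [0.05 * c1 + 0.10 * c2 for c1, c2 in spots]  # exact linear response
k1, k2 = calibrate(spots, wt)
print(k1, k2)
```

Applied pixel-by-pixel, such coefficients turn a raw count map into a calibrated concentration map.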

  5. Randomized Trial of ConquerFear: A Novel, Theoretically Based Psychosocial Intervention for Fear of Cancer Recurrence.

    Science.gov (United States)

    Butow, Phyllis N; Turner, Jane; Gilchrist, Jemma; Sharpe, Louise; Smith, Allan Ben; Fardell, Joanna E; Tesson, Stephanie; O'Connell, Rachel; Girgis, Afaf; Gebski, Val J; Asher, Rebecca; Mihalopoulos, Cathrine; Bell, Melanie L; Zola, Karina Grunewald; Beith, Jane; Thewes, Belinda

    2017-12-20

Purpose Fear of cancer recurrence (FCR) is prevalent, distressing, and long lasting. This study evaluated the impact of a theoretically/empirically based intervention (ConquerFear) on FCR. Methods Eligible survivors had curable breast or colorectal cancer or melanoma, had completed treatment (not including endocrine therapy) 2 months to 5 years previously, were age > 18 years, and had scores above the clinical cutoff on the FCR Inventory (FCRI) severity subscale at screening. Participants were randomly assigned at a one-to-one ratio to either five face-to-face sessions of ConquerFear (attention training, metacognitions, acceptance/mindfulness, screening behavior, and values-based goal setting) or an attention control (Taking-it-Easy relaxation therapy). Participants completed questionnaires at baseline (T0), immediately post-therapy (T1), and 3 (T2) and 6 months (T3) later. The primary outcome was FCRI total score. Results Of 704 potentially eligible survivors from 17 sites and two online databases, 533 were contactable, of whom 222 (42%) consented; 121 were randomly assigned to intervention and 101 to control. Study arms were equivalent at baseline on all measured characteristics. ConquerFear participants had clinically and statistically greater improvements than control participants from T0 to T1 on FCRI total score and on FCRI subscales (psychological distress and triggers), as well as in general anxiety, cancer-specific distress (total), mental quality of life and metacognitions (total). Differences in FCRI psychological distress and cancer-specific distress (total) remained significant at T3. Conclusion This randomized trial demonstrated efficacy of ConquerFear compared with attention control (Taking-it-Easy) in reduction of FCRI total scores immediately post-therapy and 3 and 6 months later, and in many secondary outcomes immediately post-therapy. Cancer-specific distress (total) remained more improved at 3- and 6-month follow-up.

  6. Cost-Benefit Analyses of Transportation Investments

    DEFF Research Database (Denmark)

    Næss, Petter

    2006-01-01

This paper discusses the practice of cost-benefit analyses of transportation infrastructure investment projects from the meta-theoretical perspective of critical realism. Such analyses are based on a number of untenable ontological assumptions about social value, human nature and the natural environment, as well as on willingness-to-pay investigations. Accepting the ontological and epistemological assumptions of cost-benefit analysis involves an implicit acceptance of the ethical and political values favoured by these assumptions. Cost-benefit analyses of transportation investment projects tend to neglect long-term environmental consequences

  7. Theoretical physics

    CERN Document Server

    Joos, Georg

    1986-01-01

Among the finest, most comprehensive treatments of theoretical physics ever written, this classic volume comprises a superb introduction to the main branches of the discipline and offers solid grounding for further research in a variety of fields. Students will find no better one-volume coverage of so many essential topics; moreover, since its first publication, the book has been substantially revised and updated with additional material on Bessel functions, spherical harmonics, superconductivity, elastomers, and other subjects. The first four chapters review mathematical topics needed by theoretical physicists.

  8. Theoretical physics

    International Nuclear Information System (INIS)

    Laval, G.

    1988-01-01

The 1988 progress report of the Theoretical Physics Center (Ecole Polytechnique, France) is presented. The research activities are carried out in the fields of supersymmetry theory, dynamical systems theory, statistical mechanics, plasma physics and random media. Substantial improvements were obtained in dynamical systems investigations. In field theory, the definition of the Gross-Neveu model was achieved; however, the construction of non-abelian gauge theories and conformal theories remains the main research activity. Concerning astrophysics, a three-dimensional gravitational code was obtained. The activities of each team, and the lists of published papers, congress communications and theses, are given [fr]

  9. Theoretical physics

    International Nuclear Information System (INIS)

    Anon.

    1980-01-01

    The nuclear theory program deals with the properties of nuclei and with the reactions and interactions between nuclei and a variety of projectiles. The main areas of concentration are: heavy-ion direct reactions at nonrelativistic energies; nuclear shell theory and nuclear structure; nuclear matter and nuclear forces;intermediate-energy physics and pion-nucleus interactions; and high-energy collisions of heavy ions. Recent progress and plans for future work in these five main areas of concentration and a summary of other theoretical studies currently in progress or recently completed are presented

  10. Sensitivity analyses of woody species exposed to air pollution based on ecophysiological measurements.

    Science.gov (United States)

    Wen, Dazhi; Kuang, Yuanwen; Zhou, Guoyi

    2004-01-01

variation of Fv/Fm appeared in the other two species, particularly in M. chinensis, suggesting that they were more sensitive to air pollutants than I. rotunda. The mean LA was reduced for all species growing at the polluted site. The mean LMA for all species exceeded the sclerophylly threshold given by Cowling and Campbell and increased for those under pollution stress, which can be interpreted as one of the acclimation strategies of plants to air pollution stress. Little difference in leaf chlorophyll content was observed in F. microcarpa and M. chinensis, while remarkable differences were found in I. rotunda growing at the polluted and the clean sites. The content of leaf carotenoids was largely reduced in I. rotunda growing at the polluted site, but increased in F. microcarpa and M. chinensis, compared with plants growing at the clean site. Plants growing at the clean site had a lower leaf N content than those growing at the polluted site. In addition, species with a higher resistance to pollution stress showed less difference in leaf N content than the sensitive species. Based on the Fv/Fm measurements of the three woody species, I. rotunda showed the highest resistance to air pollutants from ceramic industries, followed by F. microcarpa. M. chinensis was the most sensitive species to air pollution and had the lowest capacity to cope with air pollution stress, which was consistent with the visual injury symptoms observed in the crown profiles of plants at the polluted site. Fv/Fm, LMA, LA, leaf pigments and N content could be used alone or in combination to diagnose the extent of physiological injury. The ratio Fv/Fm, however, was the best and most effective parameter.
Tree species with higher air-pollutant resistance, as diagnosed by such ecophysiological parameters, should be considered first and planted widely for urban afforestation or forest regeneration in areas where the forest has been seriously degraded or forest health markedly affected by the same kind of

  11. Using the Theoretical Domains Framework (TDF) to understand adherence to multiple evidence-based indicators in primary care: a qualitative study.

    Science.gov (United States)

    Lawton, Rebecca; Heyhoe, Jane; Louch, Gemma; Ingleson, Emma; Glidewell, Liz; Willis, Thomas A; McEachan, Rosemary R C; Foy, Robbie

    2016-08-08

    There are recognised gaps between evidence and practice in general practice, a setting posing particular implementation challenges. We earlier screened clinical guideline recommendations to derive a set of 'high-impact' indicators based upon criteria including potential for significant patient benefit, scope for improved practice and amenability to measurement using routinely collected data. Here, we explore health professionals' perceived determinants of adherence to these indicators, examining the degree to which determinants were indicator-specific or potentially generalisable across indicators. We interviewed 60 general practitioners, practice nurses and practice managers in West Yorkshire, the UK, about adherence to four indicators: avoidance of risky prescribing; treatment targets in type 2 diabetes; blood pressure targets in treated hypertension; and anticoagulation in atrial fibrillation. Interview questions drew upon the Theoretical Domains Framework (TDF). Data were analysed using framework analysis. Professional role and identity and environmental context and resources featured prominently across all indicators whilst the importance of other domains, for example, beliefs about consequences, social influences and knowledge varied across indicators. We identified five meta-themes representing more general organisational and contextual factors common to all indicators. The TDF helped elicit a wide range of reported determinants of adherence to 'high-impact' indicators in primary care. It was more difficult to pinpoint which determinants, if targeted by an implementation strategy, would maximise change. The meta-themes broadly underline the need to align the design of interventions targeting general practices with higher level supports and broader contextual considerations. 
However, our findings suggest that it is feasible to develop interventions to promote the uptake of different evidence-based indicators which share common features whilst also including

  12. Audio-visual perception of 3D cinematography: an fMRI study using condition-based and computation-based analyses.

    Directory of Open Access Journals (Sweden)

    Akitoshi Ogawa

    Full Text Available The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard "condition-based" designs, as well as "computational" methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli.

  13. Audio-visual perception of 3D cinematography: an fMRI study using condition-based and computation-based analyses.

    Science.gov (United States)

    Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano

    2013-01-01

    The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard "condition-based" designs, as well as "computational" methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli.
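
    The core of a "computation-based" analysis as described above is a regression of each voxel's time series against a time-varying stimulus feature, rather than a contrast between blocked conditions. A minimal sketch follows; the feature values and effect size are simulated for illustration, and feature extraction from the movie and HRF convolution (which the actual study would require) are omitted.

```python
import numpy as np

# Sketch of a computation-based analysis: regress a voxel time series
# against a time-varying stimulus feature (e.g. binocular disparity per
# scan). All data here are simulated.
rng = np.random.default_rng(1)
n_scans = 200
feature = rng.normal(size=n_scans)                            # illustrative disparity signal
voxel = 2.0 * feature + rng.normal(scale=0.5, size=n_scans)   # simulated BOLD response

# Design matrix: feature regressor plus intercept; least squares gives the
# per-voxel effect size (beta) of the feature.
X = np.column_stack([feature, np.ones(n_scans)])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
```

    In a full analysis, the fitted betas would be tested across subjects to localise regions whose activity co-varies with the feature.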

  14. CrusView: A Java-Based Visualization Platform for Comparative Genomics Analyses in Brassicaceae Species

    Science.gov (United States)

    Chen, Hao; Wang, Xiangfeng

    2013-01-01

    In plants and animals, chromosomal breakage and fusion events based on conserved syntenic genomic blocks lead to conserved patterns of karyotype evolution among species of the same family. However, karyotype information has not been well utilized in genomic comparison studies. We present CrusView, a Java-based bioinformatic application utilizing Standard Widget Toolkit/Swing graphics libraries and a SQLite database for performing visualized analyses of comparative genomics data in Brassicaceae (crucifer) plants. Compared with similar software and databases, one of the unique features of CrusView is its integration of karyotype information when comparing two genomes. This feature allows users to perform karyotype-based genome assembly and karyotype-assisted genome synteny analyses with preset karyotype patterns of the Brassicaceae genomes. Additionally, CrusView is a local program, which gives its users high flexibility when analyzing unpublished genomes and allows users to upload self-defined genomic information so that they can visually study the associations between genome structural variations and genetic elements, including chromosomal rearrangements, genomic macrosynteny, gene families, high-frequency recombination sites, and tandem and segmental duplications between related species. This tool will greatly facilitate karyotype, chromosome, and genome evolution studies using visualized comparative genomics approaches in Brassicaceae species. CrusView is freely available at http://www.cmbb.arizona.edu/CrusView/. PMID:23898041

  15. Devising a New Model of Demand-Based Learning Integrated with Social Networks and Analyses of its Performance

    Directory of Open Access Journals (Sweden)

    Bekim Fetaji

    2018-02-01

    Full Text Available The focus of this research study is to devise a new model of demand-based learning integrated with social networks such as Facebook and Twitter. The study investigates this by reviewing the published literature and carrying out a case study analysis of the new model's practical implementation. The study focuses on analyzing demand-based learning and investigating how it can be improved by devising a specific model that incorporates social network use. Statistical analyses of the questionnaire results, addressing the research questions and hypotheses, showed that there is a need to introduce new models into the teaching process. The originality lies in the introduction of the social login approach to an educational environment; the contribution is the development of a demand-based web application, which aims to modernize the educational pattern of communication, introduce the social login approach, increase knowledge transfer, and improve learners' performance and skills. Insights and recommendations are provided, argued and discussed.

  16. Design and development of microcontroller-based clinical chemistry analyser for measurement of various blood biochemistry parameters.

    Science.gov (United States)

    Taneja, S R; Gupta, R C; Kumar, Jagdish; Thariyan, K K; Verma, Sanjeev

    2005-01-01

    Clinical chemistry analyser is a high-performance microcontroller-based photometric biochemical analyser that measures various blood biochemical parameters such as blood glucose, urea, protein, bilirubin, and so forth, and also measures and monitors enzyme activity during biochemical tests such as ALT (alanine aminotransferase), amylase, AST (aspartate aminotransferase), and so forth. These tests are of great significance in biochemistry and are used for diagnostic purposes and for classifying various disorders and diseases such as diabetes, liver malfunction, renal diseases, and so forth. An inexpensive clinical chemistry analyser developed by the authors is described in this paper. This is an open system in which any reagent kit available in the market can be used. The system is based on the principle of absorbance-transmittance photometry. The design is built around an 80C31 microcontroller with RAM, EPROM, and peripheral interface devices. The developed system incorporates a light source, an optical module, interference filters of various wavelengths, a Peltier device for maintaining the required temperature of the mixture in the flow cell, a peristaltic pump for sample aspiration, a graphic LCD for displaying blood parameters, patient test results and kinetic test graphs, a 40-column mini thermal printer, and a 32-key keyboard for executing various functions. The lab tests conducted on the instrument covered the versatility of the analyser, the flexibility of the software, and the treatment of samples. The prototype was successfully tested and evaluated on over 1000 blood samples for seventeen blood parameters. Evaluation was carried out at the Department of Biochemistry, Government Medical College and Hospital. The test results were found to be comparable with those of other standard instruments.
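
    The instrument above works on absorbance-transmittance photometry: measured light intensities are converted to an absorbance, which scales linearly with analyte concentration (Beer-Lambert law) and is calibrated against a standard of known concentration. A minimal sketch of that end-point calculation follows; the function names, the 50%/25% transmittance figures and the 100 mg/dL glucose standard are illustrative assumptions, not values from the paper.

```python
import math

def absorbance(i_sample: float, i_blank: float) -> float:
    """A = -log10(I / I0): absorbance from transmitted light intensities."""
    return -math.log10(i_sample / i_blank)

def concentration(a_sample: float, a_standard: float, c_standard: float) -> float:
    """End-point estimate: absorbance is proportional to concentration
    (Beer-Lambert law), so a standard of known concentration calibrates
    the measurement."""
    return a_sample / a_standard * c_standard

# Illustrative numbers: a 100 mg/dL glucose standard transmitting 50% of
# the blank intensity, and a patient sample transmitting 25%.
a_std = absorbance(50.0, 100.0)
a_smp = absorbance(25.0, 100.0)
c_smp = concentration(a_smp, a_std, 100.0)  # -> 200 mg/dL
```

    Kinetic (enzyme) tests instead track the rate of absorbance change over time, but the same intensity-to-absorbance conversion underlies both modes.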

  17. Candelariella placodizans (Candelariaceae) reported new to mainland China and Taiwan based on morphological, chemical and molecular phylogenetic analyses

    Directory of Open Access Journals (Sweden)

    Lidia Yakovchenko

    2016-06-01

    Full Text Available Candelariella placodizans is newly reported from China. It was collected on exposed rocks with mosses in alpine areas of Taiwan and Yunnan Province, China, at elevations between 3200 and 4400 m. Molecular phylogenetic analyses based on ITS rDNA sequences were performed to confirm the monophyly of the Chinese populations with respect to already existing sequences of the species, and further to examine their relationships to other members of the genus. An identification key to all 14 known taxa of Candelariella in China is provided.

  18. Exergy, exergoeconomic and environmental analyses and evolutionary algorithm based multi-objective optimization of combined cycle power plants

    International Nuclear Information System (INIS)

    Ahmadi, Pouria; Dincer, Ibrahim; Rosen, Marc A.

    2011-01-01

    A comprehensive exergy, exergoeconomic and environmental impact analysis and optimization of several combined cycle power plants (CCPPs) is reported. In the first part, thermodynamic analyses of the CCPPs based on energy and exergy are performed, and the effect of supplementary firing on the natural gas-fired CCPP is investigated. This step covers the effect of supplementary firing on the performance of the bottoming cycle and on CO2 emissions, and utilizes the first and second laws of thermodynamics. In the second part, a multi-objective optimization is performed to determine the 'best' design parameters, accounting for exergetic, economic and environmental factors. The optimization considers three objective functions: CCPP exergy efficiency, total cost rate of the system products and CO2 emissions of the overall plant. The environmental impact in terms of CO2 emissions is integrated with the exergoeconomic objective function as a new objective function. The results of both exergy and exergoeconomic analyses show that the largest exergy destruction occurs in the CCPP combustion chamber, and that increasing the gas turbine inlet temperature decreases the CCPP cost of exergy destruction. The optimization results demonstrate that CO2 emissions are reduced by selecting the best components and using a low fuel injection rate into the combustion chamber. -- Highlights: → Comprehensive thermodynamic modeling of a combined cycle power plant. → Exergy, economic and environmental analyses of the system. → Investigation of the role of multi-objective exergoenvironmental optimization as a tool for more environmentally benign design.
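
    Selecting designs against three competing objectives (exergy efficiency, cost rate, CO2 emissions), as described above, rests on Pareto dominance in the selection step of an evolutionary multi-objective optimizer. The sketch below shows that dominance test; the objective values are invented for illustration, and the specific evolutionary algorithm used in the paper is not reproduced here.

```python
# Each candidate CCPP design is scored on three objectives, all cast as
# minimization: negated exergy efficiency, total cost rate and CO2 emissions.
def dominates(a, b):
    """True if design a is no worse than b in every objective and strictly
    better in at least one (Pareto dominance, minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only the non-dominated designs."""
    return [d for d in designs if not any(dominates(e, d) for e in designs if e != d)]

designs = [
    (-0.55, 120.0, 410.0),  # efficient but costly
    (-0.52, 100.0, 430.0),  # cheaper, slightly less efficient
    (-0.50, 130.0, 450.0),  # worse than the first design in all objectives
]
front = pareto_front(designs)  # -> the first two designs survive
```

    A decision maker then picks one design from the front, e.g. by weighting the exergoeconomic and environmental objectives.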

  19. Using a laser-based CO2 carbon isotope analyser to investigate gas transfer in geological media

    International Nuclear Information System (INIS)

    Guillon, S.; Pili, E.; Agrinier, P.

    2012-01-01

    CO2 stable carbon isotopes are very attractive in environmental research for investigating both natural and anthropogenic carbon sources. Laser-based CO2 carbon isotope analysis provides continuous measurement at high temporal resolution and is a promising alternative to isotope ratio mass spectrometry (IRMS). We performed a thorough assessment of a commercially available CO2 Carbon Isotope Analyser (CCIA DLT-100, Los Gatos Research) that allows in situ measurement of 13C in CO2. Using a set of reference gases of known CO2 concentration and carbon isotopic composition, we evaluated the precision, long-term stability, temperature sensitivity and concentration dependence of the analyser. Despite the good precision calculated from the Allan variance (5.0 ppm for CO2 concentration and 0.05 per thousand for δ13C at 60 s averaging), real-world performance is degraded by two main sources of error: temperature sensitivity and the dependence of δ13C on CO2 concentration. Data processing is required to correct for these errors. Following application of these corrections, we achieve an accuracy of 8.7 ppm for CO2 concentration and 1.3 per thousand for δ13C, which is worse than mass spectrometry performance but still allows field applications. With this portable analyser we measured the CO2 flux degassed from rock in an underground tunnel. The obtained carbon isotopic composition agrees with IRMS measurements, and can be used to identify the carbon source. (authors)
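
    The precision figures quoted above come from an Allan-variance analysis of the analyser's noise: the series is averaged over progressively longer windows, and the deviation between successive window means shows how precision improves with averaging time. A minimal sketch of the non-overlapping estimator follows (whether the authors used this variant or the overlapping one is an assumption; the data below are simulated white noise).

```python
import numpy as np

def allan_deviation(samples: np.ndarray, bin_size: int) -> float:
    """Non-overlapping Allan deviation: average the series in consecutive
    bins of `bin_size` points, then take the square root of half the mean
    squared difference between successive bin means."""
    n_bins = len(samples) // bin_size
    means = samples[: n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
    return float(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))

# For white measurement noise the Allan deviation falls as 1/sqrt(bin_size);
# the optimal averaging time is read off where the curve stops falling.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 100_000)
sigma_1 = allan_deviation(noise, 1)      # ~1.0
sigma_100 = allan_deviation(noise, 100)  # ~0.1
```

    Drift (e.g. the temperature sensitivity noted above) makes the curve turn upward at long averaging times, which is why the 60 s figure is quoted rather than an arbitrarily long average.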

  20. Theoretical and experimental analysis of cyclic stresses in gas turbine rotor blades, taking thermal fatigue into account (low cycle fatigue)

    Energy Technology Data Exchange (ETDEWEB)

    Hoelscher, R.

    1982-08-01

    The author is concerned with determining the service life of highly stressed hot components of gas turbines. The focus of the experimental and theoretical investigations is the analysis of the cyclic stresses in an uncooled turbine rotor blade of an ATAR 101 aircraft gas turbine. In addition to simulating cyclic load changes of turbine blades on a model test rig, models for service-life prediction are developed and tested. (HAG).
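
    Service-life prediction models for low-cycle (thermal) fatigue of the kind tested above commonly relate the plastic strain amplitude per cycle to the number of cycles to failure via the Coffin-Manson law. The abstract does not name the specific model, so the relation and the coefficient values below are assumptions for illustration only, not material data for a turbine-blade alloy.

```python
def coffin_manson_cycles(delta_eps_p: float,
                         eps_f: float = 0.6,
                         c: float = -0.6) -> float:
    """Coffin-Manson law: delta_eps_p / 2 = eps_f * (2 * N_f)**c, solved for
    the cycles to failure N_f. eps_f (fatigue ductility coefficient) and the
    exponent c are material constants; the defaults are typical textbook
    magnitudes, not measured values."""
    return 0.5 * (delta_eps_p / (2.0 * eps_f)) ** (1.0 / c)

# A 1% plastic strain range per thermal cycle gives on the order of
# 1.5e3 cycles to failure with these illustrative constants.
n_f = coffin_manson_cycles(0.01)
```

    In practice the constants are fitted from strain-controlled fatigue tests such as those run on the model test rig described above.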