Effect of applied mechanical stress on absorption coefficient of compounds
Energy Technology Data Exchange (ETDEWEB)
Gupta, Manoj Kumar, E-mail: mkgupta.sliet@gmail.com [Department of Applied Sciences, Bhai Gurdas Institute of Engineering and Technology, Sangrur (India); Singh, Gurinderjeet; Dhaliwal, A. S.; Kahlon, K. S. [Department of Physics, Sant Longowal Institute of Engineering & Technology Deemed University, Longowal (Sangrur) India-148106 (India)
2015-08-28
The absorption coefficient is a basic characterization parameter of any material. The absorption coefficients of the compounds Al₂O₃, CaCO₃, ZnO₂, SmO₂ and PbO were measured at incident photon energies of 26, 59.54, 112, 1173 and 1332 keV. The study involves measurements of the absorption coefficient of self-supporting samples prepared under different mechanical stress, applied as pressure from 0 to 6 tons with a hydraulic press. The measurements show that the absorption coefficient of a material increases with the applied mechanical stress up to a point, after which it becomes independent of the stress. The experimentally measured results are in fairly good agreement with theoretical values obtained from WinXCom.
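Attenuation measurements of this kind rest on the Beer-Lambert law, I = I₀ exp(−μx). A minimal sketch of the arithmetic, with hypothetical count readings rather than the paper's data:

```python
import math

def linear_attenuation_coefficient(i0, i, thickness_cm):
    """Linear attenuation coefficient mu (1/cm) from Beer-Lambert: I = I0*exp(-mu*x)."""
    return math.log(i0 / i) / thickness_cm

# Hypothetical transmission measurement: 400 of 1000 photons pass a 2 cm sample.
mu = linear_attenuation_coefficient(1000.0, 400.0, 2.0)  # ~0.458 per cm
```

In practice the incident and transmitted intensities are background-corrected count rates at each photon energy; the coefficient is then compared against tabulated values such as those from WinXCom.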
Problems in applying new internal dose coefficients to radiation control
Energy Technology Data Exchange (ETDEWEB)
Sato, Yuichi [Oarai Laboratory, Chiyoda Technol Corporation, Ibaraki (Japan)
1998-06-01
The author discusses problems expected to affect radiation control, and new problems that will arise, when the new internal dose coefficients are incorporated into law in the future. Regarding the expected influence, occupational and public exposure were discussed: for the former, the effective dose equivalent limit (at present 50 mSv/y) is expected to be reduced; for the latter, the limit remains unclear, although public exposure may be more greatly influenced by the new coefficients. Among the newly arising problems, the more realistic biological model introduced for calculating internal dose makes the calculation more complicated, so the use of a computer becomes a requisite. The effective dose from internal exposure in individual monitoring should still be calculated conveniently, as at present, even after the new coefficients are applied. Calculating the effective dose from internal exposure raises such problems as correction for inhaled particle size and for individual personal parameters. A model calculation of the residual rate in the chest, in which the respiratory tract alone participates, was presented as an example, but for the whole body more complicated functions were pointed out to be necessary. The author concludes that the concept should be incorporated into law in a convenient and easy manner, and that software for calculating internal dose with the new coefficients is wanted. (K.H.)
A new approach to estimate Angstrom coefficients
International Nuclear Information System (INIS)
Abdel Wahab, M.
1991-09-01
A simple quadratic equation to estimate global solar radiation, with coefficients depending on some physical atmospheric parameters, is presented. The importance of the second-order term and its sensitivity to some climatic variations are discussed. (author). 8 refs, 4 figs, 2 tabs
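A quadratic Angstrom-type estimate relates the clearness ratio H/H₀ to the relative sunshine duration n/N. A sketch of the functional form follows; the coefficient values below are illustrative placeholders, since the paper derives its coefficients from physical atmospheric parameters:

```python
def global_radiation_fraction(sunshine_fraction, a=0.25, b=0.45, c=0.05):
    """Quadratic Angstrom-type estimate of H/H0 from n/N.

    a, b, c are placeholder regression coefficients for illustration only;
    the paper ties them to physical atmospheric parameters."""
    s = sunshine_fraction
    return a + b * s + c * s * s

# A mostly clear day, n/N = 0.9:
frac = global_radiation_fraction(0.9)
```

The second-order term c·(n/N)² is what distinguishes this model from the classical linear Angstrom-Prescott relation; its contribution grows toward clear-sky conditions.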
A Structural Modeling Approach to a Multilevel Random Coefficients Model.
Rovine, Michael J.; Molenaar, Peter C. M.
2000-01-01
Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)
A nodal method applied to a diffusion problem with generalized coefficients
International Nuclear Information System (INIS)
Laazizi, A.; Guessous, N.
1999-01-01
In this paper, we consider a second-order neutron diffusion problem with coefficients in L^∞(Ω). A nodal method of the lowest order is applied to approximate the problem's solution. The approximation uses special basis functions in which the coefficients appear. The rate of convergence obtained is O(h²) in L²(Ω), with a free rectangular triangulation. (authors)
An approach for fixed coefficient RNS-based FIR filter
Srinivasa Reddy, Kotha; Sahoo, Subhendu Kumar
2017-08-01
In this work, an efficient new modular multiplication method for the moduli set {2^k − 1, 2^k, 2^(k+1) − 1} is proposed to implement a residue number system (RNS)-based fixed-coefficient finite impulse response (FIR) filter. The new multiplication approach reduces the number of partial products by using a pre-loaded product block. The reduction in partial products with the proposed modular multiplication improves the clock frequency and reduces the area and power as compared with conventional modular multiplication. Further, the present approach eliminates the binary-to-residue converter circuit usually needed at the front end of an RNS-based system. In this work, two fixed-coefficient filter architectures with the new modular multiplication approach are proposed. The filters are implemented in the Verilog hardware description language. The United Microelectronics Corporation 90 nm technology library has been used for synthesis, and the area, power and delay results are obtained with the Cadence register transfer level compiler. The power-delay product (PDP) is also considered for performance comparison among the proposed filters. One of the proposed architectures is found to improve the PDP by 60.83% as compared with a filter implemented with a conventional modular multiplier. The filters' functionality is validated with the help of Altera DSP Builder.
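The number-theoretic backbone of such a filter is the residue representation itself: the three moduli are pairwise coprime, so any integer below their product round-trips through forward conversion and Chinese Remainder Theorem (CRT) reconstruction. The sketch below shows only this software model of the conversions, not the authors' hardware multiplier or their pre-loaded product block:

```python
def to_residues(x, moduli):
    """Forward conversion: integer -> residue representation."""
    return [x % m for m in moduli]

def from_residues(residues, moduli):
    """CRT reconstruction; requires pairwise coprime moduli (Python >= 3.8
    for the modular-inverse form of pow)."""
    big_m = 1
    for m in moduli:
        big_m *= m
    x = 0
    for r, m in zip(residues, moduli):
        partial = big_m // m
        x += r * partial * pow(partial, -1, m)
    return x % big_m

# Moduli set {2^k - 1, 2^k, 2^(k+1) - 1} for k = 4: {15, 16, 31}, range 7440.
k = 4
moduli = [2**k - 1, 2**k, 2**(k + 1) - 1]
assert from_residues(to_residues(1234, moduli), moduli) == 1234
```

The appeal of this particular set is that reductions modulo 2^k are free in hardware and reductions modulo 2^k − 1 and 2^(k+1) − 1 reduce to end-around-carry additions.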
Spectral approach to homogenization of hyperbolic equations with periodic coefficients
Dorodnyi, M. A.; Suslina, T. A.
2018-06-01
In L₂(ℝ^d; ℂ^n), we consider self-adjoint strongly elliptic second-order differential operators A_ε with periodic coefficients depending on x/ε, ε > 0. We study the behavior of the operators cos(A_ε^{1/2} τ) and A_ε^{−1/2} sin(A_ε^{1/2} τ), τ ∈ ℝ, for small ε. Approximations for these operators in the (H^s → L₂)-operator norm with a suitable s are obtained. The results are used to study the behavior of the solution v_ε of the Cauchy problem for the hyperbolic equation ∂²_τ v_ε = −A_ε v_ε + F. The general results are applied to the acoustics equation and the system of elasticity theory.
Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong
2011-01-01
Optical sensors aboard Earth-orbiting satellites, such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor digital number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit by observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a maximum likelihood approach is applied in carrying out the least-squares procedure. Crucial to the maximum likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The maximum likelihood approach demonstrates the inadequacy of the attenuation-method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
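The least-squares core of such a calibration can be sketched as a weighted quadratic fit solving the 3×3 normal equations. This is only the ordinary weighted-least-squares step with synthetic data; the paper's maximum likelihood weights additionally account for noise in both variables and for model error:

```python
def weighted_quadratic_fit(x, y, w):
    """Weighted least-squares fit of y ~ c0 + c1*x + c2*x^2 via the normal
    equations (A^T W A) c = A^T W y, solved by Cramer's rule."""
    # Weighted power sums S_k = sum w x^k and moments T_k = sum w x^k y.
    s = [sum(wi * xi**k for wi, xi in zip(w, x)) for k in range(5)]
    t = [sum(wi * xi**k * yi for wi, xi, yi in zip(w, x, y)) for k in range(3)]
    a = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(a)
    coeffs = []
    for col in range(3):
        m = [row[:] for row in a]
        for row in range(3):
            m[row][col] = t[row]
        coeffs.append(det3(m) / d)
    return coeffs  # [c0, c1, c2]

# Exact quadratic data (synthetic) is recovered regardless of the weights.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 + 0.5 * xi + 0.1 * xi**2 for xi in xs]
ws = [1.0, 2.0, 1.0, 2.0, 1.0]
c0, c1, c2 = weighted_quadratic_fit(xs, ys, ws)
```

The paper's point is precisely that when the weights are mis-specified, or the quadratic model itself is inadequate, the recovered coefficients carry much larger uncertainties than this idealized fit suggests.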
An experimental approach of decoupling Seebeck coefficient and electrical resistivity
Muhammed Sabeer N., A.; Paulson, Anju; Pradyumnan, P. P.
2018-04-01
Thermoelectrics (TE) has drawn increased attention among renewable energy technologies. The performance of a thermoelectric material is quantified by the dimensionless thermoelectric figure of merit ZT = S²σT/κ, where S and σ vary inversely with each other; improvement in ZT is therefore not an easy task, and researchers have been trying different parameter variations during thin-film processing to improve TE properties. In this work, tin nitride (Sn₃N₄) thin films were deposited on glass substrates by reactive RF magnetron sputtering and their thermoelectric response was investigated. To decouple the covariant behavior of the Seebeck coefficient and the electrical resistivity and so enhance the power factor (S²σ), the nitrogen gas pressure during sputtering was reduced. Reducing the nitrogen gas pressure reduced both the sputtering pressure and the amount of nitrogen available for reaction during sputtering. This combined effect introduced preferred orientation and stoichiometric variations simultaneously in the sputtered Sn₃N₄ thin films. The scattering mechanisms associated with these variations enhanced the TE properties by driving the Seebeck coefficient and the electrical resistivity independently.
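The two figures of merit mentioned above are simple arithmetic on the measured transport properties. A minimal sketch, using illustrative round numbers rather than the paper's Sn₃N₄ measurements:

```python
def power_factor(seebeck_v_per_k, conductivity_s_per_m):
    """Thermoelectric power factor S^2 * sigma, in W m^-1 K^-2."""
    return seebeck_v_per_k**2 * conductivity_s_per_m

def figure_of_merit(seebeck_v_per_k, conductivity_s_per_m, kappa_w_per_mk, temp_k):
    """Dimensionless figure of merit ZT = S^2 * sigma * T / kappa."""
    return power_factor(seebeck_v_per_k, conductivity_s_per_m) * temp_k / kappa_w_per_mk

# Illustrative values only: S = 200 uV/K, sigma = 1e5 S/m,
# kappa = 1.5 W/(m K), T = 300 K.
zt = figure_of_merit(200e-6, 1e5, 1.5, 300.0)
```

Because S enters squared while σ enters linearly, a processing change that raises S at a modest cost in σ can still raise the power factor, which is the decoupling the paper pursues.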
Muwamba, A; Nkedi-Kizza, P; Morgan, K T
2016-09-01
Phosphorus is among the essential nutrients applied to sugarcane ( L.) fields in the form of a fertilizer mixture (N, P, and K) in southwestern Florida. The sorption coefficient is used for modeling P movement, and in this study we hypothesized that the sorption coefficient determined using a fertilizer mixture (N, P, and K) would be significantly different from values determined using KCl and CaCl₂, the electrolytes most commonly used for conducting sorption experiments. The supporting electrolytes 0.01 mol L⁻¹ KCl, 0.005 mol L⁻¹ CaCl₂, deionized (DI) water, simulated Florida rain, and fertilizer mixture prepared in Florida rain were used to characterize P sorption. Immokalee (sandy, siliceous, hyperthermic Arenic Alaquods) and Margate (sandy, siliceous, hyperthermic Mollic Psammaquents) are the dominant mineral soils used for sugarcane production in southwestern Florida; we used the A and B horizons of the Margate soil and the A and B horizons of the Immokalee soil for the sorption experiments in this study. Freundlich sorption isotherms described the P sorption data. The Freundlich sorption isotherm coefficients followed the trend 0.005 mol L⁻¹ CaCl₂ > 0.01 mol L⁻¹ KCl ≈ fertilizer mixture > simulated Florida rain ≈ DI water. The sorption coefficients were used for modeling P movement with HYDRUS-1D; similar P results were obtained with the 0.01 mol L⁻¹ KCl and fertilizer mixture electrolyte treatments, whereas the sorption coefficients for DI water and simulated Florida rain overpredicted P movement. The P sorption data show the importance of choosing the electrolyte for sorption experiments according to the composition of the fertilizer. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
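Fitting a Freundlich isotherm of the form q = Kf·Cⁿ is conventionally done by linear regression on logarithms. A sketch with synthetic data (the parameter values and the exponent convention here are illustrative, not the study's):

```python
import math

def fit_freundlich(c, q):
    """Fit q = Kf * C**n by ordinary least squares on
    log q = log Kf + n log C. Returns (Kf, n)."""
    xs = [math.log(ci) for ci in c]
    ys = [math.log(qi) for qi in q]
    n_pts = len(xs)
    mx, my = sum(xs) / n_pts, sum(ys) / n_pts
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), slope

# Synthetic sorption data generated from Kf = 3.2, n = 0.7 is recovered exactly.
conc = [0.5, 1.0, 2.0, 5.0, 10.0]   # equilibrium concentration
sorbed = [3.2 * ci ** 0.7 for ci in conc]
kf, n = fit_freundlich(conc, sorbed)
```

The fitted Kf is the quantity whose electrolyte-dependence the study documents; it is then passed to a transport code such as HYDRUS-1D.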
Perturbative methods applied for sensitivity coefficient calculations in thermal-hydraulic systems
International Nuclear Information System (INIS)
Andrade Lima, F.R. de
1993-01-01
The differential formalism and the Generalized Perturbation Theory (GPT) are applied to the sensitivity analysis of thermal-hydraulic problems related to pressurized water reactor cores. The equations describing the thermal-hydraulic behavior of these reactor cores, used in the COBRA-IV-I code, are conveniently written. The importance function related to the response of interest and the sensitivity coefficients of this response with respect to various selected parameters are obtained by using differential and generalized perturbation theory. The comparison between the results obtained with these perturbative methods and those obtained directly with the model developed in the COBRA-IV-I code shows very good agreement. (author)
International Nuclear Information System (INIS)
Wu Qingjie; Guo Kangxian; Liu Guanghui; Wu Jinghe
2013-01-01
Polaron effects on the linear and the nonlinear optical absorption coefficients and refractive index changes in cylindrical quantum dots with the radial parabolic potential and the z-direction linear potential with applied magnetic field are theoretically investigated. The optical absorption coefficients and refractive index changes are presented by using the compact-density-matrix approach and iterative method. Numerical calculations are presented for GaAs/AlGaAs. It is found that taking into account the electron-LO-phonon interaction, not only are the linear, the nonlinear and the total optical absorption coefficients and refractive index changes enhanced, but also the total optical absorption coefficients are more sensitive to the incident optical intensity. It is also found that no matter whether the electron-LO-phonon interaction is considered or not, the absorption coefficients and refractive index changes above are strongly dependent on the radial frequency, the magnetic field and the linear potential coefficient.
Network clustering coefficient approach to DNA sequence analysis
Energy Technology Data Exchange (ETDEWEB)
Gerhardt, Guenther J.L. [Universidade Federal do Rio Grande do Sul-Hospital de Clinicas de Porto Alegre, Rua Ramiro Barcelos 2350/sala 2040/90035-003 Porto Alegre (Brazil); Departamento de Fisica e Quimica da Universidade de Caxias do Sul, Rua Francisco Getulio Vargas 1130, 95001-970 Caxias do Sul (Brazil); Lemke, Ney [Programa Interdisciplinar em Computacao Aplicada, Unisinos, Av. Unisinos, 950, 93022-000 Sao Leopoldo, RS (Brazil); Corso, Gilberto [Departamento de Biofisica e Farmacologia, Centro de Biociencias, Universidade Federal do Rio Grande do Norte, Campus Universitario, 59072 970 Natal, RN (Brazil)]. E-mail: corso@dfte.ufrn.br
2006-05-15
In this work we propose an alternative DNA sequence analysis tool based on graph-theoretical concepts. The methodology investigates the path topology of an organism's genome through a triplet network. In this network, the triplets of the DNA sequence are vertices, and two vertices are connected if they occur juxtaposed on the genome. We characterize this network topology by measuring the clustering coefficient. We test our methodology against two main biases: the guanine-cytosine (GC) content and the 3-bp (base pair) periodicity of DNA sequences. We perform the test by constructing random networks with variable GC content and imposed 3-bp periodicity. A test group of organisms is constructed and the methodology is investigated in the light of the constructed random networks. We conclude that the clustering coefficient is a valuable tool, since it gives information that is not trivially contained in the 3-bp periodicity or in the variable GC content.
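The construction can be sketched directly: read triplets, connect juxtaposed ones, and average the local clustering coefficients. Note one assumption: the sketch reads non-overlapping triplets from a fixed frame, and the toy sequences are invented; the paper's exact reading convention may differ.

```python
from itertools import combinations

def triplet_network(seq):
    """Adjacency sets of the triplet network: vertices are the non-overlapping
    triplets of seq; consecutive (juxtaposed) triplets are connected."""
    triplets = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
    adj = {t: set() for t in triplets}
    for a, b in zip(triplets, triplets[1:]):
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def mean_clustering(adj):
    """Average local clustering coefficient C_i = 2 e_i / (k_i (k_i - 1)),
    with C_i = 0 for vertices of degree < 2."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

cc = mean_clustering(triplet_network("ATGGCATGCATGGCAGCTATGGCA"))
```

On a real genome there are at most 64 vertices, so the signal lies entirely in which of the possible triplet-to-triplet transitions actually occur.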
Directory of Open Access Journals (Sweden)
K N Pushpalatha
2017-05-01
Full Text Available In an era of advanced computer technology, innumerable services such as access to bank accounts, access to secured data, or entry to organizations of national importance require authentication of the genuine individual. Among all biometric personal identification systems, the fingerprint recognition system is the most accurate and economical technology. In this paper we propose a fingerprint recognition system using the Local Walsh Hadamard Transform (LWHT) with Phase Magnitude Histograms (PMHs) for feature extraction. Fingerprints display oriented, texture-like patterns. Gabor filters have the property of capturing global and local texture information from blurred or unclear images, and a filter bank provides orientation features that are robust to image distortion and rotation. The LWHT algorithm is compared with two other approaches, Gabor coefficients and directional features. The three methods are compared using FVC 2006 fingerprint database images. The observations show that the TSR, FAR and FRR values improve on those of the existing algorithm.
Applying discursive approaches to health psychology.
Seymour-Smith, Sarah
2015-04-01
The aim of this paper is to outline the contribution of two strands of discursive research, glossed as 'macro' and 'micro,' to the field of health psychology. A further goal is to highlight some contemporary debates in methodology associated with the use of interview data versus more naturalistic data in qualitative health research. Discursive approaches provide a way of analyzing talk as a social practice that considers how descriptions are put together and what actions they achieve. A selection of recent examples of discursive research from one applied area of health psychology, studies of diet and obesity, is drawn upon in order to illustrate the specifics of both strands. 'Macro' discourse work in psychology incorporates a Foucauldian focus on the way that discourses regulate subjectivities, whereas the concept of interpretative repertoires affords more agency to the individual: both are useful for identifying the cultural context of talk. Both 'macro' and 'micro' strands focus on accountability to varying degrees. 'Micro' Discursive Psychology, however, pays closer attention to the sequential organization of constructions and focuses on naturalistic settings that allow for the inclusion of an analysis of the health professional. Diets are typically depicted as an individual responsibility in mainstream health psychology, but discursive research highlights how discourses are collectively produced and bound up with social practices. (c) 2015 APA, all rights reserved.
Evaluation of icing drag coefficient correlations applied to iced propeller performance prediction
Miller, Thomas L.; Shaw, R. J.; Korkan, K. D.
1987-01-01
Evaluation of three empirical icing drag coefficient correlations is accomplished through application to a set of propeller icing data. The various correlations represent the best means currently available for relating drag rise to various flight and atmospheric conditions for both fixed-wing and rotating airfoils, and the work presented here illustrates and evaluates one such application to the latter case. The origins of each of the correlations are discussed, and their apparent capabilities and limitations are summarized. These correlations have been made an integral part of a computer code, ICEPERF, designed to calculate iced propeller performance. Comparison with experimental propeller icing data shows generally good agreement, with the quality of the predicted results seen to be directly related to the radial icing extent of each case. The code's capability to properly predict thrust coefficient, power coefficient, and propeller efficiency is shown to be strongly dependent on the choice of correlation, as well as upon proper specification of radial icing extent.
MOBILE CLOUD COMPUTING APPLIED TO HEALTHCARE APPROACH
Omar AlSheikSalem
2016-01-01
In the past few years it has become clear that mobile cloud computing was established by integrating mobile computing and cloud computing, gaining both storage space and processing speed. Integrating healthcare applications and services is one of the vast data approaches that can be adapted to mobile cloud computing. This work proposes a framework for global healthcare computing that combines mobile computing and cloud computing. This approach leads to integrate all of ...
Recovering a coefficient in a parabolic equation using an iterative approach
Azhibekova, Aliya S.
2016-06-01
In this paper we are concerned with the problem of determining a coefficient in a parabolic equation using an iterative approach. We investigate an inverse coefficient problem in the difference form. To recover the coefficient, we minimize a residual functional between the observed and calculated values. This is done in a constructive way by fitting a finite-difference approximation to the inverse problem. We obtain some theoretical estimates for a direct and adjoint problem. Using these estimates we prove monotonicity of the objective functional and the convergence of iteration sequences.
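The residual-minimization idea can be illustrated on a synthetic 1-D problem: generate observations with a known diffusion coefficient, then recover it by minimizing the misfit between simulated and observed final-time data. This sketch substitutes a simple ternary search for the paper's iterative scheme, and all values are invented:

```python
def solve_heat(a, u0, dx, dt, steps):
    """Explicit finite-difference solver for u_t = a u_xx with u = 0 at both ends.
    Stable while a*dt/dx^2 <= 0.5."""
    u = list(u0)
    r = a * dt / dx**2
    for _ in range(steps):
        u = [0.0] + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                     for i in range(1, len(u) - 1)] + [0.0]
    return u

def recover_coefficient(u0, observed, dx, dt, steps, lo, hi, iters=60):
    """Minimize J(a) = sum (u_a(T) - observed)^2 over [lo, hi] by ternary search."""
    def residual(a):
        u = solve_heat(a, u0, dx, dt, steps)
        return sum((ui - oi) ** 2 for ui, oi in zip(u, observed))
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if residual(m1) < residual(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# Synthetic experiment: generate data with a_true, then recover it.
dx, dt, steps = 0.1, 0.001, 200
u0 = [0, 0.3, 0.6, 0.9, 1.0, 0.9, 0.6, 0.3, 0]
a_true = 1.5
obs = solve_heat(a_true, u0, dx, dt, steps)
a_rec = recover_coefficient(u0, obs, dx, dt, steps, 0.1, 4.0)
```

With noise-free synthetic data the residual vanishes at the true coefficient, so the recovery is essentially exact; the paper's monotonicity and convergence estimates address the harder question of what happens for the genuinely inverse, observed-data case.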
Applying a gaming approach to IP strategy.
Gasnier, Arnaud; Vandamme, Luc
2010-02-01
Adopting an appropriate IP strategy is an important but complex area, particularly in the pharmaceutical and biotechnology sectors, in which aspects such as regulatory submissions, high competitive activity, and public health and safety information requirements limit the amount of information that can be protected effectively through secrecy. As a result, and considering the existing time limits for patent protection, decisions on how to approach IP in these sectors must be made with knowledge of the options and consequences of IP positioning. Because of the specialized nature of IP, it is necessary to impart knowledge regarding the options and impact of IP to decision-makers, whether at the level of inventors, marketers or strategic business managers. This feature review provides some insight on IP strategy, with a focus on the use of a new 'gaming' approach for transferring the skills and understanding needed to make informed IP-related decisions; the game Patentopolis is discussed as an example of such an approach. Patentopolis involves interactive activities with IP-related business decisions, including the exploitation and enforcement of IP rights, and can be used to gain knowledge on the impact of adopting different IP strategies.
Applied Regression Modeling A Business Approach
Pardoe, Iain
2012-01-01
An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a
A Compensatory Approach to Multiobjective Linear Transportation Problem with Fuzzy Cost Coefficients
Directory of Open Access Journals (Sweden)
Hale Gonce Kocken
2011-01-01
Full Text Available This paper deals with the multiobjective linear transportation problem with fuzzy cost coefficients. In the solution procedure, many objectives may conflict with each other, so the decision-making process becomes complicated; moreover, owing to the fuzziness in the costs, the problem has a nonlinear structure. In this paper, fuzziness in the objective functions is handled with a fuzzy programming technique in the sense of the multiobjective approach. We then present a compensatory approach to solve the multiobjective linear transportation problem with fuzzy cost coefficients by using Werners' 'fuzzy and' operator. Our approach generates compromise solutions that are both compensatory and Pareto optimal. A numerical example is provided to illustrate the problem.
International Nuclear Information System (INIS)
Wattez, T.
2013-01-01
previously formulated. The formation factor, as well as the effective diffusion coefficient, does not depend on the ionic strength of the material pore solution, this being validated for solutions of different composition encompassing the cement materials pore solution diversity. The formation factor also does not vary when the amplitude of the applied electrical field varies, provided both the test duration and the electrical field amplitude are kept within acceptable boundaries. Finally, the comparison between the values of the effective diffusion coefficient obtained with both the constant field migration test and the natural diffusion techniques, for perfectly conditioned and prepared materials, leads us to invalidate the assumption that the effects of the double electrical layer are negligible. (author) [fr
Directory of Open Access Journals (Sweden)
Monika GARG
2012-08-01
Full Text Available In this paper, an integrated approach is proposed for the non-recursive formulation of the connection coefficients of different orthogonal functions in terms of a generic orthogonal function. These coefficients arise when the product of two orthogonal basis functions is to be expressed in terms of single basis functions. Two significant advantages are achieved: first, the non-recursive formulations avoid memory and stack overflows in computer implementations; second, the integrated approach allows digital hardware, once designed, to be used for different functions. The computational savings achieved with the proposed non-recursive formulation, vis-à-vis the recursive formulations reported in the literature so far, are demonstrated using the MATLAB Profiler.
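One concrete, easily checked special case of non-recursive connection coefficients is the Walsh-Hadamard family: the pointwise product of two Walsh functions (in Hadamard ordering) is the single basis function whose index is the XOR of the two indices. This example illustrates what "non-recursive" buys, not the paper's generic formulation:

```python
def hadamard_rows(n):
    """Rows of the 2^n x 2^n Sylvester-Hadamard matrix: Walsh functions in
    Hadamard ordering, sampled on 2^n points."""
    rows = [[1]]
    for _ in range(n):
        rows = [r + r for r in rows] + [r + [-x for x in r] for r in rows]
    return rows

# Connection rule: W_i * W_j (pointwise) equals the single function W_(i XOR j),
# so the connection coefficients need no recursion at all.
rows = hadamard_rows(3)
i, j = 3, 5
product = [a * b for a, b in zip(rows[i], rows[j])]
assert product == rows[i ^ j]
```

For other orthogonal families (Legendre, Chebyshev, block-pulse), the product expansion involves a full table of coefficients, which is where a once-designed non-recursive generator pays off in hardware.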
Determination of the frictional coefficient of the implant-antler interface: experimental approach.
Hasan, Istabrak; Keilig, Ludger; Staat, Manfred; Wahl, Gerhard; Bourauel, Christoph
2012-10-01
The similar bone structure of reindeer antler to human bone permits studying the osseointegration of dental implants in the jawbone. As friction is one of the major factors that have a significant influence on the initial stability of immediately loaded dental implants, it is essential to define the frictional coefficient of the implant-antler interface. In this study, the kinetic frictional forces at the implant-antler interface were measured experimentally using an optomechanical setup and a stepping motor controller under different axial loads and sliding velocities. The corresponding mean values of the static and kinetic frictional coefficients were within the ranges 0.5-0.7 and 0.3-0.5, respectively. An increase in the frictional forces with increasing applied axial load was registered, and the measurements showed evidence of a decrease in the magnitude of the frictional coefficient with increasing sliding velocity. The results of this study provide a useful basis for selecting the frictional coefficient to be used in finite element contact analyses of antler specimens.
Directory of Open Access Journals (Sweden)
H. Davari
2015-11-01
Full Text Available The emergence of some significant critical approaches and directions in the field of applied linguistics from the mid-1980s onwards has met with various positive and negative reactions. On the basis of their strength and significance, such approaches and directions have challenged some of the claims, principles and assumptions of the mainstream approaches. Among them, critical applied linguistics can be highlighted as a new approach, developed by the Australian applied linguist Alastair Pennycook. The aspects, domains and concerns of this new approach were introduced in his book in 2001. Due to the undeniable importance of this approach, as well as the partial neglect it has received in the Iranian academic setting, this paper first introduces it as an approach that evaluates the various disciplines of applied linguistics through its own specific principles and interests. Then, in order to show its step-by-step application in the evaluation of different disciplines of applied linguistics, with a glance at its significance and appropriateness in Iranian society, two domains, namely English language education and language policy and planning, are introduced and evaluated in order to provide readers with a visible and practical picture of its interdisciplinary nature and evaluative functions. The findings indicate the efficacy of applying this interdisciplinary framework in any language-in-education policy and planning in accordance with the political, social and cultural context of the target society.
Applying lessons from the ecohealth approach to make food ...
International Development Research Centre (IDRC) Digital Library (Canada)
Applying lessons from the ecohealth approach to make food systems healthier ... the biennial Ecohealth Congress of the International Association for Ecology and ... intersectoral policies that address the notable increase in obesity, diabetes, ...
An analytical approach to the positive reactivity void coefficient of TRIGA Mark-II reactor
International Nuclear Information System (INIS)
Edgue, Erdinc; Yarman, Tolga
1988-01-01
Previous calculations of the reactivity void coefficient of the I.T.U. TRIGA Mark-II reactor were done by the second author et al., and the theoretical predictions were afterwards checked experimentally in this reactor. In this work an analytical approach is developed to evaluate rather quickly the reactivity void coefficient of the I.T.U. TRIGA Mark-II versus the size of the void inserted into the reactor. The reactor is assumed to be a cylindrical, bare nuclear system. Next, a belt of water of volume 2πrΔrH is introduced axially at a distance r from the centre line of the system; Δr here is the thickness of the belt, and H is the height of the reactor. The void is described by decreasing the water density in the belt region. A two-group diffusion theory is adopted to determine the criticality of the configuration. The space dependency of the group fluxes is thereby assumed to be J₀(2.405r/R)cos(πz/H), the same as that associated with the original bare reactor uniformly loaded prior to the change. A perturbation type of approach then furnishes the effect of introducing a void in the belt region. The reactivity void coefficient can, rather surprisingly, indeed be positive; to our knowledge, this fact had not been established by the supplier. The agreement of our predictions with the experimental results is good. (author)
Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William
2014-03-01
The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0-∞) and of any AUC(0-∞)-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞) values and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application to the case study yielded a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters, such as the partition coefficient and the drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
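The deterministic arithmetic underlying any AUC(0-∞) estimate is the trapezoid rule plus a terminal extrapolation C_last/λz, with λz taken from a log-linear fit of the last few samples. This sketch shows only that standard NCA calculation on an invented mono-exponential profile, not the paper's Bayesian machinery:

```python
import math

def auc_inf(times, conc, n_tail=3):
    """AUC(0-inf) = linear trapezoid to the last sample + C_last / lambda_z,
    with lambda_z from a log-linear fit of the last n_tail points."""
    auc_t = sum(0.5 * (c1 + c2) * (t2 - t1)
                for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))
    xs = times[-n_tail:]
    ys = [math.log(c) for c in conc[-n_tail:]]
    mx, my = sum(xs) / n_tail, sum(ys) / n_tail
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    lambda_z = -slope
    return auc_t + conc[-1] / lambda_z

# Mono-exponential check: C(t) = 10*exp(-0.2 t) has AUC(0-inf) = 10/0.2 = 50.
ts = [0, 0.5, 1, 2, 4, 8, 12, 24]
cs = [10 * math.exp(-0.2 * t) for t in ts]
estimate = auc_inf(ts, cs)
```

With sparse late samples, the linear trapezoid slightly overestimates a convex decline, which is one motivation for placing the whole calculation inside a statistical model, as the paper does, so that such uncertainty can be quantified.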
International Nuclear Information System (INIS)
Warncke, D; Lewis, E; Leahy, M; Lochmann, S
2009-01-01
The propagation of light in biological tissue depends on the absorption and reduced scattering coefficients. The aim of this project is the determination of these two optical properties using spatially resolved reflectance measurements. The sensor system consists of five laser sources at different wavelengths, an optical fibre probe and five photodiodes. For these kinds of measurements it has been shown that an often-used solution of the diffusion equation cannot be applied. Therefore a neural network is being developed to extract the required optical properties from the reflectance data. Data sets for the training, validation and testing process are provided by Monte Carlo simulations.
Regular approach for generating van der Waals C{sub s} coefficients to arbitrary orders
Energy Technology Data Exchange (ETDEWEB)
Ovsiannikov, Vitali D [Department of Physics, Voronezh State University, 394006 Voronezh (Russian Federation); Mitroy, J [Faculty of Technology, Charles Darwin University, Darwin, NT 0909 (Australia)
2006-01-14
A completely general formalism is developed to describe the energy E{sup disp} = {sigma}{sub s}C{sub s}/R{sup s} of dispersion interaction between two atoms in spherically symmetric states. Explicit expressions are given up to the tenth order of perturbation theory for the dispersion energy E{sup disp} and dispersion coefficients C{sub s}. The method could, in principle, be used to derive the expressions for any s while including all contributing orders of perturbation theory for asymptotic interaction between two atoms. The theory is applied to the calculation of the complete series up to s = 30 for two hydrogen atoms in their ground state. A pseudo-state series expansion of the two-atom Green function gives rapid convergence of the series for radial matrix elements. The numerical values of C{sub s} are computed up to C{sub 30} to a relative accuracy of 10{sup -7} or better. The dispersion coefficients for the hydrogen-antihydrogen interaction are obtained from the H-H coefficients by simply taking the absolute magnitude of C{sub s}.
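Once the C{sub s} are tabulated, evaluating the dispersion series at a given separation is straightforward. The sketch below assumes the series form quoted in the abstract with an attractive sign convention, and uses only the well-known leading H(1s)-H(1s) coefficients in atomic units; it is illustrative and not the paper's pseudo-state Green-function computation.

```python
def dispersion_energy(R, C):
    """Dispersion series E(R) = -sum_s C_s / R**s (attractive convention),
    with C mapping each order s to its coefficient C_s in atomic units."""
    return -sum(cs / R**s for s, cs in C.items())

# leading H(1s)-H(1s) dispersion coefficients in atomic units (literature values)
C_HH = {6: 6.499027, 8: 124.399, 10: 3285.828}

E = dispersion_energy(20.0, C_HH)  # interaction energy at R = 20 bohr
```

At R = 20 bohr the C6 term already dominates, which is why the higher-order coefficients matter mainly at short range.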
Energy demand projection of China using a path-coefficient analysis and PSO–GA approach
International Nuclear Information System (INIS)
Yu Shiwei; Zhu Kejun; Zhang Xian
2012-01-01
Highlights: ► The mechanisms driving China's energy demand are investigated in detail. ► A hybrid PSO–GA model is proposed for estimating China's energy demand. ► China's energy demand will reach 4.48 billion tce in 2015. ► The proposed method outperforms competing forecasts. - Abstract: Energy demand projection is fundamental to rational energy planning. The present study investigates the direct and indirect effects of five factors, namely GDP, population, the proportion of industry, the proportion of urban population and coal's share of total energy consumption, on China's energy demand by means of a path-coefficient analysis. On this basis, a hybrid Particle Swarm Optimization and Genetic Algorithm optimal Energy Demand Estimating (PSO–GA EDE) model is proposed for China. The coefficients of the three forms of the model (linear, exponential and quadratic) are optimized by the proposed PSO–GA. To obtain a combinational prediction of the three forms, a departure coefficient method is applied to derive the combinational weights. The results show that China's energy demand will be 4.48 billion tce in 2015. Furthermore, the proposed method outperforms single optimization methods such as GA, PSO or ACO, as well as multiple linear regression.
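The PSO half of such a hybrid optimiser can be sketched as below: a minimal particle swarm fitting the coefficients of a linear demand form. This is a hedged stand-in, not the paper's PSO–GA EDE model; the GA crossover/mutation stage is omitted and the driver data are synthetic, with all numbers hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in data: 5 drivers (GDP, population, ...) and observed demand
X = rng.uniform(0.0, 1.0, (30, 5))
w_true = np.array([2.0, 1.0, 0.5, -0.3, 0.8])
y = X @ w_true + rng.normal(0.0, 0.01, 30)

def sse(w):
    """Fitting criterion: sum of squared demand residuals."""
    return float(((X @ w - y) ** 2).sum())

# minimal particle swarm optimisation (standard inertia/cognitive/social terms)
n_part, n_dim, n_iter = 40, 5, 300
pos = rng.uniform(-3.0, 3.0, (n_part, n_dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([sse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, n_dim))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([sse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
```

In the hybrid scheme, a GA step would periodically recombine and mutate the particle population to escape local optima; on this convex toy objective plain PSO suffices.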
Drag Coefficient of Water Droplets Approaching the Leading Edge of an Airfoil
Vargas, Mario; Sor, Suthyvann; Magarino, Adelaida Garcia
2013-01-01
This work presents results of an experimental study on droplet deformation and breakup near the leading edge of an airfoil. The experiment was conducted in the rotating-rig test cell at the Instituto Nacional de Tecnica Aeroespacial (INTA) in Madrid, Spain. An airfoil model was placed at the end of the rotating arm, and a monosize droplet generator produced droplets that fell from above, perpendicular to the path of the airfoil. The interaction between the droplets and the airfoil was captured with high-speed imaging, allowing observation of droplet deformation and breakup as the droplet approached the airfoil near the stagnation line. Image-processing software was used to measure the position of the droplet centroid, the equivalent diameter, perimeter, area, and the major and minor axes of an ellipse superimposed over the deforming droplet. The horizontal and vertical displacement of each droplet against time was also measured, and the velocity, acceleration, Weber number, Bond number, Reynolds number, and drag coefficients were calculated along the path of the droplet up to the beginning of breakup. Results are presented and discussed for drag coefficients of droplets with diameters in the range of 300 to 1800 micrometers and airfoil velocities of 50, 70 and 90 m/s. The effect of droplet oscillation on the drag coefficient is discussed.
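The dimensionless groups listed above follow directly from the measured diameter, relative velocity and acceleration. A sketch with assumed ambient air/water properties (the property values and the 500 µm / 70 m/s example are illustrative, not INTA data):

```python
import math

def droplet_numbers(d, v_rel, rho_air=1.2, rho_w=1000.0,
                    sigma=0.072, mu_air=1.8e-5, g=9.81):
    """Dimensionless groups for a water droplet of diameter d [m] moving at
    relative speed v_rel [m/s] in air; all properties in SI units."""
    we = rho_air * v_rel**2 * d / sigma   # Weber: inertia vs surface tension
    re = rho_air * v_rel * d / mu_air     # Reynolds: inertia vs viscosity
    bo = rho_w * g * d**2 / sigma         # Bond: gravity vs surface tension
    return we, re, bo

def drag_coefficient(d, v_rel, accel, rho_air=1.2, rho_w=1000.0):
    """C_d from Newton's second law for a spherical droplet of diameter d
    decelerated at accel [m/s^2] by aerodynamic drag alone."""
    m = rho_w * math.pi * d**3 / 6.0
    area = math.pi * d**2 / 4.0
    return m * accel / (0.5 * rho_air * v_rel**2 * area)

we, re, bo = droplet_numbers(500e-6, 70.0)  # 500 um droplet, 70 m/s airfoil
```

The small Bond number confirms that gravity is negligible relative to surface tension for droplets in this size range, so deformation is governed by the Weber number.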
Directory of Open Access Journals (Sweden)
Z. Nematollahi
2016-03-01
Introduction: Because of the risk and uncertainty inherent in agriculture, risk management is crucial for agricultural management. The present study was therefore designed to determine the risk aversion coefficient of Esfarayen farmers. Materials and Methods: The following approaches have been used to assess risk attitudes: (1) direct elicitation of utility functions; (2) experimental procedures in which individuals are presented with hypothetical questionnaires regarding risky alternatives, with or without real payments; and (3) inference from observation of economic behavior. In this paper we focused on approach (3), based on the assumption that there is a relationship between the actual behavior of a decision maker and the behavior predicted from empirically specified models. A new non-parametric method and the QP method were used to calculate the coefficient of risk aversion. We maximized the decision maker's expected utility with the E-V formulation (Freund, 1956). Ideally, in constructing a QP model, the variance-covariance matrix should be formed for each individual farmer. For this purpose, a sample of 100 farmers was selected using random sampling, and their data on 14 products over the years 2008-2012 were assembled. The lowlands of Esfarayen were used since, within this area, production possibilities are rather homogeneous. Results and Discussion: The results of this study showed low correlation between some of the activities, which implies opportunities for income stabilization through diversification. With respect to transitory income, Ra varies from 0.000006 to 0.000361, and the absolute coefficient of risk aversion in our sample was 0.00005. The estimated Ra values vary considerably from farm to farm. The estimated Ra for the subsample of 'non-wealthy' farmers was 0.00010. The subsample with farmers in the 'wealthy' group had an
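The E-V formulation (Freund, 1956) maximises expected income minus a variance penalty, mu'x − (Ra/2)·x'Σx. A minimal unconstrained sketch follows; the study's QP adds resource constraints per farm, and the gross margins and covariance matrix below are hypothetical, with only the Ra value taken from the sample estimate above.

```python
import numpy as np

# hypothetical mean gross margins and covariance matrix for 3 farm activities
mu = np.array([120.0, 90.0, 150.0])
Sigma = np.array([[400.0,  50.0,  80.0],
                  [ 50.0, 250.0,  30.0],
                  [ 80.0,  30.0, 900.0]])

def ev_optimal_plan(mu, Sigma, Ra):
    """Unconstrained maximiser of the E-V objective mu'x - (Ra/2) x'Sigma x;
    the first-order condition mu = Ra * Sigma @ x gives x = Sigma^-1 mu / Ra."""
    return np.linalg.solve(Sigma, mu) / Ra

x = ev_optimal_plan(mu, Sigma, Ra=0.00005)  # Ra: sample-level estimate
```

A larger Ra uniformly shrinks the plan, which is the mechanism by which the estimated coefficient separates 'wealthy' from 'non-wealthy' behaviour.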
Büttner, Kathrin; Salau, Jennifer; Krieter, Joachim
2016-01-01
The average topological overlap of the graphs at two consecutive time steps measures the amount of change in the edge configuration between the two snapshots. This value has to be zero if the edge configuration changes completely and one if the two consecutive graphs are identical. Current methods depend either on the number of nodes in the network or on the maximal number of connected nodes in the consecutive time steps. In the first case, the methodology breaks down if there are nodes with no edges. In the second case, it fails if the maximal number of active nodes is larger than the maximal number of connected nodes. In the following, an adaptation of the calculation of the temporal correlation coefficient and of the topological overlap of the graph between two consecutive time steps is presented which shows the expected behaviour mentioned above. The newly proposed adaptation uses the maximal number of active nodes, i.e. the number of nodes with at least one edge, in the calculation of the topological overlap. The three methods were compared with the help of vivid example networks to reveal the differences between the proposed notations. Furthermore, these three calculation methods were applied to a real-world network of animal movements in order to detect influences of the network structure on the outcome of the different methods.
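The proposed adaptation can be sketched as follows: per-node overlaps are averaged over the maximal number of active nodes (nodes with at least one edge in either snapshot) rather than over all nodes. The function name and the example graphs are illustrative, not taken from the paper.

```python
import numpy as np

def topological_overlap(A1, A2):
    """Per-node overlap C_i between two adjacency snapshots, and the
    graph-level coefficient averaged over the maximal number of active
    nodes, i.e. nodes with at least one edge in either snapshot."""
    k1 = A1.sum(axis=1)                    # degrees at time t
    k2 = A2.sum(axis=1)                    # degrees at time t+1
    num = (A1 * A2).sum(axis=1)            # edges persisting between snapshots
    denom = np.sqrt(k1 * k2)
    ci = np.divide(num, denom, out=np.zeros_like(denom, dtype=float),
                   where=denom > 0)
    active = (k1 > 0) | (k2 > 0)           # maximal number of active nodes
    return ci, ci[active].sum() / active.sum()

# identical snapshots -> coefficient 1; disjoint edge sets -> coefficient 0
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]])
B = np.array([[0, 0, 0, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [1, 0, 1, 0]])
_, c_same = topological_overlap(A, A)
_, c_diff = topological_overlap(A, B)
```

Note that the isolated node 3 in snapshot A does not drag the identical-snapshot coefficient below one, which is exactly the failure mode of the all-nodes normalisation.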
Bhattacharya, Joydeep; Pereda, Ernesto; Ioannou, Christos
2018-02-01
Maximal information coefficient (MIC) is a recently introduced information-theoretic measure of functional association with promising potential for application to high-dimensional complex data sets. Here, we applied MIC to reveal the nature of the functional associations between different brain regions during the perception of binaural beat (BB); BB is an auditory illusion occurring when two sinusoidal tones of slightly different frequency are presented separately to each ear and an illusory beat at the difference frequency is perceived. We recorded sixty-four-channel EEG from two groups of participants, musicians and non-musicians, during the presentation of BB, and systematically varied the frequency difference from 1 Hz to 48 Hz. Participants were also presented with non-binaural beat (NBB) stimuli, in which the same frequency was presented to both ears. Across groups, as compared to NBB, (i) BB conditions produced the most robust changes in the MIC values at the whole-brain level when the frequency differences were in the classical alpha range (8-12 Hz), and (ii) the number of electrode pairs showing nonlinear associations decreased gradually with increasing frequency difference. Between groups, significant effects were found for BBs in the broad gamma frequency range (34-48 Hz), but no such effects were observed during NBB. Altogether, these results revealed the nature of functional associations at the whole-brain level during binaural beat perception and demonstrated the usefulness of MIC in characterizing interregional neural dependencies.
Sensitivity analysis approaches applied to systems biology models.
Zi, Z
2011-11-01
With the rising application of systems biology, sensitivity analysis methods have been widely applied to the study of biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights into how robust the biological responses are with respect to changes in biological parameters, and into which model inputs are the key factors affecting the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. Global sensitivity analysis approaches, on the other hand, have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models, and discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models, and the caveats in the interpretation of sensitivity analysis results.
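Local sensitivity analysis as described above can be sketched with normalised finite-difference coefficients S_i = (∂y/∂p_i)(p_i/y), which measure the relative change of the output per relative change of each parameter. The toy steady-state model below is a hypothetical illustration, not a specific pathway.

```python
import numpy as np

def local_sensitivities(model, p0, rel_step=1e-4):
    """Normalised local sensitivity coefficients S_i = (dy/dp_i)(p_i/y),
    estimated by central finite differences around the nominal point p0."""
    p0 = np.asarray(p0, float)
    y0 = model(p0)
    S = np.empty_like(p0)
    for i, pi in enumerate(p0):
        h = rel_step * pi
        up, dn = p0.copy(), p0.copy()
        up[i] += h
        dn[i] -= h
        S[i] = (model(up) - model(dn)) / (2 * h) * pi / y0
    return S

# toy steady-state output of a hypothetical pathway: y = k1 * k2 / (k2 + kd)
model = lambda p: p[0] * p[1] / (p[1] + p[2])
S = local_sensitivities(model, [2.0, 1.0, 0.5])
```

For this model the coefficients are analytically 1, kd/(k2+kd) and −kd/(k2+kd), so the output is exactly proportionally sensitive to k1 and only partially sensitive to the competing rates.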
Computing the blood brain barrier (BBB) diffusion coefficient: A molecular dynamics approach
Energy Technology Data Exchange (ETDEWEB)
Shamloo, Amir, E-mail: shamloo@sharif.edu; Pedram, Maysam Z.; Heidari, Hossein; Alasty, Aria, E-mail: aalasti@sharif.edu
2016-07-15
Various physical and biological aspects of the Blood Brain Barrier (BBB) structure still remain unresolved. Consequently, among the several mechanisms of drug delivery, only a few have succeeded in breaching this barrier, one of which is the use of Magnetic Nanoparticles (MNPs). However, a quantitative characterization of the BBB permeability is desirable for finding an optimal magnetic force-field. In the present study, a molecular model of the BBB is introduced that precisely represents the interactions between MNPs and the membranes of the Endothelial Cells (ECs) that form the BBB. Steered Molecular Dynamics (SMD) simulations of the BBB crossing phenomenon have been carried out. Mathematical modeling of the BBB as an input-output system has been considered from a system dynamics viewpoint, enabling us to analyze the BBB behavior on the basis of a robust model. From this model, the force profile required to overcome the barrier has been extracted for a single NP from the SMD simulations at a range of velocities. Using these data, a transfer function model has been obtained and the diffusion coefficient evaluated. This study is a novel approach to bridging the gap between nanoscale and microscale models of the BBB: the characteristic diffusion coefficient inherently captures nano-scale molecular effects while reducing the computational cost of a nano-scale simulation model, enabling much more complex studies to be conducted. - Highlights: • Molecular dynamics simulation of nano-particles crossing the BBB membrane at different velocities. • Recording of the nano-particle position and the membrane-NP interaction force profile. • Identification of a frequency-domain model for the membrane. • Calculation of the diffusion coefficient based on the MD simulations and the identified model. • Obtaining a relation between the continuum and discrete media.
Study of the coefficients of internal conversion for transition energies approaching the threshold
International Nuclear Information System (INIS)
Farani Coursol, Nelcy.
1979-01-01
Internal conversion coefficients were determined experimentally with great accuracy in transition-energy regions that constitute tests for the theories (energies at most 10 keV above the K-shell threshold), and the results obtained were then compared with values calculated (or to be calculated) from theoretical models. Owing to the difficulties raised by the precise determination of internal conversion coefficients (ICC), in the first stage we selected radionuclides with relatively simple decay schemes and the transitions: 30 keV of 93mNb, 35 keV of 125mTe, 14 keV of 57Fe and 39 keV of 129mXe. Since 'problems' were observed with respect to the ICCs of high-multipolarity transitions, transitions of this kind were examined in a systematic manner. The possibility of penetration effects occurring in the transitions studied experimentally was examined, and the considerations are presented that 'authorized' us to disregard the dynamic part of the ICC for transitions approaching the threshold (L selection rules and nuclear level lifetimes in relation to the Weisskopf-Moszkowski estimates). The Kurie straight line was determined experimentally for the beta-minus transition, and Q(beta) was evaluated with a substantial accuracy gain compared with presently available values. Finally, a certain number of ICCs of transitions already determined with good precision were recalculated, in order to extend our analysis and detect any possible systematic errors. [fr]
Modeling of Electricity Demand for Azerbaijan: Time-Varying Coefficient Cointegration Approach
Directory of Open Access Journals (Sweden)
Jeyhun I. Mikayilov
2017-11-01
Recent literature has shown that electricity demand elasticities may not be constant over time, and this has been investigated using time-varying estimation methods. As accurate modeling of electricity demand is very important in Azerbaijan, a transitional country facing significant change in its economic outlook, we analyze whether the response of electricity demand to income and price varies over time in this economy. We employed the Time-Varying Coefficient cointegration approach, a cutting-edge time-varying estimation method. We find evidence that the income elasticity shows sizeable variation over the period of investigation, ranging from 0.48 to 0.56. The study has some useful policy implications related to the income and price aspects of electricity consumption in Azerbaijan.
Transport Coefficients of Fluids
Eu, Byung Chan
2006-01-01
Until recently the formal statistical mechanical approach offered no practicable method for computing the transport coefficients of liquids, and so most practitioners had to resort to empirical fitting formulas. This has now changed, as demonstrated in this innovative monograph. The author presents and applies new methods based on statistical mechanics for calculating the transport coefficients of simple and complex liquids over wide ranges of density and temperature. These molecular theories enable the transport coefficients to be calculated in terms of equilibrium thermodynamic properties, and the results are shown to account satisfactorily for experimental observations, including even the non-Newtonian behavior of fluids far from equilibrium.
A Numerical Approach to Estimate the Ballistic Coefficient of Space Debris from TLE Orbital Data
Narkeliunas, Jonas
2016-01-01
Low Earth Orbit (LEO) is full of space debris, consisting of spent rocket stages, old satellites and fragments from explosions and collisions. As of 2009, more than 21,000 orbital debris objects larger than 10 cm were known to exist, and while it is hard to track anything smaller than that, the estimated population of particles between 1 and 10 cm in diameter is approximately 500,000, while the number of particles smaller than 1 cm exceeds 100 million. These objects orbit Earth with huge kinetic energies; speeds usually exceed 7 km/s. The shapes of their orbits vary from almost circular to highly elliptical and cover all of LEO, the region of space between 160 and 2,000 km above sea level. Unfortunately, LEO is also where most of our active satellites are situated, as well as the International Space Station (ISS) and the Hubble Space Telescope, whose orbits are around 400 and 550 km above sea level, respectively. This poses a real threat, as debris can collide with satellites and deal substantial damage or even destroy them. Collisions between two or more debris objects create clouds of smaller debris, which are harder to track and which increase the overall object density and collision probability. At some point, the debris density could reach a critical value, starting a chain reaction in which the number of space debris objects would grow exponentially. This phenomenon was first described by Kessler in 1978, who concluded that it would lead to the creation of a debris belt that would vastly complicate satellite operations in LEO. The debris density is already relatively high, as seen from the several debris avoidance maneuvers performed by the Shuttle, before it was retired, and by the ISS. But not all satellites have a propulsion system to avoid collisions, hence different methods need to be applied. One of the proposed collision avoidance concepts, called LightForce, suggests using photon pressure to induce small orbital corrections that deflect debris from colliding. This method is very efficient as seen from
The hybrid thermography approach applied to architectural structures
Sfarra, S.; Ambrosini, D.; Paoletti, D.; Nardi, I.; Pasqualoni, G.
2017-07-01
This work contains an overview of the infrared thermography (IRT) method and its applications to the investigation of architectural structures. In this method, the passive approach is usually used in civil engineering, since it provides a panoramic view of the thermal anomalies to be interpreted, aided by photographs focused on the region of interest (ROI). The active approach is more suitable for laboratory or indoor inspections, as well as for objects of small size. The external stress applied is thermal, coming from non-natural apparatus such as lamps or hot/cold air jets. In addition, the active approach makes it possible to obtain quantitative information on defects not detectable by the naked eye. Very recently, the hybrid thermography (HIRT) approach has been brought to the attention of the scientific community. It can be applied when the radiation coming from the sun arrives directly (i.e., without a shadow-cast effect) on a surface exposed to the air. A large number of thermograms must be collected, and a post-processing analysis is subsequently applied via advanced algorithms. An appraisal of the defect depth can thereby be obtained through the calculation of the combined thermal diffusivity of the materials above the defect. The approach is validated herein by working, in a first stage, on a mosaic sample with known defects and, in a second stage, on a church built in L'Aquila (Italy) covered with a particular masonry structure called apparecchio aquilano. The results obtained appear promising.
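The defect-depth appraisal via thermal diffusivity rests on the one-dimensional heat-diffusion length scaling z ≈ c·sqrt(α·t*), where α is the combined diffusivity of the material above the defect and t* a characteristic contrast time in the thermogram sequence. The sketch below is a hedged order-of-magnitude illustration: the prefactor and the input values are assumptions that would need calibration, e.g. on a sample with known defects such as the mosaic specimen.

```python
import math

def defect_depth(alpha, t_star, c=1.0):
    """Order-of-magnitude defect depth from the diffusion-length scaling
    z ~ c * sqrt(alpha * t_star), with alpha the combined thermal
    diffusivity [m^2/s] of the material above the defect and t_star the
    characteristic contrast time [s]. The prefactor c is method-dependent
    and must be calibrated on known defects."""
    return c * math.sqrt(alpha * t_star)

# hypothetical masonry-like diffusivity ~5e-7 m^2/s, contrast peaking at 180 s
z = defect_depth(5e-7, 180.0)  # depth in metres
```

With these assumed inputs the estimate lands at roughly one centimetre, the depth scale typical of plaster-level detachments.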
Modeling the Radar Return of Powerlines Using an Incremental Length Diffraction Coefficient Approach
Macdonald, Douglas
DIRSIG consistently underestimated the scattered return, especially away from specular observation angles. This underestimation was particularly pronounced for the dihedral targets, which have a low acceptance angle in elevation, and was probably caused by the lack of a physical optics capability in DIRSIG. Powerlines were not apparent in the simulated data. For modeling powerlines outside of DIRSIG using a standalone approach, an Incremental Length Diffraction Coefficient (ILDC) method was used. Traditionally, this method is used to model the scattered radiation from the edge of a wedge, for example the edges on the wings of a stealth aircraft: the Physical Theory of Diffraction provides the 2D diffraction coefficient, and the ILDC method performs an integral along the edge to extend this solution to three dimensions. This research takes the ILDC approach but, instead of the wedge diffraction coefficient, uses the exact far-field diffraction coefficient for scattering from a finite-length cylinder. Wavenumber-diameter products are limited to less than or about 10; for typical powerline diameters, this translates to X-band frequencies and lower. The advantage of this method is that it allows exact 2D solutions to be extended to powerline geometries where sag is present, and it is shown to be more accurate than a pure physical optics approach for frequencies below millimeter wave. The Radar Cross Sections produced by this method were accurate to within the experimental uncertainty of measured RF anechoic chamber data for both X- and C-band frequencies across an 80-degree arc for 5 different target types and diameters. For the X-band data, the mean error was 6.0% for data with 9.5% measurement uncertainty. For the C-band data, the mean error was 11.8% for data with 14.3% measurement uncertainty. The best results were obtained for X-band data in the HH polarization channel within a 20-degree arc about normal incidence.
For this configuration, a mean error of 3.0% for data with
CSIR Research Space (South Africa)
Taylor, NJ
2017-02-01
... such as the United States of America, Mexico, South Africa and Australia (INC, 2011), the majority of pecan research has been conducted in the USA. Studies conducted in New Mexico suggest that seasonal crop evapotranspiration (ET) of flood-irrigated... coefficient modelling approach to estimate water use of pecans. Evapotranspiration was estimated using a pecan-specific model from New Mexico (Samani et al., 2011), which relates crop coefficients and orchard water use to canopy cover as follows...
Applying a Problem Based Learning Approach to Land Management Education
DEFF Research Database (Denmark)
Enemark, Stig
Land management covers a wide range of activities associated with the management of land and natural resources that are required to fulfil political objectives and achieve sustainable development. This paper presents an overall understanding of the land management paradigm and the benefits of good land governance to society. A land administration system provides a country with the infrastructure to implement land-related policies and land management strategies. By applying this land management profile to surveying education, this paper suggests that there is a need to move away from an exclusive engineering focus toward adopting an interdisciplinary and problem-based approach to ensure that academic programmes can cope with the wide range of land administration functions and challenges. An interdisciplinary approach to surveying education calls for the need to address issues and problems in a real
Pellis, E.P.M.; Franssen-Hal, van N.L.W.; Burema, J.; Keijer, J.
2003-01-01
We show that the intraclass correlation coefficient (ICC) can be used as a relatively simple statistical measure to assess methodological and biological variation in DNA microarray analysis. The ICC is a measure that determines the reproducibility of a variable, which can easily be calculated from
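A one-way random-effects ICC, as used for such reproducibility assessments, can be computed from an n-subjects-by-k-replicates table as (MSB − MSW)/(MSB + (k − 1)·MSW). The sketch below uses hypothetical duplicate-spot intensities, not the paper's microarray data.

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1,1) for an (n subjects x k replicates)
    array: (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB and MSW are the
    between- and within-subject mean squares."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((data - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# duplicate spot intensities for 4 hypothetical genes: high reproducibility
x = np.array([[10.0, 10.1],
              [12.0, 11.9],
              [ 8.0,  8.2],
              [15.0, 14.9]])
icc = icc_oneway(x)
```

Values near one indicate that between-gene variation dominates replicate noise, i.e. the measurement is reproducible.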
Geopotential coefficient determination and the gravimetric boundary value problem: A new approach
Sjoeberg, Lars E.
1989-01-01
New integral formulas to determine geopotential coefficients from terrestrial gravity and satellite altimetry data are given. The formulas are based on the integration of data over the non-spherical surface of the Earth. The effect of the topography on the low-degree and low-order coefficients is estimated numerically. Formulas for the solution of the gravimetric boundary value problem are derived.
International Nuclear Information System (INIS)
Shaltout, A.
2003-06-01
The present work describes some current problems of quantitative x-ray fluorescence analysis by means of the fundamental parameter approach. To perform this task, some of the main parameters are discussed in detail: photoelectric cross sections, coherent and incoherent scattering cross sections, mass absorption cross sections, and the variation of the x-ray tube voltage. Photoelectric, coherent and incoherent scattering, and mass absorption cross sections in the energy range from 1 to 300 keV for the elements from Z=1 to 94 are studied across ten different data bases: those of Hubbell, McMaster, Mucall, Scofield, Xcom, Elam, Sasaki, Henke, Cullen and Chantler. These data bases have also been developed for application in fundamental parameter programs for quantitative x-ray analysis (Energy Dispersive X-Ray Fluorescence Analysis (EDXRFA), Electron Probe Microanalysis (EPMA), X-Ray Photoelectron Spectroscopy (XPS) and Total Electron Yield (TEY)). In addition, a comparison is performed between the different data bases. In McMaster's data base, the missing elements (Z=84, 85, 87, 88, 89, 91 and 93) are added by using photoelectric cross sections from Scofield's data base, coherent as well as incoherent scattering cross sections from Elam's data base, and the absorption edges of Bearden. Furthermore, the N-fit coefficients of the elements from Z=61 to 69 are wrong in the McMaster data base; therefore, linear least-squares fits are used to recalculate the N-fit coefficients of these elements. Additionally, in the McMaster tables the positions of the M- and N-edges of all elements, with the exception of the M1- and N1-edges, are not defined, nor are the jump ratios of these edges. In the present work, the M- and N-edges and the related jump ratios are calculated; to include the missing N-edges, Bearden's values of the edge energies are used. In Scofield's data base, modifications include check and correction
Setting research priorities by applying the combined approach matrix.
Ghaffar, Abdul
2009-04-01
Priority setting in health research is a dynamic process. Different organizations and institutes have been working in the field of research priority setting for many years. In 1999 the Global Forum for Health Research presented a research priority setting tool called the Combined Approach Matrix, or CAM. Since its development, the CAM has been successfully applied to set research priorities for diseases, conditions and programmes at global, regional and national levels. This paper briefly explains the CAM methodology and how it can be applied in different settings, giving examples, describing challenges encountered in the process of setting research priorities, and providing recommendations for further work in this field. The construct and design of the CAM are explained, along with the different steps needed, including the planning and organization of a priority-setting exercise. The application of the CAM is described using three examples: the first concerns setting research priorities for a global programme, the second describes application at the country level, and the third concerns setting research priorities for diseases. Effective application of the CAM in different and diverse environments proves its utility as a tool for setting research priorities. Potential challenges encountered in the process of research priority setting are discussed and some recommendations for further work in this field are provided.
Applying a Modified Triad Approach to Investigate Wastewater lines
International Nuclear Information System (INIS)
Pawlowicz, R.; Urizar, L.; Blanchard, S.; Jacobsen, K.; Scholfield, J.
2006-01-01
Approximately 20 miles of wastewater lines lie below grade at an active military base. This piping network feeds, or fed, domestic or industrial wastewater treatment plants on the base. Past wastewater line investigations indicated potential contaminant releases to soil and groundwater, and further environmental assessment was recommended to characterize the lines because of the possible releases. A Remedial Investigation (RI) using random sampling, or sampling points spaced at predetermined distances along the entire length of the wastewater lines, would however have been inefficient and cost-prohibitive. To accomplish the RI goals efficiently and within budget, a modified Triad approach was used to design a defensible sampling and analysis plan and perform the investigation. The RI task was successfully executed and resulted in a reduced fieldwork schedule and lower sampling and analytical costs. Results indicated that no major releases occurred at the biased sampling points. It was reasonably extrapolated that, since releases did not occur at the most likely locations, the entire length of a particular wastewater line segment was unlikely to have contaminated soil or groundwater, and it was recommended for no further action. No further action was recommended for the majority of the waste lines after completing the investigation. The modified Triad approach was successful, and a similar approach could be applied to investigate wastewater lines at other United States Department of Defense or Department of Energy facilities. (authors)
Stabilized determination of geopotential coefficients by the mixed hom-BLUP approach
Middel, B.; Schaffrin, B.
1989-01-01
For the determination of geopotential coefficients, data can be used from rather different sources, e.g., satellite tracking, gravimetry, or altimetry. As each data type is particularly sensitive to certain wavelengths of the spherical harmonic coefficients, it is of essential importance how they are treated in a combination solution. For example, the longer wavelengths are well described by the coefficients of a model derived from satellite tracking, while other observation types, such as gravity anomalies, delta g, and geoid heights, N, from altimetry, contain only poor information for these long wavelengths. Therefore, the lower coefficients of the satellite model should be treated as superior in the combination. A new method for the combination is presented which turns out to be highly suitable for this purpose due to its great flexibility combined with robustness.
2009-02-03
computational approach to accommodation coefficients and its application to noble gases on aluminum surface. Nathaniel Selden, University of Southern California, Los … [Figure residue removed; recoverable caption: FIG. 5: Experimental and computed radiometric force for argon (left) and xenon.]
Zhang, Lin; Smart, Sonja; Sandrin, Todd R
2015-11-05
MALDI-TOF MS profiling has been shown to be a rapid and reliable method to characterize pure cultures of bacteria. Currently, there is keen interest in using this technique to identify bacteria in mixtures. Promising results have been reported with two- or three-isolate model systems using biomarker-based approaches. In this work, we applied MALDI-TOF MS-based methods to a more complex model mixture containing six bacteria. We employed: 1) a biomarker-based approach that has previously been shown to be useful in identification of individual bacteria in pure cultures and simple mixtures and 2) a similarity coefficient-based approach that is routinely and nearly exclusively applied to identification of individual bacteria in pure cultures. Both strategies were developed and evaluated using blind-coded mixtures. With regard to the biomarker-based approach, results showed that most peaks in mixture spectra could be assigned to those found in spectra of each component bacterium; however, peaks shared by two isolates as well as peaks that could not be assigned to any individual component isolate were observed. For two-isolate blind-coded samples, bacteria were correctly identified using both similarity coefficient- and biomarker-based strategies, while for blind-coded samples containing more than two isolates, bacteria were more effectively identified using a biomarker-based strategy.
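The similarity coefficient strategy mentioned above can be illustrated with a toy calculation. The peak binning and cosine score below are illustrative assumptions, not the proprietary scores used by commercial MALDI-TOF platforms, and the peak lists are invented:

```python
# Illustrative similarity coefficient between mass spectra: bin the peak lists
# into fixed m/z bins and take the cosine of the binned intensity vectors.

def bin_spectrum(peaks, bin_width=5.0, mz_max=20000.0):
    """peaks: list of (m/z, intensity) pairs -> fixed-length intensity vector."""
    n_bins = int(mz_max / bin_width)
    vec = [0.0] * n_bins
    for mz, inten in peaks:
        idx = int(mz / bin_width)
        if 0 <= idx < n_bins:
            vec[idx] += inten
    return vec

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

# toy spectra: two pure-culture references and a two-component "mixture"
ref_a = [(4365.0, 1.0), (5096.0, 0.8), (9742.0, 0.6)]
ref_b = [(3637.0, 1.0), (6255.0, 0.9)]
mixture = ref_a + ref_b  # mixture spectrum contains peaks of both components

score_a = cosine_similarity(bin_spectrum(mixture), bin_spectrum(ref_a))
score_b = cosine_similarity(bin_spectrum(mixture), bin_spectrum(ref_b))
print(round(score_a, 2), round(score_b, 2))
```

Both references score well against the mixture, which hints at why similarity scoring alone degrades as the number of mixture components grows.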
Furmanchuk, Al'ona; Saal, James E; Doak, Jeff W; Olson, Gregory B; Choudhary, Alok; Agrawal, Ankit
2018-02-05
The regression model-based tool is developed for predicting the Seebeck coefficient of crystalline materials in the temperature range from 300 K to 1000 K. The tool accounts for the single-crystal versus polycrystalline nature of the compound, the production method, and properties of the constituent elements in the chemical formula. We introduce new descriptive features of crystalline materials relevant for the prediction of the Seebeck coefficient. To address off-stoichiometry in materials, the predictive tool is trained on a mix of stoichiometric and nonstoichiometric materials. The tool is implemented into a web application (http://info.eecs.northwestern.edu/SeebeckCoefficientPredictor) to assist field scientists in the discovery of novel thermoelectric materials. © 2017 Wiley Periodicals, Inc.
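A minimal sketch of such a regression setup, under stated assumptions: the three descriptors (mean electronegativity, mean atomic mass, temperature) are hypothetical stand-ins for the tool's much richer feature set, and the toy targets are generated from an assumed linear law so the fit recovers it exactly; real Seebeck data would not behave this way.

```python
def fit_linear(X, y):
    """Ordinary least squares via normal equations (pure Python, small problems)."""
    rows = [[1.0] + list(x) for x in X]          # prepend a bias term
    n = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    for col in range(n):                         # Gaussian elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for cc in range(col, n):
                A[r][cc] -= f * A[col][cc]
            b[r] -= f * b[col]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

def predict(w, x):
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

# hypothetical descriptors: (mean electronegativity, mean atomic mass, T in K)
X = [(1.8, 70.0, 300.0), (2.5, 85.0, 600.0), (2.1, 120.0, 400.0),
     (1.2, 100.0, 500.0), (1.6, 55.0, 350.0), (2.8, 140.0, 450.0)]
W_GEN = (20.0, -30.0, 0.5, 0.1)   # coefficients used to generate the toy targets
y = [W_GEN[0] + W_GEN[1] * x1 + W_GEN[2] * x2 + W_GEN[3] * x3 for x1, x2, x3 in X]

w = fit_linear(X, y)
print([round(wi, 3) for wi in w])  # recovers the generating coefficients
```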
Sadeghi, Arman
2018-03-01
Modeling of fluid flow in polyelectrolyte layer (PEL)-grafted microchannels is challenging due to their two-layer nature. Hence, the pertinent studies are limited only to circular and slit geometries for which matching the solutions for inside and outside the PEL is simple. In this paper, a simple variational-based approach is presented for the modeling of fully developed electroosmotic flow in PEL-grafted microchannels by which the whole fluidic area is considered as a single porous medium of variable properties. The model is capable of being applied to microchannels of a complex cross-sectional area. As an application of the method, it is applied to a rectangular microchannel of uniform PEL properties. It is shown that modeling a rectangular channel as a slit may lead to considerable overestimation of the mean velocity especially when both the PEL and electric double layer (EDL) are thick. It is also demonstrated that the mean velocity is an increasing function of the fixed charge density and PEL thickness and a decreasing function of the EDL thickness and PEL friction coefficient. The influence of the PEL thickness on the mean velocity, however, vanishes when both the PEL thickness and friction coefficient are sufficiently high.
Energy Technology Data Exchange (ETDEWEB)
Hsiao, C.; Mountain, D.C.; Chan, M.W.L.; Tsui, K.Y. (University of Southern California, Los Angeles (USA); McMaster Univ., Hamilton, ON (Canada); Chinese Univ. of Hong Kong, Shatin)
1989-12-01
In examining the municipal peak and kilowatt-hour demand for electricity in Ontario, the issue of homogeneity across geographic regions is explored. A common model across municipalities and geographic regions cannot be supported by the data. Various procedures are considered which deal with this heterogeneity and yet reduce the multicollinearity problems associated with region-specific demand formulations. The recommended model controls for regional differences by assuming that the coefficients of regional-seasonal specific factors are fixed and different, while the coefficients of economic and weather variables for any one municipality are random draws from a common population; the information on all municipalities is combined through a Bayes procedure. 8 tabs., 41 refs.
Helbich, M; Griffith, D
2016-01-01
Real estate policies in urban areas require the recognition of spatial heterogeneity in housing prices to account for local settings. In response to the growing number of spatially varying coefficient models in housing applications, this study evaluated four models in terms of their spatial patterns
Boermans, M.A.; Kattenberg, M.A.C.
2011-01-01
We show how to estimate a Cronbach's alpha reliability coefficient in Stata after running a principal component or factor analysis. Alpha evaluates to what extent items measure the same underlying content when the items are combined into a scale or used for a latent variable. Stata allows for testing
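Outside Stata, the coefficient itself is a one-line formula: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with hypothetical item scores:

```python
# Cronbach's alpha for a small item-by-respondent data set (toy ratings).

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(items):
    """items: list of k lists, each holding one score per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]      # per-respondent total
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

items = [
    [3, 4, 5, 2, 4],  # item 1, five respondents
    [3, 5, 5, 1, 4],  # item 2
    [2, 4, 4, 2, 3],  # item 3
]
print(round(cronbach_alpha(items), 3))  # -> 0.942 for these toy ratings
```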
Binomial Coefficients Modulo a Prime--A Visualization Approach to Undergraduate Research
Bardzell, Michael; Poimenidou, Eirini
2011-01-01
In this article we present, as a case study, results of undergraduate research involving binomial coefficients modulo a prime "p." We will discuss how undergraduates were involved in the project, even with a minimal mathematical background beforehand. There are two main avenues of exploration described to discover these binomial…
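The visualization at the heart of such a project can be sketched in a few lines: reduce Pascal's triangle mod p and mark the nonzero residues, which for p = 2 produces the familiar Sierpinski-like pattern. The rendering choices below are illustrative.

```python
# Binomial coefficients modulo a prime p, rendered as a text triangle.

def pascal_rows_mod(n_rows, p):
    rows = [[1]]
    for _ in range(n_rows - 1):
        prev = rows[-1]
        rows.append([(a + b) % p for a, b in zip([0] + prev, prev + [0])])
    return rows

def render(rows):
    n = len(rows)
    return "\n".join(
        " " * (n - i - 1) + " ".join("*" if c else "." for c in row)
        for i, row in enumerate(rows)
    )

print(render(pascal_rows_mod(16, 2)))  # nonzero residues trace a Sierpinski pattern
```

Rows whose index is a power of p show only the two endpoint entries nonzero, a fact that follows from Lucas' theorem and is easy to spot in the printed picture.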
Santos, M V; Sansinena, M; Zaritzky, N; Chirife, J
BACKGROUND: Dry ice-ethanol baths (-78 °C) have been widely used in low-temperature biological research to attain rapid cooling of samples below freezing temperature. The prediction of cooling rates of biological samples immersed in a dry ice-ethanol bath is of practical interest in cryopreservation. The cooling rate can be obtained using mathematical models representing the heat conduction equation in the transient state. Additionally, at the solid-cryogenic fluid interface, knowledge of the surface heat transfer coefficient (h) is necessary for the convective boundary condition in order to correctly establish the mathematical problem. The aim of this study was to apply numerical modeling to obtain the surface heat transfer coefficient of a dry ice-ethanol bath. A numerical finite element solution of the heat conduction equation was used to obtain surface heat transfer coefficients from measured temperatures at the center of polytetrafluoroethylene (PTFE) and polymethylmethacrylate (PMMA) cylinders immersed in a dry ice-ethanol cooling bath. The numerical model considered the temperature dependence of the thermophysical properties of the plastic materials used. A negative linear relationship was observed between cylinder diameter and heat transfer coefficient in the liquid bath; the calculated h values were 308, 135 and 62.5 W/(m² K) for the PMMA 1.3 cm, PTFE 2.59 cm and PTFE 3.14 cm diameter cylinders, respectively. The calculated heat transfer coefficients were consistent among several replicates; h in dry ice-ethanol showed an inverse relationship with cylinder diameter.
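The inverse idea can be sketched with a much cruder model than the finite element solution used by the authors: under a lumped-capacitance assumption (valid only at small Biot number), h follows from the slope of the log temperature decay. All property values below are hypothetical, and the "measurements" are synthetic and noiseless.

```python
# Lumped-capacitance estimate of a surface heat transfer coefficient h
# from a center-temperature history of a small cylinder in a cold bath.
import math

rho, c = 1190.0, 1470.0          # density (kg/m3), specific heat (J/(kg K)), ~PMMA
d, L = 0.013, 0.05               # cylinder diameter and length (m)
area = math.pi * d * L + 2.0 * math.pi * (d / 2.0) ** 2
volume = math.pi * (d / 2.0) ** 2 * L
T_inf, T0 = -78.0, 20.0          # bath and initial temperatures (deg C)

# generate a synthetic temperature history with a known h
h_true = 300.0                   # W/(m2 K)
tau = rho * c * volume / (h_true * area)
times = [10.0 * i for i in range(1, 11)]
temps = [T_inf + (T0 - T_inf) * math.exp(-t / tau) for t in times]

# recover h from the slope of ln(theta) versus t (least squares through origin)
ys = [math.log((T - T_inf) / (T0 - T_inf)) for T in temps]
slope = sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)
h_est = -slope * rho * c * volume / area
print(round(h_est, 1))  # recovers h_true = 300.0 on this noiseless data
```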
Views on Montessori Approach by Teachers Serving at Schools Applying the Montessori Approach
Atli, Sibel; Korkmaz, A. Merve; Tastepe, Taskin; Koksal Akyol, Aysel
2016-01-01
Problem Statement: Further studies on Montessori teachers are required on the grounds that the Montessori approach, having been applied throughout the world, holds an important place in the alternative education field. Yet it is novel for Turkey, and there are only a limited number of studies on Montessori teachers in Turkey. Purpose of…
Energy Technology Data Exchange (ETDEWEB)
Chakraborty, Prodyut R., E-mail: pchakraborty@iitj.ac.in [Department of Mechanical Engineering, Indian Institute of Technology Jodhpur, 342011 (India); Hiremath, Kirankumar R., E-mail: k.r.hiremath@iitj.ac.in [Department of Mathematics, Indian Institute of Technology Jodhpur, 342011 (India); Sharma, Manvendra, E-mail: PG201283003@iitj.ac.in [Defence Laboratory Jodhpur, Defence Research & Development Organisation, 342011 (India)
2017-02-05
The evaporation rate of water is strongly influenced by energy barriers due to molecular collisions and heat transfer limitations. Reported values of the evaporation coefficient, defined as the ratio of the experimentally measured evaporation rate to the maximum possible theoretical limit, vary over a conflicting three orders of magnitude. In the present work, a semi-analytical transient heat diffusion model of droplet evaporation is developed that considers the effect of the change in droplet size due to evaporation from its surface when the droplet is injected into vacuum. The effect of droplet size reduction due to evaporation on the cooling rate is found to be negligible. However, the evaporation coefficient is found to approach the theoretical limit of unity when the droplet radius is less than the mean free path of vapor molecules at the droplet surface, contrary to the reported theoretical predictions. The evaporation coefficient was found to decrease rapidly when the droplet under consideration has a radius larger than the mean free path of the evaporating molecules, confirming the molecular collision barrier to the evaporation rate. The trend of change in evaporation coefficient with increasing droplet size predicted by the proposed model will facilitate obtaining a functional relation between the evaporation coefficient and droplet size, and can be used for benchmarking the interaction between multiple droplets during evaporation in vacuum.
Shaw, Jacob T.; Lidster, Richard T.; Cryer, Danny R.; Ramirez, Noelia; Whiting, Fiona C.; Boustead, Graham A.; Whalley, Lisa K.; Ingham, Trevor; Rickard, Andrew R.; Dunmore, Rachel E.; Heard, Dwayne E.; Lewis, Ally C.; Carpenter, Lucy J.; Hamilton, Jacqui F.; Dillon, Terry J.
2018-03-01
Gas-phase rate coefficients are fundamental to understanding atmospheric chemistry, yet experimental data are not available for the oxidation reactions of many of the thousands of volatile organic compounds (VOCs) observed in the troposphere. Here, a new experimental method is reported for the simultaneous study of reactions between multiple different VOCs and OH, the most important daytime atmospheric radical oxidant. This technique is based upon established relative rate concepts but has the advantage of a much higher throughput of target VOCs. By evaluating multiple VOCs in each experiment, and through measurement of the depletion in each VOC after reaction with OH, the OH + VOC reaction rate coefficients can be derived. Results from experiments conducted under controlled laboratory conditions were in good agreement with the available literature for the reaction of 19 VOCs, prepared in synthetic gas mixtures, with OH. This approach was used to determine a rate coefficient for the reaction of OH with 2,3-dimethylpent-1-ene for the first time; k = 5.7 (±0.3) × 10⁻¹¹ cm³ molecule⁻¹ s⁻¹. In addition, a further seven VOCs had only two, or fewer, individual OH rate coefficient measurements available in the literature. The results from this work were in good agreement with those measurements. A similar dataset, at an elevated temperature of 323 (±10) K, was used to determine new OH rate coefficients for 12 aromatic, 5 alkane, 5 alkene and 3 monoterpene VOC + OH reactions. In OH relative reactivity experiments that used ambient air at the University of York, a large number of different VOCs were observed, of which 23 were positively identified. Due to difficulties with detection limits and fully resolving peaks, only 19 OH rate coefficients were derived from these ambient air samples, including 10 reactions for which data were previously unavailable at the elevated reaction temperature of T = 323 (±10) K.
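The relative rate analysis underlying such experiments reduces to one relation: for target and reference VOCs removed only by OH, ln([target]₀/[target]ₜ) = (k_target/k_ref) · ln([ref]₀/[ref]ₜ), so the target rate coefficient follows from the two measured depletions and a known k_ref. The concentrations and reference value below are synthetic illustration numbers, not data from the paper.

```python
# Relative-rate extraction of an OH rate coefficient from VOC depletions.
import math

k_ref = 8.5e-12               # assumed reference rate coefficient (cm3 molecule-1 s-1)
ref_0, ref_t = 100.0, 60.0    # reference VOC before / after OH exposure (arb. units)
tgt_0, tgt_t = 100.0, 3.2     # target VOC before / after

ratio = math.log(tgt_0 / tgt_t) / math.log(ref_0 / ref_t)
k_target = ratio * k_ref
print(f"{k_target:.2e}")      # slope of the two log-depletions times k_ref
```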
Transport coefficients of Quark-Gluon Plasma in a Kinetic Theory approach
International Nuclear Information System (INIS)
Puglisi, A; Plumari, S; Scardina, F; Greco, V
2014-01-01
One of the main results of heavy-ion collision experiments at relativistic energies is the very small shear viscosity to entropy density ratio of the Quark-Gluon Plasma, close to the conjectured lower bound η/s = 1/4π for systems in the infinite coupling limit. Transport coefficients like shear viscosity are responsible for the non-equilibrium properties of a system: Green-Kubo relations give us an exact expression to compute these coefficients. We computed shear viscosity numerically using the Green-Kubo relation in the framework of kinetic theory, solving the relativistic Boltzmann transport equation in a finite box with periodic boundary conditions. We investigated different cases of particles, for a one-component system (gluon matter), interacting via isotropic or anisotropic cross-sections in the range of temperature of interest for HIC. Green-Kubo results are in agreement with the Chapman-Enskog approximation, while the Relaxation Time Approximation can underestimate the viscosity by a factor of 2. Another transport coefficient of interest is the electric conductivity σel, which determines the response of the QGP to the electromagnetic fields present in the early stage of the collision. We study the σel dependence on microscopic details of the interaction, and we find also in this case that the Relaxation Time Approximation is a good approximation only for isotropic cross-sections.
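The Green-Kubo procedure itself (a transport coefficient as the time integral of an equilibrium autocorrelation function) can be sketched numerically. Here a synthetic exponentially correlated series stands in for simulation output of a stress component, and the thermodynamic prefactor V/(k_B T) is omitted, so the integral simply returns the correlation time of the signal.

```python
# Numerical Green-Kubo sketch: integrate the autocorrelation of a synthetic
# correlated "stress" series (an AR(1) process with exponential ACF).
import math, random

random.seed(1)
dt, n_steps, tau_c = 0.01, 20000, 0.5
a = math.exp(-dt / tau_c)          # AR(1) coefficient giving ACF = exp(-t/tau_c)
noise_amp = math.sqrt(1 - a * a)   # keeps the series at unit variance

x = random.gauss(0, 1)
series = []
for _ in range(n_steps):
    series.append(x)
    x = a * x + noise_amp * random.gauss(0, 1)

def autocorr(s, max_lag):
    n = len(s)
    mean = sum(s) / n
    var = sum((v - mean) ** 2 for v in s) / n
    return [sum((s[i] - mean) * (s[i + lag] - mean) for i in range(n - lag))
            / ((n - lag) * var) for lag in range(max_lag)]

acf = autocorr(series, max_lag=250)
# trapezoidal rule; for this synthetic process the exact integral is tau_c = 0.5
gk_integral = dt * (acf[0] / 2 + sum(acf[1:-1]) + acf[-1] / 2)
print(round(gk_integral, 2))
```

The statistical noise in the tail of the ACF is exactly the practical difficulty the abstract alludes to: long-time tails make the integral converge slowly unless the statistics are improved.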
Zedler, Sarah
2013-08-01
We seek to determine whether a small number of measurements of upper ocean temperature and currents can be used to make estimates of the drag coefficient that have a smaller range of uncertainty than previously found. We adopt a numerical approach in an inverse problem setup using an ocean model and its adjoint, to assimilate data and to adjust the drag coefficient parameterization (here the free parameter) with wind speed that corresponds to the minimum of a model minus data misfit or cost function. Pseudo data are generated from a reference forward simulation, and are perturbed with different levels of Gaussian distributed noise. It is found that it is necessary to assimilate both surface current speed and temperature data to obtain improvement over previous estimates of the drag coefficient. When data is assimilated without any smoothing or constraints on the solution, the drag coefficient is overestimated at low wind speeds and there are unrealistic, high frequency oscillations in the adjusted drag coefficient curve. When second derivatives of the drag coefficient curve are penalized and the solution is constrained to experimental values at low wind speeds, the adjusted drag coefficient is within 10% of its target value. This result is robust to the addition of realistic random noise meant to represent turbulence due to the presence of mesoscale background features in the assimilated data, or to the wind speed time series to model its unsteady and gusty character. When an eddy is added to the background flow field in both the initial condition and the assimilated data time series, the target and adjusted drag coefficient are within 10% of one another, regardless of whether random noise is added to the assimilated data. However, when the eddy is present in the assimilated data but is not in the initial conditions, the drag coefficient is overestimated by as much as 30%. This carries the implication that when real data is assimilated, care needs to be taken in
Stein, Paul C; di Cagno, Massimiliano; Bauer-Brandl, Annette
2011-09-01
In this work a new, accurate and convenient technique for the measurement of distribution coefficients and membrane permeabilities based on nuclear magnetic resonance (NMR) is described. This method is a novel implementation of localized NMR spectroscopy and enables the simultaneous analysis of the drug content in the octanol and in the water phase without separation. For validation of the method, the distribution coefficients at pH = 7.4 of four active pharmaceutical ingredients (APIs), namely ibuprofen, ketoprofen, nadolol, and paracetamol (acetaminophen), were determined using a classical approach. These results were compared to the NMR experiments described in this work. For all substances, the respective distribution coefficients found with the two techniques coincided very well. Furthermore, the NMR experiments make it possible to follow the distribution of the drug between the phases as a function of position and time. Our results show that the technique, which is available on any modern NMR spectrometer, is well suited to the measurement of distribution coefficients. The experiments also present new insight into the dynamics of the water-octanol interface itself and permit measurement of the interface permeability.
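The arithmetic at the end of such a measurement is brief: with the drug signal quantified in each phase, the distribution coefficient is the ratio of the two. The peak integrals below are hypothetical illustration values, not data from the study.

```python
# Distribution coefficient (log D) from per-phase signal integrals,
# as would be obtained from localized NMR spectra of the two phases.
import math

oct_signal, water_signal = 8.2, 1.3   # relative drug signal in octanol / water
log_d = math.log10(oct_signal / water_signal)
print(round(log_d, 2))  # -> 0.8 for these toy integrals
```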
Shternin, P. S.; Baldo, M.; Schulze, H.-J.
2017-12-01
The thermal conductivity and shear viscosity of npeµ matter in non-superfluid neutron star cores are considered in the framework of Brueckner-Hartree-Fock many-body theory. We extend our previous work (Shternin et al 2013 PRC 88 065803) by analysing different nucleon-nucleon potentials and different three-body forces. We find that the use of different potentials leads to variations of up to one order of magnitude in the values of the nucleon contribution to the transport coefficients. The nucleon contribution dominates the thermal conductivity, but for all considered models the shear viscosity is dominated by leptons.
International Nuclear Information System (INIS)
Rey Silva, D.V.F.M.; Oliveira, A.P.; Macacini, J.F.; Da Silva, N.C.; Cipriani, M.; Quinelato, A.L.
2005-01-01
Full text of publication follows: The study of the dispersion of radioactive materials in soils and in engineering barriers plays an important role in the safety analysis of nuclear waste repositories. In order to proceed with such a study, the physical properties involved must be determined with precision, including the apparent mass diffusion coefficient, which is defined as the ratio between the effective mass diffusion coefficient and the retardation factor. Many different experimental and estimation techniques are available in the literature for the identification of the diffusion coefficient, and this work describes the implementation of the one developed by Pereira et al [1]. This technique is based on non-intrusive radiation measurements, and the experimental setup consists of a cylindrical column filled with compacted media saturated with water. A radioactive contaminant is mixed with a portion of the media and then placed in the bottom of the column. Therefore, the contaminant will diffuse through the uncontaminated media due to the concentration gradient. A radiation detector is used to measure the number of counts, which is associated with the contaminant concentration, at several positions along the column during the experiment. Such measurements are then used to estimate the apparent diffusion coefficient of the contaminant in the porous media by inverse analysis. The inverse problem of parameter estimation is solved with the Levenberg-Marquardt method of minimization of the least-squares norm. The experiment was optimized with respect to the number of measurement locations, the frequency of measurements and the duration of the experiment through the analysis of the sensitivity coefficients and by using a D-optimum approach. This setup is suitable for studying a great number of combinations of diverse contaminants and porous media varying in composition and compacting, with considerable ease and reliable results, and it was chosen because that is the
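A one-parameter Levenberg-Marquardt inversion of this kind can be sketched as follows. The plane-source diffusion profile used as the forward model, and all numerical values, are assumptions standing in for the full repository model; the "counts" are synthetic and noiseless.

```python
# Levenberg-Marquardt fit of an apparent diffusion coefficient D from a
# simulated concentration profile along a column.
import math

def model(D, xs, t, M=1.0):
    """Plane-source diffusion profile c(x, t) for apparent diffusivity D."""
    return [M / math.sqrt(math.pi * D * t) * math.exp(-x * x / (4.0 * D * t))
            for x in xs]

D_true, t = 2.0e-6, 3600.0            # cm2/s and s (hypothetical values)
xs = [0.5 * i for i in range(1, 9)]   # measurement positions along the column (cm)
data = model(D_true, xs, t)           # synthetic noiseless "count" profile

def sse(D):
    return sum((d - p) ** 2 for d, p in zip(data, model(D, xs, t)))

D, lam = 5.0e-6, 1e-3                 # initial guess and LM damping factor
for _ in range(60):
    pred = model(D, xs, t)
    resid = [d - p for d, p in zip(data, pred)]
    eps = D * 1e-6                    # forward-difference Jacobian d(model)/dD
    J = [(p2 - p1) / eps for p2, p1 in zip(model(D + eps, xs, t), pred)]
    step = sum(j * r for j, r in zip(J, resid)) / (sum(j * j for j in J) * (1 + lam))
    if D + step > 0 and sse(D + step) < sse(D):
        D, lam = D + step, lam / 2    # accept: move and relax damping
    else:
        lam *= 10                     # reject: increase damping, shrink next step
print(f"recovered D = {D:.3e} cm2/s")
```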
Towards a capability approach to careers: Applying Amartya Sen's thinking
Robertson, Peter.
2015-01-01
Amartya Sen’s capability approach characterizes an individual’s well-being in terms of what they are able to be, and what they are able to do. This framework for thinking has many commonalities with the core ideas in career guidance. Sen’s approach is abstract and not in itself a complete or explanatory theory, but a case can be made that the capability approach has something to offer career theory when combined with a life-career developmental approach. It may also suggest ways of working th...
A composite approach boosts transduction coefficients of piezoceramics for energy harvesting
Yu, Xiaole; Hou, Yudong; Zheng, Mupeng; Zhao, Haiyan; Zhu, Mankang
2018-03-01
Piezoelectric energy harvesting is a hotspot in the field of new energy, the core goal of which is to prepare piezoceramics with a high transduction coefficient (d33×g33). The traditional solid-solution design strategy usually causes the same variation trend of d33 and ɛr, resulting in a low d33×g33 value. In this work, a composite design strategy was proposed that uses PZN-PZT/ZnAl2O4 as an example. By introducing ZnAl2O4, which is nonferroelectric with low ɛr, to the PZN-PZT piezoelectric matrix, ɛr decreased rapidly while d33 remained relatively stable. This behavior was ascribed to the increase of Q33 caused by an interfacial effect facilitating the formation of micro-domain structure.
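The leverage of lowering εr at roughly constant d33 is visible in the defining relations: g33 = d33/(ε0 εr), so d33·g33 = d33²/(ε0 εr). The numbers below are hypothetical, chosen only to illustrate the direction of the effect, not measured values for PZN-PZT/ZnAl2O4.

```python
# Arithmetic sketch of the transduction figure of merit d33*g33.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def transduction(d33_pC_per_N, eps_r):
    d33 = d33_pC_per_N * 1e-12           # C/N
    g33 = d33 / (EPS0 * eps_r)           # V m / N
    return d33 * g33                     # m2/N

fom_matrix = transduction(350.0, 1800.0)     # hypothetical piezoelectric matrix
fom_composite = transduction(330.0, 1100.0)  # slightly lower d33, much lower eps_r
print(fom_composite > fom_matrix)            # the composite wins despite lower d33
```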
Modular Modelling and Simulation Approach - Applied to Refrigeration Systems
DEFF Research Database (Denmark)
Sørensen, Kresten Kjær; Stoustrup, Jakob
2008-01-01
This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...
Energy Technology Data Exchange (ETDEWEB)
Torrecilla, Jose S., E-mail: jstorre@quim.ucm.es [Department of Chemical Engineering, Faculty of Chemistry, University Complutense of Madrid, 28040 Madrid (Spain); Garcia, Julian; Garcia, Silvia; Rodriguez, Francisco [Department of Chemical Engineering, Faculty of Chemistry, University Complutense of Madrid, 28040 Madrid (Spain)
2011-03-04
The combination of lag-k autocorrelation coefficients (LCCs) and thermogravimetric analyzer (TGA) equipment is defined here as a tool to detect and quantify adulterations of extra virgin olive oil (EVOO) with refined olive (ROO), refined olive pomace (ROPO), sunflower (SO) or corn (CO) oils, when the adulterating agent concentrations are less than 14%. The LCC is calculated from TGA scans of adulterated EVOO samples. Then, the standardized skewness of this coefficient has been applied to classify pure and adulterated samples of EVOO. In addition, this chaotic parameter has also been used to quantify the concentration of adulterant agents, using successful linear correlations of LCCs with ROO, ROPO, SO or CO content in 462 adulterated EVOO samples. In the case of detection, more than 82% of adulterated samples were correctly classified. In the case of quantification of adulterant concentration, by an external validation process, the LCC/TGA approach estimates the adulterant agent concentration with a mean correlation coefficient (estimated versus real adulterant agent concentration) greater than 0.90 and a mean square error less than 4.9%.
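The lag-k autocorrelation coefficient itself is straightforward to compute from a scan. The synthetic sigmoid "mass-loss curve" and the lag value below are illustrative only; the study's classification step additionally uses the standardized skewness of these coefficients across samples.

```python
# Lag-k autocorrelation coefficient (LCC) of a signal such as a TGA scan.

def lag_k_autocorr(series, k):
    n = len(series)
    mean = sum(series) / n
    denom = sum((s - mean) ** 2 for s in series)
    return sum((series[i] - mean) * (series[i + k] - mean)
               for i in range(n - k)) / denom

# synthetic sigmoid-like mass-loss curve standing in for a TGA scan
scan = [100.0 / (1.0 + 1.05 ** (i - 100)) for i in range(200)]
lcc = lag_k_autocorr(scan, 1)
print(round(lcc, 4))  # smooth scans give lag-1 values very close to 1
```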
Molecular dynamics simulations for transport coefficients of liquid argon: new approaches
International Nuclear Information System (INIS)
Lee, Song Hi; Park, Dong Kue; Kang, Dae Bok
2003-01-01
The stress and heat-flux autocorrelation functions in the Green-Kubo formulas for shear viscosity and thermal conductivity have non-decaying long-time tails. This problem can be overcome by improving the statistical accuracy by a factor of N (the number of particles), considering the stress and the heat flux of the system as properties of each particle. The mean square stress and heat-flux displacements in the Einstein formulas for shear viscosity and thermal conductivity are nonlinear functions of time, since the quantities in these displacements are not continuous under periodic boundary conditions. An alternative is to integrate the stress and the heat flux with respect to time, but the resulting mean square stress and heat-flux displacements are still not linear in time. This problem can also be overcome by improving the statistical accuracy. The results obtained for the transport coefficients of liquid argon are discussed
Directory of Open Access Journals (Sweden)
Qunli Wu
2017-01-01
Full Text Available Path-coefficient analysis is utilized to investigate the direct and indirect effects of economic growth, population growth, urbanization rate, industrialization level, and carbon intensity on the electricity demand of China. To improve the projection accuracy of electricity demand, this study proposes a hybrid optimization method combining the bat algorithm, Gaussian perturbations, and simulated annealing (BAG-SA). The proposed BAG-SA algorithm not only inherits the simplicity and efficiency of the standard BA with a capability of searching for global optimality but also enhances local search ability and speeds up the global convergence rate. The BAG-SA algorithm is employed to optimize the coefficients of the multiple linear and quadratic forms of the electricity demand estimation model. Results indicate that the proposed algorithm has higher precision and reliability than the coefficients optimized by other single-optimization methods, such as the genetic algorithm, particle swarm optimization, or the bat algorithm. The quadratic form of the BAG-SA electricity demand estimation model also has better fitting ability than the multiple linear form. Therefore, the quadratic form of the model is applied to estimate the electricity demand of China from 2016 to 2030. The findings of this study demonstrate that China's electricity demand will reach 14,925,200 million kWh in 2030.
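One ingredient of the hybrid method, simulated annealing with Gaussian perturbations, can be sketched on a toy version of the coefficient-fitting task. This is an illustrative SA loop only, not the authors' BAG-SA algorithm, and the quadratic model and data are synthetic.

```python
# Simulated annealing with Gaussian perturbations, fitting the coefficients
# of a quadratic model y = a + b*x + c*x**2 to synthetic data.
import math, random

random.seed(7)
xs = [float(i) for i in range(10)]
TRUE = (2.0, 1.5, 0.3)                             # generating coefficients
ys = [TRUE[0] + TRUE[1] * x + TRUE[2] * x * x for x in xs]

def cost(p):
    return sum((y - (p[0] + p[1] * x + p[2] * x * x)) ** 2
               for x, y in zip(xs, ys))

p = [0.0, 0.0, 0.0]
cur = cost(p)
best, best_cost = list(p), cur
T = 10.0
for _ in range(20000):
    cand = [pi + random.gauss(0, 0.1) for pi in p]  # Gaussian perturbation
    cc = cost(cand)
    # Metropolis acceptance: always downhill, sometimes uphill while T is high
    if cc < cur or random.random() < math.exp(-(cc - cur) / T):
        p, cur = cand, cc
        if cur < best_cost:
            best, best_cost = list(p), cur
    T *= 0.9995                                     # geometric cooling schedule
print([round(b, 2) for b in best], round(best_cost, 3))
```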
Applying Digital Sensor Technology: A Problem-Solving Approach
Seedhouse, Paul; Knight, Dawn
2016-01-01
There is currently an explosion in the number and range of new devices coming onto the technology market that use digital sensor technology to track aspects of human behaviour. In this article, we present and exemplify a three-stage model for the application of digital sensor technology in applied linguistics that we have developed, namely,…
An Applied Project-Driven Approach to Undergraduate Research Experiences
Karls, Michael A.
2017-01-01
In this paper I will outline the process I have developed for conducting applied mathematics research with undergraduates and give some examples of the projects we have worked on. Several of these projects have led to refereed publications that could be used to illustrate topics taught in the undergraduate curriculum.
DEFF Research Database (Denmark)
Ramezani, Malek; Golestan, Saeed; Li, Shuhui
2018-01-01
In recent years, a large number of three-phase phase-locked loops (PLLs) have been developed. One of the most popular is the complex-coefficient-filter-based PLL (CCF-PLL). CCFs benefit from a sequence-selective filtering ability and, hence, enable the CCF-PLL to selectively reject/extract… disturbances before the PLL control loop while maintaining acceptable dynamic behavior. The aim of this paper is to present a simple yet effective approach to enhancing the standard CCF-PLL performance without requiring any additional computational load…
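As a rough illustration of the idea (not the paper's implementation), a single complex-coefficient filter tuned to +ω0 passes the positive-sequence component at ω0 with unity gain while attenuating other sequences. The gain, frequency, and sampling period below are assumed values:

```python
import numpy as np

def ccf_step(y, x, w0, k, Ts):
    """Forward-Euler update of the CCF state: dy/dt = 1j*w0*y + k*w0*(x - y).

    The complex pole at (1j - k)*w0 makes the filter sequence-selective:
    unity gain at +w0, attenuation elsewhere."""
    return y + Ts * (1j * w0 * y + k * w0 * (x - y))

w0 = 2 * np.pi * 50.0     # nominal grid frequency, rad/s (assumed)
k, Ts = 1.0, 1e-5         # filter gain and sampling period (assumed)
y = 0.0 + 0.0j
for n in range(5000):
    x = np.exp(1j * w0 * n * Ts)   # positive-sequence input at w0
    y = ccf_step(y, x, w0, k, Ts)
# y now tracks the positive-sequence input almost exactly
```

Cascading such filters at the sequences/harmonics of interest and feeding the extracted fundamental positive-sequence component to the PLL control loop is, broadly, the CCF-PLL structure the abstract refers to.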
A Multiobjective Approach Applied to the Protein Structure Prediction Problem
2002-03-07
local conformations [38]. Moreover, all these models have the same theme in trying to define the properties a real protein has when folding. Today, it… attempted to solve the PSP problem with a real-valued GA and found better results than a competitor (Scheraga, et al.) [50]; however, today we know that… ACM Symposium on Applied Computing (SAC01) (March 11-14, 2001), Las Vegas, Nevada. [22] Derrida, B. "Random Energy Model: Limit of a Family of
Applied approach slab settlement research, design/construction : final report.
2013-08-01
Approach embankment settlement is a pervasive problem in Oklahoma and many other states. The bump and/or abrupt slope change poses a danger to traffic and can cause increased dynamic loads on the bridge. Frequent and costly maintenance may be needed ...
Tennis: Applied Examples of a Game-Based Teaching Approach
Crespo, Miguel; Reid, Machar M.; Miley, Dave
2004-01-01
In this article, the authors reveal that tennis has been increasingly taught with a tactical model or game-based approach, which emphasizes learning through practice in match-like drills and actual play, rather than in practicing strokes for exact technical execution. Its goal is to facilitate the player's understanding of the tactical, physical…
Energy Technology Data Exchange (ETDEWEB)
Yesilgul, U., E-mail: uyesilgul@cumhuriyet.edu.tr [Cumhuriyet University, Physics Department, 58140 Sivas (Turkey); Ungan, F. [Cumhuriyet University, Physics Department, 58140 Sivas (Turkey); Sakiroglu, S. [Dokuz Eylül University, Physics Department, 35160 Buca, İzmir (Turkey); Mora-Ramos, M.E. [Facultad de Ciencias Universidad Autonoma del Estado de Morelos, Ave. Universidad 1001, C.P. 62209 Cuernavaca, Morelos (Mexico); Duque, C.A. [Grupo de Materia Condensada-UdeA, Instituto de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Antioquia UdeA, Calle 70 No. 52-21, Medellín (Colombia); Kasapoglu, E.; Sarı, H. [Cumhuriyet University, Physics Department, 58140 Sivas (Turkey); Sökmen, I. [Dokuz Eylül University, Physics Department, 35160 Buca, İzmir (Turkey)
2014-01-15
The effects of an intense high-frequency laser field on the optical absorption coefficients and refractive index changes in a GaAs/GaAlAs parabolic quantum well under an applied electric field have been investigated theoretically. The electron energy levels and envelope wave functions of the parabolic quantum well are calculated within the effective-mass approximation. Analytical expressions for the optical properties are obtained using the compact density-matrix approach. The numerical results show that the intense high-frequency laser field has a large effect on the optical characteristics of these structures. We also observe that the refractive index and absorption coefficient changes are very sensitive to the electric field in large-dimension wells. This result thus gives a new degree of freedom in optoelectronic device applications. -- Highlights: • ILF has a large effect on the optical properties of parabolic quantum wells. • The total absorption coefficients increase as the ILF increases. • The RICs increase as the ILF increases.
Major accident prevention through applying safety knowledge management approach.
Kalatpour, Omid
2016-01-01
Many scattered resources of knowledge are available for chemical accident prevention purposes. The common approach to process safety management, including using databases and referring to the available knowledge, has some drawbacks. The main goal of this article was to devise a newly emerged knowledge base (KB) for the chemical accident prevention domain. The scattered sources of safety knowledge were identified and scanned. The collected knowledge was then formalized through a computerized program; the Protégé software was used to formalize and represent the stored safety knowledge. Domain knowledge, as well as data and information, can then be retrieved. This optimized approach improved the safety and health knowledge management (KM) process and resolved some typical problems in the KM process. Upgrading traditional safety databases into KBs can improve the interaction between users and the knowledge repository.
A Multi-Criterion Evolutionary Approach Applied to Phylogenetic Reconstruction
Cancino, W.; Delbem, A.C.B.
2010-01-01
In this paper, we propose an MOEA approach, called PhyloMOEA, which solves the phylogenetic inference problem using the maximum parsimony and maximum likelihood criteria. PhyloMOEA's development was motivated by several studies in the literature (Huelsenbeck, 1995; Jin & Nei, 1990; Kuhner & Felsenstein, 1994; Tateno et al., 1994), which point out that various phylogenetic inference methods lead to inconsistent solutions. Techniques using the parsimony and likelihood criteria yield different tr...
A new kinetic biphasic approach applied to biodiesel process intensification
Energy Technology Data Exchange (ETDEWEB)
Russo, V.; Tesser, R.; Di Serio, M.; Santacesaria, E. [Naples Univ. (Italy). Dept. of Chemistry
2012-07-01
Many papers have been published on the kinetics of the transesterification of vegetable oil with methanol in the presence of alkaline catalysts to produce biodiesel. All the proposed approaches are based on the assumption of a pseudo-monophasic system. The consequence of these approaches is that some experimental aspects cannot be described. For reactions performed in batch conditions, for example, the monophasic approach cannot reproduce the different plateaus obtained using different amounts of catalyst, or the induction time observed at low stirring rates. Moreover, it has been observed in continuous reactors that micromixing has a dramatic effect on the reaction rate. To this end, we have recently observed that it is possible to obtain complete conversion to biodiesel in less than 10 seconds of reaction time. This observation is also confirmed by other authors using different types of reactors, such as static mixers, micro-reactors, oscillatory flow reactors, cavitational reactors, microwave reactors, and centrifugal contactors. In this work we show that a recently proposed biphasic kinetic approach is able to describe all the aforementioned aspects that cannot be described by the monophasic kinetic model. In particular, we show that the biphasic kinetic model can describe both the induction time observed in batch reactors at low stirring rates and the very high conversions obtainable in a micro-channel reactor. The adopted biphasic kinetic model is based on a reliable reaction mechanism that is validated by the experimental evidence reported in this work. (orig.)
Applying the stakeholder approach to the strategic management of territorial development
Directory of Open Access Journals (Sweden)
Ilshat Azamatovich Tazhitdinov
2013-06-01
Full Text Available The paper discusses aspects of the strategic management of the socioeconomic development of territories in terms of the stakeholder approach. The author's interpretation of the concept of a sub-region stakeholder is proposed, along with a classification of stakeholders into those internal and external to the territorial socioeconomic system at the sub-regional level. The types of interests and types of resources of stakeholders in the sub-region are identified; correlating interests and resources makes it possible to determine groups (alliances) of stakeholders that ensure a balance of interests depending on the particular objectives of the association. A conceptual stakeholder-agent model for the management of strategic territorial development within the hierarchical system «region – sub-region – municipal formation» is proposed. All stakeholders are considered influence agents directing their own resources to provide a comprehensive approach to managing territorial development. Interaction among the influence agents of the «region – sub-region – municipal formation» system occurs both vertically and horizontally through the initialization, development, and implementation of the sub-region's strategic documents. Vertical interaction occurs between stakeholders such as state and municipal authorities, acting as a guideline, while horizontal interaction occurs among the remaining stakeholders, acting as a partnership. Within the proposed model, concurrent engineering is implemented as a form of inter-municipal strategic cooperation of local governments for forming and analyzing a set of alternative project activities in the sub-region in order to choose the best options. The proposed approach was tested in the development of a medium-term comprehensive program of socioeconomic development for the Zauralye and North-East sub-regions of the Republic of Bashkortostan (2011–2015).
Modeling in applied sciences: a kinetic theory approach
Pulvirenti, Mario
2000-01-01
Modeling complex biological, chemical, and physical systems, in the context of spatially heterogeneous media, is a challenging task for scientists and engineers using traditional methods of analysis. Modeling in Applied Sciences is a comprehensive survey of modeling large systems using kinetic equations, in particular the Boltzmann equation and its generalizations. An interdisciplinary group of leading authorities carefully develops the foundations of kinetic models and discusses the connections and interactions between model theories, qualitative and computational analysis, and real-world applications. This book provides a thoroughly accessible and lucid overview of the different aspects, models, computations, and methodology of the kinetic-theory modeling process. Topics and Features: * Integrated modeling perspective utilized in all chapters * Fluid dynamics of reacting gases * Self-contained introduction to kinetic models * Becker–Döring equations * Nonlinear kinetic models with chemical reactions * Kinet...
Cortical complexity in bipolar disorder applying a spherical harmonics approach.
Nenadic, Igor; Yotter, Rachel A; Dietzek, Maren; Langbein, Kerstin; Sauer, Heinrich; Gaser, Christian
2017-05-30
Recent studies using surface-based morphometry of structural magnetic resonance imaging data have suggested that some changes in bipolar disorder (BP) might be neurodevelopmental in origin. We applied a novel analysis of cortical complexity based on fractal dimensions in high-resolution structural MRI scans of 18 bipolar disorder patients and 26 healthy controls. Our region-of-interest based analysis revealed increases in fractal dimensions (in patients relative to controls) in left lateral orbitofrontal cortex and right precuneus, and decreases in right caudal middle frontal, entorhinal cortex, and right pars orbitalis, and left fusiform and posterior cingulate cortices. While our analysis is preliminary, it suggests that early neurodevelopmental pathologies might contribute to bipolar disorder, possibly through genetic mechanisms. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Ungaro Fabrizio
2014-03-01
Full Text Available Soil sealing is the permanent covering of the land surface by buildings, infrastructure or any impermeable artificial material. Besides the loss of fertile soils, with a direct impact on food security, soil sealing modifies the hydrological cycle. This can cause an increased flooding risk, due to urban development in potential risk areas and to the increased volumes of runoff. This work estimates the increase of runoff due to sealing following urbanization and land take in the plain of Emilia Romagna (Italy), using the Green and Ampt infiltration model for two rainfall return periods (20 and 200 years) in two different years, 1976 and 2008. To this end, a hydropedological approach was adopted to characterize soil hydraulic properties via locally calibrated pedotransfer functions (PTFs). PTF inputs were estimated via sequential Gaussian simulations coupled with simple kriging with varying local means, taking into account soil type and dominant land use. Results show that in the study area an average increment of 8.4% in sealed areas due to urbanization and sprawl induces an average increment in surface runoff of 3.5% and 2.7% for the 20- and 200-year return periods respectively, with a maximum of > 20% for highly sealed coastal areas.
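The Green and Ampt model used in the study gives cumulative infiltration F(t) implicitly via F = Kt + ψΔθ·ln(1 + F/(ψΔθ)); rainfall in excess of F becomes runoff. A small sketch solving the implicit equation by fixed-point iteration (the parameter values in the test are illustrative, not the Emilia Romagna calibration):

```python
import math

def green_ampt_F(K, psi, dtheta, t, tol=1e-10, max_iter=200):
    """Cumulative infiltration F(t) from F = K*t + a*ln(1 + F/a), a = psi*dtheta.

    K: saturated hydraulic conductivity, psi: wetting-front suction head,
    dtheta: moisture deficit, t: time since ponding."""
    a = psi * dtheta
    F = max(K * t, 1e-9)            # initial guess
    for _ in range(max_iter):
        F_new = K * t + a * math.log(1.0 + F / a)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F

def green_ampt_rate(K, psi, dtheta, F):
    """Infiltration capacity f = K * (1 + psi*dtheta / F); approaches K as F grows."""
    return K * (1.0 + psi * dtheta / F)
```

Runoff for a design storm is then the rainfall depth exceeding F over the event; in the study this is driven by soil hydraulic properties estimated from the locally calibrated PTFs.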
Undiscovered resource evaluation: Towards applying a systematic approach to uranium
International Nuclear Information System (INIS)
Fairclough, M.; Katona, L.
2014-01-01
Evaluations of potential mineral resource supply range from spatial to aspatial, and everything in between, across a range of scales. They also range from qualitative to quantitative, with similar hybrid examples across the spectrum. These comprise detailed deposit-specific reserve and resource calculations, target-generative processes, and estimates of potential endowments in a broad geographic or geological area. All are estimates until the ore has been discovered and extracted. Contemporary national- or provincial-scale evaluations of mineral potential are relatively advanced and some include uranium, such as those for South Australia undertaken by the State Geological Survey. These play an important role in land-use planning as well as in attracting exploration investment, and range from data- to knowledge-driven approaches. Studies have been undertaken for the Mt Painter region, as well as for adjacent basins. The process of estimating large-scale potential mineral endowments is critical for national and international planning purposes but is a relatively recent and less common undertaking. In many cases, except at a general level, the data and knowledge for a relatively immature terrain are lacking, requiring assessment by analogy with other areas. Commencing in the 1980s, the United States Geological Survey, and subsequently the Geological Survey of Canada, evaluated commodities ranging from copper to hydrocarbons with a view to security of supply. They developed innovative approaches to, as far as practical, reduce the uncertainty and maximise the reproducibility of the calculations in information-poor regions. Yet the approach to uranium was relatively ad hoc and incomplete (such as the US Department of Energy NURE project). Other historic attempts, such as the IAEA-NEA International Uranium Resource Evaluation Project (IUREP) in the 1970s, were mainly qualitative. While there is still no systematic global evaluation of undiscovered uranium resources
Dai, C.; Qin, X. S.; Chen, Y.; Guo, H. C.
2018-06-01
A Gini-coefficient-based stochastic optimization (GBSO) model was developed by integrating a hydrological model, a water balance model, the Gini coefficient, and chance-constrained programming (CCP) into a general multi-objective optimization modeling framework for supporting water resources allocation at the watershed scale. The framework is advantageous in reflecting the conflicting equity and benefit objectives for water allocation, maintaining the water balance of the watershed, and dealing with system uncertainties. GBSO was solved by the non-dominated sorting genetic algorithm II (NSGA-II), after the parameter uncertainties of the hydrological model had been quantified into the probability distribution of runoff as the input of the CCP model, and the chance constraints were converted to their corresponding deterministic versions. The proposed model was applied to identify Pareto-optimal water allocation schemes in the Lake Dianchi watershed, China. The optimal Pareto-front results reflect the tradeoff between system benefit (αSB) and Gini coefficient (αG) under different significance levels (i.e., q) and different drought scenarios, which reveals the conflicting nature of equity and efficiency in water allocation problems. A lower q generally implies a lower risk of violating the system constraints, and a worse drought-intensity scenario corresponds to less available water, both of which lead to a decreased system benefit and a less equitable water allocation scheme. Thus, the proposed modeling framework helps obtain Pareto-optimal schemes under complexity and ensures that the proposed water allocation solutions are effective for coping with drought conditions, with a proper tradeoff between system benefit and water allocation equity.
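The equity objective in GBSO is a Gini coefficient computed over the water allocated to users. A minimal sketch of that metric (the per-user allocation vectors are hypothetical; 0 means perfectly equitable):

```python
import numpy as np

def gini(allocations):
    """Gini coefficient via the mean absolute difference between all user pairs.

    G = mean(|x_i - x_j|) / (2 * mean(x)); ranges from 0 (equal shares)
    towards 1 (one user receives everything)."""
    x = np.asarray(allocations, dtype=float)
    mean_abs_diff = np.abs(x[:, None] - x[None, :]).mean()
    return mean_abs_diff / (2.0 * x.mean())

# hypothetical per-user water allocations (e.g. million m^3)
equal = gini([10.0, 10.0, 10.0, 10.0])   # perfectly equitable
skewed = gini([0.0, 0.0, 0.0, 40.0])     # highly inequitable
```

In the optimization, this value is minimized jointly with maximizing system benefit, which is what produces the Pareto front between αSB and αG described in the abstract.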
An Inverse Kinematic Approach Using Groebner Basis Theory Applied to Gait Cycle Analysis
2013-03-01
AN INVERSE KINEMATIC APPROACH USING GROEBNER BASIS THEORY APPLIED TO GAIT CYCLE ANALYSIS. THESIS. Anum Barki, BS. AFIT-ENP-13-M-02. DEPARTMENT OF THE AIR… …copyright protection in the United States. Approved: Dr. Ronald F. Tuttle (Chairman), Dr. Kimberly Kendricks.
Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.
Munguia, Rodrigo; Urzua, Sarquis; Grau, Antoni
2016-01-01
In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation could not be solved in some scenarios, even when a GPS signal is available, for instance, an application requiring performing precision manoeuvres in a complex environment. Therefore, some additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One of the contributions of this work is to design and develop a novel technique for estimating features depth which is based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Due to the above assumption, the overall problem is simplified and it is focused on the position estimation of the aerial vehicle. Also, the tracking process of visual features is made easier due to the stabilized video. Another contribution of this work is to demonstrate that the integration of very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of this proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.
Directory of Open Access Journals (Sweden)
Rodrigo Munguia
Full Text Available In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation could not be solved in some scenarios, even when a GPS signal is available, for instance, an application requiring performing precision manoeuvres in a complex environment. Therefore, some additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One of the contributions of this work is to design and develop a novel technique for estimating features depth which is based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Due to the above assumption, the overall problem is simplified and it is focused on the position estimation of the aerial vehicle. Also, the tracking process of visual features is made easier due to the stabilized video. Another contribution of this work is to demonstrate that the integration of very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of this proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
Mouse genetic approaches applied to the normal tissue radiation response
International Nuclear Information System (INIS)
Haston, Christina K.
2012-01-01
The varying responses of inbred mouse models to radiation exposure present a unique opportunity to dissect the genetic basis of radiation sensitivity and tissue injury. Such studies are complementary to human association studies, as they permit both the analysis of clinical features of disease and of specific variants associated with its presentation in a controlled environment. Herein I review how animal models are studied to identify specific genetic variants influencing predisposition to radiation-induced traits. Among these radiation-induced responses are documented strain differences in the repair of DNA damage and in the extent of tissue injury (in the lung, skin, and intestine), which form the basis for genetic investigations. For example, radiation-induced DNA damage is consistently greater in tissues from BALB/cJ mice than in those from C57BL/6J mice, suggesting there may be an inherent DNA damage level per strain. Regarding tissue injury, strain-specific inflammatory and fibrotic phenotypes have been documented principally for C57BL/6, C3H, and A/J mice, but a correlation among responses, such that knowledge of the radiation injury in one tissue informs of the response in another, is not evident. Strategies to identify genetic differences contributing to a trait based on inbred strain differences, which include linkage analysis and the evaluation of recombinant congenic (RC) strains, are presented, with a focus on the lung response to irradiation, which is the only radiation-induced tissue injury mapped to date. Such approaches are needed to reveal genetic differences in susceptibility to radiation injury, and also to provide a context for the effects of specific genetic variation uncovered in anticipated clinical association studies. In summary, mouse models can be studied to uncover heritable variation predisposing to specific radiation responses, and such variations may point to pathways of importance to phenotype development in the clinic.
Chien, Tsair-Wei; Shao, Yang; Jen, Dong-Hui
2017-10-27
Many quality-of-life studies have been conducted in healthcare settings, but few have used Microsoft Excel to incorporate Cronbach's α with dimension coefficient (DC) for describing a scale's characteristics. To present a computer module that can report a scale's validity, we manipulated datasets to verify a DC that can be used as a factor retention criterion for demonstrating its usefulness in a patient safety culture survey (PSC). Microsoft Excel Visual Basic for Applications was used to design a computer module for simulating 2000 datasets fitting the Rasch rating scale model. The datasets consisted of (i) five dual correlation coefficients (correl. = 0.3, 0.5, 0.7, 0.9, and 1.0) on two latent traits (i.e., true scores) following a normal distribution and responses to their respective 1/3 and 2/3 items in length; (ii) 20 scenarios of item lengths from 5 to 100; and (iii) 20 sample sizes from 50 to 1000. Each item containing 5-point polytomous responses was uniformly distributed in difficulty across a ± 2 logit range. Three methods (i.e., dimension interrelation ≥0.7, Horn's parallel analysis (PA) 95% confidence interval, and individual random eigenvalues) were used for determining one factor to retain. DC refers to the binary classification (1 as one factor and 0 as many factors) used for examining accuracy with the indicators sensitivity, specificity, and area under receiver operating characteristic curve (AUC). The scale's reliability and DC were simultaneously calculated for each simulative dataset. PSC real data were demonstrated with DC to interpret reports of the unit-based construct validity using the author-made MS Excel module. The DC method presented accurate sensitivity (=0.96), specificity (=0.92) with a DC criterion (≥0.70), and AUC (=0.98) that were higher than those of the two PA methods. PA combined with DC yielded good sensitivity (=0.96), specificity (=1.0) with a DC criterion (≥0.70), and AUC (=0.99). Advances in computer
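The scale reliability that the Excel module reports alongside the dimension coefficient is Cronbach's α, which can be reproduced in a few lines; the sketch below is a generic implementation (the dimension coefficient itself is the authors' construct and is not reproduced here):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_variances.sum() / total_variance)
```

For perfectly parallel items α approaches 1; in the study such reliability values are read together with the DC criterion (≥ 0.70) when deciding how many factors to retain.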
Directory of Open Access Journals (Sweden)
Le Riche R.
2010-06-01
dimensionality. POD is based on projecting the full-field images on a modal basis, constructed from sample simulations, which can account for the variations of the full field as the elastic constants and other parameters of interest are varied. The fidelity of the decomposition depends on the number of basis vectors used. Typically even complex fields can be accurately represented with no more than a few dozen modes, and for our problem we showed that only four or five modes are sufficient [5]. To further reduce the computational cost of the Bayesian approach we use response surface approximations of the POD coefficients of the fields. We show that 3rd-degree polynomial response surface approximations provide satisfying accuracy. The combination of POD decomposition and response surface methodology brings the computational time of the Bayesian identification down to a few days. The proposed approach is applied to Moiré interferometry full-field displacement measurements from a traction experiment on a plate with a hole. The laminate, with a layup of [45, -45, 0]s, is made of a Toray® T800/3631 graphite/epoxy prepreg. The measured displacement maps are provided in Figure 1. The mean values of the identified properties' joint probability density function are in agreement with previous identifications carried out on the same material. Furthermore, the probability density function also provides the coefficient of variation with which the properties are identified, as well as the correlations between the various properties. We find that while the longitudinal Young's modulus is identified with good accuracy (low standard deviation), the Poisson's ratio is identified with much higher uncertainty. Several of the properties are also found to be correlated. The identified uncertainty structure of the elastic constants (i.e., the variance-covariance matrix) has potential benefits for reliability analyses, by allowing a more accurate description of the input uncertainty. An
Gogu, C.; Yin, W.; Haftka, R.; Ifju, P.; Molimard, J.; Le Riche, R.; Vautrin, A.
2010-06-01
based on projecting the full-field images on a modal basis, constructed from sample simulations, which can account for the variations of the full field as the elastic constants and other parameters of interest are varied. The fidelity of the decomposition depends on the number of basis vectors used. Typically even complex fields can be accurately represented with no more than a few dozen modes, and for our problem we showed that only four or five modes are sufficient [5]. To further reduce the computational cost of the Bayesian approach we use response surface approximations of the POD coefficients of the fields. We show that 3rd-degree polynomial response surface approximations provide satisfying accuracy. The combination of POD decomposition and response surface methodology brings the computational time of the Bayesian identification down to a few days. The proposed approach is applied to Moiré interferometry full-field displacement measurements from a traction experiment on a plate with a hole. The laminate, with a layup of [45, -45, 0]s, is made of a Toray® T800/3631 graphite/epoxy prepreg. The measured displacement maps are provided in Figure 1. The mean values of the identified properties' joint probability density function are in agreement with previous identifications carried out on the same material. Furthermore, the probability density function also provides the coefficient of variation with which the properties are identified, as well as the correlations between the various properties. We find that while the longitudinal Young's modulus is identified with good accuracy (low standard deviation), the Poisson's ratio is identified with much higher uncertainty. Several of the properties are also found to be correlated. The identified uncertainty structure of the elastic constants (i.e., the variance-covariance matrix) has potential benefits to reliability analyses, by allowing a more accurate description of the input uncertainty. An additional
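The POD basis described above is commonly obtained from snapshot simulations via a thin SVD; a small generic sketch (the synthetic snapshot matrix stands in for the sampled displacement fields, and all names are illustrative):

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """Truncated POD basis of a (n_points, n_snapshots) matrix via thin SVD.

    The left singular vectors are the orthonormal modes; the singular values
    indicate how much of the field variation each mode captures."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes], s[:n_modes]

def pod_coefficients(field, basis):
    """Project a full field onto the orthonormal POD basis."""
    return basis.T @ field

# synthetic rank-2 snapshot set standing in for sampled displacement fields
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 12))
basis, s = pod_basis(snapshots, 2)
coeffs = pod_coefficients(snapshots[:, 0], basis)
```

Response surfaces are then fitted to these few coefficients as functions of the elastic constants, which is what makes the Bayesian identification loop affordable.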
Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda
2018-03-01
A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors, the diffusion coefficient (D), the partition coefficient (Kp,f), and the convective mass transfer coefficient (h), govern the sorption and/or desorption kinetics of migrants from polymer films. These three parameters were estimated simultaneously. They provide in-depth insight into the physics of a migration process. The first step was used to find the combination of D, Kp,f, and h that minimized the sum of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation was performed by using the proposed analytical solution containing D, Kp,f, and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the use of this two-step solution. Additional parameter estimation approaches, such as sequential and bootstrap estimation, were also performed to acquire better knowledge about the kinetics of migration. The proposed model successfully provided the initial guesses for D, Kp,f, and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues pertaining to a migration process can be avoided, since these two parameters are correlated. Copyright © 2017 Elsevier Ltd. All rights reserved.
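As a toy illustration of step 1 (searching parameter combinations that minimize the SSE, then solving the remaining parameter by OLS), the sketch below fits a one-exponential sorption curve. This stand-in is NOT Crank's series solution: its single rate constant lumps the effects of D, Kp,f and h together, and all data are synthetic:

```python
import numpy as np

def fit_migration(t, m, k_grid=np.linspace(0.01, 2.0, 400)):
    """Fit m(t) = M_inf * (1 - exp(-k*t)) by grid-searching the rate k;
    at each k the linear parameter M_inf is solved in closed form by OLS."""
    best_sse, best_M, best_k = np.inf, None, None
    for k in k_grid:
        g = 1.0 - np.exp(-k * t)
        M_inf = (g @ m) / (g @ g)          # OLS solution for the linear parameter
        sse = np.sum((m - M_inf * g) ** 2)
        if sse < best_sse:
            best_sse, best_M, best_k = sse, M_inf, k
    return best_M, best_k

t = np.linspace(0.0, 10.0, 30)
data = 5.0 * (1.0 - np.exp(-0.8 * t))      # synthetic noiseless migration data
M_inf, k = fit_migration(t, data)
```

The actual two-step solution refines such initial guesses with OLS on the full analytical solution in D, Kp,f and h, and quantifies uncertainty with sequential and bootstrap estimation.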
International Nuclear Information System (INIS)
Camargo, Iara Maria Carneiro de.
2005-01-01
Studies of partition coefficients show that Kp values of metals can vary by orders of magnitude according to the physico-chemical characteristics of the soil. The Kp is therefore a sensitive parameter in human health risk assessment models. In general, a default value is adopted by environmental agencies, which often does not suitably represent the soil studied and can cause errors in the risk calculation. The objectives of this work were to: evaluate the heavy metal contamination of soils around the Figueira coal-fired power plant; determine the soil Kp of As, Cd, Co, Cr, Cu, Mo, Ni, Pb and Zn as the ratio between the metal concentration obtained by concentrated HNO3 digestion and the metal concentration obtained by extraction with 0.05 mol L-1 EDTA (Kp EDTA) or 0.1 mol L-1 Ca(NO3)2 (Kp Ca(NO3)2); and evaluate the influence of applying different Kp values in the C-Soil human health risk assessment model on the calculated risk. The main conclusions of the present study were: As, Cd, Mo, Pb and Zn were the metal contaminants of the Figueira soil, with As being the pollutant of major human health concern; either Kp Ca(NO3)2 or Kp EDTA values could be used for the human health risk calculation in the Figueira case, except for Pb, and the Kp EDTA values were preferred because of the lower dispersion of their values; and the C-Soil default Kp values could be applied for the human health risk calculation in the Figueira case, in other words, there would be no need to determine region-specific Kp values (Kp EDTA and Kp Ca(NO3)2), except for Pb. (author)
Energy Technology Data Exchange (ETDEWEB)
Marklund, Mette [Parker Institute: Imaging Unit, Frederiksberg Hospital (Denmark)], E-mail: mm@frh.regionh.dk; Christensen, Robin [Parker Institute: Musculoskeletal Statistics Unit, Frederiksberg Hospital (Denmark)], E-mail: robin.christensen@frh.regionh.dk; Torp-Pedersen, Soren [Parker Institute: Imaging Unit, Frederiksberg Hospital (Denmark)], E-mail: stp@frh.regionh.dk; Thomsen, Carsten [Department of Radiology, Rigshospitalet, University of Copenhagen (Denmark)], E-mail: carsten.thomsen@rh.regionh.dk; Nolsoe, Christian P. [Department of Radiology, Koge Hospital (Denmark)], E-mail: cnolsoe@dadlnet.dk
2009-01-15
Purpose: To prospectively investigate the effect on signal intensity (SI) of healthy breast parenchyma on magnetic resonance mammography (MRM) when doubling the contrast dose from 0.1 to 0.2 mmol/kg bodyweight. Materials and methods: Informed consent and institutional review board approval were obtained. Twenty-five healthy female volunteers (median age: 24 years (range: 21-37 years) and median bodyweight: 65 kg (51-80 kg)) completed two dynamic MRM examinations on a 0.6 T open scanner. The inter-examination time was 24 h (23.5-25 h). The following sequences were applied: axial T2W TSE and an axial dynamic T1W FFED, with a total of seven frames. On day 1, an i.v. gadolinium (Gd) bolus injection of 0.1 mmol/kg bodyweight (Omniscan) (low) was administered. On day 2, the contrast dose was increased to 0.2 mmol/kg (high). Injection rate was 2 mL/s (day 1) and 4 mL/s (day 2). Any use of estrogen-containing oral contraceptives (ECOC) was recorded. Post-processing with automated subtraction, manually traced ROI (region of interest) and recording of the SI was performed. A random coefficient model was applied. Results: We found an SI increase of 24.2% and 40% following the low and high dose, respectively (P < 0.0001); corresponding to a 65% (95% CI: 37-99%) SI increase, indicating a moderate saturation. Although not statistically significant (P = 0.06), the results indicated a tendency towards lower maximal SI in the breast parenchyma of ECOC users compared to non-ECOC users. Conclusion: We conclude that the contrast dose can be increased from 0.1 to 0.2 mmol/kg bodyweight if a better contrast/noise relation is desired, but increasing the contrast dose above 0.2 mmol/kg bodyweight is not likely to improve the enhancement substantially due to the moderate saturation observed. Further research is needed to determine the impact of ECOC on the relative enhancement ratio, and further studies are needed to determine if a possible use of ECOC should be considered a compromising
Challenges and Limitations of Applying an Emotion-driven Design Approach on Elderly Users
DEFF Research Database (Denmark)
Andersen, Casper L.; Gudmundsson, Hjalte P.; Achiche, Sofiane
2011-01-01
a competitive advantage for companies. In this paper, the challenges of applying an emotion-driven design approach to elderly people, in order to identify their user needs with respect to walking frames, are discussed. The discussion is based on the experiences and results obtained from the case study...... related to the participants' age and cognitive abilities. The challenges encountered are discussed and guidelines on what should be taken into account to facilitate an emotion-driven design approach for elderly people are proposed....
Ebel, B.A.; Mirus, B.B.; Heppner, C.S.; VanderKwaak, J.E.; Loague, K.
2009-01-01
Distributed hydrologic models capable of simulating fully-coupled surface water and groundwater flow are increasingly used to examine problems in the hydrologic sciences. Several techniques are currently available to couple the surface and subsurface; the two most frequently employed approaches are first-order exchange coefficients (a.k.a., the surface conductance method) and enforced continuity of pressure and flux at the surface-subsurface boundary condition. The effort reported here examines the parameter sensitivity of simulated hydrologic response for the first-order exchange coefficients at a well-characterized field site using the fully coupled Integrated Hydrology Model (InHM). This investigation demonstrates that the first-order exchange coefficients can be selected such that the simulated hydrologic response is insensitive to the parameter choice, while simulation time is considerably reduced. Alternatively, the ability to choose a first-order exchange coefficient that intentionally decouples the surface and subsurface facilitates concept-development simulations to examine real-world situations where the surface-subsurface exchange is impaired. While the parameters comprising the first-order exchange coefficient cannot be directly estimated or measured, the insensitivity of the simulated flow system to these parameters (when chosen appropriately) combined with the ability to mimic actual physical processes suggests that the first-order exchange coefficient approach can be consistent with a physics-based framework. Copyright © 2009 John Wiley & Sons, Ltd.
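The two behaviours described above can be reproduced with a toy model. This is a conceptual sketch, not InHM: two reservoirs exchange water through a first-order flux, and all parameter values are assumptions for illustration.

```python
# Conceptual toy model (not InHM itself): two reservoirs exchanging water via a
# first-order flux q = k*(h_surf - h_sub) under constant rainfall. With k = 0
# the surface and subsurface are fully decoupled; for sufficiently large k the
# response becomes insensitive to the exact value of k, mimicking enforced
# pressure continuity.

def simulate(k, rain=1.0e-5, steps=20000, dt=1.0e-3):
    """Explicit-Euler toy coupling; returns the final subsurface storage."""
    h_surf, h_sub = 0.0, 0.0
    for _ in range(steps):
        q = k * (h_surf - h_sub)       # first-order exchange flux
        h_surf += dt * (rain - q)
        h_sub += dt * q
    return h_sub

decoupled = simulate(0.0)    # k = 0: no exchange at all
mid_k = simulate(10.0)
high_k = simulate(100.0)     # nearly the same as mid_k: insensitive when k is large
```

The relative difference between `mid_k` and `high_k` is well under 1%, which is the "insensitive to the parameter choice" regime the abstract refers to.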
International Nuclear Information System (INIS)
Ghendrih, P.
1986-10-01
We expand the distribution functions on a basis of Hermite functions and obtain a general scheme to compute the local transport coefficients. The magnetic field dependence due to finite Larmor radius effects during the collision process is taken into account
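A Hermite expansion of this kind can be sketched numerically. This is an illustrative toy, not the paper's collisional scheme: it recovers the known expansion coefficients of f(v) = v on physicists' Hermite polynomials; the quadrature range and step are assumed values.

```python
import math

# Illustrative sketch: expand a function on physicists' Hermite polynomials
# H_n, orthogonal with weight exp(-v^2), by numerical quadrature. Moments of a
# distribution function expanded this way feed transport-coefficient
# calculations; here we just recover the expansion of f(v) = v.

def hermite(n, v):
    """H_n(v) via the recurrence H_{n+1} = 2*v*H_n - 2*n*H_{n-1}."""
    h_prev, h_curr = 1.0, 2.0 * v
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h_curr = h_curr, 2.0 * v * h_curr - 2.0 * k * h_prev
    return h_curr

def coefficient(f, n, lo=-8.0, hi=8.0, m=16000):
    """c_n = <f, H_n>_w / ||H_n||^2 using the trapezoidal rule."""
    dv = (hi - lo) / m
    acc = 0.0
    for i in range(m + 1):
        v = lo + i * dv
        w = 0.5 if i in (0, m) else 1.0
        acc += w * f(v) * hermite(n, v) * math.exp(-v * v)
    norm = math.sqrt(math.pi) * 2.0 ** n * math.factorial(n)
    return acc * dv / norm

c1 = coefficient(lambda v: v, 1)   # exact value 1/2, since v = H_1(v)/2
c3 = coefficient(lambda v: v, 3)   # exact value 0
```

Because the weighted integrand decays like exp(-v^2), the trapezoidal rule on a wide interval is effectively spectrally accurate here.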
Responses of mink to auditory stimuli: Prerequisites for applying the ‘cognitive bias’ approach
DEFF Research Database (Denmark)
Svendsen, Pernille Maj; Malmkvist, Jens; Halekoh, Ulrich
2012-01-01
The aim of the study was to determine and validate prerequisites for applying a cognitive (judgement) bias approach to assessing welfare in farmed mink (Neovison vison). We investigated discrimination ability and associative learning ability using auditory cues. The mink (n = 15 females) were...... farmed mink in a judgement bias approach would thus appear to be feasible. However several specific issues are to be considered in order to successfully adapt a cognitive bias approach to mink, and these are discussed....
Bloom, Howard S.; Raudenbush, Stephen W.; Weiss, Michael J.; Porter, Kristin
2017-01-01
The present article considers a fundamental question in evaluation research: "By how much do program effects vary across sites?" The article first presents a theoretical model of cross-site impact variation and a related estimation model with a random treatment coefficient and fixed site-specific intercepts. This approach eliminates…
Measurements of n-octanol/water partition coefficients (KOW) for highly hydrophobic chemicals, i.e., KOW greater than 10^8, are extremely difficult and are rarely made, in part because the vanishingly small concentrations in the water phase require extraordinary analytical sensitivity...
CSIR Research Space (South Africa)
Taylor, NJ
2015-03-01
Full Text Available necessitates the use of water use models. The FAO-56 procedure is a simple, convenient and reproducible method, but as canopy cover and height vary greatly among different orchards, crop coefficients may not be readily transferrable from one orchard to another...
Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny
Maddock, Simon T.; Briscoe, Andrew G.; Wilkinson, Mark; Waeschenbach, Andrea; San Mauro, Diego; Day, Julia J.; Littlewood, D. Tim J.; Foster, Peter G.; Nussbaum, Ronald A.; Gower, David J.
2016-01-01
Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a ‘traditional’ Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing pla...
Directory of Open Access Journals (Sweden)
Moisés Henrique Ramos Pereira
2015-12-01
Full Text Available This article addresses a multimodal approach to automatic emotion recognition in participants of TV newscasts (presenters, reporters, commentators and others) able to assist the study of tension levels in narratives of events in this television genre. The methodology applies state-of-the-art computational methods to process and analyze facial expressions as well as speech signals. The proposed approach contributes to the semiodiscursive study of TV newscasts and their enunciative praxis, assisting, for example, the identification of the communication strategy of these programs. To evaluate its effectiveness, the proposed approach was applied to a video of a report shown on a Brazilian TV newscast of great popularity in the state of Minas Gerais. The experimental results on the recognition of emotions in the facial expressions of telejournalists are promising and are in accordance with the distribution of audiovisual indicators extracted over the TV newscast, demonstrating the potential of the approach to support TV journalistic discourse analysis.
Zedler, Sarah
2011-12-30
We seek to determine if a small number of measurements of upper ocean temperature and currents can be used to make estimates of the drag coefficient that have a smaller range of uncertainty than previously found. We adopt a numerical approach using forward models of the ocean's response to a tropical cyclone, whereby we seek the probability density function of drag coefficient values as a function of wind speed that results from adding realistic levels of noise to the simulated ocean response variables. Allowing the drag coefficient two degrees of freedom, namely its values at 35 and at 45 m/s, we found that the uncertainty in the optimal value is about 20% for levels of instrument noise up to 1 K for a misfit function based on temperature, or 1.0 m/s for a misfit function based on 15 m velocity components. This is within tolerable limits considering the spread of measurement-based drag coefficient estimates. The results are robust for several different instrument arrays; the noise levels do not decrease by much for arrays with more than 40 sensors when the sensor positions are random. Our results suggest that for an ideal case, having a small number of sensors (20-40) in a data assimilation problem would provide sufficient accuracy in the estimated drag coefficient. © 2011 The Oceanographic Society of Japan and Springer.
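The misfit-minimization idea can be sketched with a toy forward model. This is not the paper's ocean model: cooling is assumed to scale as dT = a·Cd·U², the scaling constant and noise level are assumptions, and "observations" are simulated with known noise.

```python
import random

# Toy version of the inverse approach described above: generate noisy
# "observed" mixed-layer cooling at 40 sensors/wind speeds, then recover the
# drag coefficient Cd by minimizing a squared temperature misfit over a grid.

random.seed(42)
a = 1.0                       # assumed scaling constant, K per (m/s)^2 per unit Cd
Cd_true = 2.0e-3
speeds = [20.0 + 25.0 * i / 39.0 for i in range(40)]    # wind speeds, m/s
obs = [a * Cd_true * u * u + random.gauss(0.0, 0.1) for u in speeds]  # 0.1 K noise

def misfit(cd):
    """Sum of squared differences between modeled and observed cooling."""
    return sum((a * cd * u * u - o) ** 2 for u, o in zip(speeds, obs))

grid = [1.0e-3 + 1.0e-5 * k for k in range(201)]        # candidate Cd values
cd_hat = min(grid, key=misfit)
```

With 40 sensors and 0.1 K noise the recovered Cd lands very close to the true value, consistent with the abstract's finding that modest arrays already constrain the estimate well.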
Applying a new ensemble approach to estimating stock status of marine fisheries around the world
DEFF Research Database (Denmark)
Rosenberg, Andrew A.; Kleisner, Kristin M.; Afflerbach, Jamie
2018-01-01
The exploitation status of marine fisheries stocks worldwide is of critical importance for food security, ecosystem conservation, and fishery sustainability. Applying a suite of data-limited methods to global catch data, combined through an ensemble modeling approach, we provide quantitative esti...
Trumpy, E.; Botteghi, S.; Caiozzi, F.; Donato, A.; Gola, G.; Montanari, D.; Pluymaekers, M. P D; Santilano, A.; van Wees, J. D.; Manzella, A.
2016-01-01
In this study a new approach to geothermal potential assessment was set up and applied in four regions in southern Italy. Our procedure, VIGORThermoGIS, relies on the volume method of assessment and uses a 3D model of the subsurface to integrate thermal, geological and petro-physical data. The
Directory of Open Access Journals (Sweden)
Gabriel Amador
2016-05-01
Full Text Available In this work, after reviewing two different ways to solve Riccati systems, we are able to present an extensive list of families of integrable nonlinear Schrödinger (NLS equations with variable coefficients. Using Riccati equations and similarity transformations, we are able to reduce them to the standard NLS models. Consequently, we can construct bright-, dark- and Peregrine-type soliton solutions for NLS with variable coefficients. As an important application of solutions for the Riccati equation with parameters, by means of computer algebra systems, it is shown that the parameters change the dynamics of the solutions. Finally, we test numerical approximations for the inhomogeneous paraxial wave equation by the Crank-Nicolson scheme with analytical solutions found using Riccati systems. These solutions include oscillating laser beams and Laguerre and Gaussian beams.
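The Crank-Nicolson testing mentioned above can be sketched for the linear Schrödinger part of the problem. This is a hedged illustration with assumed grid parameters and initial wave packet; the variable coefficients and nonlinearity of the paper's equations are omitted.

```python
import cmath

# Sketch (standard scheme, assumed parameters): Crank-Nicolson for the linear
# Schrodinger equation i*psi_t = -(1/2)*psi_xx with Dirichlet boundaries. The
# scheme is a Cayley transform of the Hermitian Hamiltonian, so the discrete
# L2 norm is conserved -- a useful correctness check before moving on to the
# variable-coefficient or nonlinear cases.

N, dx, dt = 200, 0.1, 0.01

def thomas(sub, dia, sup, rhs):
    """Solve a tridiagonal linear system (Thomas algorithm, complex-valued)."""
    n = len(rhs)
    cp, dp = [0j] * n, [0j] * n
    cp[0], dp[0] = sup[0] / dia[0], rhs[0] / dia[0]
    for i in range(1, n):
        den = dia[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / den
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / den
    x = [0j] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Gaussian wave packet with a small initial phase gradient (velocity)
psi = [cmath.exp(-(dx * (j - N // 2)) ** 2 + 1j * dx * j) for j in range(N)]

diag = 1 + 1j * dt / (2 * dx * dx)     # (I + i*dt*H/2) diagonal entry
off = -1j * dt / (4 * dx * dx)         # (I + i*dt*H/2) off-diagonal entry

def step(psi):
    rhs = []
    for j in range(N):
        left = psi[j - 1] if j > 0 else 0j
        right = psi[j + 1] if j < N - 1 else 0j
        # right-hand side applies (I - i*dt*H/2), the conjugate operator
        rhs.append(diag.conjugate() * psi[j] - off * (left + right))
    return thomas([off] * N, [diag] * N, [off] * N, rhs)

norm0 = sum(abs(p) ** 2 for p in psi) * dx
for _ in range(50):
    psi = step(psi)
norm1 = sum(abs(p) ** 2 for p in psi) * dx
```

Checking norm conservation after many steps is a cheap sanity test before comparing against the analytical Riccati-system solutions.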
Directory of Open Access Journals (Sweden)
Chenzhong Cao
2008-06-01
Full Text Available The aqueous solubility (logW) and n-octanol/water partition coefficient (logPOW) are important properties for pharmacology, toxicology and medicinal chemistry. Based on an understanding of the dissolution process, a frontier orbital interaction model is suggested in the present paper to describe the solvent-solute interactions of organohalogen compounds, and a general three-parameter model is proposed to predict the aqueous solubility and n-octanol/water partition coefficient of organohalogen compounds involving no hydrogen-bonding interactions. The model has satisfactory prediction accuracy. Furthermore, every term in the model has a very explicit meaning, which should be helpful for understanding the structure-solubility relationship and may provide a new view on the estimation of solubility.
Directory of Open Access Journals (Sweden)
N R Rema
2017-08-01
Full Text Available In this paper, a multiwavelet-based fingerprint compression technique using the set partitioning in hierarchical trees (SPIHT) algorithm with optimized prefilter coefficients is proposed. While wavelet-based progressive compression techniques give a blurred image at lower bit rates due to the lack of high-frequency information, multiwavelets can be used to represent high-frequency information efficiently. The SA4 (Symmetric Antisymmetric) multiwavelet, when combined with SPIHT, reduces the number of nodes during initialization to one quarter of that of SPIHT with wavelets. This reduction in nodes leads to an improvement in PSNR at lower bit rates. The PSNR can be further improved by optimizing the prefilter coefficients; in this work a genetic algorithm (GA) is used for the optimization. Using the proposed technique, there is a considerable improvement in PSNR at lower bit rates compared to existing techniques in the literature. Overall average improvements of 4.23 dB and 2.52 dB for bit rates between 0.01 and 1 have been achieved for the images in the databases FVC 2000 DB1 and FVC 2002 DB3, respectively. The quality of the reconstructed image is better even at higher compression ratios like 80:1 and 100:1, and the level of decomposition required for a multiwavelet is lower than that for a wavelet.
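The PSNR figure of merit used above is the standard one; a minimal implementation follows (the multiwavelet/SPIHT pipeline itself is beyond this sketch, and the sample pixel values are made up).

```python
import math

# PSNR in dB for 8-bit images: 10*log10(peak^2 / MSE). This is the generic
# quality metric used to compare the compressed fingerprints above.

def psnr(original, reconstructed, peak=255.0):
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak * peak / mse)

# toy example: every pixel off by 5 grey levels, so MSE = 25
val = psnr([10.0, 12.0, 14.0, 16.0], [15.0, 17.0, 19.0, 21.0])
```

A gain of a few dB at a fixed bit rate, as reported above, corresponds to a noticeably lower mean squared reconstruction error.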
Blended Risk Approach in Applying PSA Models to Risk-Based Regulations
International Nuclear Information System (INIS)
Dimitrijevic, V. B.; Chapman, J. R.
1996-01-01
In this paper, the authors will discuss a modern approach in applying PSA models in risk-based regulation. The Blended Risk Approach is a combination of traditional and probabilistic processes. It is receiving increased attention in different industries in the U. S. and abroad. The use of the deterministic regulations and standards provides a proven and well understood basis on which to assess and communicate the impact of change to plant design and operation. Incorporation of traditional values into risk evaluation is working very well in the blended approach. This approach is very application specific. It includes multiple risk attributes, qualitative risk analysis, and basic deterministic principles. In blending deterministic and probabilistic principles, this approach ensures that the objectives of the traditional defense-in-depth concept are not compromised and the design basis of the plant is explicitly considered. (author)
International Nuclear Information System (INIS)
Rundberg, R.S.
1992-01-01
The prediction of radionuclide migration for the purpose of assessing the safety of a nuclear waste repository will be based on a collective knowledge of hydrologic and geochemical properties of the surrounding rock and groundwater. This knowledge, along with assumptions about the interactions of radionuclides with groundwater and minerals, forms the scientific basis for a model capable of accurately predicting the repository's performance. Because the interaction of radionuclides in geochemical systems is known to be complicated, several fundamental and empirical approaches to measuring the interaction between radionuclides and the geologic barrier have been developed. The approaches applied to the measurement of sorption involve the use of pure minerals, intact or crushed rock in dynamic and static experiments. Each approach has its advantages and disadvantages. There is no single best method for providing sorption data for performance assessment models which can be applied without invoking information derived from multiple experiments. 53 refs., 12 figs
A whole-of-curriculum approach to improving nursing students' applied numeracy skills.
van de Mortel, Thea F; Whitehair, Leeann P; Irwin, Pauletta M
2014-03-01
Nursing students often perform poorly on numeracy tests. Whilst one-off interventions have been trialled with limited success, a whole-of-curriculum approach may provide a better means of improving applied numeracy skills. The objective of the study is to assess the efficacy of a whole-of-curriculum approach in improving nursing students' applied numeracy skills. Two cycles of assessment, implementation and evaluation of strategies were conducted following a high fail rate in the final applied numeracy examination in a Bachelor of Nursing (BN) programme. Strategies included an early diagnostic assessment followed by referral to remediation, setting the pass mark at 100% for each of six applied numeracy examinations across the programme, and employing a specialist mathematics teacher to provide consistent numeracy teaching. The setting of the study is one Australian university. 1035 second- and third-year nursing students enrolled in four clinical nursing courses (CNC III, CNC IV, CNC V and CNC VI) were included. Data on the percentage of students who obtained 100% in their applied numeracy examination in up to two attempts were collected from CNCs III, IV, V and VI between 2008 and 2011. A four-by-two χ² contingency table was used to determine whether the differences in the proportion of students achieving 100% across two examination attempts in each CNC were significantly different between 2008 and 2011. The percentage of students who obtained 100% correct answers on the applied numeracy examinations was significantly higher in 2011 than in 2008 in CNC III (χ²(3) = 272, p < 0.001), IV (χ²(3) = 94.7, p < 0.001) and VI (χ²(3) = 76.3, p < 0.001). A whole-of-curriculum approach to developing applied numeracy skills in BN students resulted in a substantial improvement in these skills over four years. Copyright © 2013 Elsevier Ltd. All rights reserved.
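The contingency-table statistic reported above can be computed directly. The counts below are made up for illustration (the abstract does not give the raw pass/fail numbers), and only the statistic is computed; the p-value would come from the χ² distribution with (rows−1)(cols−1) degrees of freedom.

```python
# Pearson chi-square statistic for a contingency table (list of rows), of the
# kind used in the four-by-two analysis described above.

def chi_square(table):
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_tot[i] * col_tot[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# toy 2x2 example: pass/fail counts in two years; expected counts are all 15
stat = chi_square([[10, 20], [20, 10]])
```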
Duong, Minh V; Nguyen, Hieu T; Mai, Tam V-T; Huynh, Lam K
2018-01-03
Master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) theory has been shown to be a powerful framework for modeling the kinetic and dynamic behaviors of a complex gas-phase chemical system on a complicated multiple-species, multiple-channel potential energy surface (PES) over a wide range of temperatures and pressures. Derived from the ME time-resolved species profiles, the macroscopic or phenomenological rate coefficients are essential for many reaction engineering applications, including those in combustion and atmospheric chemistry. Therefore, in this study, a least-squares-based approach named Global Minimum Profile Error (GMPE) was proposed and implemented in the MultiSpecies-MultiChannel (MSMC) code (Int. J. Chem. Kinet., 2015, 47, 564) to extract macroscopic rate coefficients for such a complicated system. The capability and limitations of the new approach were discussed in several well-defined test cases.
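The core idea of fitting phenomenological rate coefficients to time-resolved profiles can be shown in miniature. This is a conceptual toy, not GMPE: a single first-order decay with assumed k and initial concentration, fitted by a ln-linear least squares.

```python
import math

# Conceptual miniature of rate-coefficient extraction: fit a phenomenological
# first-order rate coefficient to a time-resolved species profile by least
# squares. GMPE handles many coupled species and channels simultaneously;
# a single exponential decay suffices to show the idea.

def fit_first_order(times, concentrations):
    """Least-squares slope of ln(c) versus t; returns k for c = c0*exp(-k*t)."""
    ys = [math.log(c) for c in concentrations]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys)) \
        / sum((t - tbar) ** 2 for t in times)
    return -slope

times = [0.1 * i for i in range(20)]
profile = [3.0 * math.exp(-2.5 * t) for t in times]   # synthetic ME-style profile
k_fit = fit_first_order(times, profile)
```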
Johnes, P. J.
1996-09-01
A manageable, relatively inexpensive model was constructed to predict the loss of nitrogen and phosphorus from a complex catchment to its drainage system. The model used an export coefficient approach, calculating the total nitrogen (N) and total phosphorus (P) load delivered annually to a water body as the sum of the individual loads exported from each nutrient source in its catchment. The export coefficient modelling approach permits scaling up from plot-scale experiments to the catchment scale, allowing application of findings from field experimental studies at a suitable scale for catchment management. The catchment of the River Windrush, a tributary of the River Thames, UK, was selected as the initial study site. The Windrush model predicted nitrogen and phosphorus loading within 2% of observed total nitrogen load and 0.5% of observed total phosphorus load in 1989. The export coefficient modelling approach was then validated by application in a second research basin, the catchment of Slapton Ley, south Devon, which has markedly different catchment hydrology and land use. The Slapton model was calibrated within 2% of observed total nitrogen load and 2.5% of observed total phosphorus load in 1986. Both models proved sensitive to the impact of temporal changes in land use and management on water quality in both catchments, and were therefore used to evaluate the potential impact of proposed pollution control strategies on the nutrient loading delivered to the River Windrush and Slapton Ley.
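The export-coefficient calculation described above reduces to a weighted sum over sources; a minimal sketch follows. The coefficients and areas are illustrative assumptions, not the Windrush or Slapton values.

```python
# Minimal export-coefficient model: the annual catchment load is the sum over
# nutrient sources of (export coefficient x source extent).

def annual_load(sources):
    """Total nutrient load (kg/yr) from (coefficient kg/ha/yr, area ha) pairs."""
    return sum(coeff * area for coeff, area in sources)

# hypothetical catchment land uses
nitrogen_sources = [
    (30.0, 1200.0),   # arable land
    (10.0, 800.0),    # permanent pasture
    (5.0, 400.0),     # woodland
]
total_n = annual_load(nitrogen_sources)   # 46000.0 kg N/yr
```

Scenario testing, as in the abstract, amounts to editing the (coefficient, area) pairs to reflect a proposed land-use or management change and recomputing the load.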
Uncharted territory: A complex systems approach as an emerging paradigm in applied linguistics
Directory of Open Access Journals (Sweden)
Weideman, Albert J
2009-12-01
Full Text Available Developing a theory of applied linguistics is a top priority for the discipline today. The emergence of a new paradigm - a complex systems approach - in applied linguistics presents us with a unique opportunity to give prominence to the development of a foundational framework for this design discipline. Far from being a mere philosophical exercise, such a framework will find application in the training and induction of new entrants into the discipline within the developing context of South Africa, as well as internationally.
A generalised chemical precipitation modelling approach in wastewater treatment applied to calcite
DEFF Research Database (Denmark)
Mbamba, Christian Kazadi; Batstone, Damien J.; Flores Alsina, Xavier
2015-01-01
, the present study aims to identify a broadly applicable precipitation modelling approach. The study uses two experimental platforms applied to calcite precipitating from synthetic aqueous solutions to identify and validate the model approach. Firstly, dynamic pH titration tests are performed to define...... an Arrhenius-style correction of kcryst. The influence of magnesium (a common and representative added impurity) on kcryst was found to be significant but was considered an optional correction because of a lesser influence as compared to that of temperature. Other variables such as ionic strength and pH were...
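The Arrhenius-style temperature correction of kcryst mentioned above has the standard form sketched below; the reference rate, reference temperature and activation energy are assumed illustrative values, not the study's fitted ones.

```python
import math

# Arrhenius-style correction of the crystallization rate constant:
# k(T) = k_ref * exp(-(Ea/R) * (1/T - 1/T_ref)).

R = 8.314  # gas constant, J/(mol K)

def k_cryst(T, k_ref=1.0e-3, T_ref=298.15, Ea=40.0e3):
    return k_ref * math.exp(-(Ea / R) * (1.0 / T - 1.0 / T_ref))

k_25C = k_cryst(298.15)   # equals k_ref at the reference temperature
k_35C = k_cryst(308.15)   # larger: precipitation speeds up with temperature
```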
DEFF Research Database (Denmark)
Marklund, Mette; Christensen, Robin; Torp-Pedersen, Søren
2007-01-01
obtained. Twenty-five healthy female volunteers (median age: 24 years (range: 21-37 years) and median bodyweight: 65 kg (51-80 kg)) completed two dynamic MRM examinations on a 0.6T open scanner. The inter-examination time was 24 h (23.5-25 h). The following sequences were applied: axial T2W TSE...
Applying a radiomics approach to predict prognosis of lung cancer patients
Emaminejad, Nastaran; Yan, Shiju; Wang, Yunzhi; Qian, Wei; Guan, Yubao; Zheng, Bin
2016-03-01
Radiomics is an emerging technology for decoding tumor phenotype based on quantitative analysis of image features computed from radiographic images. In this study, we applied the Radiomics concept to investigate the association among the CT image features of lung tumors, which are either quantitatively computed or subjectively rated by radiologists, and two genomic biomarkers, namely protein expression of the excision repair cross-complementing 1 (ERCC1) gene and a regulatory subunit of ribonucleotide reductase (RRM1), in predicting disease-free survival (DFS) of lung cancer patients after surgery. An image dataset involving 94 patients was used. Among them, 20 had cancer recurrence within 3 years, while 74 patients remained disease-free. After tumor segmentation, 35 image features were computed from CT images. Using the Weka data mining software package, we selected 10 non-redundant image features. Applying a SMOTE algorithm to generate synthetic data to balance the case numbers in the two DFS ("yes" and "no") groups and a leave-one-case-out training/testing method, we optimized and compared a number of machine learning classifiers using (1) quantitative image (QI) features, (2) subjectively rated (SR) features, and (3) genomic biomarkers (GB). Data analyses showed relatively low correlation among the QI, SR and GB prediction results (Pearson correlation coefficients < 0.5). Among them, using QI yielded the highest performance.
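The class-balancing step above can be sketched in miniature. Note the hedge: true SMOTE interpolates between a minority sample and one of its k nearest neighbours (the study used it within Weka); this pure-Python illustration interpolates between random minority pairs, and the feature vectors are made up.

```python
import random

# Minimal SMOTE-style oversampling sketch: new minority-class samples are
# linear interpolations between existing minority samples, so the classifier
# sees balanced classes without duplicating points exactly.

random.seed(0)

def smote_like(minority, n_new):
    """Generate n_new synthetic samples by linear interpolation."""
    synthetic = []
    for _ in range(n_new):
        x = random.choice(minority)
        neighbour = random.choice(minority)
        u = random.random()              # interpolation fraction in [0, 1)
        synthetic.append(tuple(a + u * (b - a) for a, b in zip(x, neighbour)))
    return synthetic

# toy 2-feature vectors for the minority ("recurrence") class
recurrence_cases = [(0.20, 1.40), (0.30, 1.10), (0.25, 1.30)]
extra = smote_like(recurrence_cases, 5)
```

Each synthetic point lies on a segment between two real minority samples, so it stays inside the convex hull of the original class.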
An extended risk assessment approach for chemical plants applied to a study related to pipe ruptures
International Nuclear Information System (INIS)
Milazzo, Maria Francesca; Aven, Terje
2012-01-01
Risk assessments and Quantitative Risk Assessment (QRA) in particular have been used in the chemical industry for many years to support decision-making on the choice of arrangements and measures associated with chemical processes, transportation and storage of dangerous substances. The assessments have been founded on a risk perspective seeing risk as a function of frequency of events (probability) and associated consequences. In this paper we point to the need for extending this approach to place a stronger emphasis on uncertainties. A recently developed risk framework designed to better reflect such uncertainties is presented and applied to a chemical plant and specifically the analysis of accidental events related to the rupture of pipes. Two different ways of implementing the framework are presented, one based on the introduction of probability models and one without. The differences between the standard approach and the extended approaches are discussed from a theoretical point of view as well as from a practical risk analyst perspective.
An approach of optimal sensitivity applied in the tertiary loop of the automatic generation control
Energy Technology Data Exchange (ETDEWEB)
Belati, Edmarcio A. [CIMATEC - SENAI, Salvador, BA (Brazil); Alves, Dilson A. [Electrical Engineering Department, FEIS, UNESP - Sao Paulo State University (Brazil); da Costa, Geraldo R.M. [Electrical Engineering Department, EESC, USP - Sao Paulo University (Brazil)
2008-09-15
This paper proposes an optimal-sensitivity approach applied in the tertiary loop of automatic generation control. The approach is based on the theorem of non-linear perturbation. From an optimal operation point obtained by an optimal power flow, a new optimal operation point is determined directly after a perturbation, i.e., without the need for an iterative process. This new optimal operation point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the technique of optimal sensitivity, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of automatic generation control, referred to as the power sensitivity mode. Test results are presented to show the good performance of this approach. (author)
DEFF Research Database (Denmark)
Blekhman, I. I.; Sorokin, V. S.
2016-01-01
A general approach to study effects produced by oscillations applied to nonlinear dynamic systems is developed. It implies a transition from the initial governing equations of motion to much simpler equations describing only the main slow component of motions (the vibro-transformed dynamics equations). The approach is named the oscillatory strobodynamics, since motions are perceived as under a stroboscopic light. The vibro-transformed dynamics equations comprise terms that capture the averaged effect of oscillations. The method of direct separation of motions appears to be an efficient ... e.g., the requirement for the involved nonlinearities to be weak. The approach is illustrated by several relevant examples from various fields of science, e.g., mechanics, physics, chemistry and biophysics.
Wagner, Bjoern; Fischer, Holger; Kansy, Manfred; Seelig, Anna; Assmus, Frauke
2015-02-20
Here we present a miniaturized assay, referred to as the Carrier-Mediated Distribution System (CAMDIS), for fast and reliable measurement of octanol/water distribution coefficients, log D(oct). By introducing a filter support for octanol, phase separation from water is facilitated and the tendency of emulsion formation at the interface is reduced. A guideline for the best practice of CAMDIS is given, describing a strategy to manage drug adsorption at the filter-supported octanol/buffer interface. We validated the assay on a set of 52 structurally diverse drugs with known shake-flask log D(oct) values. Excellent agreement with literature data (r² = 0.996, standard error of estimate, SEE = 0.111), high reproducibility (standard deviation, SD < 0.1 log D(oct) units), minimal sample consumption (10 μL of 100 μM DMSO stock solution) and a broad analytical range (log D(oct) = -0.5 to 4.2) make CAMDIS a valuable tool for the high-throughput assessment of log D(oct). Copyright © 2014 Elsevier B.V. All rights reserved.
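The underlying log D(oct) arithmetic, including a simple mass-balance correction for material adsorbed at the filter interface, can be sketched as follows (the concentrations, volumes and adsorption estimate are invented for illustration; this is not the CAMDIS protocol itself):

```python
import math

# log D is the log10 ratio of octanol to buffer concentrations at equilibrium.
def log_d(c_oct, c_aq):
    return math.log10(c_oct / c_aq)

c_total = 100.0        # uM dosed (e.g., from a 100 uM DMSO stock) -- invented
c_aq = 0.8             # uM measured in buffer after equilibration -- invented
c_adsorbed = 4.2       # uM estimated stuck at the filter interface -- invented
v_oct, v_aq = 1.0, 1.0 # equal phase volumes in this toy example

# Octanol concentration recovered by mass balance rather than measured directly:
c_oct = (c_total - c_aq * v_aq - c_adsorbed) / v_oct
```

The mass-balance step illustrates why managing adsorption matters: any drug lost at the interface would otherwise be misattributed to the octanol phase and inflate log D.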
Addressing dependability by applying an approach for model-based risk assessment
International Nuclear Information System (INIS)
Gran, Bjorn Axel; Fredriksen, Rune; Thunem, Atoosa P.-J.
2007-01-01
This paper describes how an approach for model-based risk assessment (MBRA) can be applied to address different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources, their roles with regard to intentional system aspects, such as system functions, component behaviours and intercommunications, must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast to this, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and through a series of seven trials provided a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how the methodology can be applied as part of a trust case development.
A new approach for structural health monitoring by applying anomaly detection on strain sensor data
Trichias, Konstantinos; Pijpers, Richard; Meeuwissen, Erik
2014-03-01
Structural Health Monitoring (SHM) systems help to monitor critical infrastructures (bridges, tunnels, etc.) remotely and provide up-to-date information about their physical condition. In addition, they help to predict the structure's life and required maintenance in a cost-efficient way. Typically, inspection data gives insight into the structural health. The global structural behavior, and predominantly the structural loading, is generally measured with vibration and strain sensors. Acoustic emission sensors are increasingly used for measuring global crack activity near critical locations. In this paper, we present a procedure for local structural health monitoring by applying Anomaly Detection (AD) on data from strain sensors applied in the expected crack path. Sensor data is analyzed by automatic anomaly detection in order to find crack activity at an early stage. This approach targets the monitoring of critical structural locations, such as welds, near which strain sensors can be applied during construction, and/or locations with limited inspection possibilities during structural operation. We investigate several anomaly detection techniques to detect changes in statistical properties, indicating structural degradation. The most effective one is a novel polynomial fitting technique, which tracks slow changes in sensor data. Our approach has been tested on a representative test structure (bridge deck) in a lab environment, under constant and variable amplitude fatigue loading. In both cases, the evolving cracks at the monitored locations were successfully detected, autonomously, by our AD monitoring tool.
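A minimal sketch of the sliding-window polynomial-fitting idea, here with a degree-1 fit whose slope is tracked against a threshold (the window length, threshold and synthetic strain signal are assumptions, not the authors' tuned method):

```python
# Fit a line to a sliding window of strain samples and flag the first window
# whose slope drifts beyond a threshold, signalling slow structural change.
def linear_slope(y):
    """Least-squares slope of y against sample index 0..n-1 (closed form)."""
    n = len(y)
    xm = (n - 1) / 2.0
    ym = sum(y) / n
    num = sum((i - xm) * (yi - ym) for i, yi in enumerate(y))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

def detect_drift(samples, window=20, threshold=0.05):
    """Return the start index of the first window exceeding the slope threshold."""
    for start in range(0, len(samples) - window + 1):
        if abs(linear_slope(samples[start:start + window])) > threshold:
            return start
    return None

# Synthetic strain signal: stable baseline, then a slow drift (crack growth).
baseline = [100.0 for _ in range(100)]
drift = [100.0 + 0.5 * i for i in range(100)]
alarm_at = detect_drift(baseline + drift)
```

The detector stays silent on the stationary baseline and fires once windows overlap the drifting segment, which is the "slow change" behaviour the paper attributes to its polynomial fitting technique.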
Addressing dependability by applying an approach for model-based risk assessment
Energy Technology Data Exchange (ETDEWEB)
Gran, Bjorn Axel [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: bjorn.axel.gran@hrp.no; Fredriksen, Rune [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: rune.fredriksen@hrp.no; Thunem, Atoosa P.-J. [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: atoosa.p-j.thunem@hrp.no
2007-11-15
This paper describes how an approach for model-based risk assessment (MBRA) can be applied to address different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources, their roles with regard to intentional system aspects, such as system functions, component behaviours and intercommunications, must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast to this, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and through a series of seven trials provided a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how the methodology can be applied as part of a trust case development.
National Research Council Canada - National Science Library
Noble, Chris
2004-01-01
The Hydrogeomorphic (HGM) Approach is a method for developing functional indices and the protocols used to apply these indices to the assessment of wetland functions at a site-specific scale. The HGM Approach was initially...
Energy Technology Data Exchange (ETDEWEB)
Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica
2015-07-01
In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, delayed neutron precursors and temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the subsequent intervals. With the application of the Polynomial Approximation Method it is possible to overcome the stiffness of the equations. In this way, one varies the time step size of the Polynomial Approach Method and performs an analysis of the precision and computational time. Moreover, we compare the method with different orders of approximation (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
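The recursive power-series stepping can be illustrated for the simplest case of one delayed group, constant reactivity and no temperature feedback (the kinetics parameters below are typical textbook values, not the paper's):

```python
# Minimal Taylor-series stepping for one-group point kinetics:
#   dn/dt = (rho - beta)/Lambda * n + lam * C
#   dC/dt = beta/Lambda * n - lam * C
BETA, LAM, GEN = 0.0065, 0.08, 1.0e-3   # beta, decay constant, generation time

def taylor_step(n, c, rho, h, order=6):
    """Advance (n, C) by h using recursively generated Taylor coefficients."""
    dn, dc = [n], [c]
    for k in range(order):
        dn.append((rho - BETA) / GEN * dn[k] + LAM * dc[k])
        dc.append(BETA / GEN * dn[k] - LAM * dc[k])
    fact, hn, new_n, new_c = 1.0, 1.0, 0.0, 0.0
    for k in range(order + 1):          # sum the truncated power series
        new_n += dn[k] * hn / fact
        new_c += dc[k] * hn / fact
        hn *= h
        fact *= (k + 1)
    return new_n, new_c

def solve(rho, t_end, h=1e-3):
    n, c = 1.0, BETA / (LAM * GEN)      # start from the critical equilibrium
    t = 0.0
    while t < t_end - 1e-12:            # analytic continuation interval by interval
        n, c = taylor_step(n, c, rho, h)
        t += h
    return n
```

At zero reactivity the equilibrium is preserved exactly, while a small positive reactivity produces the expected growth; varying `h` and `order` reproduces the precision/cost trade-off the abstract discusses.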
International Nuclear Information System (INIS)
Tumelero, Fernanda; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana
2015-01-01
In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, delayed neutron precursors and temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the subsequent intervals. With the application of the Polynomial Approximation Method it is possible to overcome the stiffness of the equations. In this way, one varies the time step size of the Polynomial Approach Method and performs an analysis of the precision and computational time. Moreover, we compare the method with different orders of approximation (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
Carnero, María Carmen; Gómez, Andrés
2016-04-23
Healthcare organizations have far greater maintenance needs for their medical equipment than other organizations, as much of it is used directly with patients. However, the literature on asset management in healthcare organizations is very limited. The aim of this research is to provide a more rational application of maintenance policies, leading to an increase in quality of care. This article describes a multicriteria decision-making approach which integrates Markov chains with the multicriteria Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH), to facilitate the best choice of combination of maintenance policies by using the judgements of a multi-disciplinary decision group. The proposed approach takes into account the level of acceptance that a given alternative would have among professionals. It also takes into account criteria related to cost, quality of care and impact of care cover. This multicriteria approach is applied to four dialysis subsystems: patients infected with hepatitis C, infected with hepatitis B, acute and chronic; in all cases, the maintenance strategy obtained consists of applying corrective and preventive maintenance plus two reserve machines. The added value in decision-making practices from this research comes from: (i) integrating the use of Markov chains to obtain the alternatives to be assessed by a multicriteria methodology; (ii) proposing the use of MACBETH to make rational decisions on asset management in healthcare organizations; (iii) applying the multicriteria approach to select a set or combination of maintenance policies in four dialysis subsystems of a healthcare organization. In the multicriteria decision-making approach proposed, economic criteria have been used, related to the quality of care which is desired for patients (availability), and the acceptance that each alternative would have considering the maintenance and healthcare resources which exist in the organization, with the inclusion of a
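The Markov-chain side of the approach can be sketched as a steady-state availability computation for a single machine (the states and transition probabilities below are invented for illustration; the paper's models are richer):

```python
# Toy discrete-time Markov model of one dialysis machine with three states:
# 0 = operating, 1 = corrective maintenance, 2 = preventive maintenance.
# The transition probabilities are invented for illustration only.
P = [
    [0.95, 0.03, 0.02],   # operating -> {operating, corrective, preventive}
    [0.60, 0.40, 0.00],   # corrective repair finishes with prob 0.60
    [0.80, 0.00, 0.20],   # preventive service finishes with prob 0.80
]

def steady_state(P, iters=2000):
    """Power iteration: propagate a distribution until it stops changing."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return pi

pi = steady_state(P)
availability = pi[0]   # long-run fraction of time the machine is usable
```

The long-run availability obtained this way is the kind of quantitative input that can then feed a multicriteria comparison of maintenance policy combinations.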
Theoretical investigation of the extinction coefficient of magnetic fluid
Energy Technology Data Exchange (ETDEWEB)
Fang Xiaopeng; Xuan Yimin, E-mail: ymxuan@mail.njust.edu.cn; Li Qiang [Nanjing University of Science and Technology, School of Energy and Power Engineering (China)
2013-05-15
A new theoretical approach for calculating the extinction coefficient of a magnetic fluid is proposed, based on molecular dynamics (MD) simulation and the T-matrix method. By means of this approach, the influence of particle diameter, particle volume fraction, and external magnetic field on the extinction coefficient of the magnetic fluid is investigated. The results show that the extinction coefficient of the magnetic fluid increases linearly with the particle volume fraction. For a given particle volume fraction, the extinction coefficient increases with the particle diameter, which varies from 5 to 20 nm. When a uniform external magnetic field is applied to the magnetic fluid, the extinction coefficient presents an anisotropic feature. These results agree well with reported experimental results. The proposed approach is applicable to investigating the optical properties of magnetic fluids.
Tinoco, R. O.; Goldstein, E. B.; Coco, G.
2016-12-01
We use a machine learning approach to seek accurate, physically sound predictors for two relevant parameters of open-channel vegetated flows: mean velocities and drag coefficients. A genetic programming algorithm is used to find a robust relationship between properties of the vegetation and flow parameters. We use data published from several laboratory experiments covering a broad range of conditions to obtain: a) in the case of mean flow, an equation that matches the accuracy of other predictors from recent literature while showing a less complex structure, and b) for drag coefficients, a predictor that relies on both single-element and array parameters. We investigate different criteria for dataset size and data selection to evaluate their impact on the resulting predictor, as well as simple strategies to obtain only dimensionally consistent equations and avoid the need for dimensional coefficients. The results show that a proper methodology can deliver physically sound models representative of the processes involved, such that genetic programming and machine learning techniques can be used as powerful tools to study complicated phenomena and develop not only purely empirical but "hybrid" models, coupling results from machine learning methodologies into physics-based models.
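Full genetic programming is beyond a short sketch, but its core loop of proposing candidate functional forms, fitting them to data, and keeping the best can be illustrated with a tiny model-selection example (the candidate forms and the synthetic Cd = 24/Re + 0.4 data are assumptions for illustration, not the paper's dataset or algorithm):

```python
# Enumerate a few candidate functional forms for a drag coefficient Cd(Re),
# fit each by linear least squares, and keep the best-scoring one.
def fit2(f, xs, ys):
    """Least squares for y ~ a*f(x) + b via 2x2 normal equations."""
    n = len(xs)
    s_f = sum(f(x) for x in xs); s_ff = sum(f(x) ** 2 for x in xs)
    s_y = sum(ys); s_fy = sum(f(x) * y for x, y in zip(xs, ys))
    det = n * s_ff - s_f ** 2
    a = (n * s_fy - s_f * s_y) / det
    b = (s_y * s_ff - s_f * s_fy) / det
    return a, b

candidates = {
    "a/Re + b":        lambda re: 1.0 / re,
    "a*Re + b":        lambda re: re,
    "a/sqrt(Re) + b":  lambda re: re ** -0.5,
}

Re = [10.0, 20.0, 50.0, 100.0, 200.0]
Cd = [24.0 / r + 0.4 for r in Re]      # noise-free synthetic "measurements"

def score(name):
    """Sum of squared residuals of the fitted candidate form."""
    a, b = fit2(candidates[name], Re, Cd)
    return sum((a * candidates[name](r) + b - y) ** 2 for r, y in zip(Re, Cd))

best = min(candidates, key=score)
```

Genetic programming replaces the fixed candidate list with an evolving population of expression trees, but the fit-and-score selection pressure is the same.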
DEFF Research Database (Denmark)
Krutulyte, Rasa; Grunert, Klaus G.; Scholderer, Joachim
This paper presents the results of a qualitative pilot study that aimed to uncover Danish consumers' motives for choosing health food. Schwarzer's (1992) health action process approach (HAPA) was applied to understand the process by which people choose health products. The research focused on the role of behavioural intention predictors such as risk perception, outcome expectations and self-efficacy. The model proved to be a useful framework for understanding consumers' choice of health food and is relevant for further application to dietary choice issues.
Improving the efficiency of a chemotherapy day unit: applying a business approach to oncology.
van Lent, Wineke A M; Goedbloed, N; van Harten, W H
2009-03-01
To improve the efficiency of a hospital-based chemotherapy day unit (CDU). The CDU was benchmarked against two other CDUs to identify their attainable performance levels for efficiency, and the causes for differences. Furthermore, an in-depth analysis using a business approach, called lean thinking, was performed. An integrated set of interventions was implemented, among them a new planning system. The results were evaluated using pre- and post-measurements. We observed 24% growth in treatments and bed utilisation, a 12% increase in staff member productivity and an 81% reduction in overtime. The method improved the process design and led to increased efficiency and more timely delivery of care. Thus, the business approaches, which were adapted for healthcare, were successfully applied. The method may serve as an example for other oncology settings with problems concerning waiting times, patient flow or lack of beds.
Miles, Rachael E H; Reid, Jonathan P; Riipinen, Ilona
2012-11-08
We compare and contrast measurements of the mass accommodation coefficient of water on a water surface made using ensemble and single particle techniques under conditions of supersaturation and subsaturation, respectively. In particular, we consider measurements made using an expansion chamber, a continuous flow streamwise thermal gradient cloud condensation nuclei chamber, the Leipzig Aerosol Cloud Interaction Simulator, aerosol optical tweezers, and electrodynamic balances. Although this assessment is not intended to be comprehensive, these five techniques are complementary in their approach and give values that span the range from near 0.1 to 1.0 for the mass accommodation coefficient. We use the same semianalytical treatment to assess the sensitivities of the measurements made by the various techniques to thermophysical quantities (diffusion constants, thermal conductivities, saturation pressure of water, latent heat, and solution density) and experimental parameters (saturation value and temperature). This represents the first effort to assess and compare measurements made by different techniques to attempt to reduce the uncertainty in the value of the mass accommodation coefficient. Broadly, we show that the measurements are consistent within the uncertainties inherent to the thermophysical and experimental parameters and that the value of the mass accommodation coefficient should be considered to be larger than 0.5. Accurate control and measurement of the saturation ratio is shown to be critical for a successful investigation of the surface transport kinetics during condensation/evaporation. This invariably requires accurate knowledge of the partial pressure of water, the system temperature, the droplet curvature and the saturation pressure of water. Further, the importance of including and quantifying the transport of heat in interpreting droplet measurements is highlighted; the particular issues associated with interpreting measurements of condensation
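The sensitivity of condensational mass flux to the accommodation coefficient can be illustrated with the widely used Fuchs-Sutugin transition-regime correction (the Knudsen number below is a hypothetical value; the papers' sensitivity analyses are far more detailed):

```python
# Fuchs-Sutugin correction beta(Kn, alpha) to the continuum mass flux,
# where Kn is the Knudsen number and alpha the mass accommodation coefficient.
def fuchs_sutugin(kn, alpha):
    """Transition-regime correction factor for condensation/evaporation flux."""
    return (1.0 + kn) / (1.0 + (4.0 / (3.0 * alpha) + 0.377) * kn
                         + (4.0 / (3.0 * alpha)) * kn ** 2)

kn = 0.67          # hypothetical Knudsen number for a sub-micron droplet
# Flux suppression when alpha = 0.1 relative to alpha = 1.0:
flux_ratio = fuchs_sutugin(kn, 0.1) / fuchs_sutugin(kn, 1.0)
```

For sub-micron droplets the predicted flux differs severalfold between alpha near 0.1 and alpha near 1.0, which is why constraining the coefficient above 0.5, as the paper argues, matters for cloud droplet growth calculations.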
Applying a synthetic approach to the resilience of Finnish reindeer herding as a changing livelihood
Directory of Open Access Journals (Sweden)
Simo Sarkki
2016-12-01
Full Text Available Reindeer herding is an emblematic livelihood for Northern Finland, culturally important for local people and valuable in tourism marketing. We examine the livelihood resilience of Finnish reindeer herding by narrowing the focus of general resilience on social-ecological systems (SESs) to a specific livelihood while also acknowledging wider contexts in which reindeer herding is embedded. The questions for specified resilience can be combined with the applied DPSIR approach (Drivers; Pressures: resilience to what; State: resilience of what; Impacts: resilience for whom; Responses: resilience by whom and how). This paper is based on a synthesis of the authors' extensive anthropological fieldwork on reindeer herding and other land uses in Northern Finland. Our objective is to synthesize various opportunities and challenges that underpin the resilience of reindeer herding as a viable livelihood. The DPSIR approach, applied here as a three-step procedure, helps focus the analysis on different components of the SES and their dynamic interactions. First, various land use-related DPSIR factors and their relations (synergies and trade-offs) to reindeer herding are mapped. Second, detailed DPSIR factors underpinning the resilience of reindeer herding are identified. Third, examples of interrelations between DPSIR factors are explored, revealing the key dynamics between Pressures, State, Impacts, and Responses related to the livelihood resilience of reindeer herding. In the Discussion section, we recommend that future applications of the DPSIR approach in examining livelihood resilience should (1) address cumulative pressures, (2) consider the state dimension as more tuned toward the social side of the SES, (3) assess both the negative and positive impacts of environmental change on the examined livelihood by a combination of science-led top-down and participatory bottom-up approaches, and (4) examine and propose governance solutions as well as local adaptations by
Boehm, K; Rösgen, J; Hinz, H-J
2006-02-15
A new method is described that permits the continuous and synchronous determination of heat capacity and expansibility data. We refer to it as pressure-modulated differential scanning calorimetry (PMDSC), as it involves a standard DSC temperature scan and superimposes on it a pressure modulation of preselected format. The power of the method is demonstrated using salt solutions for which the most accurate heat capacity and expansibility data exist in the literature. As the PMDSC measurements could reproduce the parameters with high accuracy and precision, we applied the method also to an aqueous suspension of multilamellar DSPC vesicles for which no expansibility data had been reported previously for the transition region. Excellent agreement was obtained between data from PMDSC and values from independent direct differential scanning densimetry measurements. The basic theoretical background of the method when using sawtooth-like pressure ramps is given under Supporting Information, and a complete statistical thermodynamic derivation of the general equations is presented in the accompanying paper.
Tavakoli Taba, Seyedamir; Hossain, Liaquat; Heard, Robert; Brennan, Patrick; Lee, Warwick; Lewis, Sarah
2017-03-01
Rationale and objectives: Observer performance has been widely studied through examining the characteristics of individuals. Applying a systems perspective to understanding the system's output requires a study of the interactions between observers. This research explains a mixed-methods approach applying social network analysis (SNA) together with the more traditional approach of examining personal/individual characteristics in understanding observer performance in mammography. Materials and Methods: Using social network theories and measures to understand observer performance, we designed a social networks survey instrument for collecting personal and network data about observers involved in mammography performance studies. We present the results of a study by our group in which 31 Australian breast radiologists originally reviewed 60 mammographic cases (comprising 20 abnormal and 40 normal cases) and then completed an online questionnaire about their social networks and personal characteristics. A jackknife free response operating characteristic (JAFROC) method was used to measure the performance of the radiologists. JAFROC was tested against various personal and network measures to verify the theoretical model. Results: The results from this study suggest a strong association between social networks and observer performance for Australian radiologists. Network factors accounted for 48% of the variance in observer performance, in comparison to 15.5% for the personal characteristics for this study group. Conclusion: This study suggests a strong new direction for research into improving observer performance. Future studies of observer performance should consider social networks' influence as part of their research paradigm, with equal or greater vigour than the traditional constructs of personal characteristics.
Perkins, Matthew B; Jensen, Peter S; Jaccard, James; Gollwitzer, Peter; Oettingen, Gabriele; Pappadopulos, Elizabeth; Hoagwood, Kimberly E
2007-03-01
Despite major recent research advances, large gaps exist between accepted mental health knowledge and clinicians' real-world practices. Although hundreds of studies have successfully utilized basic behavioral science theories to understand, predict, and change patients' health behaviors, the extent to which these theories, most notably the theory of reasoned action (TRA) and its extension, the theory of planned behavior (TPB), have been applied to understand and change clinician behavior is unclear. This article reviews the application of theory-driven approaches to understanding and changing clinician behaviors. MEDLINE and PsycINFO databases were searched, along with bibliographies, textbooks on health behavior or public health, and references from experts, to find article titles that describe theory-driven approaches (TRA or TPB) to understanding and modifying health professionals' behavior. A total of 19 articles detailing 20 studies described the use of TRA or TPB and clinicians' behavior. Eight articles describe the use of TRA or TPB with physicians, four relate to nurses, three relate to pharmacists, and two relate to health workers. Only two articles applied TRA or TPB to mental health clinicians. The body of work shows that different constructs of TRA or TPB predict intentions and behavior among different groups of clinicians and for different behaviors and guidelines. The number of studies on this topic is extremely limited, but they offer a rationale and a direction for future research as well as a theoretical basis for increasing the specificity and efficiency of clinician-targeted interventions.
Malandraki, Georgia A; Rajappa, Akila; Kantarcigil, Cagla; Wagner, Elise; Ivey, Chandra; Youse, Kathleen
2016-04-01
To examine the effects of the Intensive Dysphagia Rehabilitation approach on physiological and functional swallowing outcomes in adults with neurogenic dysphagia. Intervention study; before-after trial with 4-week follow-up through an online survey. Outpatient university clinics. A consecutive sample of subjects (N=10) recruited from outpatient university clinics. All subjects were diagnosed with adult-onset neurologic injury or disease. Dysphagia diagnosis was confirmed through clinical and endoscopic swallowing evaluations. No subjects withdrew from the study. Participants completed the 4-week Intensive Dysphagia Rehabilitation protocol, including 2 oropharyngeal exercise regimens, a targeted swallowing routine using salient stimuli, and caregiver participation. Treatment included hourly sessions twice per week and home practice for approximately 45 min/d. Outcome measures assessed pre- and posttreatment included airway safety using an 8-point Penetration Aspiration Scale, lingual isometric pressures, self-reported swallowing-related quality of life (QOL), and level of oral intake. Also, patients were monitored for adverse dysphagia-related effects. QOL and adverse effects were also assessed at the 4-week follow-up (online survey). The Intensive Dysphagia Rehabilitation approach was effective in improving maximum and mean Penetration Aspiration Scale scores. The approach was safe and improved physiological and some functional swallowing outcomes in our sample; however, further investigation is needed before it can be widely applied. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Coefficient Alpha: A Reliability Coefficient for the 21st Century?
Yang, Yanyun; Green, Samuel B.
2011-01-01
Coefficient alpha is almost universally applied to assess reliability of scales in psychology. We argue that researchers should consider alternatives to coefficient alpha. Our preference is for structural equation modeling (SEM) estimates of reliability because they are informative and allow for an empirical evaluation of the assumptions…
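For reference, coefficient alpha itself is a short computation from an items-by-persons score table (the score data below are made up for illustration):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total scores)).
def cronbach_alpha(scores):
    """scores: list of per-person lists of item scores (all the same length)."""
    k = len(scores[0])
    def var(xs):                       # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([p[i] for p in scores]) for i in range(k)]
    total_var = var([sum(p) for p in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items give alpha = 1; noisier items give alpha < 1.
perfect = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
noisy = [[1, 2, 1], [2, 1, 3], [3, 4, 2], [4, 3, 4]]
```

The SEM-based alternatives the authors prefer relax alpha's assumptions (e.g., equal item loadings), but alpha remains the baseline against which they are compared.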
The mapping approach in the path integral formalism applied to curve-crossing systems
International Nuclear Information System (INIS)
Novikov, Alexey; Kleinekathoefer, Ulrich; Schreiber, Michael
2004-01-01
The path integral formalism in a combined phase-space and coherent-state representation is applied to the problem of curve-crossing dynamics. The system of interest is described by two coupled one-dimensional harmonic potential energy surfaces interacting with a heat bath consisting of harmonic oscillators. The mapping approach is used to rewrite the Lagrangian function of the electronic part of the system. Using the Feynman-Vernon influence-functional method the bath is eliminated whereas the non-Gaussian part of the path integral is treated using the generating functional for the electronic trajectories. The dynamics of a Gaussian wave packet is analyzed along a one-dimensional reaction coordinate within a perturbative treatment for a small coordinate shift between the potential energy surfaces
Software engineering techniques applied to agricultural systems an object-oriented and UML approach
Papajorgji, Petraq J
2014-01-01
Software Engineering Techniques Applied to Agricultural Systems presents cutting-edge software engineering techniques for designing and implementing better agricultural software systems based on the object-oriented paradigm and the Unified Modeling Language (UML). The focus is on the presentation of rigorous step-by-step approaches for modeling flexible agricultural and environmental systems, starting with a conceptual diagram representing elements of the system and their relationships. Furthermore, diagrams such as sequential and collaboration diagrams are used to explain the dynamic and static aspects of the software system. This second edition includes: a new chapter on Object Constraint Language (OCL), a new section dedicated to the Model-VIEW-Controller (MVC) design pattern, new chapters presenting details of two MDA-based tools – the Virtual Enterprise and Olivia Nova, and a new chapter with exercises on conceptual modeling. It may be highly useful to undergraduate and graduate students as t...
Pointing and the Evolution of Language: An Applied Evolutionary Epistemological Approach
Directory of Open Access Journals (Sweden)
Nathalie Gontier
2013-07-01
Full Text Available Numerous evolutionary linguists have indicated that human pointing behaviour might be associated with the evolution of language. At an ontogenetic level, and in normal individuals, pointing develops spontaneously, and the onset of human pointing precedes as well as facilitates phases in speech and language development. Phylogenetically, pointing behaviour might have preceded and facilitated the evolutionary origin of both gestural and vocal language. Contrary to wild non-human primates, captive and human-reared non-human primates also demonstrate pointing behaviour. In this article, we analyse from a meta-level the debates on pointing and the role it might have played in language evolution. From within an Applied Evolutionary Epistemological approach, we examine how exactly we can determine whether pointing has been a unit, a level or a mechanism in language evolution.
Directory of Open Access Journals (Sweden)
Thomas Heckelei
2012-05-01
Full Text Available This paper reviews and discusses the more recent literature and application of Positive Mathematical Programming in the context of agricultural supply models. Specifically, advances in the empirical foundation of parameter specifications as well as the economic rationalisation of PMP models – both criticized in earlier reviews – are investigated. Moreover, the paper provides an overview on a larger set of models with regular/repeated policy application that apply variants of PMP. Results show that most applications today avoid arbitrary parameter specifications and rely on exogenous information on supply responses to calibrate model parameters. However, only few approaches use multiple observations to estimate parameters, which is likely due to the still considerable technical challenges associated with it. Equally, we found only limited reflection on the behavioral or technological assumptions that could rationalise the PMP model structure while still keeping the model’s advantages.
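The standard PMP calibration step that most of these applications share can be sketched for the textbook case of two crops and a single land constraint (margins, land endowment and observed activity levels are invented; this follows the generic calibration logic, not any specific model from the review):

```python
# PMP calibration sketch: recover a quadratic cost term so the calibrated
# nonlinear model exactly reproduces the observed activity levels x0 that a
# plain LP would not reproduce.
L = 100.0
m = [500.0, 300.0]     # gross margins per ha (crop 2 is the marginal crop)
x0 = [60.0, 40.0]      # observed activity levels to be reproduced

# Step 1: with a single land constraint, the calibration-constraint duals are
# lambda_i = m_i - m_marginal (zero for the marginal crop itself).
lam = [mi - m[1] for mi in m]

# Step 2: calibrate the quadratic cost slope Q_i = lambda_i / x0_i.
Q = [li / xi for li, xi in zip(lam, x0)]

# Step 3: the calibrated model max sum(m_i x_i - 0.5 Q_i x_i^2) s.t. sum x = L
# has first-order conditions m_i - Q_i x_i = mu, with mu = m_marginal.
mu = m[1]
x1 = (m[0] - mu) / Q[0]
x = [x1, L - x1]       # calibrated optimum
```

The calibrated optimum returns exactly the observed levels, which is the "exact calibration" property that later estimation-based PMP variants try to achieve with multiple observations instead of a single base year.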
DEFF Research Database (Denmark)
Triantafyllou, Evangelia; Kofoed, Lise; Purwins, Hendrik
2016-01-01
One of the recent developments in teaching that heavily relies on current technology is the “flipped classroom” approach. In a flipped classroom the traditional lecture and homework sessions are inverted. Students are provided with online material in order to gain necessary knowledge before class......, while class time is devoted to clarifications and application of this knowledge. The hypothesis is that there could be deep and creative discussions when teacher and students physically meet. This paper discusses how the learning design methodology can be applied to represent, share and guide educators...... and values of different stakeholders (i.e. institutions, educators, learners, and external agents), which influence the design and success of flipped classrooms. Moreover, it looks at the teaching cycle from a flipped instruction model perspective and adjusts it to cater for the reflection loops educators...
Grose, Vernon L.
1985-12-01
The progress of technology is marked by fragmentation -- dividing research and development into ever narrower fields of specialization. Ultimately, specialists know everything about nothing. And hope for integrating those slender slivers of specialty into a whole fades. Without an integrated, all-encompassing perspective, technology becomes applied in a lopsided and often inefficient manner. A decisionary model, developed and applied for NASA's Chief Engineer toward establishment of commercial space operations, can be adapted to the identification, evaluation, and selection of optimum application of artificial intelligence for space station automation -- restoring wholeness to a situation that is otherwise chaotic due to increasing subdivision of effort. Issues such as functional assignments for space station task, domain, and symptom modules can be resolved in a manner understood by all parties rather than just the person with assigned responsibility -- and ranked by overall significance to mission accomplishment. Ranking is based on the three basic parameters of cost, performance, and schedule. This approach has successfully integrated many diverse specialties in situations like worldwide terrorism control, coal mining safety, medical malpractice risk, grain elevator explosion prevention, offshore drilling hazards, and criminal justice resource allocation -- all of which would have otherwise been subject to "squeaky wheel" emphasis and support of decision-makers.
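The cost/performance/schedule ranking described above can be sketched as a simple weighted scoring of alternatives. The module names, weights, and scores below are invented for illustration and are not taken from the NASA decisionary model itself:

```python
# Hypothetical weighted-sum ranking over the three parameters named in
# the abstract: cost, performance, and schedule. All values are invented.

weights = {"cost": 0.3, "performance": 0.5, "schedule": 0.2}

candidates = {
    "task module":    {"cost": 7, "performance": 9, "schedule": 6},
    "domain module":  {"cost": 5, "performance": 6, "schedule": 8},
    "symptom module": {"cost": 8, "performance": 4, "schedule": 7},
}

def score(attrs):
    """Overall significance of a candidate as a weighted sum of its scores."""
    return sum(weights[k] * attrs[k] for k in weights)

# Rank by overall significance to mission accomplishment, best first.
ranked = sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)
```

The point of such a model is that the ranking is reproducible and arguable parameter by parameter, rather than driven by "squeaky wheel" emphasis.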
Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny.
Directory of Open Access Journals (Sweden)
Simon T Maddock
Full Text Available Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a 'traditional' Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing platforms (Illumina's HiSeq and MiSeq, Roche's 454 GS FLX, and Life Technologies' Ion Torrent) to produce seven (near-) complete mitogenomes from six species that form a small radiation of caecilian amphibians from the Seychelles. The fastest, most accurate method of obtaining mitogenome sequences that we tested was direct sequencing of genomic DNA (shotgun sequencing) using the MiSeq platform. Bayesian inference and maximum likelihood analyses using seven different partitioning strategies were unable to resolve compellingly all phylogenetic relationships among the Seychelles caecilian species, indicating the need for additional data in this case.
Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny.
Maddock, Simon T; Briscoe, Andrew G; Wilkinson, Mark; Waeschenbach, Andrea; San Mauro, Diego; Day, Julia J; Littlewood, D Tim J; Foster, Peter G; Nussbaum, Ronald A; Gower, David J
2016-01-01
Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a 'traditional' Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing platforms (Illumina's HiSeq and MiSeq, Roche's 454 GS FLX, and Life Technologies' Ion Torrent) to produce seven (near-) complete mitogenomes from six species that form a small radiation of caecilian amphibians from the Seychelles. The fastest, most accurate method of obtaining mitogenome sequences that we tested was direct sequencing of genomic DNA (shotgun sequencing) using the MiSeq platform. Bayesian inference and maximum likelihood analyses using seven different partitioning strategies were unable to resolve compellingly all phylogenetic relationships among the Seychelles caecilian species, indicating the need for additional data in this case.
Fatigue damage approach applied to Li-ion batteries ageing characterization
Energy Technology Data Exchange (ETDEWEB)
Dudézert, C. [Renault, Technocentre, Guyancourt (France); Université Paris Sud/Université Paris-Saclay, ICMMO (UMR CNRS 8182), Orsay (France); CEA/LITEN, Grenoble (France); Reynier, Y. [CEA/LITEN, Grenoble (France); Duffault, J.-M. [Université Paris Sud/Université Paris-Saclay, ICMMO (UMR CNRS 8182), Orsay (France); Franger, S., E-mail: sylvain.franger@u-psud.fr [Université Paris Sud/Université Paris-Saclay, ICMMO (UMR CNRS 8182), Orsay (France)
2016-11-15
Reliability of energy storage devices is one of the foremost concerns in electric vehicle (EV) development. Battery ageing, i.e. the degradation of battery energy and power, depends mainly on time, on the environmental conditions, and on the in-use solicitations endured by the storage system. In the case of EVs, the strong dependence of battery use on car performance, driving cycles, and weather conditions makes battery life prediction an intricate issue. Mechanical physicists have developed a quick and exhaustive methodology to diagnose the reliability of complex structures enduring complex loads. This “fatigue” approach expresses the performance fading due to a complex load through the evolution corresponding to basic ones. Thus, a state-of-health variable named “damage” binds the load history to ageing. The battery ageing study described here consists of applying this mechanical approach to electrochemical systems by connecting the ageing factors with the evolution of the battery characteristics. To that end, a specific “fatigue” test protocol has been established. This experimental confrontation has led to distinguishing calendar from cycling ageing mechanisms.
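The abstract does not state the damage law itself; the classical linear (Palmgren-Miner) accumulation rule is the textbook fatigue analogue of the "damage" state-of-health variable described above, sketched here with invented load levels and cycle limits:

```python
# Sketch of linear ("Palmgren-Miner") damage accumulation, the classical
# fatigue model behind a "damage" state-of-health variable. Load levels
# and cycles-to-failure values are illustrative, not battery test data.

def miner_damage(cycles_applied, cycles_to_failure):
    """Sum n_i / N_i over all load levels; failure is predicted at D >= 1."""
    return sum(n / N for n, N in zip(cycles_applied, cycles_to_failure))

# Example: two load levels (e.g. two charge/discharge depths for a cell).
n_applied = [500, 200]    # cycles endured at each load level
n_failure = [2000, 400]   # cycles to failure at each level (from tests)

D = miner_damage(n_applied, n_failure)   # 500/2000 + 200/400 = 0.75
```

The transfer to electrochemical systems then amounts to identifying which battery stress factors play the role of the mechanical load levels.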
Fatigue damage approach applied to Li-ion batteries ageing characterization
International Nuclear Information System (INIS)
Dudézert, C.; Reynier, Y.; Duffault, J.-M.; Franger, S.
2016-01-01
Reliability of energy storage devices is one of the foremost concerns in electric vehicle (EV) development. Battery ageing, i.e. the degradation of battery energy and power, depends mainly on time, on the environmental conditions, and on the in-use solicitations endured by the storage system. In the case of EVs, the strong dependence of battery use on car performance, driving cycles, and weather conditions makes battery life prediction an intricate issue. Mechanical physicists have developed a quick and exhaustive methodology to diagnose the reliability of complex structures enduring complex loads. This “fatigue” approach expresses the performance fading due to a complex load through the evolution corresponding to basic ones. Thus, a state-of-health variable named “damage” binds the load history to ageing. The battery ageing study described here consists of applying this mechanical approach to electrochemical systems by connecting the ageing factors with the evolution of the battery characteristics. To that end, a specific “fatigue” test protocol has been established. This experimental confrontation has led to distinguishing calendar from cycling ageing mechanisms.
Curvature of Indoor Sensor Network: Clustering Coefficient
Directory of Open Access Journals (Sweden)
2009-03-01
Full Text Available We investigate the geometric properties of the communication graph in realistic low-power wireless networks. In particular, we explore the concept of the curvature of a wireless network via the clustering coefficient. Clustering coefficient analysis is a computationally simplified, semilocal approach, which nevertheless captures such a large-scale feature as congestion in the underlying network. The clustering coefficient concept is applied to three cases of indoor sensor networks, under varying thresholds on the link packet reception rate (PRR). A transition from positive curvature (“meshed” network) to negative curvature (“core concentric” network) is observed by increasing the threshold. Even though this paper deals with network curvature per se, we nevertheless expand on the underlying congestion motivation, propose several new concepts (network inertia and centroid), and finally we argue that greedy routing on a virtual positively curved network achieves load balancing on the physical network.
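As a concrete illustration of the clustering coefficient used here as a curvature proxy, below is a minimal, dependency-free sketch. The toy graph is invented; in the paper's setting, the PRR threshold would decide which links enter the adjacency structure:

```python
# Local clustering coefficient of an undirected graph stored as a dict
# of neighbour sets. The four-node example graph is illustrative only.

def clustering(adj, v):
    """Fraction of pairs of neighbours of v that are themselves linked."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# Triangle 0-1-2 with a pendant node 3 attached to node 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}

coeffs = [clustering(adj, v) for v in adj]   # [1.0, 1.0, 1/3, 0.0]
avg = sum(coeffs) / len(coeffs)              # network-wide average
```

High average clustering corresponds to the "meshed" (positively curved) regime; as the PRR threshold removes links, triangles disappear and the coefficient drops.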
International Nuclear Information System (INIS)
Igamov, S.B.; Yarmukhamedov, R.
2007-01-01
A modified two-body potential approach is proposed for the determination of both the asymptotic normalization coefficient (ANC) (or the respective nuclear vertex constant (NVC)) for A+a->B (for the virtual decay B->A+a) from an analysis of the experimental S-factor for the peripheral direct capture a+A->B+γ reaction, and of the astrophysical S-factor, S(E), at low, experimentally inaccessible energy regions. The approach proposed involves two additional conditions which verify the peripheral character of the considered reaction and express S(E) in terms of the ANC. The connection between the NVC (ANC) and the effective range parameters for Aa-scattering is derived. To test this approach we reanalyse the precise experimental astrophysical S-factors for the t+α->Li7+γ reaction at energies E= Li7(g.s.), α+t->Li7(0.478 MeV) and of S(E) at E ≤ 50 keV. These ANC values have been used for obtaining information about the "indirectly" measured values of the effective range parameters and the p-wave phase shift for αt-scattering in the energy range 100 ≤ E ≤ 180 keV
Coefficient estimates of negative powers and inverse coefficients for ...
Indian Academy of Sciences (India)
and the inequality is sharp for the inverse of the Koebe function k(z) = z/(1 − z)2. An alternative approach to the inverse coefficient problem for functions in the class S has been investigated by Schaeffer and Spencer [27] and FitzGerald [6]. Although, the inverse coefficient problem for the class S has been completely solved ...
Applying radiation approaches to the control of public risks from chemical agents
International Nuclear Information System (INIS)
Alexander, R.E.
1989-01-01
If a hazardous agent has a threshold, prevention is the obvious measure of success. In the eyes of this author, success is also achievable for a hazardous agent that may have no threshold and that causes its effects in a probabilistic manner. First, the technical people responsible for protection must be given a reasonable, well-defined risk objective by governmental authorities. To the extent that they meet that objective (1) without unnecessarily increasing operational costs, (2) without interfering unnecessarily with operational activities, and (3) without diverting resources away from greater risks, they are successful. Considering these three qualifications, radiation protection for members of the public can hardly be presented as the panacea for other hazardous agents. It would be an error to dismiss the improvement opportunities discussed above as being of academic interest only. Decades of experience with radiation have demonstrated that these problems are both real and significant. In the US the axioms discussed above are accepted as scientific fact for radiation by many policy makers, the news media, and the public. For any operation the collective dose is calculated using zero dose as the lower limit of integration, the results are converted to cancer deaths using the risk coefficients, and decisions are made as though these deaths would actually occur without governmental intervention. As a result, billions of dollars and a very large number of highly skilled persons are being expended to protect against radiation doses far smaller than geographical variations in the natural radiation background. These expenditures are demanded by, and required for, well-meaning, nontechnical people who have been misled. It is often stated by knowledgeable people that if the degree of protection required for radiation were also to be requested for the other hazards, human progress would come to a halt. If the radiation approaches are to be used in the control of public
Redlin, Matthias; Kukucka, Marian; Boettcher, Wolfgang; Schoenfeld, Helge; Huebler, Michael; Kuppe, Hermann; Habazettl, Helmut
2013-09-01
Recently we suggested a comprehensive blood-sparing approach in pediatric cardiac surgery that resulted in no transfusion in 71 infants (25%), postoperative transfusion only in 68 (24%), and intraoperative transfusion in 149 (52%). We analyzed the effects of transfusion on postoperative morbidity and mortality in the same cohort of patients. The effect of transfusion on the length of mechanical ventilation and intensive care unit stay was assessed using Kaplan-Meier curves. To assess whether transfusion independently determined the length of mechanical ventilation and length of intensive care unit stay, a multivariate model was applied. Additionally, in the subgroup of transfused infants, the effect of the applied volume of packed red blood cells was assessed. The median length of mechanical ventilation was 11 hours (interquartile range, 9-18 hours), 33 hours (interquartile range, 18-80 hours), and 93 hours (interquartile range, 34-161 hours) in the no transfusion, postoperative transfusion only, and intraoperative transfusion groups, respectively (P interquartile range, 1-2 days), 3.5 days (interquartile range, 2-5 days), and 8 days (interquartile range, 3-9 days; P < .00001). The multivariate hazard ratio for early extubation was 0.24 (95% confidence interval, 0.16-0.35) and 0.37 (95% confidence interval, 0.25-0.55) for the intraoperative transfusion and postoperative transfusion only groups, respectively (P < .00001). In addition, the cardiopulmonary time, body weight, need for reoperation, and hemoglobin during cardiopulmonary bypass affected the length of mechanical ventilation. Similar results were obtained for the length of intensive care unit stay. In the subgroup of transfused infants, the volume of packed red blood cells also independently affected both the length of mechanical ventilation and the length of intensive care unit stay. The incidence and volume of blood transfusion markedly affects postoperative morbidity in pediatric cardiac surgery. These
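The Kaplan-Meier curves referred to above can be reproduced with a short product-limit estimator. The extubation times and censoring flags below are invented for illustration and are not the study's data:

```python
# Minimal product-limit (Kaplan-Meier) estimator, as used above for the
# length of mechanical ventilation. Times and censoring flags are invented.

def kaplan_meier(times, events):
    """events[i] = 1 if extubation was observed at times[i], 0 if censored.
    Returns (time, survival probability) pairs at each event time."""
    pairs = sorted(zip(times, events))
    s = 1.0
    curve = []
    for t in sorted({t for t, e in pairs if e}):
        d = sum(1 for tt, e in pairs if tt == t and e)    # events at t
        at_risk = sum(1 for tt, _ in pairs if tt >= t)    # still at risk
        s *= 1 - d / at_risk
        curve.append((t, s))
    return curve

# Four infants: extubated at 5 h and 10 h, one censored at 10 h, one at 20 h.
curve = kaplan_meier([5, 10, 10, 20], [1, 1, 0, 1])
```

Comparing such curves between the transfusion groups, and then adjusting for covariates in a multivariate model, is exactly the analysis pattern the abstract describes.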
An approach to applying quality assurance to nuclear fuel waste disposal
International Nuclear Information System (INIS)
Cooper, R.B.; Abel, R.
1996-12-01
An approach to developing and applying a quality assurance program for a nuclear fuel waste disposal facility is described. The proposed program would be based on N286-series standards used for quality assurance programs in nuclear power plants, and would cover all aspects of work across all stages of the project, from initial feasibility studies to final closure of the vault. A quality assurance manual describing the overall quality assurance program and its elements would be prepared at the outset. Planning requirements of the quality assurance program would be addressed in a comprehensive plan for the project. Like the QA manual, this plan would be prepared at the outset of the project and updated at each stage. Particular attention would be given to incorporating the observational approach in procedures for underground engineering, where the ability to adapt designs and mining techniques to changing ground conditions would be essential. Quality verification requirements would be addressed through design reviews, peer reviews, inspections and surveillance, equipment calibration and laboratory analysis checks, and testing programs. Regular audits and program reviews would help to assess the state of implementation, degree of conformance to standards, and effectiveness of the quality assurance program. Audits would be particularly useful in assessing the quality systems of contractors and suppliers, and in verifying the completion of work at the end of stages. Since a nuclear fuel waste disposal project would span a period of about 90 years, a key function of the quality assurance program would be to ensure the continuity of knowledge and the transfer of experience from one stage to another. This would be achieved by maintaining a records management system throughout the life of the project, by ensuring that work procedures were documented and kept current with new technologies and practices, and by instituting training programs that made use of experience gained
Novel approach of fragment-based lead discovery applied to renin inhibitors.
Tawada, Michiko; Suzuki, Shinkichi; Imaeda, Yasuhiro; Oki, Hideyuki; Snell, Gyorgy; Behnke, Craig A; Kondo, Mitsuyo; Tarui, Naoki; Tanaka, Toshimasa; Kuroita, Takanobu; Tomimoto, Masaki
2016-11-15
A novel approach was conducted for fragment-based lead discovery and applied to renin inhibitors. The biochemical screening of a fragment library against renin provided the hit fragment, which showed a characteristic interaction pattern with the target protein. The hit fragment bound only to the S1, S3, and S3 SP (S3 subpocket) sites without any interactions with the catalytic aspartate residues (Asp32 and Asp215 (pepsin numbering)). Prior to making chemical modifications to the hit fragment, we first identified its essential binding sites by utilizing the hit fragment's substructures. Second, we created a new and smaller scaffold, which better occupied the identified essential S3 and S3 SP sites, by utilizing library synthesis with high-throughput chemistry. We then revisited the S1 site and efficiently explored a good building block attaching to the scaffold with library synthesis. In the library syntheses, the binding modes of each pivotal compound were determined and confirmed by X-ray crystallography, and the library was strategically designed by a structure-based computational approach not only to obtain a more active compound but also to obtain an informative Structure Activity Relationship (SAR). As a result, we obtained a lead compound offering synthetic accessibility as well as improved in vitro ADMET profiles. The fragments and compounds possessing a characteristic interaction pattern provided new structural insights into renin's active site and the potential to create a new generation of renin inhibitors. In addition, we demonstrated that our FBDD strategy, integrating a highly sensitive biochemical assay, X-ray crystallography, and high-throughput synthesis and in silico library design aimed at fragment morphing at the initial stage, was effective in elucidating a pocket profile and a promising lead compound. Copyright © 2016 Elsevier Ltd. All rights reserved.
Luther, Matt; Gardiner, Fergus; Lenson, Shane; Caldicott, David; Harris, Ryan; Sabet, Ryan; Malloy, Mark; Perkins, Jo
2018-04-01
Specific Event Identifiers a. Event type: Outdoor music festival. b. Event onset date: December 3, 2016. c. Location of event: Regatta Point, Commonwealth Park. d. Geographical coordinates: Canberra, Australian Capital Territory (ACT), Australia (-35.289002, 149.131957, 600m). e. Dates and times of observation in latitude, longitude, and elevation: December 3, 2016, 11:00-23:00. f. Response type: Event medical support. Abstract Introduction Young adult patrons are vulnerable to risk-taking behavior, including drug taking, at outdoor music festivals. Therefore, the aim of this field report is to discuss the on-site medical response during a music festival, and subsequently highlight observed strategies aimed at minimizing substance abuse harm. The observed outdoor music festival was held in Canberra (Australian Capital Territory [ACT], Australia) during the early summer of 2016, with an attendance of 23,008 patrons. First aid and on-site medical treatment data were obtained from the relevant treatment area and service. The integrated first aid service provided support to 292 patients. Final analysis consisted of 286 patients' records, with 119 (41.6%) males and 167 (58.4%) females. Results from this report indicated that drug intoxication was an observed event issue, with 15 (5.1%) treated on site and 13 emergency department (ED) presentations, primarily related to trauma or medical conditions requiring further diagnostics. This report details an important public health need, which could be met by providing a coordinated approach, including a robust on-site medical service that accepts intrinsic risk-taking behavior. This may include on-site drug-checking, providing reliable information on drug content with associated education. Luther M , Gardiner F , Lenson S , Caldicott D , Harris R , Sabet R , Malloy M , Perkins J . An effective risk minimization strategy applied to an outdoor music festival: a multi-agency approach. Prehosp Disaster Med. 2018;33(2):220-224.
A Monte Carlo approach applied to ultrasonic non-destructive testing
Mosca, I.; Bilgili, F.; Meier, T.; Sigloch, K.
2012-04-01
Non-destructive testing based on ultrasound allows us to detect, characterize and size discrete flaws in geotechnical and architectural structures and materials. This information is needed to determine whether such flaws can be tolerated in future service. In typical ultrasonic experiments, only the first-arriving P-wave is interpreted, and the remainder of the recorded waveform is neglected. Our work aims at understanding surface waves, which are strong signals in the later wave train, with the ultimate goal of full waveform tomography. At present, even the structural estimation of layered media is still challenging because material properties of the samples can vary widely, and good initial models for inversion do not often exist. The aim of the present study is to combine non-destructive testing with a theoretical data analysis and hence to contribute to conservation strategies for archaeological and architectural structures. We analyze ultrasonic waveforms measured at the surface of a variety of samples, and define the behaviour of surface waves in structures of increasing complexity. The tremendous potential of ultrasonic surface waves becomes an advantage only if numerical forward modelling tools are available to describe the waveforms accurately. We compute synthetic full seismograms as well as group and phase velocities for the data. We invert them for the elastic properties of the sample via a global search of the parameter space, using the Neighbourhood Algorithm. Such a Monte Carlo approach allows us to perform a complete uncertainty and resolution analysis, but the computational cost is high and increases quickly with the number of model parameters. Therefore it is practical only for defining the seismic properties of media with a limited number of degrees of freedom, such as layered structures. We have applied this approach to both synthetic layered structures and real samples. The former contributed to benchmarking the propagation of ultrasonic surface
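The global-search idea can be sketched with plain uniform Monte Carlo sampling; the Neighbourhood Algorithm itself is adaptive, which is not reproduced here. The two-parameter forward model and the "observed" data below are illustrative stand-ins, not the paper's physics:

```python
import random

# Monte Carlo search over a two-parameter model space: sample models
# uniformly and keep the one with the lowest misfit to "observed" data.
# The forward model is a toy stand-in for a dispersion calculation.

random.seed(0)  # deterministic for the example

def forward(vs, h):
    """Toy 'predicted dispersion values' for shear velocity vs, layer depth h."""
    return [vs * (1 - 0.1 * h / (h + f)) for f in (1.0, 2.0, 4.0)]

observed = forward(2.5, 0.3)   # pretend these values were measured

def misfit(model):
    pred = forward(*model)
    return sum((p - o) ** 2 for p, o in zip(pred, observed))

# Global search: 5000 uniform random samples over the parameter bounds.
best = min(((random.uniform(1.0, 4.0), random.uniform(0.1, 1.0))
            for _ in range(5000)), key=misfit)
```

Keeping the whole sampled ensemble, rather than just `best`, is what enables the uncertainty and resolution analysis mentioned above; the cost is that the sample count needed grows quickly with the number of parameters.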
VIPAR, a quantitative approach to 3D histopathology applied to lymphatic malformations.
Hägerling, René; Drees, Dominik; Scherzinger, Aaron; Dierkes, Cathrin; Martin-Almedina, Silvia; Butz, Stefan; Gordon, Kristiana; Schäfers, Michael; Hinrichs, Klaus; Ostergaard, Pia; Vestweber, Dietmar; Goerge, Tobias; Mansour, Sahar; Jiang, Xiaoyi; Mortimer, Peter S; Kiefer, Friedemann
2017-08-17
Lack of investigatory and diagnostic tools has been a major contributing factor to the failure to mechanistically understand lymphedema and other lymphatic disorders in order to develop effective drug and surgical therapies. One difficulty has been understanding the true changes in lymph vessel pathology from standard 2D tissue sections. VIPAR (volume information-based histopathological analysis by 3D reconstruction and data extraction), a light-sheet microscopy-based approach for the analysis of tissue biopsies, is based on digital reconstruction and visualization of microscopic image stacks. VIPAR allows semiautomated segmentation of the vasculature and subsequent nonbiased extraction of characteristic vessel shape and connectivity parameters. We applied VIPAR to analyze biopsies from healthy, lymphedematous, and lymphangiomatous skin. Digital 3D reconstruction provided a directly visually interpretable, comprehensive representation of the lymphatic and blood vessels in the analyzed tissue volumes. The most conspicuous features were disrupted lymphatic vessels in lymphedematous skin and a hyperplasia (4.36-fold lymphatic vessel volume increase) in the lymphangiomatous skin. Both abnormalities were detected by the connectivity analysis based on extracted vessel shape and structure data. The quantitative evaluation of extracted data revealed a significant reduction of lymphatic segment length (51.3% and 54.2%) and straightness (89.2% and 83.7%) for lymphedematous and lymphangiomatous skin, respectively. Blood vessel length was significantly increased in the lymphangiomatous sample (239.3%). VIPAR is a volume-based tissue reconstruction, data extraction, and analysis approach that successfully distinguished healthy from lymphedematous and lymphangiomatous skin. Its application is not limited to the vascular systems or skin. Max Planck Society, DFG (SFB 656), and Cells-in-Motion Cluster of Excellence EXC 1003.
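A vessel-segment "straightness" parameter of the kind quoted above is commonly defined as the chord length between a segment's endpoints divided by its traced path length (1.0 means perfectly straight). A hypothetical sketch on an invented 3D polyline, not VIPAR's actual definition or data:

```python
import math

# Straightness of a vessel segment given as a 3D polyline: straight-line
# (chord) distance between the endpoints over the summed path length.
# The example polyline is invented for illustration.

def straightness(points):
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    return math.dist(points[0], points[-1]) / path

# A kinked segment: chord sqrt(5), traced path length 3.
segment = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (2, 1, 0)]
s = straightness(segment)
```

Extracting such shape parameters per segment, and then comparing their distributions between tissue types, is the kind of nonbiased quantitative evaluation the abstract describes.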
A novel bi-level meta-analysis approach: applied to biological pathway analysis.
Nguyen, Tin; Tagett, Rebecca; Donato, Michele; Mitrea, Cristina; Draghici, Sorin
2016-02-01
The accumulation of high-throughput data in public repositories creates a pressing need for integrative analysis of multiple datasets from independent experiments. However, study heterogeneity, study bias, outliers and the lack of power of available methods present a real challenge in integrating genomic data. One practical drawback of many P-value-based meta-analysis methods, including Fisher's, Stouffer's, minP and maxP, is that they are sensitive to outliers. Another drawback is that, because they perform just one statistical test for each individual experiment, they may not fully exploit the potentially large number of samples within each study. We propose a novel bi-level meta-analysis approach that employs the additive method and the Central Limit Theorem within each individual experiment and also across multiple experiments. We prove that the bi-level framework is robust against bias, less sensitive to outliers than other methods, and more sensitive to small changes in signal. For comparative analysis, we demonstrate that the intra-experiment analysis has more power than the equivalent statistical test performed on a single large experiment. For pathway analysis, we compare the proposed framework versus classical meta-analysis approaches (Fisher's, Stouffer's and the additive method) as well as against a dedicated pathway meta-analysis package (MetaPath), using 1252 samples from 21 datasets related to three human diseases, acute myeloid leukemia (9 datasets), type II diabetes (5 datasets) and Alzheimer's disease (7 datasets). Our framework outperforms its competitors to correctly identify pathways relevant to the phenotypes. The framework is sufficiently general to be applied to any type of statistical meta-analysis. The R scripts are available on demand from the authors. sorin@wayne.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e
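For reference, two of the classical competitors named above (Fisher's and Stouffer's methods) can be implemented in a few lines with the standard library only. Fisher's closed-form survival function works here because the degrees of freedom, 2k, are even; the example p-values are invented:

```python
import math
from statistics import NormalDist

# Two classical p-value combination methods used as baselines above.

def fisher(pvals):
    """Fisher's method: -2*sum(ln p) follows a chi-square with 2k df.
    The survival function has a closed form for even df."""
    x = -2 * sum(math.log(p) for p in pvals)
    k = len(pvals)
    return math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i)
                                  for i in range(k))

def stouffer(pvals):
    """Stouffer's method: average the normal z-scores of the p-values."""
    nd = NormalDist()
    z = sum(nd.inv_cdf(1 - p) for p in pvals) / math.sqrt(len(pvals))
    return 1 - nd.cdf(z)

pvals = [0.01, 0.03, 0.20]
pf = fisher(pvals)     # dominated by the smallest p-value
ps = stouffer(pvals)   # averages evidence across studies
```

The outlier sensitivity the abstract criticizes is visible here: a single extreme p-value drives Fisher's sum of logs, which is what the proposed bi-level additive approach is designed to resist.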
A strategy to apply a graded approach to a new research reactor I and C design
International Nuclear Information System (INIS)
Suh, Yong Suk; Park, Jae Kwan; Kim, Taek Kyu; Bae, Sang Hoon; Baang, Dane; Kim, Young Ki
2012-01-01
A project for the development of a new research reactor (NRR) was launched by KAERI in 2012. It has two purposes: 1) providing a facility for radioisotope production, neutron transmutation doping, and semiconductor wafer doping, and 2) obtaining a standard model for exporting a research reactor (RR). The instrumentation and control (I and C) design should reveal an appropriate architecture for the NRR export. The adoption of a graded approach (GA) was taken into account to design the I and C and its architecture. Although the GA for RRs is currently under development by the IAEA, it has been recommended and applied in many areas of nuclear facilities. The Canadian Nuclear Safety Commission allows for the use of a GA for RRs to meet the safety requirements. Germany applied the GA to a decommissioning project. It categorized the level of complexity of the decommissioning project using the GA. In the case of 10 C.F.R. Part 830, Section 830.7, a contractor must use a GA to implement the requirements of the part, document the basis of the GA used, and submit that document to the U.S. DOE. It mentions that a challenge is the inconsistent application of the GA in DOE programs. RG 1.176 states that graded quality assurance brings benefits of resource allocation based on the safety significance of the items. The U.S. NRC also applied the GA to decommissioning small facilities. NASA published a handbook for risk-informed decision making that is conducted using a GA. ISA TR67.04.09-2005 supplements ANSI/ISA 67.04.01-2000 and ISA RP67.04.02-2000 in determining the setpoint using a GA. The GA is defined as a risk-informed approach that, without compromising safety, allows safety requirements to be implemented in such a way that the level of design, analysis, and documentation are commensurate with the potential risks of the reactor. The IAEA is developing a GA through DS351 and has recommended applying it to a reactor design according to power and hazard level. Owing to the wide range of RR
A strategy to apply a graded approach to a new research reactor I and C design
Energy Technology Data Exchange (ETDEWEB)
Suh, Yong Suk; Park, Jae Kwan; Kim, Taek Kyu; Bae, Sang Hoon; Baang, Dane; Kim, Young Ki [KAERI, Daejeon (Korea, Republic of)
2012-10-15
A project for the development of a new research reactor (NRR) was launched by KAERI in 2012. It has two purposes: 1) providing a facility for radioisotope production, neutron transmutation doping, and semiconductor wafer doping, and 2) obtaining a standard model for exporting a research reactor (RR). The instrumentation and control (I and C) design should reveal an appropriate architecture for the NRR export. The adoption of a graded approach (GA) was taken into account to design the I and C and its architecture. Although the GA for RRs is currently under development by the IAEA, it has been recommended and applied in many areas of nuclear facilities. The Canadian Nuclear Safety Commission allows for the use of a GA for RRs to meet the safety requirements. Germany applied the GA to a decommissioning project. It categorized the level of complexity of the decommissioning project using the GA. In the case of 10 C.F.R. Part 830, Section 830.7, a contractor must use a GA to implement the requirements of the part, document the basis of the GA used, and submit that document to the U.S. DOE. It mentions that a challenge is the inconsistent application of the GA in DOE programs. RG 1.176 states that graded quality assurance brings benefits of resource allocation based on the safety significance of the items. The U.S. NRC also applied the GA to decommissioning small facilities. NASA published a handbook for risk-informed decision making that is conducted using a GA. ISA TR67.04.09-2005 supplements ANSI/ISA 67.04.01-2000 and ISA RP67.04.02-2000 in determining the setpoint using a GA. The GA is defined as a risk-informed approach that, without compromising safety, allows safety requirements to be implemented in such a way that the level of design, analysis, and documentation are commensurate with the potential risks of the reactor. The IAEA is developing a GA through DS351 and has recommended applying it to a reactor design according to power and hazard level. Owing to the wide range of RR
Directory of Open Access Journals (Sweden)
Norman Wiernasz
2017-05-01
Full Text Available As fragile food commodities, fishery and seafood products can quickly deteriorate in microbial and organoleptic quality. In this context, improving microbial quality and safety along the whole food processing chain (from catch to plate) using hurdle technology, a combination of mild preserving technologies such as biopreservation, modified atmosphere packaging, and superchilling, is of great interest. As members of the natural flora and producers of antimicrobial metabolites, lactic acid bacteria (LAB) are commonly studied for food biopreservation. Thirty-five LAB known to possess interesting antimicrobial activity were selected for their potential application as bioprotective agents as part of a hurdle technology applied to fishery products. The selection approach was based on seven criteria, including antimicrobial activity, spoilage potential, tolerance to chitosan coating and to the superchilling process, cross-inhibition, biogenic amine production (histamine, tyramine), and antibiotic resistance. Antimicrobial activity was assessed against six common spoilage bacteria of fishery products (Shewanella baltica, Photobacterium phosphoreum, Brochothrix thermosphacta, Lactobacillus sakei, Hafnia alvei, Serratia proteamaculans) and one pathogenic bacterium (Listeria monocytogenes) in co-culture inhibitory assays miniaturized in 96-well microtiter plates. Antimicrobial activity and spoilage evaluation, both performed in cod and salmon juice, highlighted the existence of sensory signatures and inhibition profiles, which appear to be species related. Finally, six LAB with no unusual antibiotic resistance profile and no histamine production ability were selected as bioprotective agents for further in situ inhibitory assays in cod and salmon based products, alone or in combination with other hurdles (chitosan, modified atmosphere packaging, and superchilling).
Capriotti, Margherita; Sternini, Simone; Lanza di Scalea, Francesco; Mariani, Stefano
2016-04-01
In the field of non-destructive evaluation, defect detection and visualization can be performed using different techniques relying on either an active or a passive approach. In this paper the passive approach is investigated, owing to its numerous advantages, and its application to thermography is explored. Previous work has shown that it is possible to reconstruct the Green's function between any pair of points of a sensing grid using noise originated from diffuse fields in acoustic environments. The Green's function can be extracted by cross-correlating these randomly recorded waves; averaging, filtering and the length of the measured signals play an important role in this process. This concept is applied here from an NDE perspective, utilizing thermal fluctuations present in structural materials. Temperature variations interacting with the thermal properties of the specimen allow for the characterization of the material and its health condition. The exploitation of the thermographic image resolution as a dense grid of sensors constitutes the basic idea underlying passive thermography. Particular attention will be placed on the creation of a proper diffuse thermal field, studying the number, placement and excitation signal of the heat sources. Results from numerical simulations will be presented to assess the capabilities and performance of the passive thermal technique for defect detection and imaging of structural components.
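The cross-correlation step described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes a single diffuse noise source and a plain sample delay between two sensing points, and shows that the correlation peak recovers the travel time.

```python
import numpy as np

rng = np.random.default_rng(0)

def xcorr_lag(rec_a, rec_b):
    """Return the lag (in samples) at which rec_b best matches rec_a."""
    c = np.correlate(rec_b, rec_a, mode="full")
    return int(np.argmax(c)) - (len(rec_a) - 1)

# A diffuse noise field recorded at sensor A; sensor B sees the same
# field delayed by the propagation time between the two points.
n, true_delay = 4096, 25
source = rng.standard_normal(n)
rec_a = source + 0.1 * rng.standard_normal(n)          # sensor A + sensor noise
rec_b = np.roll(source, true_delay) + 0.1 * rng.standard_normal(n)  # sensor B

# The cross-correlation peak sits at the propagation delay, the
# arrival-time information carried by the Green's function.
print(xcorr_lag(rec_a, rec_b))  # → 25
```

In practice (as the abstract notes) the estimate is stabilized by averaging correlations over many noise realizations and by band-pass filtering before correlating.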
Applied tagmemics: A heuristic approach to the use of graphic aids in technical writing
Brownlee, P. P.; Kirtz, M. K.
1981-01-01
In technical report writing, two needs must be met if reports are to be usable by an audience: the language needs and the technical needs of that particular audience. A heuristic analysis helps decide the most suitable format for information; that is, whether the information should be presented verbally or visually. The report writing process should be seen as an organic whole which can be divided and subdivided according to the writer's purpose, but which always functions as a totality. The tagmemic heuristic, because it itself follows a process of deconstructing and reconstructing information, lends itself to the teaching of technical writing. By applying the abstract questions this heuristic asks to specific parts of the report, the language and technical needs of the audience are analyzed: the viability of the solution is examined within the givens of the corporate structure, and the graphic or verbal format that will best suit the writer's purpose is chosen. By following such a method, answers are found which are both specific and thorough in their range of application.
Catelli, Emilio; Randeberg, Lise Lyngsnes; Alsberg, Bjørn Kåre; Gebremariam, Kidane Fanta; Bracci, Silvano
2017-04-01
Hyperspectral imaging (HSI) is a fast non-invasive imaging technology recently applied in the field of art conservation. With the help of chemometrics, important information about the spectral properties and spatial distribution of pigments can be extracted from HSI data. With the intent of expanding the applications of chemometrics to the interpretation of hyperspectral images of historical documents and, at the same time, of studying the colorants and their spatial distribution on ancient illuminated manuscripts, an explorative chemometric approach is presented here. The method makes use of chemometric tools for spectral de-noising (minimum noise fraction (MNF)) and image analysis (multivariate image analysis (MIA) and iterative key set factor analysis (IKSFA)/spectral angle mapper (SAM)), which provided efficient separation, classification and mapping of colorants from visible-near-infrared (VNIR) hyperspectral images of an ancient illuminated fragment. The identification of colorants was achieved by extracting and interpreting the VNIR spectra as well as by using a portable X-ray fluorescence (XRF) spectrometer.
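The spectral angle mapper (SAM) step named above reduces to an angle computation between spectra, which makes the classification insensitive to overall brightness. A minimal sketch follows; the 4-band endmember spectra and colorant names are hypothetical stand-ins for the study's VNIR data.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum.
    Scaling either spectrum leaves the angle unchanged, so SAM is
    insensitive to illumination differences."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify(pixel, references):
    """Assign the pixel to the endmember with the smallest spectral angle."""
    angles = {name: spectral_angle(pixel, ref) for name, ref in references.items()}
    return min(angles, key=angles.get)

# Hypothetical 4-band endmember spectra for two colorants.
refs = {"vermilion": np.array([0.1, 0.2, 0.7, 0.9]),
        "azurite":   np.array([0.6, 0.5, 0.2, 0.1])}

# A brighter (scaled) vermilion pixel still maps to vermilion.
print(classify(np.array([0.2, 0.4, 1.4, 1.8]), refs))  # → vermilion
```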
Osterholm, Michael T; Ostrowsky, Julie; Farrar, Jeff A; Gravani, Robert B; Tauxe, Robert V; Buchanan, Robert L; Hedberg, Craig W
2009-07-01
An independent collaborative approach was developed for stimulating research on high-priority food safety issues. The Fresh Express Produce Safety Research Initiative was launched in 2007 with $2 million in unrestricted funds from industry and independent direction and oversight from a scientific advisory panel consisting of nationally recognized food safety experts from academia and government agencies. The program had two main objectives: (i) to fund rigorous, innovative, and multidisciplinary research addressing the safety of lettuce, spinach, and other leafy greens and (ii) to share research findings as widely and quickly as possible to support the development of advanced safeguards within the fresh-cut produce industry. Sixty-five proposals were submitted in response to a publicly announced request for proposals and were competitively evaluated. Nine research projects were funded to examine underlying factors involved in Escherichia coli O157:H7 contamination of lettuce, spinach, and other leafy greens and potential strategies for preventing the spread of foodborne pathogens. Results of the studies, published in the Journal of Food Protection, help to identify promising directions for future research into potential sources and entry points of contamination and specific factors associated with harvesting, processing, transporting, and storing produce that allow contaminants to persist and proliferate. The program provides a model for leveraging the strengths of industry, academia, and government to address high-priority issues quickly and directly through applied research. This model can be productively extended to other pathogens and other leafy and nonleafy produce.
Energy Technology Data Exchange (ETDEWEB)
Vlah, Zvonimir; Seljak, Uroš [Institute for Theoretical Physics, University of Zürich, Zürich (Switzerland); Okumura, Teppei [Institute for the Early Universe, Ewha Womans University, Seoul, S. Korea (Korea, Republic of); Desjacques, Vincent, E-mail: zvlah@physik.uzh.ch, E-mail: seljak@physik.uzh.ch, E-mail: teppei@ewha.ac.kr, E-mail: Vincent.Desjacques@unige.ch [Département de Physique Théorique and Center for Astroparticle Physics (CAP) Université de Genéve, Genéve (Switzerland)
2013-10-01
Numerical simulations show that redshift space distortions (RSD) introduce strong scale dependence in the power spectra of halos, with ten percent deviations relative to linear theory predictions even on relatively large scales (k < 0.1h/Mpc) and even in the absence of satellites (which induce Fingers-of-God, FoG, effects). If unmodeled, these effects prevent one from extracting cosmological information from RSD surveys. In this paper we use Eulerian perturbation theory (PT) and an Eulerian halo biasing model, applying them to the distribution function approach to RSD, in which RSD is decomposed into several correlators of density-weighted velocity moments. We model each of these correlators using PT and compare the results to simulations over a wide range of halo masses and redshifts. We find that with the introduction of physically motivated halo biasing, and using dark matter power spectra from simulations, we can reproduce the simulation results at the percent level on scales up to k ∼ 0.15h/Mpc at z = 0, without the need for free FoG parameters in the model.
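The linear-theory baseline those simulations deviate from is the Kaiser formula, P_s(k, μ) = (b + f μ²)² P_m(k). A minimal sketch of that baseline follows; the bias, growth rate and power spectrum values are illustrative, not taken from the paper.

```python
import numpy as np

def kaiser_halo_power(p_matter, b, f, mu):
    """Linear (Kaiser) redshift-space halo power spectrum:
    P_s(k, mu) = (b + f * mu**2)**2 * P_m(k),
    with b the linear halo bias, f the growth rate, and mu the cosine
    of the angle between the wavevector and the line of sight."""
    return (b + f * mu**2) ** 2 * np.asarray(p_matter)

p_m = np.array([2.0e4, 1.0e4, 5.0e3])  # illustrative P_m(k) values, (Mpc/h)^3

# Transverse modes (mu = 0) feel only the bias: (b)^2 * P_m ...
print(kaiser_halo_power(p_m, b=2.0, f=0.5, mu=0.0))   # 4.00 * P_m
# ... while line-of-sight modes (mu = 1) are boosted by coherent infall.
print(kaiser_halo_power(p_m, b=2.0, f=0.5, mu=1.0))   # 6.25 * P_m
```

The abstract's point is precisely that this μ- and k-independent amplification fails at the ten percent level for halos even at k < 0.1h/Mpc, motivating the PT-based correlator model.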
Applying the reasoned action approach to understanding health protection and health risk behaviors.
Conner, Mark; McEachan, Rosemary; Lawton, Rebecca; Gardner, Peter
2017-12-01
The Reasoned Action Approach (RAA) developed out of the Theory of Reasoned Action and Theory of Planned Behavior but has not yet been widely applied to understanding health behaviors. The present research employed the RAA in a prospective design to test predictions of intention and action for groups of protection and risk behaviors separately in the same sample. To test the RAA for health protection and risk behaviors. Measures of RAA components plus past behavior were taken in relation to eight protection and six risk behaviors in 385 adults. Self-reported behavior was assessed one month later. Multi-level modelling showed instrumental attitude, experiential attitude, descriptive norms, capacity and past behavior were significant positive predictors of intentions to engage in protection or risk behaviors. Injunctive norms were only significant predictors of intention in protection behaviors. Autonomy was a significant positive predictor of intentions in protection behaviors and a negative predictor in risk behaviors (the latter relationship became non-significant when controlling for past behavior). Multi-level modelling showed that intention, capacity, and past behavior were significant positive predictors of action for both protection and risk behaviors. Experiential attitude and descriptive norm were additional significant positive predictors of risk behaviors. The RAA has utility in predicting both protection and risk health behaviors although the power of predictors may vary across these types of health behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Jeffrey S. Harrison
2015-09-01
Full Text Available Objective – This article provides a brief overview of stakeholder theory, clears up some widely held misconceptions, explains the importance of examining stakeholder theory from a variety of international perspectives and how this type of research will advance management theory, and introduces the other articles in the special issue. Design/methodology/approach – Some of the foundational ideas of stakeholder theory are discussed, leading to arguments about the importance of the theory to management research, especially in an international context. Findings – Stakeholder theory is found to be a particularly useful perspective for addressing some of the important issues in business from an international perspective. It offers an opportunity to reinterpret a variety of concepts, models and phenomena across many different disciplines. Practical implications – The concepts explored in this article may be applied in many contexts, domestically and internationally, and across business disciplines as diverse as economics, public administration, finance, philosophy, marketing, law, and management. Originality/value – Research on stakeholder theory in an international context is both lacking and sorely needed. This article and the others in this special issue aim to help fill that void.
Kenjabaev, Shavkat; Dernedde, Yvonne; Frede, Hans-Georg; Stulina, Galina
2014-05-01
Determination of the actual crop evapotranspiration (ETc) during the growing period is important for accurate irrigation scheduling in arid and semi-arid regions. Development of a crop coefficient (Kc) can enhance ETc estimations in relation to specific crop phenological development. This research was conducted to determine daily and growth-stage-specific Kc and ETc values for cotton (Gossypium hirsutum L.), winter wheat (Triticum aestivum L.) and maize (Zea mays L.) for silage at fields in the Fergana Valley (Uzbekistan). The soil water balance model BUDGET, with integration of the dual crop procedure of FAO-56, was used to estimate ETc and separate it into evaporation (Ec) and transpiration (Tc) components. An empirical equation was developed to determine the daily Kc values based on the estimated Ec and Tc. The ETc and Kc determination, and the comparison to existing FAO Kc values, were performed based on 10, 5 and 6 study cases for cotton, wheat and maize, respectively. Mean seasonal amounts of crop water consumption in terms of ETc were 560±50, 509±27 and 243±39 mm for cotton, wheat and maize, respectively. The growth-stage-specific Kc for cotton, wheat and maize was 0.15, 0.27 and 0.11 at the initial stage; 1.15, 1.03 and 0.56 at mid-season; and 0.45, 0.89 and 0.53 at the late season stage. These values correspond to those reported in FAO-56. Development of site-specific Kc helps tremendously in irrigation management and enables precise water application in the region. The simple approach developed here to estimate daily Kc for the three main crops grown in the Fergana region is a first attempt to address this issue. Keywords: Actual crop evapotranspiration, evaporation and transpiration, crop coefficient, model BUDGET, Fergana Valley.
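The daily Kc computation described above can be sketched under the assumption (consistent with the FAO-56 framework, though the study's exact empirical equation is not given here) that Kc = (Ec + Tc) / ET0, where ET0 is the reference evapotranspiration. All numbers below are illustrative, not from the study.

```python
import numpy as np

def daily_kc(evaporation, transpiration, et0):
    """Daily crop coefficient from the simulated soil-evaporation (Ec)
    and transpiration (Tc) components: Kc = (Ec + Tc) / ET0."""
    return (np.asarray(evaporation) + np.asarray(transpiration)) / np.asarray(et0)

def stage_kc(kc_daily, stage_slices):
    """Average the daily Kc over phenological growth stages."""
    return {stage: float(np.mean(kc_daily[s])) for stage, s in stage_slices.items()}

# Illustrative 10-day record (mm/day): early days are evaporation-dominated,
# later days transpiration-dominated as canopy cover develops.
ec  = np.array([0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.2, 0.1, 0.1, 0.1])
tc  = np.array([0.1, 0.2, 0.5, 1.5, 2.5, 3.5, 4.0, 4.2, 4.3, 4.4])
et0 = np.full(10, 4.0)  # constant reference ET for simplicity

kc = daily_kc(ec, tc, et0)
print(stage_kc(kc, {"initial": slice(0, 3), "mid": slice(6, 10)}))
```

Splitting ETc into Ec and Tc in this way is what lets the dual crop procedure track the low initial-stage Kc (bare, drying soil) and the higher mid-season Kc (full canopy) separately.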
Directory of Open Access Journals (Sweden)
Pejović Branko B.
2014-01-01
Full Text Available Starting from the expression for the thrust force under full utilization of the chip cross-section and tool life, an expression is derived for the maximum required thrust of a universal machine tool. Then, using a working diagram, the main features of simultaneous machine utilization are analyzed and the optimal region of utilization is determined for a given optimal machining diameter. On this basis, for a known machine, its structural details and the corresponding machinability function, relations are derived to determine the optimal slenderness coefficient of the chip, suitable for practical use; the critical workpiece material is assumed known. Considering the critical and authoritative workpiece material, the characteristic machinability constant was determined from the expression for the cutting speed, as the basis for establishing the optimum tool material adequate for the optimum regime. Both obtained relations can be considered as a general model that can be applied directly to the stated problems, and possibilities for practical application of the presented relations to other typical kinds of machining are also discussed. Finally, the model is verified on a calculation example from practice for a specific machine tool, where certain important characteristics of the optimal treatment are defined.
An approach for evaluating the integrity of fuel applied in Innovative Nuclear Energy Systems
International Nuclear Information System (INIS)
Nakae, Nobuo; Ozawa, Takayuki; Ohta, Hirokazu; Ogata, Takanari; Sekimoto, Hiroshi
2014-01-01
One of the important issues in the study of Innovative Nuclear Energy Systems is evaluating the integrity of the fuel applied in such systems. An approach for evaluating the integrity of the fuel is discussed here based on the procedure currently used in the integrity evaluation of fast reactor fuel. The fuel failure modes determining fuel lifetime were reviewed and fuel integrity was analyzed and compared with the failure criteria. Metal and nitride fuels with austenitic and ferritic stainless steel (SS) cladding tubes were examined in this study. For the purpose of representative irradiation behavior analyses of the fuel for Innovative Nuclear Energy Systems, the correlations of the cladding characteristics were modeled based on well-known characteristics of austenitic modified 316 SS (PNC316), ferritic–martensitic steel (PNC–FMS) and oxide dispersion strengthened steel (PNC–ODS). The analysis showed that, in the case of austenitic steel cladding, the fuel lifetime is limited by channel fracture, a nonductile (brittle) failure mode associated with a high level of irradiation-induced swelling. In the case of ferritic steel, on the other hand, the fuel lifetime is controlled by cladding creep rupture. The lifetime evaluated here is limited to 200 GW d/t, which is lower than the target burnup value of 500 GW d/t. Possible measures to extend the lifetime include reducing the fuel smeared density and venting fission gas from the plenum for metal fuel, and reducing the maximum cladding temperature from 650 to 600 °C for both metal and nitride fuel.
Džunková, Mária; D'Auria, Giuseppe; Pérez-Villarroya, David; Moya, Andrés
2012-01-01
Natural environments represent an incredible source of microbial genetic diversity. Discovery of novel biomolecules involves biotechnological methods that often require the design and implementation of biochemical assays to screen clone libraries. However, when an assay is applied to thousands of clones, one may eventually end up with very few positive clones which, in most of the cases, have to be "domesticated" for downstream characterization and application, and this makes screening both laborious and expensive. The negative clones, which are not considered by the selected assay, may also have biotechnological potential; however, unfortunately they would remain unexplored. Knowledge of the clone sequences provides important clues about potential biotechnological application of the clones in the library; however, the sequencing of clones one-by-one would be very time-consuming and expensive. In this study, we characterized the first metagenomic clone library from the feces of a healthy human volunteer, using a method based on 454 pyrosequencing coupled with a clone-by-clone Sanger end-sequencing. Instead of whole individual clone sequencing, we sequenced 358 clones in a pool. The medium-large insert (7-15 kb) cloning strategy allowed us to assemble these clones correctly, and to assign the clone ends to maintain the link between the position of a living clone in the library and the annotated contig from the 454 assembly. Finally, we found several open reading frames (ORFs) with previously described potential medical application. The proposed approach allows planning ad-hoc biochemical assays for the clones of interest, and the appropriate sub-cloning strategy for gene expression in suitable vectors/hosts.
Directory of Open Access Journals (Sweden)
Mária Džunková
Full Text Available Natural environments represent an incredible source of microbial genetic diversity. Discovery of novel biomolecules involves biotechnological methods that often require the design and implementation of biochemical assays to screen clone libraries. However, when an assay is applied to thousands of clones, one may eventually end up with very few positive clones which, in most of the cases, have to be "domesticated" for downstream characterization and application, and this makes screening both laborious and expensive. The negative clones, which are not considered by the selected assay, may also have biotechnological potential; however, unfortunately they would remain unexplored. Knowledge of the clone sequences provides important clues about potential biotechnological application of the clones in the library; however, the sequencing of clones one-by-one would be very time-consuming and expensive. In this study, we characterized the first metagenomic clone library from the feces of a healthy human volunteer, using a method based on 454 pyrosequencing coupled with a clone-by-clone Sanger end-sequencing. Instead of whole individual clone sequencing, we sequenced 358 clones in a pool. The medium-large insert (7-15 kb) cloning strategy allowed us to assemble these clones correctly, and to assign the clone ends to maintain the link between the position of a living clone in the library and the annotated contig from the 454 assembly. Finally, we found several open reading frames (ORFs) with previously described potential medical application. The proposed approach allows planning ad-hoc biochemical assays for the clones of interest, and the appropriate sub-cloning strategy for gene expression in suitable vectors/hosts.
An approach for evaluating the integrity of fuel applied in Innovative Nuclear Energy Systems
Energy Technology Data Exchange (ETDEWEB)
Nakae, Nobuo, E-mail: nakae-nobuo@jnes.go.jp [Center for Research into Innovative Nuclear Energy System, Tokyo Institute of Technology, 2-12-1-N1-19, Ookayama, Meguro-ku, Tokyo 152-8550 (Japan); Ozawa, Takayuki [Advanced Nuclear System Research and Development Directorate, Japan Atomic Energy Agency, 4-33, Muramatsu, Tokai-mura, Ibaraki-ken 319-1194 (Japan); Ohta, Hirokazu; Ogata, Takanari [Nuclear Technology Research Laboratory, Central Research Institute of Electric Power Industry, 2-11-1, Iwado Kita, Komae-shi, Tokyo 201-8511 (Japan); Sekimoto, Hiroshi [Center for Research into Innovative Nuclear Energy System, Tokyo Institute of Technology, 2-12-1-N1-19, Ookayama, Meguro-ku, Tokyo 152-8550 (Japan)
2014-03-15
One of the important issues in the study of Innovative Nuclear Energy Systems is evaluating the integrity of the fuel applied in such systems. An approach for evaluating the integrity of the fuel is discussed here based on the procedure currently used in the integrity evaluation of fast reactor fuel. The fuel failure modes determining fuel lifetime were reviewed and fuel integrity was analyzed and compared with the failure criteria. Metal and nitride fuels with austenitic and ferritic stainless steel (SS) cladding tubes were examined in this study. For the purpose of representative irradiation behavior analyses of the fuel for Innovative Nuclear Energy Systems, the correlations of the cladding characteristics were modeled based on well-known characteristics of austenitic modified 316 SS (PNC316), ferritic–martensitic steel (PNC–FMS) and oxide dispersion strengthened steel (PNC–ODS). The analysis showed that, in the case of austenitic steel cladding, the fuel lifetime is limited by channel fracture, a nonductile (brittle) failure mode associated with a high level of irradiation-induced swelling. In the case of ferritic steel, on the other hand, the fuel lifetime is controlled by cladding creep rupture. The lifetime evaluated here is limited to 200 GW d/t, which is lower than the target burnup value of 500 GW d/t. Possible measures to extend the lifetime include reducing the fuel smeared density and venting fission gas from the plenum for metal fuel, and reducing the maximum cladding temperature from 650 to 600 °C for both metal and nitride fuel.
Directory of Open Access Journals (Sweden)
E. D. Chertov
2016-01-01
Full Text Available Summary. The analysis of cryogenic installations confirms an objective trend: the number of tasks addressed by special-purpose systems keeps growing. One of the most important directions in the development of cryogenics is the creation of air-separation installations for producing oxygen and nitrogen. Modern aviation complexes require these gases in large quantities, both in the gaseous and in the liquid state. The onboard gas systems used in aircraft of the Russian Federation are subdivided into: the oxygen system; the air (nitrogen) system; the neutral-gas system; and the fire-protection system. The technological schemes of air-separation installations (ADI) are largely determined by the compressed-air pressure or, more generally, by the refrigeration cycle. For the majority of ADI, the working body of the refrigeration cycle is the separated air itself; that is, the technological and refrigeration cycles are integrated within the installation. By this principle, installations are classified as: low pressure; medium and high pressure; with an expander; and with preliminary chilling. There is also a small number of ADI types in which the refrigeration and technological cycles are separated: these are installations with external chilling. To monitor the technical condition of the BRV hardware in real time and estimate reliability indicators, the use of multi-agent technologies is proposed. The multi-agent approach is the most suitable basis for a decision-support system (SPPR) for reliability assessment, since it allows: redistributing information processing across system elements, which increases overall performance; accumulating, storing and reusing knowledge, which significantly increases the efficiency of reliability-assessment tasks; and considerably reducing human intervention in the operation of the system, which saves the decision maker's time and does not require special skills in working with it.
Karl, Florian M; Smith, Jennifer; Piedt, Shannon; Turcotte, Kate; Pike, Ian
2017-08-05
Bicycle injuries are of concern in Canada. Since helmet use was mandated in 1996 in the province of British Columbia, Canada, use has increased and head injuries have decreased. Despite the law, many cyclists do not wear a helmet. The health action process approach (HAPA) model explains intention and behaviour through self-efficacy, risk perception, outcome expectancy and planning constructs. The present study examines the impact of a social marketing campaign on HAPA constructs in the context of bicycle helmet use. A questionnaire was administered to identify factors determining helmet use; intention to obey the law, and perceived risk of being caught if not obeying the law, were included as additional constructs. Path analysis was used to extract the strongest influences on intention and behaviour. The social marketing campaign was evaluated through t-test comparisons after propensity score matching, and generalised linear modelling (GLM) was applied to adjust for the same covariates. 400 cyclists aged 25-54 years completed the questionnaire. Self-efficacy and intention to obey the law were most predictive of intention to wear a helmet, which, moderated by planning, strongly predicted behaviour. Perceived risk and outcome expectancies had no significant impact on intention. GLM showed that exposure to the campaign was significantly associated with higher values in self-efficacy, intention and bicycle helmet use. Self-efficacy and planning are important points of action for promoting helmet use. Social marketing campaigns that remind people of appropriate preventive action have an impact on behaviour. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Applying the competence-based approach to management in the aerospace industry
Arpentieva Mariam; Duvalina Olga; Braitseva Svetlana; Gorelova Irina; Rozhnova Anna
2018-01-01
Problems of management in aerospace manufacturing are similar to those observed in other sectors, chief among them the flattening of strategic management. The main reason lies in the organization's attitude towards its human resources. The aerospace industry employs 250 thousand people, who need an individual approach, and an individual approach is what competence-based management can offer. The purpose of the study is to demonstrate the benefits of the competency approach to human resource ...
Ahmad, Ahmad F; Abbas, Zulkifly; Obaiys, Suzan J; Ibrahim, Norazowa; Hashim, Mansor; Khaleel, Haider
2015-01-01
Bio-composites of oil palm empty fruit bunch (OPEFB) fibres and polycaprolactone (PCL) with a thickness of 1 mm were prepared and characterized. The composites produced from these materials are low in density, inexpensive, environmentally friendly, and possess good dielectric characteristics. The magnitudes of the reflection and transmission coefficients of OPEFB fibre-reinforced PCL composites with different percentages of filler were measured using a rectangular waveguide in conjunction with a microwave vector network analyzer (VNA) in the X-band frequency range. In contrast to the effective medium theory, which states that polymer-based composites with a high dielectric constant can be obtained by doping a filler with a high dielectric constant into a host material with a low dielectric constant, this paper demonstrates that a low filler percentage (12.2% OPEFB) with a high matrix percentage (87.8% PCL) provides excellent results for the dielectric constant and loss factor, whereas 63.8% filler material with 36.2% host material results in lower values for both. The open-ended coaxial probe technique (OEC), connected to the Agilent VNA, was used to determine the dielectric properties of the materials under investigation. The comparative approach indicates that the mean relative error of the finite element method (FEM) is smaller than that of the Nicolson–Ross–Weir (NRW) method in terms of the corresponding S21 magnitude. The present calculation of the matrix/filler percentages confirms the exact amounts of substrate to be used in various physics applications.
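The forward model linking permittivity to the measured S11/S21 can be sketched for the simplified case of a dielectric slab at normal incidence in free space; the actual measurement used a rectangular waveguide, whose mode-dependent (dispersive) wavenumber is omitted here, and the permittivity value below is illustrative.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def slab_s_params(eps_r, thickness, freq):
    """S11 and S21 of a homogeneous dielectric slab in air at normal
    incidence. eps_r may be complex (eps' - j*eps'') to model loss."""
    n = np.sqrt(eps_r + 0j)                   # refractive index
    gamma = (1 - n) / (1 + n)                 # air/dielectric interface reflection
    z = np.exp(-1j * 2 * np.pi * freq / C0 * n * thickness)  # one-way propagation
    denom = 1 - gamma**2 * z**2               # Fabry-Perot multiple reflections
    s11 = gamma * (1 - z**2) / denom
    s21 = z * (1 - gamma**2) / denom
    return s11, s21

# 1 mm slab at 10 GHz (X band), illustrative lossless permittivity.
s11, s21 = slab_s_params(eps_r=3.0, thickness=1e-3, freq=10e9)

# A lossless slab conserves energy: |S11|^2 + |S21|^2 = 1.
print(abs(s11)**2 + abs(s21)**2)
```

NRW-type extraction inverts this relationship: given measured S11/S21, it solves for the complex permittivity, which is why a correct forward model is the starting point.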
New approach for validating the segmentation of 3D data applied to individual fibre extraction
DEFF Research Database (Denmark)
Emerson, Monica Jane; Dahl, Anders Bjorholm; Dahl, Vedrana Andersen
2017-01-01
We present two approaches for validating the segmentation of 3D data. The first approach consists in comparing the amount of estimated material to a value provided by the manufacturer. The second approach consists in comparing the segmented results to those obtained from imaging modalities...
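The first validation approach (comparing the amount of segmented material to a manufacturer-provided value) can be sketched as follows; the nominal fibre fraction and the acceptance tolerance are assumptions for illustration, not values from the work.

```python
import numpy as np

def material_fraction(segmentation):
    """Volume fraction of material in a binary 3D segmentation
    (True / 1 = material voxel)."""
    seg = np.asarray(segmentation, dtype=bool)
    return seg.sum() / seg.size

def check_against_nominal(segmentation, nominal_fraction, tolerance=0.05):
    """Flag the segmentation if its material fraction deviates from the
    manufacturer's nominal value by more than the tolerance."""
    measured = material_fraction(segmentation)
    return abs(measured - nominal_fraction) <= tolerance, measured

# Toy 3D volume: roughly 30% of voxels randomly marked as fibre material.
rng = np.random.default_rng(1)
volume = rng.random((50, 50, 50)) < 0.30

ok, measured = check_against_nominal(volume, nominal_fraction=0.30)
print(ok)  # → True
```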
International Nuclear Information System (INIS)
Taghavifar, Hamid; Mardani, Aref
2014-01-01
This paper examines the prediction of the energy efficiency indices of driven wheels (i.e. traction coefficient and tractive power efficiency) as affected by wheel load, slippage and forward velocity, each at three levels with three replicates, forming a total of 162 data points. The pertinent experiments were carried out in a soil bin testing facility. A feed-forward ANN (artificial neural network) with the standard BP (back propagation) algorithm was used to construct a supervised model to predict the energy efficiency indices of driven wheels. In view of the statistical performance criteria (i.e. MSE (mean squared error) and R²), a supervised ANN with 3-8-10-2 topology and the Levenberg–Marquardt training algorithm represented the optimal model. Modeling results indicated that the ANN is a powerful technique for predicting the stochastic energy efficiency indices affected by soil-wheel interactions, with an MSE of 0.001194 and R² of 0.987 and 0.9772 for traction coefficient and tractive power efficiency, respectively. It was found that traction coefficient and tractive power efficiency increase with increased slippage, and a similar trend holds for the influence of wheel load. While an increase in velocity led to an increase in tractive power efficiency, velocity had no significant effect on traction coefficient. - Highlights: • Energy efficiency indices were assessed as affected by tire parameters. • An ANN was applied for prediction of the objective parameters. • A 3-8-10-2 ANN with MSE of 0.001194 and R² of 0.987 and 0.9772 was designated as the optimal model. • Optimal values of learning rate and momentum were found to be 0.9 and 0.5, respectively.
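The 3-8-10-2 topology named above can be sketched as a forward pass. The sketch below is untrained with random weights (the paper trained with Levenberg–Marquardt, which is omitted), and the choice of tanh hidden units with sigmoid outputs is an assumption made to keep both efficiency indices in (0, 1).

```python
import numpy as np

rng = np.random.default_rng(42)

class FeedForwardNet:
    """Sketch of the paper's 3-8-10-2 feed-forward topology:
    3 inputs (wheel load, slippage, velocity) -> hidden layers of 8 and
    10 units -> 2 outputs (traction coefficient, tractive power
    efficiency). Weights are random; training is not implemented."""

    def __init__(self, sizes=(3, 8, 10, 2)):
        self.weights = [rng.standard_normal((a, b)) * 0.5
                        for a, b in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.zeros(b) for b in sizes[1:]]

    def forward(self, x):
        # tanh activations on the two hidden layers.
        for w, b in zip(self.weights[:-1], self.biases[:-1]):
            x = np.tanh(x @ w + b)
        # Sigmoid output keeps both efficiency indices in (0, 1).
        z = x @ self.weights[-1] + self.biases[-1]
        return 1.0 / (1.0 + np.exp(-z))

net = FeedForwardNet()
batch = rng.standard_normal((5, 3))  # five (load, slip, velocity) samples
print(net.forward(batch).shape)      # → (5, 2)
```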
Lesellier, E; Mith, D; Dubrulle, I
2015-12-04
Analyses of complex cosmetic samples, such as creams or lotions, are generally achieved by HPLC. These analyses often require multistep gradients, due to the presence of compounds with a large range of polarity. For instance, the bioactive compounds may be polar while the matrix contains lipid components that are rather non-polar, since cosmetic formulations are usually oil-water emulsions. Supercritical fluid chromatography (SFC) uses mobile phases composed of carbon dioxide and organic co-solvents, allowing for good solubility of both the active compounds and the matrix excipients. Moreover, the classical and well-known properties of these mobile phases yield fast analyses and ensure rapid method development. However, due to the large number of stationary phases available for SFC and to the varied additional parameters acting on both retention and separation factors (co-solvent nature and percentage, temperature, backpressure, flow rate, column dimensions and particle size), a simplified approach can be followed to ensure fast method development. First, suitable stationary phases should be carefully selected for an initial screening, and then the other operating parameters can be limited to the co-solvent nature and percentage, keeping the oven temperature and back-pressure constant. To describe simple method-development guidelines in SFC, three sample applications are discussed in this paper: UV filters (sunscreens) in sunscreen cream, glyceryl caprylate in eye liner and caffeine in eye serum. Firstly, five stationary phases (ACQUITY UPC(2)) are screened with isocratic elution conditions (10% methanol in carbon dioxide). Complementarity of the stationary phases is assessed based on our spider-diagram classification, which compares a large number of stationary phases based on five molecular interactions. Secondly, the one or two best stationary phases are retained for further optimization of the mobile phase composition, with isocratic elution conditions or, when
Applying the archetype approach to the database of a biobank information management system.
Späth, Melanie Bettina; Grimson, Jane
2011-03-01
The purpose of this study is to investigate the feasibility of applying the openEHR archetype approach to modelling the data in the database of an existing proprietary biobank information management system. A biobank information management system stores the clinical/phenotypic data of the sample donor and sample related information. The clinical/phenotypic data is potentially sourced from the donor's electronic health record (EHR). The study evaluates the reuse of openEHR archetypes that have been developed for the creation of an interoperable EHR in the context of biobanking, and proposes a new set of archetypes specifically for biobanks. The ultimate goal of the research is the development of an interoperable electronic biomedical research record (eBMRR) to support biomedical knowledge discovery. The database of the prostate cancer biobank of the Irish Prostate Cancer Research Consortium (PCRC), which supports the identification of novel biomarkers for prostate cancer, was taken as the basis for the modelling effort. First the database schema of the biobank was analyzed and reorganized into archetype-friendly concepts. Then, archetype repositories were searched for matching archetypes. Some existing archetypes were reused without change, some were modified or specialized, and new archetypes were developed where needed. The fields of the biobank database schema were then mapped to the elements in the archetypes. Finally, the archetypes were arranged into templates specifically to meet the requirements of the PCRC biobank. A set of 47 archetypes was found to cover all the concepts used in the biobank. Of these, 29 (62%) were reused without change, 6 were modified and/or extended, 1 was specialized, and 11 were newly defined. These archetypes were arranged into 8 templates specifically required for this biobank. A number of issues were encountered in this research. Some arose from the immaturity of the archetype approach, such as immature modelling support tools
International Nuclear Information System (INIS)
Deniz, V.C.
1980-01-01
The problem concerned with the correct definition of the homogenized diffusion coefficient of a lattice, and the concurrent problem of whether or not a homogenized diffusion equation can be formally set up, is studied by a space-energy-angle dependent treatment for a general lattice cell using an operator notation which applies to any eigen-problem. A new definition of the diffusion coefficient is given, which combines within itself the individual merits of the two definitions of Benoist. The relation between the new coefficient and the ''uncorrected'' Benoist coefficient is discussed by considering continuous-spectrum and multi-group diffusion equations. Other definitions existing in the literature are briefly discussed. It is concluded that a diffusion coefficient should represent only leakage effects. A comparison is made between the homogenization approach and the approach via eigen-coefficients, and brief indications are given of a possible scheme for the latter. (author)
Applying the competence-based approach to management in the aerospace industry
Directory of Open Access Journals (Sweden)
Arpentieva Mariam
2018-01-01
Full Text Available Problems of management in aerospace manufacturing are similar to those observed in other sectors, the main one being the flattening of strategic management. The main reason lies in the attitude towards the human resources of the organization. The aerospace industry employs 250 thousand people, who need an individual approach, and such an individual approach can be offered by competence-based management. The purpose of the study is to demonstrate the benefits of the competence-based approach to human resource management in the context of strategic management of an aerospace organization. To achieve this goal, the method of comparative analysis is applied. The article compares two approaches to personnel management. The transition to competence-based human resource management means (a) a different understanding of the object of management; (b) involvement of the employee's «knowledge – skills – abilities» in all functions of human resource management; and (c) a change in the approach to strategic management of the aerospace industry.
Frontolateral Approach Applied to Sellar Region Lesions: A Retrospective Study in 79 Patients
Directory of Open Access Journals (Sweden)
Hao-Cheng Liu
2016-01-01
Conclusions: FLA was an effective approach in the treatment of sellar region lesions with good preservation of visual function. FLA classification enabled tailored craniotomies for each patient according to the anatomic site of tumor invasion. This study found that FLA had similar outcomes to other surgical approaches of sellar region lesions.
An Optimisation Approach Applied to Design the Hydraulic Power Supply for a Forklift Truck
DEFF Research Database (Denmark)
Pedersen, Henrik Clemmensen; Andersen, Torben Ole; Hansen, Michael Rygaard
2004-01-01
-level optimisation approach, and is in the current paper exemplified through the design of the hydraulic power supply for a forklift truck. The paper first describes the prerequisites for the method and then explains the different steps in the approach to design the hydraulic system. Finally the results...
Directory of Open Access Journals (Sweden)
Ahmad F Ahmad
Full Text Available Bio-composites of oil palm empty fruit bunch (OPEFB) fibres and polycaprolactone (PCL) with a thickness of 1 mm were prepared and characterized. The composites produced from these materials are low in density, inexpensive, environmentally friendly, and possess good dielectric characteristics. The magnitudes of the reflection and transmission coefficients of OPEFB fibre-reinforced PCL composites with different percentages of filler were measured using a rectangular waveguide in conjunction with a microwave vector network analyzer (VNA) in the X-band frequency range. In contrast to the effective medium theory, which states that polymer-based composites with a high dielectric constant can be obtained by doping a filler with a high dielectric constant into a host material with a low dielectric constant, this paper demonstrates that the use of a low filler percentage (12.2% OPEFB) and a high matrix percentage (87.8% PCL) provides excellent results for the dielectric constant and loss factor, whereas 63.8% filler material with 36.2% host material results in lower values for both the dielectric constant and loss factor. The open-ended coaxial (OEC) probe technique, connected with the Agilent vector network analyzer (VNA), is used to determine the dielectric properties of the materials under investigation. The comparative approach indicates that the mean relative error of the finite element method (FEM) is smaller than that of the Nicolson–Ross–Weir (NRW) method in terms of the corresponding S21 magnitude. The present calculation of the matrix/filler percentages confirms the exact amounts of substrate utilized in various physics applications.
International Nuclear Information System (INIS)
Gorbatchev, A.; Goetsch, D.; Redko, V.; Madonna, A.
2003-01-01
Many of the planned upgrading measures of Ukrainian VVER plants and of the unique Armenian power plant (Medzanor) are financed by the European Union (EU) through the TACIS program. The ''2+2'' approach implies a deep collaboration between Ukrainian or Armenian regulatory authorities, local operating organizations and EU organizations. This approach allows: - a smooth adaptation of western technologies to VVERs, - a comprehensive checking of Ukrainian, Armenian and western regulatory requirements, and - the transfer of know-how to the Ukrainian and Armenian organizations. This report presents the principles applied for the ''2+2'' approach as well as a summary of the main recommendations given in the framework of the licensing process.
Kapici, Hasan Ozgur; Akcay, Hakan; Yager, Robert E.
2017-01-01
It is important for students to learn concepts and to use them for solving problems and further learning. In this respect, the purpose of this study is to investigate students' abilities to apply science concepts that they have learned from a Science-Technology-Society-based approach or from textbook-oriented instruction. The current study is based on…
Determination of aerodynamic sensitivity coefficients for wings in transonic flow
Carlson, Leland A.; El-Banna, Hesham M.
1992-01-01
The quasianalytical approach is applied to the 3-D full potential equation to compute wing aerodynamic sensitivity coefficients in the transonic regime. Symbolic manipulation is used to reduce the effort associated with obtaining the sensitivity equations, and the large sensitivity system is solved using 'state of the art' routines. The quasianalytical approach is believed to be reasonably accurate and computationally efficient for 3-D problems.
Zedler, Sarah; Kanschat, Guido; Korty, Robert L.; Hoteit, Ibrahim
2011-01-01
forward models of the ocean's response to a tropical cyclone, whereby the probability density function of drag coefficient values as a function of wind speed that results from adding realistic levels of noise to the simulated ocean response variables
Energy Technology Data Exchange (ETDEWEB)
Takata, Hyoe, E-mail: takata@kaiseiken.or.jp [Marine Ecology Research Institute, Central Laboratory, Onjuku, Chiba (Japan); National Institute of Radiological Sciences, Chiba City, Chiba (Japan); Aono, Tatsuo; Tagami, Keiko; Uchida, Shigeo [National Institute of Radiological Sciences, Chiba City, Chiba (Japan)
2016-02-01
In numerical models to simulate the dispersion of anthropogenic radionuclides in the marine environment, the sediment–seawater distribution coefficient (K{sub d}) for various elements is an important parameter. In coastal regions, K{sub d} values are largely dependent on hydrographic conditions and physicochemical characteristics of sediment. Here we report K{sub d} values for 36 elements (Na, Mg, Al, K, Ca, V, Mn, Fe, Co, Ni, Cu, Se, Rb, Sr, Y, Mo, Cd, I, Cs, rare earth elements, Pb, {sup 232}Th and {sup 238}U) in seawater and sediment samples from 19 Japanese coastal regions, and we examine the factors controlling the variability of these K{sub d} values by investigating their relationships to hydrographic conditions and sediment characteristics. There was large variability in K{sub d} values for Al, Mn, Fe, Co, Ni, Cu, Se, Cd, I, Pb and Th. Variations of K{sub d} for Al, Mn, Fe, Co, Pb and Th appear to be controlled by hydrographic conditions. Although K{sub d} values for Ni, Cu, Se, Cd and I depend mainly on grain size, organic matter content, and the concentrations of hydrous oxides/oxides of Fe and Mn in sediments, heterogeneity in the surface characteristics of sediment particles appears to hamper evaluation of the relative importance of these factors. Thus, we report a new approach to evaluate the factors contributing to variability in K{sub d} for an element. By this approach, we concluded that the K{sub d} values for Cu, Se, Cd and I are controlled by grain size and organic matter in sediments, and the K{sub d} value for Ni is dependent on grain size and on hydrous oxides/oxides of Fe and Mn. - Highlights: • K{sub d}s for 36 elements were determined in 19 Japanese coastal regions. • K{sub d}s for several elements appeared to be controlled by multiple factors in sediments. • We evaluated these factors based on physico-chemical characteristics of sediments.
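The distribution coefficient central to the abstract above is, in essence, a concentration ratio between sediment and seawater; a minimal sketch, with hypothetical numbers that are not values from the study:

```python
def distribution_coefficient(c_sediment, c_seawater):
    """Sediment-seawater distribution coefficient Kd (L/kg).

    c_sediment: element concentration in sediment (e.g. mg/kg dry weight).
    c_seawater: dissolved concentration in seawater (e.g. mg/L).
    """
    if c_seawater <= 0:
        raise ValueError("seawater concentration must be positive")
    return c_sediment / c_seawater

# Hypothetical numbers: 250 mg/kg in sediment against 0.005 mg/L dissolved
# in seawater gives Kd = 5.0e4 L/kg.
kd = distribution_coefficient(250.0, 0.005)
```

In practice the study relates the variability of such ratios to hydrographic conditions and sediment characteristics (grain size, organic matter, Fe/Mn hydrous oxides).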
A Structured Approach to Teaching Applied Problem Solving through Technology Assessment.
Fischbach, Fritz A.; Sell, Nancy J.
1986-01-01
Describes an approach to problem solving based on real-world problems. Discusses problem analysis and definitions, preparation of briefing documents, solution finding techniques (brainstorming and synectics), solution evaluation and judgment, and implementation. (JM)
Albalak, Rachel
2009-01-01
This article describes two large, multisite infectious disease programs: the Tuberculosis Epidemiologic Studies Consortium (TBESC) and the Emerging Infections Programs (EIPs). The links between biological anthropology and applied public health are highlighted using these programs as examples. Funded by the Centers for Disease Control and Prevention (CDC), the TBESC and EIPs conduct applied public health research to strengthen infectious disease prevention and control efforts in the United States. They involve collaborations among CDC, public health departments, and academic and clinical institutions. Their unique role in national infectious disease work, including their links to anthropology, shared elements, key differences, strengths and challenges, is discussed.
Applying Rawlsian Approaches to Resolve Ethical Issues : Inventory and Setting of a Research Agenda
Doorn, N.
2009-01-01
Insights from social science are increasingly used in the field of applied ethics. However, recent insights have shown that the empirical branch of business ethics lacks thorough theoretical grounding. This article discusses the use of the Rawlsian methods of wide reflective equilibrium and
Fluid Intelligence as a Predictor of Learning: A Longitudinal Multilevel Approach Applied to Math
Primi, Ricardo; Ferrao, Maria Eugenia; Almeida, Leandro S.
2010-01-01
The association between fluid intelligence and inter-individual differences was investigated using multilevel growth curve modeling applied to data measuring intra-individual improvement on math achievement tests. A sample of 166 students (88 boys and 78 girls), ranging in age from 11 to 14 (M = 12.3, SD = 0.64), was tested. These individuals took…
Storberg-Walker, Julia
2007-01-01
This article presents a provisional grounded theory of conceptual development for applied theory-building research. The theory described here extends the understanding of the components of conceptual development and provides generalized relations among the components. The conceptual development phase of theory-building research has been widely…
A Transfer Learning Approach for Applying Matrix Factorization to Small ITS Datasets
Voß, Lydia; Schatten, Carlotta; Mazziotti, Claudia; Schmidt-Thieme, Lars
2015-01-01
Machine Learning methods for Performance Prediction in Intelligent Tutoring Systems (ITS) have proven their efficacy; specific methods, e.g. Matrix Factorization (MF), however, suffer from a lack of available information about new tasks or new students. In this paper we show how this problem can be solved by applying Transfer Learning (TL),…
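The MF step this abstract builds on can be sketched in a few lines: factor a student × task performance matrix into low-rank student and task factors and use their product to predict unobserved entries. The toy matrix, rank, learning rate and regularisation below are illustrative, not the paper's ITS data, and the TL extension (transferring factors across datasets) is not shown:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy student x task matrix: 1 = task solved, 0 = failed, nan = unobserved.
R = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, np.nan],
              [np.nan, 0.0, 0.0]])

k = 2                                        # number of latent factors
P = 0.1 * rng.normal(size=(R.shape[0], k))   # student factors
Q = 0.1 * rng.normal(size=(R.shape[1], k))   # task factors
observed = [(i, j) for i in range(R.shape[0])
            for j in range(R.shape[1]) if not np.isnan(R[i, j])]

lr, reg = 0.1, 0.01
for _ in range(200):                         # SGD over observed entries
    for i, j in observed:
        err = R[i, j] - P[i] @ Q[j]
        P[i] += lr * (err * Q[j] - reg * P[i])
        Q[j] += lr * (err * P[i] - reg * Q[j])

pred = P[1] @ Q[2]                           # predict an unobserved entry
```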
Grégory, Dubourg; Chaudet, Hervé; Lagier, Jean-Christophe; Raoult, Didier
2018-03-01
Describing the human gut microbiota is one of the most exciting challenges of the 21st century. Currently, high-throughput sequencing methods are considered the gold standard for this purpose; however, they suffer from several drawbacks, including their inability to detect minority populations. The advent of mass-spectrometric (MS) approaches to identify cultured bacteria in clinical microbiology enabled the creation of the culturomics approach, which aims to establish a comprehensive repertoire of cultured prokaryotes from human specimens using extensive culture conditions. Areas covered: This review first underlines how mass spectrometric approaches have revolutionized clinical microbiology. It then highlights the contribution of MS-based methods to culturomics studies, paying particular attention to the extension of the human gut microbiota repertoire through the discovery of new bacterial species. Expert commentary: MS-based approaches have enabled cultivation methods to be resuscitated to study the human gut microbiota and thus to fill in the blanks left by high-throughput sequencing methods in terms of culturing minority populations. Continued efforts to recover new taxa using culture methods, combined with their rapid implementation in genomic databases, would allow for an exhaustive analysis of the gut microbiota through the use of a comprehensive approach.
Jazar, Reza
2015-01-01
This book focuses on the latest applications of nonlinear approaches in different disciplines of engineering. For each selected topic, detailed concept development, derivations, and relevant knowledge are provided for the convenience of the readers. The topics range from dynamic systems and control to optimal approaches in nonlinear dynamics. The volume includes invited chapters from world class experts in the field. The selected topics are of great interest in the fields of engineering and physics and this book is ideal for engineers and researchers working in a broad range of practical topics and approaches. This book also: · Explores the most up-to-date applications and underlying principles of nonlinear approaches to problems in engineering and physics, including sections on analytic nonlinearity and practical nonlinearity · Enlightens readers to the conceptual significance of nonlinear approaches with examples of applications in scientific and engineering problems from v...
International Nuclear Information System (INIS)
Santos Coelho, Leandro dos; Mariani, Viviana Cocco
2009-01-01
The economic dispatch problem (EDP) is an optimization problem useful in power systems operation. The objective of the EDP of electric power generation, whose characteristics are complex and highly non-linear, is to schedule the committed generating unit outputs so as to meet the required load demand at minimum operating cost while satisfying system constraints. Recently, as an alternative to the conventional mathematical approaches, modern heuristic optimization techniques have been given much attention by many researchers due to their ability to find an almost global optimal solution in EDPs. As a special mechanism to avoid being trapped in local minima, the ergodicity property of chaotic sequences has been used as an optimization technique in EDPs. Based on chaos theory, this paper discusses the design and validation of an optimization procedure based on a chaotic artificial immune network approach built on Zaslavsky's map. The optimization approach based on the chaotic artificial immune network is validated on a test system consisting of 13 thermal units whose incremental fuel cost function takes into account valve-point loading effects. Simulation results and comparisons show that the chaotic artificial immune network approach is competitive in performance with other optimization approaches presented in the literature and is also an attractive tool for applications in the power systems field.
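The ergodicity idea mentioned above can be shown in isolation: a chaotic sequence sweeps the whole search box, so sampling it gives a crude global search. The logistic map below stands in for the Zaslavsky map used in the paper, and the quadratic test cost is an illustrative stand-in, not the 13-unit valve-point dispatch problem:

```python
import numpy as np

def cost(p):
    # Illustrative convex stand-in for a generating-cost function; the
    # paper's 13-unit problem uses valve-point loading terms instead.
    return float(np.sum((p - 3.0) ** 2) + 10.0)

def chaotic_search(lo, hi, dim=2, iters=2000):
    """Minimise `cost` over [lo, hi]^dim by sampling a chaotic sequence.

    One logistic-map stream per dimension, started from distinct seeds,
    so the coordinates are not locked onto a single one-step curve.
    """
    z = np.linspace(0.11, 0.37, dim)      # distinct seeds per dimension
    best_x, best_f = None, np.inf
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)           # logistic map, chaotic at r = 4
        x = lo + (hi - lo) * z            # map [0, 1] onto the search box
        f = cost(x)
        if f < best_f:
            best_x, best_f = x.copy(), f
    return best_x, best_f

best_x, best_f = chaotic_search(0.0, 5.0)  # optimum at p = (3, 3), cost 10
```

In the paper this chaotic driving is embedded inside an artificial immune network rather than used as a bare sampler.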
The flux-coordinate independent approach applied to X-point geometries
International Nuclear Information System (INIS)
Hariri, F.; Hill, P.; Ottaviani, M.; Sarazin, Y.
2014-01-01
A Flux-Coordinate Independent (FCI) approach for anisotropic systems, not based on magnetic flux coordinates, has been introduced in Hariri and Ottaviani [Comput. Phys. Commun. 184, 2419 (2013)]. In this paper, we show that the approach can tackle magnetic configurations including X-points. Using the code FENICIA, an equilibrium with a magnetic island has been used to show the robustness of the FCI approach to cases in which a magnetic separatrix is present in the system, either by design or as a consequence of instabilities. Numerical results are in good agreement with the analytic solutions of the sound-wave propagation problem. Conservation properties are verified. Finally, the critical gain of the FCI approach in situations including the magnetic separatrix with an X-point is demonstrated by a fast convergence of the code with the numerical resolution in the direction of symmetry. The results highlighted in this paper show that the FCI approach can efficiently deal with X-point geometries
Williams, A Mark; Ericsson, K Anders
2005-06-01
The number of researchers studying perceptual-cognitive expertise in sport is increasing. The intention in this paper is to review the currently accepted framework for studying expert performance and to consider implications for undertaking research work in the area of perceptual-cognitive expertise in sport. The expert performance approach presents a descriptive and inductive approach for the systematic study of expert performance. The nature of expert performance is initially captured in the laboratory using representative tasks that identify reliably superior performance. Process-tracing measures are employed to determine the mechanisms that mediate expert performance on the task. Finally, the specific types of activities that lead to the acquisition and development of these mediating mechanisms are identified. General principles and mechanisms may be discovered and then validated by more traditional experimental designs. The relevance of this approach to the study of perceptual-cognitive expertise in sport is discussed and suggestions for future work highlighted.
Boudou, Martin; Lang, Michel; Vinet, Freddy; Coeur, Denis
2014-05-01
emphasize one flood typology or one flood dynamic (for example, flash floods are often over-represented compared with slow-dynamic floods in existing databases). Thus, the selected criteria have to give a general overview of flooding risk in France by integrating all typologies: storm surges, torrential floods, rising groundwater levels resulting in floods, etc. The methodology developed for the evaluation grid is inspired by several scientific works related to historical hydrology (Bradzil, 2006; Benito et al., 2004) or extreme flood classification (Kundzewics et al., 2013; Garnier E., 2005). The referenced information is mainly drawn from investigations carried out for the PFRA (archives, local data), from internet databases on flooding disasters, and from a complementary bibliography (e.g. Maurice Pardé, a geographer who extensively documented French floods during the 20th century). The proposed classification relies on three main axes. Each axis is associated with a set of criteria, each one related to a score (from 0.5 to 4 points), yielding a final remarkability score. • The flood intensity, characterizing the flood's hazard level. It is composed of the submersion duration (important to valorize floods with slow dynamics, such as flooding from groundwater), the return period of the event's peak discharge, and the presence of factors significantly increasing the hazard level (dyke breaks, log jams, sediment transport…). • The flood severity, which focuses on economic damages, social and political repercussions, media coverage of the event, the number of fatalities, and eventual flood-warning failures. Analyzing the flood consequences is essential in order to evaluate the vulnerability of society at the time of the disaster. • The spatial extension of the flood, which contributes complementary information to the first two axes. The evaluation grid was tested and applied on the sample of 176 remarkable events. Around twenty events (from 1856 to 2010) come out with a high remarkability rate
Applied anatomy of a new approach of endoscopic technique in thyroid gland surgery.
Liu, Hong; Xie, Yong-jun; Xu, Yi-quan; Li, Chao; Liu, Xing-guo
2012-10-01
To explore the feasibility and safety of a transtracheal-assisted sublingual approach to totally endoscopic thyroidectomy by studying the anatomical approach and adjacent structures. A total of 5 embalmed adult cadavers from Chengdu Medical College were dissected layer by layer in the cervical region, pharyngeal region, and mandible region, according to the transtracheal-assisted sublingual approach that was verified from the anatomical approach and planes. A total of 15 embalmed adult cadavers were dissected by the arterial vascular casting technique, imaging scanning technique, and thin-layer cryotomy. Then the vessels and anatomical structures of the thyroid surgical region were analyzed qualitatively and quantitatively. Three-dimensional visualization of the laryngeal artery was reconstructed with Autodesk 3ds Max 2010(32). The transtracheal-assisted sublingual approach for totally endoscopic thyroidectomy was simulated on 5 embalmed adult cadavers. The sublingual observation access was located in the middle of the sublingual region. The geniohyoid muscle, mylohyoid seam, and submental triangle were divided in turn in the middle to reach the plane under the platysma muscle. The superficial cervical fascia, anterior body of the hyoid bone, and infrahyoid muscles were passed in sequence to reach the thyroid gland surgical region. The transtracheal operational access passed from the cavitas oris propria, isthmus faucium, subepiglottic region, laryngeal pharynx, and intermediate laryngeal cavity, from the top down in order to reach the pars cervicalis tracheae, where a sagittal incision was made in the anterior wall of the cartilagines tracheales to reach the ascertained surgical region. The transtracheal-assisted sublingual approach to totally endoscopic thyroidectomy is anatomically feasible and safe and can be useful in thyroid gland surgery.
An approach using quantum ant colony optimization applied to the problem of nuclear reactors reload
International Nuclear Information System (INIS)
Silva, Marcio H.; Lima, Alan M.M. de; Schirru, Roberto; Medeiros, J.A.C.C.
2009-01-01
The basic concept behind the nuclear reactor fuel reloading problem is to find a configuration of new and used fuel elements that keeps the plant working at full power for the longest possible duration, within the safety restrictions. The main restriction is the power peaking factor, which is the limit value for the preservation of the fuel assembly. The QACO-Alfa algorithm is a modified version of the Quantum Ant Colony Optimization (QACO) algorithm proposed by Wang et al., which uses a new updating method and a pseudo-evaporation step. We examined the behavior of QACO-Alfa coupled with the reactor physics code RECNOD when applied to this problem. Although QACO was developed for continuous functions, the binary model used in this work allows applying it to discrete problems such as the one mentioned above. (author)
Directory of Open Access Journals (Sweden)
Amany AlShawi
2016-01-01
Full Text Available Presently, the popularity of cloud computing is gradually increasing day by day. The purpose of this research was to enhance the security of the cloud using techniques such as data mining with specific reference to the single cache system. From the findings of the research, it was observed that the security in the cloud could be enhanced with the single cache system. For future purposes, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers, vendors, data distributors, and others. Further, data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.
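The Apriori step the abstract proposes can be sketched as a plain frequent-itemset miner; applying it to a single cache system would, on one illustrative reading, treat each access session as a transaction of cached objects. The transactions and support threshold below are invented for illustration:

```python
def apriori(transactions, min_support):
    """Minimal Apriori frequent-itemset miner (level-wise candidate growth)."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    support = lambda s: sum(s <= t for t in transactions) / n

    # Level 1: frequent single items.
    items = {i for t in transactions for i in t}
    level = [frozenset([i]) for i in items
             if support(frozenset([i])) >= min_support]
    frequent = {s: support(s) for s in level}

    while level:
        # Candidate generation: unions of frequent sets one element larger.
        size = len(level[0]) + 1
        candidates = {a | b for a in level for b in level if len(a | b) == size}
        level = [c for c in candidates if support(c) >= min_support]
        frequent.update({s: support(s) for s in level})
    return frequent

# Hypothetical cache-access sessions, not data from the study.
freq = apriori([{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b"}],
               min_support=0.5)
```

Co-occurring objects (here the frequent pair {a, b}) could then inform prefetching or anomaly rules in the cache.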
Post, L.J.G.; Roos, M.; Marshall, M.S.; van Driel, R.; Breit, T.M.
2007-01-01
The numerous public data resources make integrative bioinformatics experimentation increasingly important in life sciences research. However, it is severely hampered by the way the data and information are made available. The semantic web approach enhances data exchange and integration by providing
Applying adaptive management in resource use in South African National Parks: A case study approach
Directory of Open Access Journals (Sweden)
Kelly Scheepers
2011-05-01
Conservation implications: There is no blueprint for the development of sustainable resource use systems and resource use is often addressed according to multiple approaches in national parks. However, the SANParks resource use policy provides a necessary set of guiding principles for resource use management across the national park system that allows for monitoring progress.
Wildy, Helen; Pepper, Coral
2005-01-01
Dissatisfaction with long lists of duties as substitutes for standards led to the innovative application of narratives as an alternative approach to the generation and use of standards for school leaders. This paper describes research conducted over nearly a decade in collaboration with the state education authority in Western Australia,…
A single grain approach applied to modelling recrystallization kinetics in a single-phase metal
Chen, S.P.; Zwaag, van der S.
2004-01-01
A comprehensive model for the recrystallization kinetics is proposed which incorporates both microstructure and the textural components in the deformed state. The model is based on the single-grain approach proposed previously. The influence of the as-deformed grain orientation, which affects the
D.F. de Korne (Dirk); J.C.A. Sol (Kees); T. Custers (Thomas); E. van Sprundel (Esther); B.M. van Ineveld (Martin); H.G. Lemij (Hans); N.S. Klazinga (Niek)
2009-01-01
Purpose: The purpose of this paper is to explore, in a specific hospital care process, the applicability in practice of the theories of quality costing and value chains. Design/methodology/approach: In a retrospective case study an in-depth evaluation of the use of a quality cost model
A systematic approach for fine-tuning of fuzzy controllers applied to WWTPs
DEFF Research Database (Denmark)
Ruano, M.V.; Ribes, J.; Sin, Gürkan
2010-01-01
A systematic approach for fine-tuning fuzzy controllers has been developed and evaluated for an aeration control system implemented in a WWTP. The challenge with the application of fuzzy controllers to WWTPs is simply that they contain many parameters, which need to be adjusted for different WWTP ...
Oosterling, Iris J.; Wensing, Michel; Swinkels, Sophie H.; van der Gaag, Rutger Jan; Visser, Janne C.; Woudenberg, Tim; Minderaa, Ruud; Steenhuis, Mark-Peter; Buitelaar, Jan K.
Background: Few field trials exist on the impact of implementing guidelines for the early detection of autism spectrum disorders (ASD). The aims of the present study were to develop and evaluate a clinically relevant integrated early detection programme based on the two-stage screening approach of
Improving the efficiency of a chemotherapy day unit: Applying a business approach to oncology
van Lent, W.A.M.; Goedbloed, N.; van Harten, Willem H.
2009-01-01
Aim: To improve the efficiency of a hospital-based chemotherapy day unit (CDU). - Methods: The CDU was benchmarked with two other CDUs to identify their attainable performance levels for efficiency, and causes for differences. Furthermore, an in-depth analysis using a business approach, called lean
Standards for Standardized Logistic Regression Coefficients
Menard, Scott
2011-01-01
Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
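One common partial standardization, which Menard's discussion builds on, multiplies each raw logit coefficient by its predictor's standard deviation, so the coefficient describes the change in log-odds per one-SD change in that predictor (fully standardized variants also divide by the standard deviation of the predicted logit). The coefficients and SDs below are invented for illustration.

```python
import numpy as np

# Hypothetical raw logistic regression coefficients and predictor SDs
b = np.array([0.80, -1.20, 0.05])   # raw (unstandardized) coefficients
sd_x = np.array([2.0, 0.5, 10.0])   # standard deviations of the predictors

# Partially standardized coefficients: change in log-odds of the
# outcome per one-SD change in each predictor.
b_std = b * sd_x
print(b_std)  # approximately [1.6, -0.6, 0.5]
```

Note how standardization reorders apparent importance: the third predictor's tiny raw coefficient belies a sizeable one-SD effect.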
Hastuti, S.; Harijono; Murtini, E. S.; Fibrianto, K.
2018-03-01
This study investigates the use of parametric and non-parametric approaches for the sensory RATA (Rate-All-That-Apply) method. Ledre, a unique local food product of Bojonegoro, was used as the point of interest, and 319 panelists were involved in the study. The results showed that ledre is characterized by an easily crushed texture, stickiness in the mouth, a stinging sensation and ease of swallowing. It also has a strong banana flavour and brown colour. Compared to eggroll and semprong, ledre shows more variance in taste as well as in roll length. As the RATA questionnaire is designed to collect categorical data, a non-parametric approach is the common statistical procedure. However, the parametric approach yielded similar results, despite the non-normally distributed data. This suggests that the parametric approach can be applicable for consumer studies with a large number of respondents, even though the data may not satisfy the assumptions of ANOVA (Analysis of Variance).
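The parametric versus non-parametric comparison described above can be sketched on synthetic rating data: run a one-way ANOVA and a Kruskal-Wallis test on the same groups and compare their conclusions. The groups and values below are invented for illustration, not the ledre panel data.

```python
from scipy import stats
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 1-5 category ratings of one attribute for three products
ledre    = rng.integers(3, 6, size=100)
eggroll  = rng.integers(1, 4, size=100)
semprong = rng.integers(2, 5, size=100)

f, p_anova   = stats.f_oneway(ledre, eggroll, semprong)   # parametric
h, p_kruskal = stats.kruskal(ledre, eggroll, semprong)    # non-parametric

# With clearly separated groups, both approaches agree that the
# products differ, echoing the paper's observation.
print(p_anova < 0.05, p_kruskal < 0.05)
```

With a large panel, the two tests typically agree on whether products differ, which is the practical point the study makes.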
DEFF Research Database (Denmark)
Arndt, Channing; Mahrt, Kristi; Hussain, Azhar
2017-01-01
The rights-based approach to development targets progress towards the realization of 30 articles set forth in the Universal Declaration of Human Rights. Progress is frequently measured using the multidimensional poverty index. While elegant and useful, the multidimensional poverty index is in reality inconsistent with the Universal Declaration of Human Rights principles of indivisibility, inalienability, and equality. We show that a first-order dominance methodology maintains consistency with basic principles, and discuss the properties of the multidimensional poverty index and first-order dominance...
A Hybrid Approach to the Valuation of RFID/MEMS technology applied to ordnance inventory
Doerr, Kenneth H.; Gates, William R.; Mutty, John E.
2006-01-01
We report on an analysis of the costs and benefits of fielding Radio Frequency Identification / MicroElectroMechanical System (RFID/MEMS) technology for the management of ordnance inventory. A factorial model of these benefits is proposed. Our valuation approach combines a multi-criteria tool for the valuation of qualitative factors with a Monte Carlo simulation of anticipated financial factors. In a sample survey, qualitative factors are shown to account for over half of the anticipated benefits.
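The Monte Carlo half of such a valuation can be sketched with an invented savings model, using triangular distributions to stand in for the anticipated financial factors (all names and dollar figures below are hypothetical, not the study's inputs):

```python
import random
random.seed(42)

# Monte Carlo sketch of anticipated annual net savings from RFID/MEMS
# ordnance tracking; distributions and figures are illustrative only.
def simulate_annual_saving():
    labour    = random.triangular(50_000, 150_000, 90_000)  # labour-hours saved ($)
    shrinkage = random.triangular(10_000, 80_000, 30_000)   # inventory loss avoided ($)
    tag_cost  = random.triangular(20_000, 60_000, 40_000)   # recurring tag/reader cost ($)
    return labour + shrinkage - tag_cost

draws = [simulate_annual_saving() for _ in range(10_000)]
mean_saving = sum(draws) / len(draws)
p_negative = sum(d < 0 for d in draws) / len(draws)
print(f"mean saving ${mean_saving:,.0f}, P(loss) = {p_negative:.3f}")
```

Pairing the resulting distribution with scored qualitative factors is what lets the hybrid approach compare tangible and intangible benefits on one scale.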
A Guttman-Based Approach to Identifying Cumulativeness Applied to Chimpanzee Culture
Graber, RB; de Cock, DR; Burton, ML
2012-01-01
Human culture appears to build on itself; that is, to be to some extent cumulative. Whether this property is shared by culture in the common chimpanzee is controversial. The question has previously been approached qualitatively (and inconclusively) by debating whether any chimpanzee culture traits have resulted from individuals building on one another's work ("ratcheting"). The fact that the chimpanzees at different sites have distinctive repertoires of traits affords a different avenue of a...
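A Guttman-based analysis of site-by-trait data can be illustrated with a coefficient of reproducibility for a binary presence/absence matrix. The scoring rule below (a Goodenough-style expected pattern per row, given each row's total) and the matrix itself are illustrative assumptions, not the authors' exact procedure or data.

```python
import numpy as np

# Coefficient of reproducibility for a presence/absence trait matrix
# (rows = chimpanzee sites, columns = culture traits).
def reproducibility(m):
    m = np.asarray(m)
    order = np.argsort(-m.sum(axis=0), kind="stable")  # traits by popularity
    m = m[:, order]
    errors = 0
    for row in m:
        s = int(row.sum())
        # a perfectly cumulative site holds exactly the s most popular traits
        expected = np.r_[np.ones(s), np.zeros(m.shape[1] - s)]
        errors += int((row != expected).sum())
    return 1.0 - errors / m.size

perfect = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]  # a perfect scalogram
print(reproducibility(perfect))              # 1.0
```

Values near 1.0 indicate the trait repertoires nest cumulatively across sites; lower values indicate departures from a cumulative pattern.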
In vitro approach to studying cutaneous metabolism and disposition of topically applied xenobiotics
International Nuclear Information System (INIS)
Kao, J.; Hall, J.; Shugart, L.R.; Holland, J.M.
1984-01-01
The extent to which cutaneous metabolism may be involved in the penetration and fate of topically applied xenobiotics was examined by metabolically viable and structurally intact mouse skin in organ culture. Evidence that skin penetration of certain chemicals is coupled to cutaneous metabolism was based upon observations utilizing [14C]benzo[a]pyrene (BP). As judged by the recovery of radioactivity in the culture medium 24 hr after in vitro topical application of [14C]BP to the skin from both control and 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD)-induced C3H mice, skin penetration of BP was higher in the induced tissue. All classes of metabolites of BP were found in the culture medium; water-soluble metabolites predominated and negligible amounts of unmetabolized BP were found. As shown by enzymatic hydrolysis of the medium, TCDD induction resulted in shifting the cutaneous metabolism of BP toward the synthesis of more water-soluble conjugates. Differences in the degree of covalent binding of BP, via diol epoxide intermediates, to epidermal DNA from control and induced tissues were observed. These differences may reflect a change in the pathways of metabolism as a consequence of TCDD induction. These results indicated that topically applied BP is metabolized by the skin during its passage through the tissue, and that the degree of percutaneous penetration and disposition of BP was dependent upon the metabolic status of the tissue. This suggests that cutaneous metabolism may play an important role in the translocation and subsequent physiological disposition of topically applied BP. 33 references, 5 figures, 2 tables
Overuse tendinosis, not tendinitis part 2: applying the new approach to patellar tendinopathy.
Cook, J L; Khan, K M; Maffulli, N; Purdam, C
2000-06-01
Patellar tendinopathy causes substantial morbidity in both professional and recreational athletes. The condition is most common in athletes of jumping sports such as basketball and volleyball, but it also occurs in soccer, track, and tennis athletes. The disorder arises most often from collagen breakdown rather than inflammation, a tendinosis rather than a tendinitis. Physicians must address the degenerative pathology underlying patellar tendinopathy because regimens that seek to minimize (nonexistent) inflammation would appear illogical. Suggestions for applying the 'tendinosis paradigm' to patellar tendinopathy management include conservative measures such as load reduction, strengthening exercises, and massage. Surgery should be considered only after a long-term and appropriate conservative regimen has failed.
Massaro, Alessandro
2012-01-01
Optoelectronics--technology based on applications of light, such as micro/nano quantum electronics, photonic devices, and lasers for measurement and detection--has become an important field of research. Many applications and physical problems concerning optoelectronics are analyzed in Optical Waveguiding and Applied Photonics. The book is organized to explain how to implement innovative sensors starting from basic physical principles. Applications such as cavity resonance, filtering, tactile sensors, robotic sensors, oil spill detection, small antennas and experimental setups using lasers are a
A risk analysis approach applied to field surveillance in utility meters in legal metrology
Rodrigues Filho, B. A.; Nonato, N. S.; Carvalho, A. D.
2018-03-01
Field surveillance represents the level of control in metrological supervision responsible for checking the conformity of measuring instruments in service. Utility meters represent the majority of measuring instruments produced by notified bodies due to self-verification in Brazil. They play a major role in the economy, since electricity, gas and water are the main inputs to industries in their production processes. Thus, to optimize the resources allocated to control these devices, the present study applied a risk analysis in order to identify, among the 11 manufacturers notified for self-verification, the instruments that demand field surveillance.
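A minimal sketch of such risk-based targeting ranks manufacturers by expected non-conformity, i.e. failure probability times exposure. The manufacturer names and figures below are invented for illustration; the paper's actual risk model is not reproduced here.

```python
# Probability x impact risk ranking to decide which notified
# manufacturers' meters to target for field surveillance.
meters = {
    "maker_A": {"failure_rate": 0.08, "installed_base": 2_000_000},
    "maker_B": {"failure_rate": 0.02, "installed_base": 5_000_000},
    "maker_C": {"failure_rate": 0.15, "installed_base":   300_000},
}

def risk(entry):
    # expected number of non-conforming meters in service
    return entry["failure_rate"] * entry["installed_base"]

priority = sorted(meters, key=lambda k: risk(meters[k]), reverse=True)
print(priority)  # highest expected non-conformity first
```

Note that a high failure rate alone (maker_C) does not dominate the ranking; exposure through the installed base matters as much.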
Directory of Open Access Journals (Sweden)
Cabriada-Nuño Jose
2010-06-01
Abstract Background In the last few years, a new non-pharmacological treatment, termed apheresis, has been developed to lessen the burden of ulcerative colitis (UC). Several methods can be used to establish treatment recommendations, but over the last decade an informal collaboration group of guideline developers, methodologists, and clinicians has developed a more sensible and transparent approach known as the Grading of Recommendations, Assessment, Development and Evaluation (GRADE). GRADE has mainly been used in clinical practice guidelines and systematic reviews. The aim of the present study is to describe the use of this approach in the development of recommendations for a new health technology, and to analyse the strengths, weaknesses, opportunities, and threats found when doing so. Methods A systematic review of the use of apheresis for UC treatment was performed in June 2004 and updated in May 2008. Two related clinical questions were selected, the outcomes of interest defined, and the quality of the evidence assessed. Finally, the overall quality of each question was taken into account to formulate recommendations following the GRADE approach. To evaluate this experience, a SWOT (strengths, weaknesses, opportunities and threats) analysis was performed to enable a comparison with our previous experience with the SIGN (Scottish Intercollegiate Guidelines Network) method. Results Application of the GRADE approach allowed recommendations to be formulated and the method to be clarified and made more explicit and transparent. Two weak recommendations were proposed to answer the formulated questions. Some challenges, such as the limited number of studies found for the new technology and the difficulties encountered when searching for the results for the selected outcomes, none of which are specific to GRADE, were identified. GRADE was considered to be a more time-consuming method, although it has the advantage of taking into account patient
Recruiting the next generation: applying a values-based approach to recruitment.
Ritchie, Georgina; Ashworth, Lisa; Bades, Annette
2018-05-02
The qualified district nurse (DN) role demands high levels of leadership. Attracting the right candidates to apply for the Specialist Practice Qualification District Nursing (SPQDN) education programme is essential to ensure fitness to practice on qualification. Anecdotal evidence suggested that the traditional panel interview discouraged candidates from applying, and a need to improve the quality of the overall interview process was identified by the authors. The University of Central Lancashire, in partnership with Lancashire Care NHS Foundation Trust, adopted the National Values Based Recruitment (VBR) Framework to select candidates for entry onto the SPQDN course. This involved using 'selection centres' with varying activities, including a multiple mini interview, written exercise, group discussion, and portfolio review, with scores attached to each centre. The ultimate aim of utilising VBR was to align personal and professional values with both the nursing profession and the Trust whilst allowing a fairer assessment process. An evaluation of the VBR recruitment process demonstrated a 100% pass rate for the course and 100% satisfaction with the interview process reported by all 16 candidates over three academic years. Interviewer feedback showed deeper insight into the candidates' skills and values aligned with the core values and skills required by future District Nurse leaders within the Trust.
Essays on environmental policy analysis: Computable general equilibrium approaches applied to Sweden
International Nuclear Information System (INIS)
Hill, M.
2001-01-01
This thesis consists of three essays within the field of applied environmental economics, with the common basic aim of analyzing effects of Swedish environmental policy. Starting out from Swedish environmental goals, the thesis assesses a range of policy-related questions. The objective is to quantify policy outcomes by constructing and applying numerical models especially designed for environmental policy analysis. Static and dynamic multi-sectoral computable general equilibrium models are developed in order to analyze the following issues. The costs and benefits of a domestic carbon dioxide (CO2) tax reform. Special attention is given to how these costs and benefits depend on the structure of the tax system and, furthermore, how they depend on policy-induced changes in 'secondary' pollutants. The effects of allowing for emission permit trading through time when the long-term domestic environmental goal is specified in CO2 stock terms. The effects on long-term projected economic growth and welfare that are due to damages from emission flow and accumulation of 'local' pollutants (nitrogen oxides and sulfur dioxide), as well as the outcome of environmental policy when costs and benefits are considered in an integrated environmental-economic framework
Skill-Based Approach Applied to Gifted Students, its Potential in Latin America
Directory of Open Access Journals (Sweden)
Andrew Alexi Almazán-Anaya
2015-09-01
This paper presents, as a reflective essay, the current educational situation of gifted students (those with above-average intelligence) in Latin America and the possibility of using skill-based education within differentiated programs (intended for gifted individuals), a sector where scarce scientific studies have been done and where a consensus on an ideal educative model has not yet been reached. Currently these students generally lack specialized educational assistance intended to identify and develop their cognitive abilities, and it is estimated that a high percentage (95%) of this population is not detected by the traditional education system. Although differentiated education models exist, they are rarely applied. A student-centered education program is proposed as a solution to apply this pedagogical model and cover this population. The characteristics of this program that support differentiated instruction for gifted individuals, compatible with experiences in the US, Europe and Latin America, are analyzed. Finally, this paper concludes with an analysis of possible research areas that, if explored in the future, would help answer questions about the feasibility of, and the relation between, skill-based programs and differentiated education for gifted students.
Energy Technology Data Exchange (ETDEWEB)
Credille, Jennifer [Y-12 National Security Complex, Oak Ridge, TN (United States); Univ. of Tennessee, Knoxville, TN (United States); Owens, Elizabeth [Y-12 National Security Complex, Oak Ridge, TN (United States); Univ. of Tennessee, Knoxville, TN (United States)
2017-10-11
This capstone offers the introduction of Lean concepts to an office activity to demonstrate the versatility of Lean. Traditionally, Lean has been associated with process improvements as applied to an industrial atmosphere. However, this paper will demonstrate that implementing Lean concepts within an office activity can result in significant process improvements. Lean first emerged with the conception of the Toyota Production System. This innovative concept was designed to improve productivity in the automotive industry by eliminating waste and variation. Lean has also been applied to office environments; however, the limited literature reveals that most Lean applications within an office are restricted to one or two techniques. Our capstone confronts these restrictions by introducing a systematic approach that utilizes multiple Lean concepts. The approach incorporates system analysis, system reliability, system requirements, and system feasibility. The methodical Lean outline provides tools for a successful outcome, which ensures the process is thoroughly dissected and can be achieved for any process in any work environment.
Shillito, Lisa-Marie; Blong, John C; Jenkins, Dennis L; Stafford Jr, Thomas W; Whelton, Helen; McDonough, Katelyn; Bull, Ian
2018-01-01
Paisley Caves in Oregon has become well known due to early dates, and human presence in the form of coprolites, found to contain ancient human DNA. Questions remain over whether the coprolites themselves are human, or whether the DNA is mobile in the sediments. This brief introduces new research applying an integrated analytical approach combining sediment micromorphology and lipid biomarker analysis, which aims to resolve these problems.
Groves, Curtis E.; LLie, Marcel; Shallhorn, Paul A.
2012-01-01
There are inherent uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict the flow field, and there is no standard method for evaluating uncertainty in the CFD community. This paper describes an approach to validate the uncertainty in using CFD. The method uses state-of-the-art uncertainty analysis, applying different turbulence models and drawing conclusions on which models provide the least uncertainty and which models most accurately predict the flow over a backward-facing step.
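The model-comparison idea can be sketched by collecting one quantity of interest from each turbulence model run and treating the model-to-model spread as a crude uncertainty band. The model list and the reattachment-length values below are invented for illustration, not the paper's results.

```python
import statistics

# Hypothetical reattachment lengths (in step heights) predicted for a
# backward-facing step by different turbulence models.
predictions = {
    "k-epsilon":        5.6,
    "k-omega SST":      6.4,
    "Spalart-Allmaras": 6.1,
    "realizable k-e":   5.9,
}

values = list(predictions.values())
mean = statistics.mean(values)
spread = max(values) - min(values)
# One crude uncertainty measure: half the model-to-model spread
uncertainty = spread / 2
print(f"mean = {mean:.2f} +/- {uncertainty:.2f} step heights")
```

A run whose prediction sits far outside this band flags either an outlier model or a case the ensemble does not capture; formal CFD uncertainty frameworks refine this with grid-convergence and validation terms.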
The research of approaches of applying the results of big data analysis in higher education
Kochetkov, O. T.; Prokhorov, I. V.
2017-01-01
This article briefly discusses approaches to the use of Big Data in the educational process of higher educational institutions. It briefly describes the nature of big data and their distribution in the education industry, and offers new ways to use Big Data as part of the educational process. The article also describes a method for analysing relevant requests using Yandex.Wordstat (for laboratory work on data processing) and Google Trends (for an actual picture of interest and preferences in a higher education institution).
Multidisciplinary approach of early breast cancer: The biology applied to radiation oncology
International Nuclear Information System (INIS)
Bourgier, Céline; Ozsahin, Mahmut; Azria, David
2010-01-01
Early breast cancer treatment is based on a multimodality approach with the application of clinical and histological prognostic factors to determine locoregional and systemic treatments. The entire scientific community is strongly involved in the management of this disease: radiologists for screening and early diagnosis, gynecologists, surgical oncologists and radiation oncologists for locoregional treatment, pathologists and biologists for personalized characterization, genetic counselors for BRCA mutation history and medical oncologists for systemic therapies. Recently, new biological tools have established various prognostic subsets of breast cancer and developed predictive markers for miscellaneous treatments. The aim of this article is to highlight the contribution of biological tools in the locoregional management of early breast cancer
Oliveira, Justine P R; Ortiz, H Ivan Melendez; Bucio, Emilio; Alves, Patricia Terra; Lima, Mayara Ingrid Sousa; Goulart, Luiz Ricardo; Mathor, Monica B; Varca, Gustavo H C; Lugao, Ademar B
2018-04-10
Safety and biocompatibility assessment of biomaterials are themes of constant concern as advanced materials enter the market and products manufactured by new techniques emerge. Within this context, this review provides an up-to-date approach to current methods for the characterization and safety assessment of biomaterials and biomedical devices from a physical-chemical to a biological perspective, including a description of the alternative methods in accordance with current and established international standards. Copyright © Bentham Science Publishers.
Garcia, Diego; Moro, Claudia Maria Cabral; Cicogna, Paulo Eduardo; Carvalho, Deborah Ribeiro
2013-01-01
Clinical guidelines are documents that assist healthcare professionals, facilitating and standardizing diagnosis, management, and treatment in specific areas. Computerized guidelines as decision support systems (DSS) attempt to increase the performance of tasks and facilitate the use of guidelines. Most DSS are not integrated into the electronic health record (EHR), requiring some degree of rework, especially related to data collection. This study's objective was to present a method for integrating clinical guidelines into the EHR. The study first developed a way to identify data and rules contained in the guidelines, and then incorporated the rules into an archetype-based EHR. The proposed method was tested on anemia treatment in the Chronic Kidney Disease Guideline. The phases of the method are: data and rule identification; archetype elaboration; rule definition and inclusion in an inference engine; and DSS-EHR integration and validation. The main feature of the proposed method is that it is generic and can be applied to any type of guideline.
Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach
Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.
Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
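The core of the transformation, deriving each enterprise's interface process from the global interaction view, can be sketched as a toy mapping: every interaction the enterprise sends becomes a send task, every one it receives becomes a receive task. A real MDA transformation operates on BPMN/UML collaborative process models, not the invented dictionaries below.

```python
# Toy sketch: derive an enterprise's interface process from a
# collaborative process model given as ordered message interactions.
collaborative = [
    ("buyer", "seller", "PurchaseOrder"),
    ("seller", "buyer", "OrderConfirmation"),
    ("seller", "buyer", "Invoice"),
]

def interface_process(enterprise, interactions):
    role = []
    for sender, receiver, msg in interactions:
        if sender == enterprise:
            role.append(("send", msg))
        elif receiver == enterprise:
            role.append(("receive", msg))
    return role

print(interface_process("buyer", collaborative))
# [('send', 'PurchaseOrder'), ('receive', 'OrderConfirmation'), ('receive', 'Invoice')]
```

Because both interface processes are projections of the same global model, each send in one matches a receive in the other, which is the interoperability guarantee the method relies on.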
A systematic approach applied in design of a micro heat exchanger
DEFF Research Database (Denmark)
Omidvarnia, Farzaneh; Hansen, Hans Nørgaard; Sarhadi, Ali
2016-01-01
The number of products benefiting from micro components in the market is increasing, and consequently, the demand for well-matched tools, equipment and systems with micro features is increasing as well. During the design process of micro products, a number of issues appear which are inherent due to the down-scaling or to physical phenomena dominating in the micro range but negligible at the macro scale. In fact, some aspects of design for micro manufacturing are considerably different compared to the design procedure taken at the macro level. Identifying the differences between design ... from the design process of the micro heat exchanger are added to the RTC unit and can be applied as guidelines in the design process of any other micro heat exchanger. In other words, the current study can provide a useful guideline in design for manufacturing of micro products.
Olguin-Alvarez, M. I.; Wayson, C.; Fellows, M.; Birdsey, R.; Smyth, C.; Magnan, M.; Dugan, A.; Mascorro, V.; Alanís, A.; Serrano, E.; Kurz, W. A.
2017-12-01
Since 2012, the Mexican government through its National Forestry Commission, with support from the Commission for Environmental Cooperation, the Forest Services of Canada and USA, the SilvaCarbon Program and research institutes in Mexico, has made important progress towards the use of carbon dynamics models ("gain-loss" approach) for greenhouse gas (GHG) emissions monitoring and projections into the future. Here we assess the biophysical mitigation potential of policy alternatives identified by the Mexican Government (e.g. net zero deforestation rate, sustainable forest management) based on a systems approach that models carbon dynamics in forest ecosystems, harvested wood products and substitution benefits in two contrasting states of Mexico. We provide key messages and results derived from the use of the Carbon Budget Model of the Canadian Forest Sector and a harvested wood products model, parameterized with input data from Mexico's National Forest Monitoring System (e.g. forest inventories, remote sensing, disturbance data). The ultimate goal of this tri-national effort is to develop data and tools for carbon assessment in strategic landscapes in North America, emphasizing the need to include multiple sectors and types of collaborators (scientific and policy-maker communities) to design more comprehensive portfolios for climate change mitigation in accordance with the Paris Agreement of the United Nations Framework Convention on Climate Change (e.g. Mid-Century Strategy, NDC goals).
Energy Technology Data Exchange (ETDEWEB)
Vasconcelos, Vanderley de; Silva, Eliane Magalhaes Pereira da; Costa, Antonio Carlos Lopes da; Reis, Sergio Carneiro dos [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)], e-mail: vasconv@cdtn.br, e-mail: silvaem@cdtn.br, e-mail: aclc@cdtn.br, e-mail: reissc@cdtn.br
2009-07-01
Nuclear energy has an important engineering legacy to share with the conventional industry. Much of the development of the tools related to safety, reliability, risk management, and human factors is associated with nuclear plant processes, mainly because of the public concern about nuclear power generation. Despite the close association between these subjects, there are some important differences in approach. The reliability engineering approach uses several techniques to minimize the component failures that cause the failure of complex systems. These techniques include, for instance, redundancy, diversity, standby sparing, safety factors, and reliability-centered maintenance. On the other hand, system safety is primarily concerned with hazard management, that is, the identification, evaluation and control of hazards. Rather than just looking at failure rates or engineering strengths, system safety examines the interactions among system components. The events that cause accidents may be complex combinations of component failures, faulty maintenance, design errors, human actions, or actuation of instrumentation and control. Thus, system safety deals with a broader spectrum of risk management, including ergonomics, legal requirements, quality control, public acceptance, political considerations, and many other non-technical influences. Taking care of these subjects individually can compromise the completeness of the analysis and the measures associated with both reducing risk and increasing safety and reliability. By analyzing together the engineering systems and controls of a nuclear facility, their management systems and operational procedures, and the human factors engineering, many benefits can be realized. This paper proposes an integration of these issues based on the application of systems theory. (author)
Tollefsen, Knut Erik; Scholz, Stefan; Cronin, Mark T; Edwards, Stephen W; de Knecht, Joop; Crofton, Kevin; Garcia-Reyero, Natalia; Hartung, Thomas; Worth, Andrew; Patlewicz, Grace
2014-12-01
Chemical regulation is challenged by the large number of chemicals requiring assessment for potential human health and environmental impacts. Current approaches are too resource intensive in terms of time, money and animal use to evaluate all chemicals under development or already on the market. The need for timely and robust decision making demands that regulatory toxicity testing becomes more cost-effective and efficient. One way to realize this goal is by being more strategic in directing testing resources; focusing on chemicals of highest concern, limiting testing to the most probable hazards, or targeting the most vulnerable species. Hypothesis driven Integrated Approaches to Testing and Assessment (IATA) have been proposed as practical solutions to such strategic testing. In parallel, the development of the Adverse Outcome Pathway (AOP) framework, which provides information on the causal links between a molecular initiating event (MIE), intermediate key events (KEs) and an adverse outcome (AO) of regulatory concern, offers the biological context to facilitate development of IATA for regulatory decision making. This manuscript summarizes discussions at the Workshop entitled "Advancing AOPs for Integrated Toxicology and Regulatory Applications" with particular focus on the role AOPs play in informing the development of IATA for different regulatory purposes. Copyright © 2014 Elsevier Inc. All rights reserved.
Applying attachment theory to effective practice with hard-to-reach youth: the AMBIT approach.
Bevington, Dickon; Fuggle, Peter; Fonagy, Peter
2015-01-01
Adolescent Mentalization-Based Integrative Treatment (AMBIT) is a developing approach to working with "hard-to-reach" youth burdened with multiple co-occurring morbidities. This article reviews the core features of AMBIT, exploring applications of attachment theory to understand what makes young people "hard to reach," and provide routes toward increased security in their attachment to a worker. Using the theory of the pedagogical stance and epistemic ("pertaining to knowledge") trust, we show how it is the therapeutic worker's accurate mentalizing of the adolescent that creates conditions for new learning, including the establishment of alternative (more secure) internal working models of helping relationships. This justifies an individual keyworker model focused on maintaining a mentalizing stance toward the adolescent, but simultaneously emphasizing the critical need for such keyworkers to remain well connected to their wider team, avoiding activation of their own attachment behaviors. We consider the role of AMBIT in developing a shared team culture (shared experiences, shared language, shared meanings), toward creating systemic contexts supportive of such relationships. We describe how team training may enhance the team's ability to serve as a secure base for keyworkers, and describe an innovative approach to treatment manualization, using a wiki format as one way of supporting this process.
International Nuclear Information System (INIS)
Athayde Costa e Silva, Marsil de; Klein, Carlos Eduardo; Mariani, Viviana Cocco; Santos Coelho, Leandro dos
2013-01-01
The environmental/economic dispatch (EED) is an important daily optimization task in the operation of many power systems. It involves the simultaneous optimization of fuel cost and emission objectives, which are conflicting. The EED problem can be formulated as a large-scale, highly constrained, nonlinear multiobjective optimization problem. In recent years, many metaheuristic optimization approaches have been reported in the literature to solve the multiobjective EED. Among metaheuristics, scatter search approaches have recently received increasing attention because of their potential to effectively explore a wide range of complex optimization problems. This paper proposes an improved scatter search (ISS) to deal with multiobjective EED problems based on the concepts of Pareto dominance and crowding distance and a new scheme for the combination method. We consider the standard IEEE (Institute of Electrical and Electronics Engineers) 30-bus system with 6 generators, and the results obtained by the proposed ISS algorithm are compared with other recently reported results in the literature. Simulation results demonstrate that the proposed ISS algorithm is a capable candidate for solving multiobjective EED problems. - Highlights: ► Economic dispatch. ► We solve the environmental/economic power dispatch problem with scatter search. ► Multiobjective scatter search can effectively improve the global search ability
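Two of the building blocks named in the abstract, Pareto dominance and crowding distance, can be sketched generically as follows (both objectives minimized, e.g. fuel cost and emissions). This is a minimal sketch of the standard definitions, not the authors' ISS implementation.

```python
import math

def dominates(a, b):
    """True if solution a Pareto-dominates solution b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def crowding_distance(front):
    """Per-solution crowding distance over a non-dominated front."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        idx = sorted(range(n), key=lambda i: front[i][k])
        dist[idx[0]] = dist[idx[-1]] = math.inf     # keep boundary points
        span = front[idx[-1]][k] - front[idx[0]][k] or 1.0
        for j in range(1, n - 1):
            dist[idx[j]] += (front[idx[j + 1]][k] - front[idx[j - 1]][k]) / span
    return dist

# e.g. (cost, emission) pairs on a small non-dominated front
front = [(0.0, 2.0), (1.0, 1.0), (2.0, 0.0)]
print(crowding_distance(front))
```

Dominance drives the archive of non-dominated solutions, while crowding distance breaks ties in favour of sparsely populated regions, preserving spread along the cost-emission trade-off.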
International Nuclear Information System (INIS)
Vasconcelos, Vanderley de; Silva, Eliane Magalhaes Pereira da; Costa, Antonio Carlos Lopes da; Reis, Sergio Carneiro dos
2009-01-01
Nuclear energy has an important engineering legacy to share with conventional industry. Much of the development of tools related to safety, reliability, risk management, and human factors is associated with nuclear plant processes, mainly because of public concern about nuclear power generation. Despite the close association between these subjects, there are some important differences in approach. Reliability engineering uses several techniques to minimize the component failures that cause failure of complex systems, including, for instance, redundancy, diversity, standby sparing, safety factors, and reliability-centered maintenance. System safety, on the other hand, is primarily concerned with hazard management, that is, the identification, evaluation and control of hazards. Rather than just looking at failure rates or engineering strengths, system safety examines the interactions among system components. The events that cause accidents may be complex combinations of component failures, faulty maintenance, design errors, human actions, or actuation of instrumentation and control. System safety therefore deals with a broader spectrum of risk management, including ergonomics, legal requirements, quality control, public acceptance, political considerations, and many other non-technical influences. Treating these subjects individually can compromise the completeness of the analysis and of the measures associated with risk reduction and with increasing safety and reliability. By analyzing together the engineering systems and controls of a nuclear facility, its management systems and operational procedures, and its human factors engineering, many benefits can be realized. This paper proposes an integration of these issues based on the application of systems theory. (author)
International Nuclear Information System (INIS)
Winkler, David A.; Mombelli, Enrico; Pietroiusti, Antonio; Tran, Lang; Worth, Andrew; Fadeel, Bengt; McCall, Maxine J.
2013-01-01
The potential (eco)toxicological hazard posed by engineered nanoparticles is a major scientific and societal concern, since several industrial sectors (e.g. electronics, biomedicine, and cosmetics) are exploiting the innovative properties of nanostructures, resulting in their large-scale production. Many consumer products contain nanomaterials and, given their complex life-cycle, it is essential to anticipate their (eco)toxicological properties in a fast and inexpensive way in order to mitigate adverse effects on human health and the environment. In this context, the application of the structure–toxicity paradigm to nanomaterials represents a promising approach. Indeed, according to this paradigm, it is possible to predict the toxicological effects induced by chemicals on the basis of their structural similarity with chemicals for which toxicological endpoints have previously been measured. These structure–toxicity relationships can be quantitative or qualitative in nature, and they can predict toxicological effects directly from the physicochemical properties of the entities (e.g. nanoparticles) of interest. Therefore, this approach can aid in prioritizing resources in toxicological investigations while reducing the ethical and monetary costs related to animal testing. The purpose of this review is to provide a summary of recent key advances in the field of QSAR modelling of nanomaterial toxicity, to identify the major gaps in research required to accelerate the use of quantitative structure–activity relationship (QSAR) methods, and to provide a roadmap for future research needed to achieve QSAR models useful for regulatory purposes.
Applying the Analog Configurability Test Approach in a Wireless Sensor Network Application
Directory of Open Access Journals (Sweden)
Agustín Laprovitta
2014-01-01
Full Text Available This work addresses the application of the analog configurability test (ACT) approach to an embedded analog configurable circuit (EACC), composed of operational amplifiers and interconnection resources, that is embedded in the MSP430xG461x microcontroller family. This test strategy is particularly useful for in-field applications requiring reliability, safe operation, or fault-tolerance characteristics. Our test proposal consists of programming a reduced set of available configurations for the EACC and testing its functionality by measuring only a few key parameters. The processor executes an embedded test routine that sequentially programs the selected configurations, sets the test stimulus, acquires data from the internal ADC, and performs the required calculations. The test approach is experimentally evaluated using a real embedded-system application board. Our experimental results show very good repeatability, with very low errors. These results show that the ACT proposed here is useful for testing the functionality of the circuit under test in a real application context, using a simple strategy at very low cost.
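The embedded test routine described above follows a program–stimulate–acquire–check loop. A host-side sketch of that control flow (all callback names are ours; on the target, these steps would write the EACC configuration registers and read the internal ADC):

```python
def run_act(configs, program, stimulate, read_adc, limits):
    """Sequentially program each EACC configuration, apply its test stimulus,
    acquire one ADC sample, and check it against (low, high) limits.

    program/stimulate/read_adc are injected callbacks so the flow can be
    exercised off-target. Returns a pass/fail verdict per configuration."""
    verdicts = {}
    for cfg in configs:
        program(cfg)          # select op-amp / interconnect configuration
        stimulate(cfg)        # set the test stimulus for this configuration
        sample = read_adc()   # acquire one sample from the ADC
        lo, hi = limits[cfg]
        verdicts[cfg] = lo <= sample <= hi
    return verdicts
```

The "few key parameters" idea maps onto the `limits` table: each configuration passes if its single measured parameter falls inside a precomputed tolerance band.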
Vacquie, Laure; Houet, Thomas
2016-04-01
In the last century, European mountain landscapes have experienced significant transformations. Natural and anthropogenic changes, climate change, touristic and industrial development, socio-economic interactions, and their implications in terms of LUCC (land use and land cover changes) have directly influenced the spatial organization and vulnerability of mountain landscapes. This study is conducted as part of the SAMCO project funded by the French National Science Agency (ANR). It aims at developing a methodological approach, combining various tools, modelling platforms and methods, to identify regions vulnerable to landslide hazards while accounting for future LUCC. It presents an integrated approach combining participative scenarios and LULC change simulation models to assess the combined effects of LUCC and climate change on landslide risks in the Cauterets valley (French Pyrenees) up to 2100. Through vulnerability and risk mapping, the objective is to gather information to support landscape planning and to implement land use strategies with local stakeholders for risk management. Four contrasting scenarios are developed, exhibiting contrasting trajectories of socio-economic development. Prospective scenarios are based on national and international socio-economic contexts, relying on existing assessment reports. The methodological approach integrates knowledge from local stakeholders to refine each scenario during its construction and to reinforce its plausibility and relevance by accounting for local specificities, e.g. logging and pastoral activities, touristic development, urban planning, etc. A process-based model, the Forecasting Scenarios for Mountains (ForeSceM) model, developed on the Dinamica Ego modelling platform, is used to spatially allocate future LUCC for each prospective scenario. Concurrently, a spatial decision support tool, the SYLVACCESS model, is used to identify areas accessible for forestry in scenarios projecting logging
Applying systems theory to the evaluation of a whole school approach to violence prevention.
Kearney, Sarah; Leung, Loksee; Joyce, Andrew; Ollis, Debbie; Green, Celia
2016-02-01
Issue addressed: Our Watch led a complex 12-month evaluation of a whole school approach to Respectful Relationships Education (RRE) implemented in 19 schools. RRE is an emerging field aimed at preventing gender-based violence. This paper illustrates how, from an implementation science perspective, the evaluation was a critical element in the change process at both a school and a policy level. Methods: Using several conceptual approaches from systems science, the evaluation sought to examine how the multiple systems layers - student, teacher, school, community and government - interacted and influenced each other. A distinguishing feature of the evaluation was its 'feedback loops'; that is, evaluation data was provided to participants as it became available. Evaluation tools included a combination of standardised surveys (with pre- and post-intervention data provided to schools via individualised reports), reflection tools, regular reflection interviews and summative focus groups. Results: Data was shared during implementation with project staff, department staff and schools to support continuous improvement at these multiple systems levels. In complex settings, implementation can vary according to context, and the impact of the evaluation processes, tools and findings differed across the schools. Interviews and focus groups conducted at the end of the project illustrated which of these methods were instrumental in motivating change and engaging stakeholders at both a school and a departmental level, and why. Conclusion: The evaluation methods were a critical component of the pilot's approach, helping to shape implementation through data feedback loops and reflective practice for ongoing, responsive and continuous improvement. Future health promotion research on complex interventions needs to examine how the evaluation itself influences implementation. So what? The pilot has demonstrated that the evaluation, including feedback loops to inform project activity, were an
Contemporary approaches to control system specification and design applied to KAON
International Nuclear Information System (INIS)
Ludgate, G.A.; Osberg, E.A.; Dohan, D.A.
1991-05-01
Large data acquisition and control systems have evolved from early centralized computer systems into multi-processor, distributed systems. While the complexity of these systems has increased, our ability to reliably manage their construction has not kept pace. Structured analysis and real-time structured analysis have been used successfully to specify systems but, from a project management viewpoint, both lead to different classes of problems during implementation and maintenance. The KAON Factory central control system study employed a uniform approach to requirements analysis and architectural design. The methodology was based on well-established object-oriented principles and was free of the problems inherent in the older methodologies. The methodology is presently being used to implement two systems at TRIUMF. (Author) 12 refs
A Cointegrated Regime-Switching Model Approach with Jumps Applied to Natural Gas Futures Prices
Directory of Open Access Journals (Sweden)
Daniel Leonhardt
2017-09-01
Full Text Available Energy commodities and their futures naturally show cointegrated price movements. However, there is empirical evidence that the prices of futures with different maturities might have, e.g., different jump behaviours in different market situations. Observing commodity futures over time, there is also evidence for different states of the underlying volatility of the futures. In this paper, we therefore allow for cointegration of the term structure within a multi-factor model, which includes seasonality, as well as joint and individual jumps in the price processes of futures with different maturities. The seasonality in this model is realized via a deterministic function, and the jumps are represented with thinned-out compound Poisson processes. The model also includes a regime-switching approach that is modelled through a Markov chain and extends the class of geometric models. We show how the model can be calibrated to empirical data and give some practical applications.
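The regime-switching-with-jumps ingredient of the model above can be sketched in discrete time: a two-state Markov chain modulates a compound Poisson jump term. This is an illustration with our own parameterization, not the paper's calibrated continuous-time futures model.

```python
import math
import random

def simulate_regime_jumps(n_steps, stay_prob, jump_rate, jump_scale, seed=0):
    """Simulate a two-state Markov chain (0 = base regime, 1 = turbulent)
    and, per step, a compound Poisson jump sum whose intensity depends on
    the current regime.

    stay_prob[s]:  probability of remaining in state s at each step
    jump_rate[s]:  Poisson intensity of jump counts in state s
    jump_scale[s]: std. dev. of each normally distributed jump size"""
    rng = random.Random(seed)
    state, states, jumps = 0, [], []
    for _ in range(n_steps):
        if rng.random() > stay_prob[state]:
            state = 1 - state
        # draw the number of jumps this step by Poisson inversion
        lam = jump_rate[state]
        k, p = 0, math.exp(-lam)
        cum, u = p, rng.random()
        while u > cum:
            k += 1
            p *= lam / k
            cum += p
        jumps.append(sum(rng.gauss(0.0, jump_scale[state]) for _ in range(k)))
        states.append(state)
    return states, jumps
```

In the paper's setting the jump processes are additionally thinned into joint and maturity-specific components; here a single jump stream per step keeps the sketch short.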
The DPSIR approach applied to marine eutrophication in LCIA as a learning tool
DEFF Research Database (Denmark)
Cosme, Nuno Miguel Dias; Olsen, Stig Irving
assessment and response design ultimately benefit from spatial differentiation in the results. DPSIR based on LCIA seems a useful tool to improve communication and learning, as it bridges science and management while promoting the basic elements of sustainable development in a practical educational...... eutrophication. The goal is to promote an educational example of environmental impacts assessment through science-based tools to predict the impacts, communicate knowledge and support decisions. The example builds on the (D) high demand for fixation of reactive nitrogen that supports several socio......: environmentally sustainable, technologically feasible, economically viable, socially desirable, legally permissible, and administratively achievable. Specific LCIA indicators may provide preliminary information to support a precautionary approach to act earlier on D-P and contribute to sustainability. Impacts...
Fundamental parameters approach applied to focal construct geometry for X-ray diffraction
International Nuclear Information System (INIS)
Rogers, K.; Evans, P.; Prokopiou, D.; Dicken, A.; Godber, S.; Rogers, J.
2012-01-01
A novel geometry for the acquisition of powder X-ray diffraction data, referred to as focal construct geometry (FCG), is presented. Diffraction data obtained by FCG have been shown to possess significantly enhanced intensity due to the hollow tube beam arrangement utilized. In contrast to conventional diffraction, the detector is translated along a primary axis to collect images and record the locations of Bragg maxima. These high-intensity condensation foci are unique to FCG and arise from the convergence of Debye cones at single points on the primary axis. This work focuses on a two-dimensional fundamental parameters approach to simulate experimental data and subsequently aid interpretation. This convolution method is shown to favorably reproduce the experimental diffractograms and can also accommodate preferred orientation effects in some circumstances.
A Blockchain Approach Applied to a Teledermatology Platform in the Sardinian Region (Italy)
Directory of Open Access Journals (Sweden)
Katiuscia Mannaro
2018-02-01
Full Text Available The use of teledermatology in primary care has been shown to be reliable, offering the possibility of improving access to dermatological care by using telecommunication technologies to connect several medical centers and enable the exchange of information about skin conditions over long distances. This paper describes the main points of a teledermatology project that we have implemented to promote and facilitate the diagnosis of skin diseases and improve the quality of care for rural and remote areas. Moreover, we present a blockchain-based approach which aims to add new functionalities to an innovative teledermatology platform which we developed and tested in the Sardinian Region (Italy). These functionalities include giving the patient complete access to his/her medical records while maintaining security. Finally, the advantages that this new decentralized system can provide for patients and specialists are presented.
The Video Interaction Guidance approach applied to teaching communication skills in dentistry.
Quinn, S; Herron, D; Menzies, R; Scott, L; Black, R; Zhou, Y; Waller, A; Humphris, G; Freeman, R
2016-05-01
To examine dentists' views of a novel video review technique to improve communication skills in complex clinical situations. Dentists (n = 3) participated in a video review known as Video Interaction Guidance to encourage more attuned interactions with their patients (n = 4). Part of this process is to identify where dentists and patients reacted positively and effectively. Each dentist was presented with short segments of video footage taken during an appointment with a patient with intellectual disabilities and communication difficulties. Having observed their interactions with patients, dentists were asked to reflect on their communication strategies with the assistance of a trained VIG specialist. Dentists reflected that their VIG session had been insightful and considered the review process as beneficial to communication skills training in dentistry. They believed that this technique could significantly improve the way dentists interact and communicate with patients. The VIG sessions increased their awareness of the communication strategies they use with their patients and were perceived as neither uncomfortable nor threatening. The VIG session was beneficial in this exploratory investigation because the dentists could identify when their interactions were most effective. Awareness of their non-verbal communication strategies and the need to adopt these behaviours frequently were identified as key benefits of this training approach. One dentist suggested that the video review method was supportive because it was undertaken by a behavioural scientist rather than a professional counterpart. Some evidence supports the VIG approach in this specialist area of communication skills and dental training. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A chaotic quantum-behaved particle swarm approach applied to optimization of heat exchangers
International Nuclear Information System (INIS)
Mariani, Viviana Cocco; Klassen Duck, Anderson Rodrigo; Guerra, Fabio Alessandro; Santos Coelho, Leandro dos; Rao, Ravipudi Venkata
2012-01-01
The particle swarm optimization (PSO) method is a population-based optimization technique from the field of swarm intelligence in which each solution, called a "particle", flies around in a multidimensional problem search space. During the flight, every particle adjusts its position according to its own experience as well as the experience of neighboring particles, using the best position encountered by itself and its neighbors. In this paper, a new quantum particle swarm optimization (QPSO) approach combined with Zaslavskii chaotic map sequences (QPSOZ) is applied to shell and tube heat exchanger optimization, based on minimization from an economic viewpoint. The results obtained for two case studies using the proposed QPSOZ approach are compared with those obtained using a genetic algorithm, PSO and classical QPSO, showing the superior performance of QPSOZ. In order to verify the capability of the proposed method, the two case studies also show that significant cost reductions are feasible with respect to traditionally designed exchangers. Referring to the literature test cases, reductions of capital investment of up to 20% and 6% for the first and second cases, respectively, were obtained. Moreover, the annual pumping cost decreased markedly, by 72% and 75%, with an overall decrease of total cost of up to 30% and 27% for cases 1 and 2, respectively, showing the improvement potential of the proposed method, QPSOZ. - Highlights: ► A shell and tube heat exchanger is minimized from an economic viewpoint. ► A new quantum particle swarm optimization (QPSO) combined with Zaslavskii chaotic map sequences (QPSOZ) is proposed. ► Reductions of capital investment of up to 20% and 6% for the first and second cases were obtained. ► The annual pumping cost decreased by 72% and 75%, with an overall decrease of total cost of up to 30% and 27% using QPSOZ.
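The chaotic-sequence ingredient of QPSOZ can be illustrated with the Zaslavskii map, often used in chaos-enhanced swarm optimizers to replace uniform random draws. A minimal sketch; the parameter values are typical choices from the chaotic-optimization literature, not necessarily the paper's exact settings, and the rescaling to [0, 1] is ours:

```python
import math

def zaslavskii_sequence(n, x=0.1, y=0.1, v=400.0, r=3.0, a=12.6695):
    """Generate n numbers in [0, 1] from the Zaslavskii map:

        y_{k+1} = cos(2*pi*x_k) + y_k * exp(-r)
        x_{k+1} = (x_k + v + a * y_{k+1}) mod 1

    For r = 3, |y| is bounded by 1/(1 - e^-3) ~ 1.052, so (y + 1.5)/3
    always lands inside [0, 1]. In a chaotic QPSO variant, these values
    would stand in for the uniform random numbers of the position update.
    """
    seq = []
    for _ in range(n):
        y = math.cos(2.0 * math.pi * x) + y * math.exp(-r)
        x = (x + v + a * y) % 1.0
        seq.append((y + 1.5) / 3.0)
    return seq
```

The motivation for such sequences is their ergodicity: they avoid the premature clustering that can occur with pseudo-random draws while remaining fully deterministic and reproducible.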
Komarov, Yu. A.
2014-10-01
An analysis and some generalizations of approaches to risk assessments are presented. Interconnection between different interpretations of the "risk" notion is shown, and the possibility of applying the fuzzy set theory to risk assessments is demonstrated. A generalized formulation of the risk assessment notion is proposed in applying risk-oriented approaches to the problem of enhancing reliability and safety in nuclear power engineering. The solution of problems using the developed risk-oriented approaches aimed at achieving more reliable and safe operation of NPPs is described. The results of studies aimed at determining the need (advisability) to modernize/replace NPP elements and systems are presented together with the results obtained from elaborating the methodical principles of introducing the repair concept based on the equipment technical state. The possibility of reducing the scope of tests and altering the NPP systems maintenance strategy is substantiated using the risk-oriented approach. A probabilistic model for estimating the validity of boric acid concentration measurements is developed.
Laassiri, M.; Hamzaoui, E.-M.; Cherkaoui El Moursli, R.
2018-02-01
Inside nuclear reactors, gamma-rays emitted from nuclei together with the neutrons introduce unwanted backgrounds into neutron spectra. For this reason, powerful extraction methods are needed to extract the useful neutron signal from the recorded mixture and thus obtain a clearer neutron flux spectrum. Several techniques have been developed to discriminate between neutrons and gamma-rays in a mixed radiation field. Most of these techniques rely on analogue discrimination methods; others propose the use of organic scintillators to achieve the discrimination task. Recently, systems based on digital signal processors have become commercially available to replace the analog systems. As an alternative to these systems, we aim in this work to verify the feasibility of using Nonnegative Tensor Factorization (NTF) to blindly extract the neutron component from mixture signals recorded at the output of a fission chamber (WL-7657). The latter has been simulated with Geant4 linked to Garfield++, using a 252Cf neutron source. To achieve our objective of obtaining the best possible neutron-gamma discrimination, we have applied two different NTF algorithms, which were found to be the methods best suited to analysing this kind of nuclear data.
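The core idea behind NTF-based blind extraction is a nonnegativity-constrained factorization of the recorded data. As a compact stand-in for the paper's tensor algorithms, here are Lee–Seung multiplicative updates for the two-way (matrix) case, V ≈ W·H; the same constraint drives the source separation, but this is our illustration, not the authors' code:

```python
import random

def nmf(V, rank, iters=300, eps=1e-9, seed=0):
    """Lee-Seung multiplicative-update nonnegative factorization V ~ W*H.

    V is a list-of-lists matrix with nonnegative entries. The updates keep
    W and H nonnegative and monotonically reduce the Frobenius error."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(rank)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def transpose(A):
        return [list(row) for row in zip(*A)]

    for _ in range(iters):
        WH = matmul(W, H)
        num, den = matmul(transpose(W), V), matmul(transpose(W), WH)
        for r in range(rank):              # H <- H * (W^T V) / (W^T W H)
            for j in range(m):
                H[r][j] *= num[r][j] / (den[r][j] + eps)
        WH = matmul(W, H)
        num, den = matmul(V, transpose(H)), matmul(WH, transpose(H))
        for i in range(n):                 # W <- W * (V H^T) / (W H H^T)
            for r in range(rank):
                W[i][r] *= num[i][r] / (den[i][r] + eps)
    return W, H
```

In the tensor setting the same multiplicative scheme is applied mode by mode; the nonnegative factors can then be interpreted as physically meaningful neutron and gamma components.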
Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach
Denolle, M.; Van Houtte, C.
2017-12-01
Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.
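The bias the abstract describes (fixing the falloff rate inflates the fitted corner frequency) can be illustrated with the generalized Brune-type spectrum and a deliberately simple fit. This is a toy point estimator with our own naming; the study's Bayesian hierarchical model additionally pools data across stations and propagates uncertainty.

```python
import math

def source_spectrum(f, omega0, fc, n):
    """Generalized omega-square source spectrum: O(f) = O0 / (1 + (f/fc)^n)."""
    return omega0 / (1.0 + (f / fc) ** n)

def fit_corner(freqs, amps, fc_grid, n_grid):
    """Log-domain least-squares grid search for (fc, n).

    Omega0 is pinned to the lowest-frequency amplitude for simplicity;
    fixing n instead of searching it is exactly the shortcut that can
    bias the recovered corner frequency."""
    omega0 = amps[0]
    best = None
    for fc in fc_grid:
        for n in n_grid:
            err = sum((math.log(a) - math.log(source_spectrum(f, omega0, fc, n))) ** 2
                      for f, a in zip(freqs, amps))
            if best is None or err < best[0]:
                best = (err, fc, n)
    return best[1], best[2]
```

Rerunning `fit_corner` with `n_grid=[2.0]` on data whose true falloff is 1.6 reproduces, in miniature, the fc overestimation discussed above.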
Mioni, Roberto; Marega, Alessandra; Lo Cicero, Marco; Montanaro, Domenico
2016-11-01
The approach to acid-base chemistry in medicine includes several methods. Currently, the two most popular procedures are derived from Stewart's studies and from the bicarbonate/BE-based classical formulation. Another method, unfortunately little known, follows the Kildeberg theory applied to acid-base titration. By using the data produced by Dana Atchley in 1933, regarding electrolytes and blood gas analysis applied to diabetes, we compared the three aforementioned methods, in order to highlight their strengths and their weaknesses. The results obtained, by reprocessing the data of Atchley, have shown that Kildeberg's approach, unlike the other two methods, is consistent, rational and complete for describing the organ-physiological behavior of the hydrogen ion turnover in human organism. In contrast, the data obtained using the Stewart approach and the bicarbonate-based classical formulation are misleading and fail to specify which organs or systems are involved in causing or maintaining the diabetic acidosis. Stewart's approach, despite being considered 'quantitative', does not propose in any way the concept of 'an amount of acid' and becomes even more confusing, because it is not clear how to distinguish between 'strong' and 'weak' ions. As for Stewart's approach, the classical method makes no distinction between hydrogen ions managed by the intermediate metabolism and hydroxyl ions handled by the kidney, but, at least, it is based on the concept of titration (base-excess) and indirectly defines the concept of 'an amount of acid'. In conclusion, only Kildeberg's approach offers a complete understanding of the causes and remedies against any type of acid-base disturbance.
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…
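For two-level binary or ordinal outcomes, the intraclass correlation is commonly expressed on the latent-response scale, where the level-1 residual variance is fixed by the link function. A minimal sketch of that convention (one common choice, not necessarily the exact estimand of the procedure described above):

```python
import math

def latent_icc(var_between, link="logistic"):
    """Intraclass correlation for a two-level binary/ordinal model on the
    latent-response scale.

    The level-2 (between-cluster) variance is estimated from data, while
    the level-1 residual variance is fixed by the link: pi^2/3 for the
    logit link, 1.0 for the probit link."""
    resid = math.pi ** 2 / 3.0 if link == "logistic" else 1.0
    return var_between / (var_between + resid)
```

A confidence interval for the ICC then follows from an interval for `var_between` (or a transformation of it), which is what the latent variable modeling procedure automates.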
International Nuclear Information System (INIS)
Formosa, Fabien; Badel, Adrien; Lottin, Jacques
2014-01-01
Highlights: • An equivalent electrical network model of a Stirling engine is proposed. • The model is applied to a membrane low temperature double acting Stirling engine. • The operating conditions (self-startup and steady state behavior) are defined. • An experimental engine is presented and tested. • The model is validated against experimental results. - Abstract: This work presents a network model to simulate the periodic behavior of a double acting free piston type Stirling engine. Each component of the engine is considered independently and its equivalent electrical circuit derived. When assembled into a global electrical network, a global model of the engine is established. Its steady behavior can be obtained by analyzing the transfer function, for one phase, from the piston to the expansion chamber. It is then possible to simulate the dynamics (steady state stroke and operating frequency) as well as the thermodynamic performance (output power and efficiency) for a given mean pressure and given heat source and heat sink temperatures. The motion amplitude in particular is determined by the spring-mass properties of the moving parts and the main nonlinear effects, which are taken into account in the model. The thermodynamic features of the model have been validated against the classical isothermal Schmidt analysis for a given stroke. A three-phase low temperature differential double acting free membrane architecture has been built and tested. The experimental results are compared with the model and satisfactory agreement is obtained. The stroke and operating frequency are predicted with less than 2% error, whereas the output power discrepancy is about 30%. Finally, some optimization routes are suggested to improve the design and maximize the performance, aiming at waste heat recovery applications.
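The isothermal Schmidt analysis used above for validation can be reproduced by direct numerical integration of the cycle work. A minimal sketch assuming sinusoidal volume variations and our own parameter naming (not the authors' engine geometry): ideal gas is shared between an isothermal hot space, an isothermal cold space, and a dead volume at the mean temperature.

```python
import math

def schmidt_cycle_work(vs, vd, t_h, t_k, mR=1.0, phase=math.pi / 2, steps=3600):
    """Work per cycle W = closed-loop integral of p dV for an isothermal
    Schmidt cycle (positive = engine operation).

    vs: swept volume of each space, vd: dead volume, t_h/t_k: hot/cold
    space temperatures, mR: gas mass times gas constant, phase: angle by
    which the expansion (hot) space leads the compression space."""
    t_d = 0.5 * (t_h + t_k)  # dead volume at the mean temperature

    def vols(theta):
        ve = 0.5 * vs * (1.0 + math.cos(theta))          # expansion space
        vc = 0.5 * vs * (1.0 + math.cos(theta - phase))  # compression space
        return ve, vc

    def pressure(theta):
        ve, vc = vols(theta)
        # uniform instantaneous pressure from the ideal-gas mass balance
        return mR / (ve / t_h + vc / t_k + vd / t_d)

    w = 0.0
    for i in range(steps):
        th0 = 2.0 * math.pi * i / steps
        th1 = 2.0 * math.pi * (i + 1) / steps
        ve0, vc0 = vols(th0)
        ve1, vc1 = vols(th1)
        p = 0.5 * (pressure(th0) + pressure(th1))  # trapezoidal rule
        w += p * ((ve1 - ve0) + (vc1 - vc0))
    return w
```

With the hot space leading by 90° and t_h > t_k the net work is positive; swapping the temperatures reverses its sign, and equal temperatures give zero work, as expected from a pressure that then depends on total volume only.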
Mintram, Kate S; Brown, A Ross; Maynard, Samuel K; Thorbek, Pernille; Tyler, Charles R
2018-02-01
Endocrine active chemicals (EACs) are widespread in freshwater environments and both laboratory and field based studies have shown reproductive effects in fish at environmentally relevant exposures. Environmental risk assessment (ERA) seeks to protect wildlife populations and prospective assessments rely on extrapolation from individual-level effects established for laboratory fish species to populations of wild fish using arbitrary safety factors. Population susceptibility to chemical effects, however, depends on exposure risk, physiological susceptibility, and population resilience, each of which can differ widely between fish species. Population models have significant potential to address these shortfalls and to include individual variability relating to life-history traits, demographic and density-dependent vital rates, and behaviors which arise from inter-organism and organism-environment interactions. Confidence in population models has recently resulted in the EU Commission stating that results derived from reliable models may be considered when assessing the relevance of adverse effects of EACs at the population level. This review critically assesses the potential risks posed by EACs for fish populations, considers the ecological factors influencing these risks and explores the benefits and challenges of applying population modeling (including individual-based modeling) in ERA for EACs in fish. We conclude that population modeling offers a way forward for incorporating greater environmental relevance in assessing the risks of EACs for fishes and for identifying key risk factors through sensitivity analysis. Individual-based models (IBMs) allow for the incorporation of physiological and behavioral endpoints relevant to EAC exposure effects, thus capturing both direct and indirect population-level effects.
Directory of Open Access Journals (Sweden)
L. Raiger Iustman
2013-03-01
Full Text Available Industrial Biotechnology and Applied Microbiology is an optional 128-h course for chemistry and biology students at the Faculty of Sciences, University of Buenos Aires, Argentina. The course is usually attended by 25 students working in teams of two. The curriculum, with 8 lab exercises, includes an oil bioremediation practical providing insight into bioremediation processes: the influence of pollutants on the autochthonous microbiota, isolation of biodegraders, and biosurfactant production for understanding bioavailability. The experimental steps are: (A) evaluation of microbial tolerance to pollutants by constructing pristine-soil microcosms contaminated with diesel or xylene, and (B) isolation of degraders and analysis of biosurfactant production. To check microbial tolerance, microcosms are incubated for one week at 25-28ºC. Samples are collected at 0, 4 and then every 48 h for CFU/g soil testing. An initial decrease of total CFU/g related to toxicity is noticed. At the end of the experiment, a recovery of the CFU number is observed, evidencing enrichment in biodegraders. Some colonies from the CFU counting plates are streaked on M9 agar with diesel as sole carbon source. After a week, isolates are inoculated in M9 broth supplemented with diesel to induce biosurfactant production. Surface tension and emulsification index are measured in culture supernatants to visualize the tensioactive effect of bacterial products. Besides improving their good microbiological practices, the students show enthusiasm in different aspects, depending on their own interests. While biology students explore and learn new concepts on solubility, emulsions and bioavailability, chemistry students show curiosity about bacterial behavior and the manipulation of microorganisms for environmental benefit.
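The emulsification index measured in the practical is conventionally the E24 value: the height of the emulsified layer divided by the total liquid height, read 24 h after vortexing a hydrocarbon/supernatant mixture. A trivial helper (units and naming ours):

```python
def emulsification_index(emulsion_height_mm, total_height_mm):
    """E24 (%): emulsified-layer height over total liquid height after 24 h.

    Higher values indicate stronger biosurfactant activity in the
    culture supernatant."""
    if total_height_mm <= 0:
        raise ValueError("total height must be positive")
    return 100.0 * emulsion_height_mm / total_height_mm
```

For example, a 13 mm emulsion band in a 20 mm column gives E24 = 65%, a typical threshold range for calling an isolate a good emulsifier.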
Ivanova, Iryna V; Tasca, Giorgio A; Proulx, Geneviève; Bissada, Hany
2015-11-01
The interpersonal model has been validated for binge-eating disorder (BED), but it is not yet known whether the model applies across a range of eating disorders (ED). The goal of this study was to investigate the validity of the interpersonal model in anorexia nervosa (restricting type, ANR, and binge-eating/purge type, ANBP), bulimia nervosa (BN), BED, and eating disorder not otherwise specified (EDNOS). Data from a cross-sectional sample of 1459 treatment-seeking women diagnosed with ANR, ANBP, BN, BED and EDNOS were examined for indirect effects of interpersonal problems on ED psychopathology mediated through negative affect. Findings from structural equation modeling demonstrated the mediating role of negative affect in four of the five diagnostic groups. There were significant, medium to large (.239 to .558) indirect effects in the ANR, BN, BED and EDNOS groups but not in the ANBP group. The results of the first reverse model, with interpersonal problems as a mediator between negative affect and ED psychopathology, were nonsignificant, suggesting the specificity of the hypothesized paths. However, in the second reverse model, ED psychopathology was related to interpersonal problems indirectly through negative affect. This is the first study to find support for the interpersonal model of ED in a clinical sample of women with diverse ED diagnoses, though there may be a reciprocal relationship between ED psychopathology and relationship problems through negative affect. Negative affect partially explains the relationship between interpersonal problems and ED psychopathology in women diagnosed with ANR, BN, BED and EDNOS. Interpersonal psychotherapies for ED may address the underlying interpersonal-affective difficulties, thereby reducing ED psychopathology. Copyright © 2015 Elsevier Inc. All rights reserved.
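The indirect effect tested above is, in its simplest observed-variable form, the product of the a path (mediator regressed on the predictor) and the b path (outcome regressed on predictor and mediator). A closed-form OLS sketch of that product-of-coefficients idea, not the study's latent-variable SEM:

```python
def indirect_effect(x, m, y):
    """Product-of-coefficients indirect effect a*b for simple mediation:

        a: slope of M ~ X
        b: partial slope of M in Y ~ X + M

    Both coefficients are computed in closed form from sums of squares
    and cross-products, so no matrix library is needed."""
    n = len(x)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    smm = sum((mi - mm) ** 2 for mi in m)
    sxm = sum((xi - mx) * (mi - mm) for xi, mi in zip(x, m))
    sym = sum((yi - my) * (mi - mm) for yi, mi in zip(y, m))
    syx = sum((yi - my) * (xi - mx) for yi, xi in zip(y, x))
    a = sxm / sxx
    b = (sym * sxx - syx * sxm) / (smm * sxx - sxm ** 2)
    return a * b
```

In practice the significance of a*b is assessed with bootstrapped confidence intervals rather than a normal approximation, since the product's sampling distribution is skewed.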
Applying an Ontology-based Modeling Approach to Cultural Heritage Systems
Directory of Open Access Journals (Sweden)
POPOVICI, D.-M.
2011-08-01
Full Text Available Any virtual environment (VE) built in a classical way is dedicated to a very specific domain. Its modification, or even its adaptation to another domain, requires expensive human intervention, measured in both time and money. In this way the product, that is, the VE, returns to the first phases of the development process. In a previous work we proposed an approach that combines domain ontologies and conceptual modeling to construct more accurate VEs. Our method is based on the description of the domain knowledge in a standard format and the assisted creation (using these pieces of knowledge) of the VE. This permits the explanation, within the virtual reality (VR) simulation, of the semantics of the whole context and of each object. This knowledge may then be transferred to the public users. In this paper we prove the effectiveness of our method on the construction process of a VE that simulates the organization of a Greek-Roman colony situated on the Black Sea coast and the economic and social activities of its people.
Espath, L. F. R.; Braun, Alexandre Luis; Awruch, Armando Miguel; Dalcin, Lisandro
2015-01-01
A numerical model to deal with nonlinear elastodynamics involving large rotations within the framework of the finite element method based on a NURBS (Non-Uniform Rational B-Spline) basis is presented. A comprehensive kinematical description using a corotational approach and an orthogonal tensor given by the exact polar decomposition is adopted. The state equation is written in terms of corotational variables according to the hypoelastic theory, relating the Jaumann derivative of the Cauchy stress to the Eulerian strain rate. The generalized-α (Gα) method and the Generalized Energy-Momentum Method with an additional parameter (GEMM+ξ) are employed in order to obtain a stable and controllable dissipative time-stepping scheme with algorithmic conservative properties for nonlinear dynamic analyses. The main contribution is to show that the energy-momentum conservation properties and numerical stability may be improved once a NURBS-based FEM is used for the spatial discretization. It is also shown that high continuity can postpone the numerical instability when GEMM+ξ with a consistent mass matrix is employed; likewise, increasing the continuity class yields a decrease in the numerical dissipation. A parametric study is carried out in order to assess the stability and energy budget in terms of several properties such as continuity class, spectral radius and lumped as well as consistent mass matrices.
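The role of the spectral radius ρ∞ in such time-stepping schemes can be sketched on a single-degree-of-freedom linear oscillator, a toy stand-in for the FEM system. The parameter formulas below follow the standard Chung-Hulbert generalized-α parameterization; the oscillator and all numbers are illustrative, not the paper's actual model:

```python
def generalized_alpha_sdof(m, k, d0, v0, h, steps, rho_inf=1.0):
    """Generalized-alpha time stepping for m*a + k*d = 0 (undamped, unforced)."""
    am = (2.0 * rho_inf - 1.0) / (rho_inf + 1.0)
    af = rho_inf / (rho_inf + 1.0)
    beta = 0.25 * (1.0 - am + af) ** 2
    gamma = 0.5 - am + af
    d, v = d0, v0
    a = -k * d / m                                        # consistent initial acceleration
    for _ in range(steps):
        d_pred = d + h * v + h * h * (0.5 - beta) * a     # Newmark predictor terms
        # equilibrium at the generalized mid-point:
        # (1-am) m a1 + am m a + k [ (1-af)(d_pred + beta h^2 a1) + af d ] = 0
        lhs = (1.0 - am) * m + (1.0 - af) * k * beta * h * h
        rhs = -(am * m * a + k * ((1.0 - af) * d_pred + af * d))
        a1 = rhs / lhs
        d = d_pred + beta * h * h * a1
        v = v + h * ((1.0 - gamma) * a + gamma * a1)
        a = a1
    return d, v

def energy(m, k, d, v):
    return 0.5 * m * v * v + 0.5 * k * d * d

e0 = energy(1.0, 1.0, 1.0, 0.0)
d1, v1 = generalized_alpha_sdof(1.0, 1.0, 1.0, 0.0, 0.01, 1000, rho_inf=1.0)
d2, v2 = generalized_alpha_sdof(1.0, 1.0, 1.0, 0.0, 1.0, 200, rho_inf=0.5)
# rho_inf = 1 introduces no numerical dissipation for this linear problem;
# rho_inf < 1 at coarse steps visibly damps the energy.
```

With ρ∞ = 1 the scheme is non-dissipative and the discrete energy is preserved; lowering ρ∞ trades accuracy at high frequencies for controllable numerical damping, the knob the abstract's dissipation discussion revolves around.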
Santurro, Alessandro; Vullo, Anna Maria; Borro, Marina; Gentile, Giovanna; La Russa, Raffaele; Simmaco, Maurizio; Frati, Paola; Fineschi, Vittorio
2017-01-01
Personalized medicine (PM), included in P5 medicine (Personalized, Predictive, Preventive, Participative and Precision medicine), is an innovative approach to the patient, emerging from the need to tailor and fit the profile of each individual. PM promises to have a dramatic impact also on the forensic sciences and the justice system, in ways we are only beginning to understand. The application of omics (genomics, transcriptomics, epigenetics/imprintomics, proteomics and metabolomics) is ever more fundamental in the so-called "molecular autopsy". Emerging fields of interest in forensic pathology are represented by the diagnosis and detection of conditions predisposing to fatal thromboembolic and hypertensive events, the determination of genetic variants related to sudden death, such as congenital long QT syndromes, the demonstration of lesion vitality, and the identification of biological matrices and species diagnosis of a forensic trace on crime scenes without destruction of the DNA. The aim of this paper is to describe the state of the art in the application of personalized medicine in the forensic sciences, to understand the possibilities of integrating these procedures into routine investigation alongside classical post-mortem studies, and to underline the importance of these new updates in the medical examiner's armamentarium in determining the cause of death or contributing factors to death. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Espath, L. F. R.
2015-02-03
A numerical model to deal with nonlinear elastodynamics involving large rotations within the framework of the finite element method based on a NURBS (Non-Uniform Rational B-Spline) basis is presented. A comprehensive kinematical description using a corotational approach and an orthogonal tensor given by the exact polar decomposition is adopted. The state equation is written in terms of corotational variables according to the hypoelastic theory, relating the Jaumann derivative of the Cauchy stress to the Eulerian strain rate. The generalized-α (Gα) method and the Generalized Energy-Momentum Method with an additional parameter (GEMM+ξ) are employed in order to obtain a stable and controllable dissipative time-stepping scheme with algorithmic conservative properties for nonlinear dynamic analyses. The main contribution is to show that the energy-momentum conservation properties and numerical stability may be improved once a NURBS-based FEM is used for the spatial discretization. It is also shown that high continuity can postpone the numerical instability when GEMM+ξ with a consistent mass matrix is employed; likewise, increasing the continuity class yields a decrease in the numerical dissipation. A parametric study is carried out in order to assess the stability and energy budget in terms of several properties such as continuity class, spectral radius and lumped as well as consistent mass matrices.
International Nuclear Information System (INIS)
Vincent, L.
2012-01-01
The present study deals with the long-term mechanical behaviour and damage of structural materials in nuclear power plants. An experimental approach is first followed to study the thermal fatigue of austenitic stainless steels, with a focus on the effects of mean stress and biaxiality. Furthermore, the measurement of displacement fields by Digital Image Correlation techniques has been successfully used to detect early crack initiation during high-cycle fatigue tests. A probabilistic model based on the shielding zones surrounding existing cracks is proposed to describe the development of crack networks. A more numerical approach is then followed to study the embrittlement caused by irradiation hardening of the bainitic steel constituting nuclear pressure vessels. A crystalline plasticity law, developed in agreement with lower-scale results (Dislocation Dynamics), is introduced in a Finite Element code in order to run simulations on aggregates and obtain the distributions of the maximum principal stress inside a Representative Volume Element. These distributions are then used to improve the classical Local Approach to Fracture, which estimates the probability for a microstructural defect to be loaded up to a critical level. (author) [fr]
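The shielding-zone idea behind such crack-network models can be illustrated with a toy one-dimensional sketch. Everything here is hypothetical, not the author's actual model: candidate initiation sites are drawn at random along a line, and a new crack forms only if it lies outside the shielding zone of every existing crack, so the crack density saturates:

```python
import random

def crack_pattern(length, shield, n_trials, seed=0):
    """Sequential random crack initiation with shielding: a candidate site
    is rejected if it falls within `shield` of any existing crack."""
    random.seed(seed)
    cracks = []
    for _ in range(n_trials):
        x = random.uniform(0.0, length)
        if all(abs(x - c) >= shield for c in cracks):
            cracks.append(x)
    return sorted(cracks)

# many candidate sites, but the shielding zones cap the final crack count
cracks = crack_pattern(length=100.0, shield=5.0, n_trials=2000, seed=1)
```

This is the classic random-sequential-adsorption picture: after enough trials the line is "jammed" and no further cracks can initiate, mimicking the saturation of crack networks under continued cycling.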
Classification by a neural network approach applied to non destructive testing
International Nuclear Information System (INIS)
Lefevre, M.; Preteux, F.; Lavayssiere, B.
1995-01-01
Radiography is used by EDF for pipe inspection in nuclear power plants in order to detect defects. The radiographs obtained are then digitized in a well-defined protocol. EDF's aim is to develop a non-destructive testing system for recognizing defects. In this paper, we describe the recognition procedure for areas with defects. We first present the digitization protocol, characterize the poor quality of the images under study and propose a procedure to enhance defects. We then examine the problem raised by the choice of good features for classification. After having shown that statistical or standard textural features such as homogeneity, entropy or contrast are not relevant, we develop a geometrical-statistical approach based on the cooperation between a study of signal correlations and an analysis of regional extrema. The principle consists of analysing and comparing, for areas with and without defects, the evolution of conditional probability matrices for increasing neighborhood sizes, the shape of variograms and the location of regional minima. We demonstrate that the anisotropy and surface of series of 'comet tails' associated with the probability matrices, variogram slopes and statistical indices, and the location of regional extrema are features able to discriminate areas with defects from areas without any. The classification is then realized by a neural network, whose structure, properties and learning mechanisms are detailed. Finally we discuss the results. (authors). 21 refs., 5 figs
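As a minimal stand-in for the neural classifier described above (the paper's actual network architecture, features and data are not reproduced here), a perceptron can separate hypothetical "defect" and "no-defect" feature vectors, e.g. an anisotropy index and a comet-tail surface measure:

```python
def train_perceptron(samples, labels, epochs=100):
    """Online perceptron; labels are +1 (defect) / -1 (no defect)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, y in zip(samples, labels):
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                mistakes += 1
        if mistakes == 0:          # converged on linearly separable data
            break
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# hypothetical feature pairs: (anisotropy index, comet-tail surface)
defect = [(0.9, 0.8), (0.8, 0.9), (0.85, 0.7)]
sound = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.3)]
samples = defect + sound
labels = [1, 1, 1, -1, -1, -1]
w, b = train_perceptron(samples, labels)
```

A real system would use a multi-layer network trained on many labeled radiograph regions; the point of the sketch is only the feature-vector-to-class mapping the abstract describes.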
An applied artificial intelligence approach towards assessing building performance simulation tools
Energy Technology Data Exchange (ETDEWEB)
Yezioro, Abraham [Faculty of Architecture and Town Planning, Technion IIT (Israel); Dong, Bing [Center for Building Performance and Diagnostics, School of Architecture, Carnegie Mellon University (United States); Leite, Fernanda [Department of Civil and Environmental Engineering, Carnegie Mellon University (United States)
2008-07-01
With the development of modern computer technology, a large number of building energy simulation tools are available on the market. When choosing which simulation tool to use in a project, the user must consider the tool's accuracy and reliability, as well as the building information at hand, which will serve as input for the tool. This paper presents an approach for assessing building performance simulation results against actual measurements, using artificial neural networks (ANN) for predicting building energy performance. Training and testing of the ANN were carried out with energy consumption data acquired over 1 week in the case-study building, called the Solar House. The predicted results show a good fit, with a mean absolute error of 0.9%. Moreover, four building simulation tools were selected in this study in order to compare their results with the ANN-predicted energy consumption: Energy-10, the Green Building Studio web tool, eQuest and EnergyPlus. The results showed that the more detailed simulation tools have the best simulation performance in terms of heating and cooling electricity consumption, within 3% mean absolute error. (author)
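The error metrics used for such comparisons are easy to state concretely. A small sketch with hypothetical consumption values (kWh); the percentage form matches the "0.9%" style of figure quoted above:

```python
def mean_absolute_error(pred, meas):
    """MAE in the units of the data."""
    return sum(abs(p - m) for p, m in zip(pred, meas)) / len(meas)

def mean_absolute_pct_error(pred, meas):
    """MAE expressed relative to each measurement, in percent."""
    return 100.0 * sum(abs(p - m) / m for p, m in zip(pred, meas)) / len(meas)

measured = [120.0, 100.0, 80.0, 110.0]     # hypothetical hourly kWh
predicted = [121.0, 99.0, 80.8, 109.0]     # e.g. ANN or simulation-tool output
mae = mean_absolute_error(predicted, measured)     # 0.95 kWh
mape = mean_absolute_pct_error(predicted, measured)
```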
Gupta, Deepak; Varghese Gupta, Sheeba; Dahan, Arik; Tsume, Yasuhiro; Hilfinger, John; Lee, Kyung-Dall; Amidon, Gordon L
2013-02-04
Poor oral absorption is one of the limiting factors in utilizing the full potential of polar antiviral agents. The neuraminidase target site requires a polar chemical structure for high-affinity binding, thus limiting the oral efficacy of many high-affinity ligands. The aim of this study was to overcome this poor oral absorption barrier by utilizing a prodrug to target the apical brush border peptide transporter 1 (PEPT1). Guanidine oseltamivir carboxylate (GOCarb) is a highly active polar antiviral agent with insufficient oral bioavailability (4%) to be an effective therapeutic agent. In this report we utilize a carrier-mediated targeted prodrug approach to improve the oral absorption of GOCarb. Acyloxy(alkyl) ester based, amino acid-linked prodrugs were synthesized and evaluated as potential substrates of mucosal transporters, e.g., PEPT1. The prodrugs were also evaluated for their chemical and enzymatic stability. PEPT1 transport studies included [(3)H]Gly-Sar uptake inhibition in Caco-2 cells and cellular uptake experiments using HeLa cells overexpressing PEPT1. The intestinal membrane permeabilities of the selected prodrugs and the parent drug were then evaluated for epithelial cell transport across Caco-2 monolayers and in the in situ rat intestinal jejunal perfusion model. The prodrugs exhibited pH-dependent stability, with higher stability at acidic pH. Significant inhibition of uptake (IC(50) 30-fold increase in affinity compared to GOCarb. The l-valyl prodrug exhibited significant enhancement of uptake in PEPT1/HeLa cells and compared favorably with the well-absorbed valacyclovir. Transepithelial permeability across Caco-2 monolayers showed that these amino acid prodrugs have a 2-5-fold increase in permeability as compared to the parent drug and that the l-valyl prodrug (P(app) = 1.7 × 10(-6) cm/s) has the potential to be rapidly transported across the epithelial cell apical membrane. Significantly, only the parent drug (GOCarb) appeared in the basolateral
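The apparent permeability quoted above follows the standard Caco-2 relation P_app = (dQ/dt)/(A·C0), where dQ/dt is the slope of the cumulative transported amount versus time, A is the monolayer area and C0 the donor concentration. A sketch with hypothetical, perfectly linear data chosen to reproduce a value of the same order as the abstract's:

```python
def slope(ts, qs):
    """Least-squares slope of cumulative amount q versus time t."""
    n = len(ts)
    tbar = sum(ts) / n
    qbar = sum(qs) / n
    num = sum((t - tbar) * (q - qbar) for t, q in zip(ts, qs))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

def apparent_permeability(ts, qs, area_cm2, c0):
    """P_app = (dQ/dt) / (A * C0); cm/s when the units are consistent."""
    return slope(ts, qs) / (area_cm2 * c0)

# hypothetical sampling: receiver-side amount every 10 min (units arbitrary
# but consistent: amount per volume-normalized donor concentration)
times_s = [0.0, 600.0, 1200.0, 1800.0]
amounts = [1.7e-6 * t for t in times_s]    # exactly linear for the sketch
papp = apparent_permeability(times_s, amounts, area_cm2=1.0, c0=1.0)
```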
Advanced In-Service Inspection Approaches Applied to the Phenix Fast Breeder Reactor
International Nuclear Information System (INIS)
Guidez, J.; Martin, L.; Dupraz, R.
2006-01-01
The safety upgrading of the Phenix plant undertaken between 1994 and 1997 involved a vast inspection programme of the reactor, the external storage drum and the secondary sodium circuits in order to meet the requirements of the defence-in-depth safety approach. The three lines of defence were analysed for every safety-related component: demonstration of the quality of design and construction, appropriate in-service inspection, and controlling the consequences of an accident. The in-service reactor block inspection programme consisted of inspecting the core support structures and the high-temperature elements. Despite the fact that limited consideration had been given to inspection constraints during the design stage of the reactor in the 1960s, as compared to more recent reactor projects such as the European Fast Reactor (EFR), all the core support line elements could be inspected. The three following main operations are described: ultrasonic inspection of the upper hangers of the main vessel, using small transducers able to withstand temperatures of 130 deg. C; inspection of the conical shell supporting the core diagrid, where a specific ultrasonic method and a special implementation technique were used to inspect the under-sodium structure welds, located up to several metres away from the scan surface; and remote inspection of the hot pool structures, particularly the core cover plug, after partial sodium drainage of the reactor vessel. Other inspections are also summarized: control of the secondary sodium circuit piping, intermediate heat exchangers, primary sodium pumps, steam generator units and the external storage drum. The pool-type reactor concept, developed in France since the 1960s, presents several favourable safety and operational features. The feedback from the Phenix plant also shows real potential for in-service inspection. The design of future Generation IV sodium fast reactors will benefit from the experience acquired from the Phenix plant. (authors)
School Food Environment Promotion Program: Applying the Socio-ecological Approach
Directory of Open Access Journals (Sweden)
Fatemeh Bakhtari Aghdam
2018-01-01
Full Text Available Background Although healthy nutrition recommendations have been offered in recent decades, research shows an increasing rate of unhealthy junk food consumption among primary school children. The aim of this study was to investigate the effects of a health promotion intervention on school food buffets and on changes in the nutritional behaviors of the students. Materials and Methods In this quasi-interventional study, eight schools in Tabriz city, Iran, agreed to participate. The schools were randomly selected and divided into an intervention and a control group, and a pretest was given to both groups. A four-week interventional program based on the socio-ecological model was conducted in the eight randomly selected schools. A checklist was designed for the assessment of food items available at the schools' buffets, and a 60-item semi-quantitative food frequency questionnaire (FFQ) was used to assess the rate of food consumption and energy intake. Data were analyzed using the Wilcoxon, Mann-Whitney U and Chi-square tests. Results The findings revealed a reduction in the intervention group between before and after the intervention with regard to the range of junk food consumption, except for sweets consumption. The number of junk foods provided in the school buffets was reduced in the intervention group. After the intervention, significant decreases were found in the intervention group in the intake of energy, fat and saturated fatty acids compared to the control group (p = 0.00). Conclusion In order to design effective school food environment promotion programs, school healthcare providers should consider multifaceted approaches.
"Teamwork in hospitals": a quasi-experimental study protocol applying a human factors approach.
Ballangrud, Randi; Husebø, Sissel Eikeland; Aase, Karina; Aaberg, Oddveig Reiersdal; Vifladt, Anne; Berg, Geir Vegard; Hall-Lord, Marie Louise
2017-01-01
Effective teamwork and sufficient communication are critical components of patient safety in today's specialized and complex healthcare services. Team training is important for improved efficiency in inter-professional teamwork within hospitals; however, the scientific rigor of studies must be strengthened, and more research is required to compare studies across samples, settings and countries. The aims of the study are to translate and validate teamwork questionnaires and investigate healthcare personnel's perception of teamwork in hospitals (Part 1), and further to explore the impact of an inter-professional teamwork intervention in a surgical ward on structure, process and outcome (Part 2). To address the aims, a descriptive and explorative design (Part 1) and a quasi-experimental interventional design (Part 2) will be applied. The study will be carried out in five different hospitals (A-E) in three hospital trusts in Norway. Frontline healthcare personnel in Hospitals A and B, from both acute and non-acute departments, will be invited to respond to three Norwegian-translated teamwork questionnaires (Part 1). An inter-professional teamwork intervention in line with the TeamSTEPPS recommended Model of Change will be implemented in a surgical ward at Hospital C. All physicians, registered nurses and assistant nurses in the intervention ward and two control wards (Hospitals D and E) will be invited to survey their perception of teamwork, team decision making, safety culture and attitude towards teamwork before the intervention and after six and 12 months. Adult patients admitted to the intervention surgical unit will be invited to survey their perception of quality of care during their hospital stay before the intervention and after six and 12 months. Moreover, anonymous patient registry data from local registers and data from patients' medical records will be collected (Part 2). This study will help to understand the impact of an inter-professional teamwork
International Nuclear Information System (INIS)
Deniz, V.C.
1978-01-01
The problem of correctly defining the homogenized diffusion coefficient of a lattice, and the concurrent problem of whether or not a homogenized diffusion equation can be formally set up, are studied by a space-energy-angle dependent treatment for a general lattice cell, using an operator notation which applies to any eigenproblem. It is shown that the diffusion coefficient should represent only leakage effects. A new definition of the diffusion coefficient is given, which combines within itself the individual merits of each of the two definitions of Benoist, and reduces to the 'uncorrected' Benoist coefficient in certain cases. The conditions under which a homogenized diffusion equation can be obtained are discussed. A comparison is made between the approach via a diffusion equation and the approach via the eigen-coefficients of Deniz. Previously defined diffusion coefficients are discussed, and it is shown that the transformed eigen-coefficients proposed by Gelbard and by Larsen are unsuitable as diffusion coefficients, and that the cell-edge normalization of the Bonalumi coefficient is not physically justifiable. (author)
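For orientation, a textbook-style sketch of the homogenization setting discussed here (not Deniz's operator formalism itself): the homogenized diffusion equation for the equivalent medium, and the conventional flux-volume weighting used for reaction cross sections,

```latex
% Homogenized diffusion equation for the equivalent medium
-\,D^{\mathrm{hom}} \nabla^{2}\Phi(\mathbf{r})
  + \Sigma_{a}^{\mathrm{hom}}\,\Phi(\mathbf{r})
  = \frac{1}{k}\,\nu\Sigma_{f}^{\mathrm{hom}}\,\Phi(\mathbf{r})

% Conventional flux-volume weighting for reaction cross sections
% (region-averaged fluxes \bar{\phi}_i over cell regions of volume V_i)
\Sigma^{\mathrm{hom}} \;=\;
  \frac{\sum_{i} \Sigma_{i}\,\bar{\phi}_{i}\,V_{i}}
       {\sum_{i} \bar{\phi}_{i}\,V_{i}}
```

The paper's point, in this language, is that an analogous flux-weighting of the transport cross section is not the right prescription for D^hom: the coefficient multiplying the leakage term should instead be defined so that the cell leakage itself is preserved.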
Directory of Open Access Journals (Sweden)
Hans-Georg Schwarz-v. Raumer
2017-03-01
Full Text Available This paper considers scenarios of cultivating energy crops in the German Federal State of Baden-Württemberg to identify potentials and limitations of a sustainable bioenergy production. Trade-offs are analyzed among income and production structure in agriculture, bioenergy crop production, greenhouse gas emissions, and the interests of soil, water and species habitat protection. An integrated modelling approach (IMA) was implemented, coupling ecological and economic models in a model chain. IMA combines the Economic Farm Emission Model (EFEM; key input: parameter sets on farm production activities), the Environmental Policy Integrated Climate model (EPIC; key input: parameter sets on environmental cropping effects) and GIS geo-processing models. EFEM is a supply model that maximizes total gross margins at farm level with simultaneous calculation of greenhouse gas emissions from agricultural production. Calculations by EPIC result in estimates of soil erosion by water, nitrate leaching, soil organic carbon and greenhouse gas emissions from soil. GIS routines provide land suitability analyses, scenario settings concerning nature conservation, and habitat models for target species, and help to produce spatially explicit results. The model chain is used to calculate scenarios representing different intensities of energy crop cultivation. To design scenarios which are detailed and in step with practice, comprehensive data research as well as fact and effect analyses were carried out. The scenarios indicate that, not in general but for specific farm types, the energy crop share increases strongly if not restricted and leads to an increase in income. This, however, leads to significant increases in soil erosion by water, nitrate leaching and greenhouse gas emissions. It is to be expected that an extension of nature conservation leads to an intensification of the remaining grassland and of the arable land that was not part of the nature conservation measures.
Modern software approaches applied to a Hydrological model: the GEOtop Open-Source Software Project
Cozzini, Stefano; Endrizzi, Stefano; Cordano, Emanuele; Bertoldi, Giacomo; Dall'Amico, Matteo
2017-04-01
The GEOtop hydrological scientific package is an integrated hydrological model that simulates the heat and water budgets at and below the soil surface. It describes the three-dimensional water flow in the soil and the energy exchange with the atmosphere, considering the radiative and turbulent fluxes. Furthermore, it reproduces the highly non-linear interactions between the water and energy balance during soil freezing and thawing, and simulates the temporal evolution of snow cover, soil temperature and moisture. The core components of the package were presented in the 2.0 version (Endrizzi et al., 2014), which was released as a free, open-source software project. However, despite the high scientific quality of the project, a modern software engineering approach was still missing. This weakness hindered its scientific potential and its use both as a standalone package and, more importantly, in an integrated way with other hydrological software tools. In this contribution we present our recent software re-engineering efforts to create a robust and stable scientific software package open to the hydrological community, easily usable by researchers and experts, and interoperable with other packages. The activity takes as its starting point the 2.0 version, scientifically tested and published. This version, together with several test cases based on recently published or available GEOtop applications (Cordano and Rigon, 2013, WRR; Kollet et al., 2016, WRR), provides the baseline code and a number of referenced results as benchmarks. Comparison and scientific validation can then be performed for each software re-engineering activity performed on the package. To keep track of every single change, the package is published in its own GitHub repository geotopmodel.github.io/geotop/ under the GPL v3.0 license. A Continuous Integration mechanism by means of Travis-CI has been enabled on the GitHub repository for the master and main development branches. The usage of CMake configuration tool
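The benchmark-driven validation described above can be sketched as a small regression check. The keys, values and tolerance below are hypothetical; the real project compares full GEOtop output files against the referenced results of each test case:

```python
import math

def check_against_benchmark(result, reference, rel_tol=1e-6):
    """Return the list of (key, got, expected) mismatches between a model
    run and the stored reference values for a test case."""
    failures = []
    for key, ref in reference.items():
        got = result.get(key)
        if got is None or not math.isclose(got, ref, rel_tol=rel_tol):
            failures.append((key, got, ref))
    return failures

reference = {"soil_temp_avg_C": 4.21, "snow_depth_m": 0.87}   # stored benchmark
run_ok = {"soil_temp_avg_C": 4.21, "snow_depth_m": 0.87}      # re-engineered code, unchanged physics
run_bad = {"soil_temp_avg_C": 4.21, "snow_depth_m": 0.91}     # a regression to catch in CI
```

Wiring such a check into the CI pipeline means every refactoring commit is automatically compared against the published 2.0 results, which is exactly the role the benchmark cases play here.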
International Nuclear Information System (INIS)
Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.
2013-01-01
necessary to have accurate spectrum information about the source-detector system. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For materials between water and bone, separation errors of less than 5% are observed on the estimated decomposition fractions. Conclusions: The proposed approach is a statistical reconstruction approach based on a nonlinear forward model accounting for the full beam polychromaticity and applied directly to the projections without taking the negative log. Compared to approaches based on linear forward models and to BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
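The nonlinearity of the polychromatic forward model, as opposed to the log-linear monochromatic Beer-Lambert model, can be seen in a two-energy-bin sketch (spectrum weights and attenuation coefficients are hypothetical):

```python
import math

def poly_projection(weights, mus, length):
    """-log(I/I0) for a polychromatic beam: each energy bin attenuates
    exponentially; the detector sums the bins BEFORE the log is taken."""
    i0 = sum(weights)
    i = sum(w * math.exp(-mu * length) for w, mu in zip(weights, mus))
    return -math.log(i / i0)

weights = [0.5, 0.5]     # toy two-bin spectrum
mus = [0.5, 0.2]         # attenuation per unit length in each bin
p1 = poly_projection(weights, mus, 1.0)
p2 = poly_projection(weights, mus, 2.0)
# beam hardening: doubling the path length less than doubles the projection,
# so a linear (monochromatic) forward model is systematically biased.
```

This sublinearity is the beam-hardening artifact (BHA) that the linear-model approaches must correct for afterwards, and that the nonlinear forward model above absorbs directly.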
DEFF Research Database (Denmark)
Leuschner, R. G. K.; Robinson, T. P.; Hugas, M.
2010-01-01
Qualified Presumption of Safety (QPS) is a generic risk assessment approach applied by the European Food Safety Authority (EFSA) to notified biological agents aiming at simplifying risk assessments across different scientific Panels and Units. The aim of this review is to outline the implementation...... and value of the QPS assessment for EFSA and to explain its principles such as the unambiguous identity of a taxonomic unit, the body of knowledge including potential safety concerns and how these considerations lead to a list of biological agents recommended for QPS which EFSA keeps updated through...
Determination of aerodynamic sensitivity coefficients in the transonic and supersonic regimes
Elbanna, Hesham M.; Carlson, Leland A.
1989-01-01
The quasi-analytical approach is developed to compute airfoil aerodynamic sensitivity coefficients in the transonic and supersonic flight regimes. An initial investigation verifies the feasibility of this approach as applied to the transonic small-perturbation residual expression. Results are compared to those obtained by the direct (finite-difference) approach, and both methods are evaluated to determine their computational accuracy and efficiency. The quasi-analytical approach is shown to be superior and worth further investigation.
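The analytic-versus-finite-difference comparison at the heart of this study can be sketched on a scalar toy function (a hypothetical stand-in for the small-perturbation residual, not the actual aerodynamic model):

```python
import math

def f(alpha):
    """Toy response (e.g. a lift-like output) versus a design parameter."""
    return alpha ** 2 * math.sin(alpha)

def sensitivity_analytic(alpha):
    """Exact derivative df/dalpha, the 'quasi-analytical' route."""
    return 2.0 * alpha * math.sin(alpha) + alpha ** 2 * math.cos(alpha)

def sensitivity_fd(alpha, h=1e-6):
    """Central finite difference: O(h^2) accurate, but needs two extra
    evaluations of f per parameter, and h must be tuned against round-off."""
    return (f(alpha + h) - f(alpha - h)) / (2.0 * h)

a = 0.7
exact = sensitivity_analytic(a)
approx = sensitivity_fd(a)
```

The trade-off shown here scales up to the flow solver: the finite-difference route needs one or more full re-solves per design variable and a step-size study, while the analytic route costs one extra linear solve and has no truncation error, which is why the paper finds it superior.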
International Nuclear Information System (INIS)
Montenegro, E.C.; Pinho, A.G. de
1982-01-01
The deflection and the retardation of a slow bare heavy particle by the repulsive Coulomb field of the target nucleus are known to modify the ionization cross sections of inner shells. It is shown how to calculate these effects on the magnetic substates of the 2p subshell in the framework of the impact-parameter picture. These corrections are essential to understand the energy dependence of the anisotropy coefficient of X-rays emitted in transitions filling an L3 subshell vacancy produced by massive-particle bombardment. (Author) [pt]
Directory of Open Access Journals (Sweden)
Salvatore Martino
2018-02-01
Full Text Available The PARSIFAL (Probabilistic Approach to pRovide Scenarios of earthquake-Induced slope FAiLures) approach was applied in the basin of Alcoy (Alicante, South Spain) to provide a comprehensive scenario of earthquake-induced landslides. The basin of Alcoy is well known for several historical landslides, mainly earth-slides, that involve urban settlements as well as infrastructure (i.e., roads, bridges). PARSIFAL overcomes several limits existing in other approaches, allowing the concomitant analysis of: (i) first-time landslides (due to both rock-slope failures and shallow earth-slides) and reactivations of existing landslides; (ii) slope stability analyses of different failure mechanisms; (iii) comprehensive mapping of earthquake-induced landslide scenarios in terms of exceedance probability of critical threshold values of co-seismic displacements. Geotechnical data were used to constrain the slope stability analysis, while specific field surveys were carried out to measure the jointing and strength conditions of rock masses and to inventory already existing landslides. GIS-based susceptibility analyses were performed to assess the proneness to shallow earth-slides as well as to verify kinematic compatibility with planar or wedge rock-slides and with topples. The application of PARSIFAL to the Alcoy basin: (i) confirms the suitability of the approach at a municipal scale; (ii) highlights the main role of saturation in conditioning slope instabilities in this case study; (iii) demonstrates the reliability of the obtained results with respect to the historical data.
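The exceedance-probability mapping step can be illustrated with a Monte Carlo sketch. The lognormal displacement model and every parameter below are hypothetical placeholders for the actual Newmark-type co-seismic displacement distributions used in PARSIFAL:

```python
import math
import random

def exceedance_probability(mu, sigma, d_crit, n=200000, seed=42):
    """P(D > d_crit) for a lognormally distributed co-seismic
    displacement D, estimated by simple Monte Carlo sampling."""
    random.seed(seed)
    hits = sum(1 for _ in range(n)
               if random.lognormvariate(mu, sigma) > d_crit)
    return hits / n

# hypothetical slope unit: median displacement e^mu = 2 cm,
# log-standard deviation 0.8, critical threshold 5 cm
p_exc = exceedance_probability(mu=math.log(2.0), sigma=0.8, d_crit=5.0)
```

Mapping this probability over every slope unit for a given threshold is what produces the scenario maps the abstract describes; the Monte Carlo estimate can be checked against the closed-form lognormal tail.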
Clustering Coefficients for Correlation Networks.
Masuda, Naoki; Sakaki, Michiko; Ezaki, Takahiro; Watanabe, Takamitsu
2018-01-01
Graph theory is a useful tool for deciphering structural and functional networks of the brain on various spatial and temporal scales. The clustering coefficient quantifies the abundance of connected triangles in a network and is a major descriptive statistic of networks. For example, it finds an application in the assessment of small-worldness of brain networks, which is affected by attentional and cognitive conditions, age, psychiatric disorders and so forth. However, it remains unclear how the clustering coefficient should be measured in a correlation-based network, which is among the major representations of brain networks. In the present article, we propose clustering coefficients tailored to correlation matrices. The key idea is to use three-way partial correlation or partial mutual information to measure the strength of the association between the two neighboring nodes of a focal node relative to the amount of pseudo-correlation expected from indirect paths between the nodes. Our method avoids the difficulties of previous applications of clustering coefficient (and other) measures in defining correlational networks, i.e., thresholding on the correlation value, discarding of negative correlation values, the pseudo-correlation problem and full partial correlation matrices whose estimation is computationally difficult. For proof of concept, we apply the proposed clustering coefficient measures to functional magnetic resonance imaging data obtained from healthy participants of various ages and compare them with conventional clustering coefficients. We show that the clustering coefficients decline with age. The proposed clustering coefficients are more strongly correlated with age than the conventional ones are. We also show that the local variants of the proposed clustering coefficients (i.e., abundance of triangles around a focal node) are useful in characterizing individual nodes. In contrast, the conventional local clustering coefficients were strongly
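For comparison, the conventional binary-graph clustering coefficient against which the proposed measures are benchmarked can be computed in a few lines (toy graphs only; the paper's correlation-matrix variants require partial correlations and are not reproduced here):

```python
def avg_clustering(adj):
    """Average local clustering coefficient of an undirected graph.
    adj maps each node to the set of its neighbours; nodes with fewer
    than two neighbours contribute 0, as in the common convention."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        nb = sorted(nbrs)
        # count edges among the neighbours (closed triples through `node`)
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nb[j] in adj[nb[i]])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}   # fully clustered
path = {1: {2}, 2: {1, 3}, 3: {2}}             # no triangles at all
```

The difficulty the paper addresses is that applying this formula to a correlation matrix first requires turning the matrix into a graph (thresholding, dropping negative values), which is exactly what the proposed partial-correlation coefficients avoid.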
Clustering Coefficients for Correlation Networks
Directory of Open Access Journals (Sweden)
Naoki Masuda
2018-03-01
Full Text Available Graph theory is a useful tool for deciphering structural and functional networks of the brain on various spatial and temporal scales. The clustering coefficient quantifies the abundance of connected triangles in a network and is a major descriptive statistic of networks. For example, it finds an application in the assessment of small-worldness of brain networks, which is affected by attentional and cognitive conditions, age, psychiatric disorders and so forth. However, it remains unclear how the clustering coefficient should be measured in a correlation-based network, which is among the major representations of brain networks. In the present article, we propose clustering coefficients tailored to correlation matrices. The key idea is to use three-way partial correlation or partial mutual information to measure the strength of the association between the two neighboring nodes of a focal node relative to the amount of pseudo-correlation expected from indirect paths between the nodes. Our method avoids the difficulties of previous applications of clustering coefficient (and other) measures in defining correlational networks, i.e., thresholding on the correlation value, discarding of negative correlation values, the pseudo-correlation problem and full partial correlation matrices whose estimation is computationally difficult. For proof of concept, we apply the proposed clustering coefficient measures to functional magnetic resonance imaging data obtained from healthy participants of various ages and compare them with conventional clustering coefficients. We show that the clustering coefficients decline with age. The proposed clustering coefficients are more strongly correlated with age than the conventional ones are. We also show that the local variants of the proposed clustering coefficients (i.e., abundance of triangles around a focal node) are useful in characterizing individual nodes. In contrast, the conventional local clustering coefficients
Clustering Coefficients for Correlation Networks
Masuda, Naoki; Sakaki, Michiko; Ezaki, Takahiro; Watanabe, Takamitsu
2018-01-01
Graph theory is a useful tool for deciphering structural and functional networks of the brain on various spatial and temporal scales. The clustering coefficient quantifies the abundance of connected triangles in a network and is a major descriptive statistic of networks. For example, it finds an application in the assessment of small-worldness of brain networks, which is affected by attentional and cognitive conditions, age, psychiatric disorders and so forth. However, it remains unclear how the clustering coefficient should be measured in a correlation-based network, which is among the major representations of brain networks. In the present article, we propose clustering coefficients tailored to correlation matrices. The key idea is to use three-way partial correlation or partial mutual information to measure the strength of the association between the two neighboring nodes of a focal node relative to the amount of pseudo-correlation expected from indirect paths between the nodes. Our method avoids the difficulties of previous applications of clustering coefficient (and other) measures in defining correlational networks, i.e., thresholding on the correlation value, discarding of negative correlation values, the pseudo-correlation problem and full partial correlation matrices whose estimation is computationally difficult. For proof of concept, we apply the proposed clustering coefficient measures to functional magnetic resonance imaging data obtained from healthy participants of various ages and compare them with conventional clustering coefficients. We show that the clustering coefficients decline with age. The proposed clustering coefficients are more strongly correlated with age than the conventional ones are. We also show that the local variants of the proposed clustering coefficients (i.e., abundance of triangles around a focal node) are useful in characterizing individual nodes. In contrast, the conventional local clustering coefficients were strongly
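The key idea above (judging the association between two neighbours of a focal node by their partial correlation given that node) can be sketched numerically. This is an illustrative reading of the idea, not the authors' exact estimator; the function names and the toy correlation matrix are invented for the example:

```python
import numpy as np

def partial_corr(r_xy, r_xz, r_yz):
    """Correlation between x and y after removing the part explained by z."""
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

def local_clustering(C, i):
    """Average partial correlation between pairs of 'neighbours' of node i.

    C is a correlation matrix; for simplicity every other node is treated
    as a neighbour (no thresholding, negative values are kept).
    """
    n = C.shape[0]
    others = [j for j in range(n) if j != i]
    vals = [partial_corr(C[j, k], C[i, j], C[i, k])
            for a, j in enumerate(others) for k in others[a + 1:]]
    return float(np.mean(vals))

# Toy 4-node correlation matrix (symmetric, unit diagonal)
C = np.array([[1.0, 0.6, 0.5, 0.3],
              [0.6, 1.0, 0.4, 0.2],
              [0.5, 0.4, 1.0, 0.1],
              [0.3, 0.2, 0.1, 1.0]])
print(local_clustering(C, 0))
```

Note how, unlike a thresholded binary network, this quantity uses the raw correlation values directly, which is the difficulty the proposed coefficients are designed to avoid.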
New definition of the cell diffusion coefficient
International Nuclear Information System (INIS)
Koehler, P.
1975-01-01
As was shown in a recent work by Gelbard, the usually applied Benoist definition of the cell diffusion coefficient gives two different values if two different definitions of the cell are made. A new definition is proposed that preserves the neutron balance for the homogenized lattice and that is independent of the cell definition. The resulting diffusion coefficient is identical with the main term of Benoist's diffusion coefficient.
Haslam, S Alexander
2014-03-01
Social identity research was pioneered as a distinctive theoretical approach to the analysis of intergroup relations but over the last two decades it has increasingly been used to shed light on applied issues. One early application of insights from social identity and self-categorization theories was to the organizational domain (with a particular focus on leadership), but more recently there has been a surge of interest in applications to the realm of health and clinical topics. This article charts the development of this Applied Social Identity Approach, and abstracts five core lessons from the research that has taken this forward. (1) Groups and social identities matter because they have a critical role to play in organizational and health outcomes. (2) Self-categorizations matter because it is people's self-understandings in a given context that shape their psychology and behaviour. (3) The power of groups is unlocked by working with social identities not across or against them. (4) Social identities need to be made to matter in deed not just in word. (5) Psychological intervention is always political because it always involves some form of social identity management. Programmes that seek to incorporate these principles are reviewed and important challenges and opportunities for the future are identified. © 2014 The British Psychological Society.
Sets of Fourier coefficients using numerical quadrature
International Nuclear Information System (INIS)
Lyness, J. N.
2001-01-01
One approach to the calculation of the Fourier trigonometric coefficients f(r) of a given function f(x) is to apply the trapezoidal quadrature rule to the integral representation f(r) = ∫_0^1 f(x) e^(-2πirx) dx. Some of the difficulties in this approach are discussed. A possible way of overcoming many of these is by means of a subtraction function. Thus, one sets f(x) = h_(p-1)(x) + g_p(x), where h_(p-1)(x) is an algebraic polynomial of degree p-1, specified in such a way that the Fourier series of g_p(x) converges more rapidly than that of f(x). To obtain the Fourier coefficients of f(x), one uses an analytic expression for those of h_(p-1)(x) and numerical quadrature to approximate those of g_p(x)
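The trapezoidal-rule approximation mentioned above reduces to sampling the integrand at equally spaced points; a minimal sketch (the test function and names are our own, not from the paper):

```python
import numpy as np

def fourier_coeff(f, r, n=256):
    """Approximate f(r) = integral_0^1 f(x) exp(-2*pi*i*r*x) dx
    with the trapezoidal rule on n equal subintervals of [0, 1]."""
    x = np.linspace(0.0, 1.0, n + 1)
    y = f(x) * np.exp(-2j * np.pi * r * x)
    dx = 1.0 / n
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# For f(x) = exp(2*pi*i*x) the exact coefficients are 1 at r = 1 and 0
# at other integers, and the trapezoidal rule reproduces them exactly
# because this test function is smooth and periodic on [0, 1].
f = lambda x: np.exp(2j * np.pi * x)
print(abs(fourier_coeff(f, 1)))
print(abs(fourier_coeff(f, 0)))
```

For non-periodic f the rule converges slowly, which is exactly the difficulty the subtraction function h_(p-1)(x) is introduced to fix.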
Ianni, Elena; Geneletti, Davide
2010-11-01
This paper proposes a method to select forest restoration priority areas consistently with the key principles of the Ecosystem Approach (EA) and the Forest Landscape Restoration (FLR) framework. The methodology is based on the principles shared by the two approaches: acting at ecosystem scale, involving stakeholders, and evaluating alternatives. It proposes the involvement of social actors who have a stake in forest management through multicriteria analysis sessions aimed at identifying the most suitable forest restoration intervention. The method was applied to a study area in the native forests of Northern Argentina (the Yungas). Stakeholders were asked to identify alternative restoration actions, i.e. potential areas for implementing FLR. Ten alternative fincas—estates derived from the Spanish land tenure system—differing in ownership, management, land use, land tenure, and size were evaluated. Twenty criteria were selected and classified into four groups: biophysical, social, economic and political. Finca Ledesma was the closest to the economic, social, environmental and political goals, according to the values and views of the actors involved in the decision. This study represented the first attempt to apply EA principles to forest restoration at landscape scale in the Yungas region. The benefits obtained by the application of the method were twofold: on the one hand, researchers and local actors were forced to conceive of the Yungas as a complex net of rights rather than as a sum of personal interests. On the other hand, the participatory multicriteria approach provided a structured process for collective decision-making in an area where it had never been implemented before.
Thayer, Erin K; Rathkey, Daniel; Miller, Marissa Fuqua; Palmer, Ryan; Mejicano, George C; Pusic, Martin; Kalet, Adina; Gillespie, Colleen; Carney, Patricia A
2016-01-01
Medical educators and educational researchers continue to improve their processes for managing medical student and program evaluation data using sound ethical principles. This is becoming even more important as curricular innovations are occurring across undergraduate and graduate medical education. Dissemination of findings from this work is critical, and peer-reviewed journals often require an institutional review board (IRB) determination. IRB data repositories, originally designed for the longitudinal study of biological specimens, can be applied to medical education research. The benefits of such an approach include obtaining expedited review for multiple related studies within a single IRB application and allowing for more flexibility when conducting complex longitudinal studies involving large datasets from multiple data sources and/or institutions. In this paper, we inform educators and educational researchers on our analysis of the use of the IRB data repository approach to manage ethical considerations as part of best practices for amassing, pooling, and sharing data for educational research, evaluation, and improvement purposes. Fostering multi-institutional studies while following sound ethical principles in the study of medical education is needed, and the IRB data repository approach has many benefits, especially for longitudinal assessment of complex multi-site data.
Carona, Carlos; Silva, Neuza; Moreira, Helena
2015-02-01
Research on the quality of life (QL) of children/adolescents with psychological disorders has flourished over the last few decades. Given the developmental challenges of QL measurements in pediatric populations, the aim of this study was to ascertain the extent to which a developmental approach to QL assessment has been applied to pedopsychiatric QL research. A systematic literature search was conducted in three electronic databases (PubMed, PsycINFO, SocINDEX) from 1994 to May 2014. Quantitative studies were included if they assessed the self- or proxy-reported QL of children/adolescents with a psychological disorder. Data were extracted for study design, participants, QL instruments and informants, and statistical approach to age-related specificities. The systematic review revealed widespread utilization of developmentally appropriate QL instruments but less frequent use of both self and proxy reports and an inconsistent approach to age group specificities. Methodological guidelines are discussed to improve the developmental validity of QL research for children/adolescents with mental disorders.
Nondestructive hall coefficient measurements using ACPD techniques
Velicheti, Dheeraj; Nagy, Peter B.; Hassan, Waled
2018-04-01
Hall coefficient measurements offer great opportunities as well as major challenges for nondestructive materials characterization. The Hall effect is produced by the magnetic Lorentz force acting on moving charge carriers in the presence of an applied magnetic field. The magnetic perturbation gives rise to a Hall current that is normal to the conduction current but does not directly perturb the electric potential distribution. Therefore, Hall coefficient measurements usually exploit the so-called transverse galvanomagnetic potential drop effect that arises when the Hall current is intercepted by the boundaries of the specimen and thereby produces a measurable potential drop. In contrast, no Hall potential is produced in a large plate in the presence of a uniform normal field at quasi-static low frequencies. In other words, conventional Hall coefficient measurements are inherently destructive, since they require cutting the material under test. This study investigated the feasibility of using alternating current potential drop (ACPD) techniques for nondestructive Hall coefficient measurements in plates. Specifically, the directional four-point square-electrode configuration is investigated with a superimposed external magnetic field. Two methods are suggested to make Hall coefficient measurements in large plates without destructive machining. At low frequencies, constraining the bias magnetic field can replace constraining the dimensions of the specimen, which is inherently destructive. For example, when a cylindrical permanent magnet is used to provide the bias magnetic field, the peak Hall voltage is produced when the diameter of the magnet is equal to the diagonal of the square ACPD probe. Although this method is less effective than cutting the specimen to a finite size, the loss of sensitivity is less than one order of magnitude even at very low frequencies. In contrast, at sufficiently high inspection frequencies the magnetic field of the Hall current induces a
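For orientation, the Hall coefficient and the resulting transverse voltage follow textbook relations; a sketch with purely illustrative values (the copper-like numbers are not from this study):

```python
# Hall coefficient R_H = 1 / (n * q) and Hall voltage V_H = R_H * I * B / t
# for a conductor of thickness t carrying current I in flux density B.
E = 1.602176634e-19  # elementary charge (C)

def hall_coefficient(n):
    """R_H in m^3/C for carrier density n (1/m^3), positive carriers."""
    return 1.0 / (n * E)

def hall_voltage(n, current, b_field, thickness):
    """Transverse Hall voltage across a thin conducting plate."""
    return hall_coefficient(n) * current * b_field / thickness

# Copper-like carrier density, 1 A, 1 T, 0.1 mm foil (illustrative numbers)
v = hall_voltage(8.5e28, 1.0, 1.0, 1e-4)
print(f"{v:.2e} V")  # a fraction of a microvolt
```

The sub-microvolt magnitude illustrates why intercepting the Hall current at specimen boundaries (or, as here, shaping the bias field) is needed to produce a measurable signal.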
Diffusion Coefficients of Several Aqueous Alkanolamine Solutions
Snijder, Erwin D.; Riele, Marcel J.M. te; Versteeg, Geert F.; Swaaij, W.P.M. van
1993-01-01
The Taylor dispersion technique was applied for the determination of diffusion coefficients of various systems. Experiments with the system KCl in water showed that the experimental setup provides accurate data. For the alkanolamines monoethanolamine (MEA), diethanolamine (DEA), methyldiethanolamine
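In the Taylor dispersion technique, the diffusion coefficient is commonly extracted from the retention time and temporal variance of the eluted pulse; the sketch below uses the textbook working equation, not necessarily the exact form used by the authors, and the capillary values are illustrative only:

```python
def taylor_dispersion_D(radius, t_retention, sigma_t):
    """Taylor dispersion working equation D = r^2 * t_R / (24 * sigma_t^2),
    valid when radial diffusion equilibrates much faster than axial
    dispersion spreads the pulse (the Taylor regime)."""
    return radius**2 * t_retention / (24.0 * sigma_t**2)

# Illustrative capillary run: r = 0.25 mm, t_R = 1000 s, sigma_t = 25 s
D = taylor_dispersion_D(0.25e-3, 1000.0, 25.0)
print(f"D = {D:.2e} m^2/s")  # of order 1e-9, typical for aqueous solutes
```

A broader peak (larger sigma_t) at fixed retention time implies a smaller diffusion coefficient, since slow radial diffusion lets the parabolic flow profile spread the solute further.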
Saint, Victoria; Floranita, Rustini; Koemara Sakti, Gita Maya; Pambudi, Imran; Hermawan, Lukas; Villar, Eugenio; Magar, Veronica
2018-01-01
ABSTRACT The World Health Organization’s Innov8 Approach for Reviewing National Health Programmes to Leave No One Behind is an eight-step process that supports the operationalization of the Sustainable Development Goals’ commitment to ‘leave no one behind’. In 2014–2015, Innov8 was adapted and applied in Indonesia to review how the national neonatal and maternal health action plans could become more equity-oriented, rights-based and gender-responsive, and better address critical social determinants of health. The process was led by the Indonesian Ministry of Health, with the support of WHO. It involved a wide range of actors and aligned with/fed into the drafting of the maternal newborn health action plan and the implementation planning of the newborn action plan. Key activities included a sensitization meeting, diagnostic checklist, review workshop and in-country work by the review teams. This ‘methods forum’ article describes this adaptation and application process, the outcomes and lessons learnt. In conjunction with other sources, Innov8 findings and recommendations informed national and sub-national maternal and neonatal action plans and programming to strengthen a ‘leave no one behind’ approach. As follow-up during 2015–2017, components of the Innov8 methodology were integrated into district-level planning processes for maternal and newborn health, and Innov8 helped generate demand for health inequality monitoring and its use in planning. In Indonesia, Innov8 enhanced national capacity for equity-oriented, rights-based and gender-responsive approaches and addressing critical social determinants of health. Adaptation for the national planning context (e.g. decentralized structure) and linking with health inequality monitoring capacity building were important lessons learnt. The pilot of Innov8 in Indonesia suggests that this approach can help operationalize the SDGs’ commitment to leave no one behind, in particular in relation to
Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria
2017-08-01
Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS can provide measurements of two hemodynamic variables, such as oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies, and, by taking advantage of a physical model of the coupling between electrical and hemodynamic response, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.
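A bootstrap particle filter of the kind underlying this sequential Monte Carlo approach follows a predict-weight-resample cycle. The sketch below uses a generic one-dimensional random-walk state model, not the authors' EEG/fNIRS state-space model; all names and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=1000, proc_std=0.5, obs_std=0.5):
    """Bootstrap particle filter for a random-walk state observed in noise."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Predict: propagate particles through the random-walk dynamics
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # Weight: Gaussian likelihood of the observation under each particle
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Estimate: posterior mean, then resample to avoid weight degeneracy
        estimates.append(float(np.sum(weights * particles)))
        particles = rng.choice(particles, size=n_particles, p=weights)
    return estimates

# Track a slowly drifting hidden state from noisy observations
truth = np.cumsum(rng.normal(0.0, 0.3, 50))
obs = truth + rng.normal(0.0, 0.5, 50)
est = particle_filter(obs)
print(np.mean(np.abs(np.array(est) - truth)))
```

In the combined EEG/fNIRS setting, the state would instead collect neural and hemodynamic variables coupled through a neurovascular forward model, with one likelihood term per modality.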
International Nuclear Information System (INIS)
Schipper, P.E.; Martire, B.
1985-01-01
The exciton model is applied quantitatively to a description of the excited states of representative members of the helium isoelectronic series; viz. H⁻, He, Li⁺, Be²⁺ and Ne⁸⁺. The energies of the eight lowest excited states are in good agreement with experiment for a relatively small (1s-4p) hydrogenic basis; the ground state is obtained with slightly less precision. Response properties including oscillator strengths, polarizabilities and dispersion interaction coefficients are also calculated. The method leads to particularly simple interpretations of the wave functions and the energies.
Senten, Cindy; de Mazière, Martine; Vanhaelewyn, Gauthier; Vigouroux, Corinne; Delmas, Robert
2010-05-01
The retrieval of information about the vertical distribution of an atmospheric absorber from high spectral resolution ground-based Fourier Transform infrared (FTIR) solar absorption spectra is an important issue in remote sensing. A frequently used technique at present is the optimal estimation method. This work introduces the application of an alternative method, namely the information operator approach (Doicu et al., 2007; Hoogen et al., 1999), for extracting the available information from such FTIR measurements. This approach has been implemented within the well-known retrieval code SFIT2, by adapting the optimal estimation method such as to take into account only the significant contributions to the solution. In particular, we demonstrate the feasibility of the method when applied to ground-based FTIR spectra taken at the southern (sub)tropical site Ile de La Réunion (21° S, 55° E) in 2007. A thorough comparison has been made between the retrieval results obtained with the original optimal estimation method and the ones obtained with the information operator approach, regarding profile and column stability, information content and corresponding full error budget evaluation. This has been done for the target species ozone (O3), methane (CH4), nitrous oxide (N2O), and carbon monoxide (CO). It is shown that the information operator approach performs well and is capable of achieving the same accuracy as optimal estimation, with a gain of stability and with the additional advantage of being less sensitive to the choice of a priori information as well as to the actual signal-to-noise ratio. Keywords: ground-based FTIR, solar absorption spectra, greenhouse gases, information operator approach References Doicu, A., Hilgers, S., von Bargen, A., Rozanov, A., Eichmann, K.-U., von Savigny, C., and Burrows, J.P.: Information operator approach and iterative regularization methods for atmospheric remote sensing, J. Quant. Spectrosc. Radiat. Transfer, 103, 340-350, 2007
Hadri-Hamida, A.; Allag, A.; Hammoudi, M. Y.; Mimoune, S. M.; Zerouali, S.; Ayad, M. Y.; Becherif, M.; Miliani, E.; Miraoui, A.
2009-04-01
This paper presents a new control strategy for a three-phase PWM converter, which consists of applying adaptive nonlinear control. The input-output feedback linearization approach is based on exact cancellation of the nonlinearity; for this reason the technique is not effective on its own, because system parameters can vary. First, a nonlinear system model is derived with the input current and the output voltage as state variables, using the power balance between input and output; the nonlinear adaptive backstepping control can then compensate for the nonlinearities in the nominal system as well as the uncertainties. Simulation results are obtained using Matlab/Simulink. They show how the adaptive backstepping law updates the system parameters and provides an efficient control design, both for tracking and for regulation, improving the power factor.
Ruch, P; Baud, R; Geissbühler, A
2002-12-04
Unlike journal corpora, which are carefully reviewed before being published, the quality of documents in a patient record is often degraded by misspelled words and informal spellings or abbreviations. After a survey of the domain, the paper focuses on evaluating the effect of such corruption on an information retrieval (IR) engine. The IR system uses a classical bag-of-words approach, with stems as representation items and term frequency-inverse document frequency (tf-idf) as the weighting scheme; we pay special attention to the normalization factor. First results show that even low corruption levels (3%) affect retrieval effectiveness (by 4-7%), whereas higher corruption levels can degrade retrieval effectiveness by 25%. We then show that the use of an improved automatic spelling correction system, applied to the corrupted collection, can almost restore the retrieval effectiveness of the engine.
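The tf-idf weighting with a normalization factor, as used by such an IR engine, can be sketched as follows. This is a generic formulation with cosine (L2) normalization; the paper's exact normalization factor is not specified here, and the tokens are invented:

```python
import math
from collections import Counter

def tfidf(docs):
    """Return per-document tf-idf weight dicts for a list of token lists."""
    n = len(docs)
    # Document frequency: number of documents containing each term
    df = Counter(t for doc in docs for t in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        w = {t: tf[t] * math.log(n / df[t]) for t in tf}
        # Cosine (L2) normalization so long documents are not favoured
        norm = math.sqrt(sum(v * v for v in w.values())) or 1.0
        weights.append({t: v / norm for t, v in w.items()})
    return weights

docs = [["chest", "pain", "acute"], ["chest", "xray"], ["acute", "pain", "pain"]]
w = tfidf(docs)
print(w[0])
```

A misspelled token ("xary" instead of "xray") would simply become a new rare term, which is why even low corruption rates measurably hurt retrieval effectiveness.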
Directory of Open Access Journals (Sweden)
Samuel Gendebien
2014-06-01
Full Text Available In the last ten years, the development and implementation of measures to mitigate climate change have become of major importance. In Europe, the residential sector accounts for 27% of the final energy consumption [1], and therefore contributes significantly to CO2 emissions. Roadmaps towards energy-efficient buildings have been proposed [2]. In such a context, the detailed characterization of residential building stocks in terms of age, type of construction, insulation level, energy vector, and of evolution prospects appears to be a useful contribution to the assessment of the impact of implementation of energy policies. In this work, a methodology to develop a tree-structure characterizing a residential building stock is presented in the frame of a bottom-up approach that aims to model and simulate domestic energy use. The methodology is applied to the Belgian case for the current situation and up to 2030 horizon. The potential applications of the developed tool are outlined.
Peeters, Michael J; Vaidya, Varun A
2016-06-25
Objective. To describe an approach for assessing the Accreditation Council for Pharmacy Education's (ACPE) doctor of pharmacy (PharmD) Standard 4.4, which focuses on students' professional development. Methods. This investigation used mixed methods with triangulation of qualitative and quantitative data to assess professional development. Qualitative data came from an electronic developmental portfolio of professionalism and ethics, completed by PharmD students during their didactic studies. Quantitative confirmation came from the Defining Issues Test (DIT)-an assessment of pharmacists' professional development. Results. Qualitatively, students' development reflections described growth through this course series. Quantitatively, the 2015 PharmD class's DIT N2-scores illustrated positive development overall; the lower 50% had a large initial improvement compared to the upper 50%. Subsequently, the 2016 PharmD class confirmed these average initial improvements of students and also showed further substantial development among students thereafter. Conclusion. Applying an assessment for learning approach, triangulation of qualitative and quantitative assessments confirmed that PharmD students developed professionally during this course series.
SYMPOSIUM REPORT: An Evidence-Based Approach to IBS and CIC: Applying New Advances to Daily Practice
Chey, William D.
2017-01-01
Many nonpharmacologic and pharmacologic therapies are available to manage irritable bowel syndrome (IBS) and chronic idiopathic constipation (CIC). The American College of Gastroenterology (ACG) regularly publishes reviews on IBS and CIC therapies. The most recent of these reviews was published by the ACG Task Force on the Management of Functional Bowel Disorders in 2014. The key objective of this review was to evaluate the efficacy of therapies for IBS or CIC compared with placebo or no treatment in randomized controlled trials. Evidence-based approaches to managing diarrhea-predominant IBS include dietary measures, such as a diet low in gluten and fermentable oligo-, di-, and monosaccharides and polyols (FODMAPs); loperamide; antispasmodics; peppermint oil; probiotics; tricyclic antidepressants; alosetron; eluxadoline, and rifaximin. Evidence-based approaches to managing constipation-predominant IBS and CIC include fiber, stimulant laxatives, polyethylene glycol, selective serotonin reuptake inhibitors, lubiprostone, and guanylate cyclase agonists. With the growing evidence base for IBS and CIC therapies, it has become increasingly important for clinicians to assess the quality of evidence and understand how to apply it to the care of individual patients. PMID:28729815
Delfabbro, Paul; King, Daniel
2015-03-01
Many similarities have been drawn between the activities of gambling and video-gaming. Both are repetitive activities with intermittent reinforcement, decision-making opportunities, and elements of risk-taking. As a result, it might be tempting to believe that cognitive strategies that are used to treat problem gambling might also be applied to problematic video gaming. In this paper, we argue that many cognitive approaches to gambling that typically involve a focus on erroneous beliefs about probabilities and randomness are not readily applicable to video gaming. Instead, we encourage a focus on other clusters of cognitions that relate to: (a) the salience and over-valuing of gaming rewards, experiences, and identities, (b) maladaptive and inflexible rules about behaviour, (c) the use of video-gaming to maintain self-esteem, and (d) video-gaming for social status and recognition. This theoretical discussion is advanced as a starting point for the development of more refined cognitive treatment approaches for problematic video gaming.
Directory of Open Access Journals (Sweden)
Shane Stimpson
2017-09-01
Full Text Available An essential component of the neutron transport solver is the resonance self-shielding calculation used to determine equivalence cross sections. The neutron transport code, MPACT, is currently using the subgroup self-shielding method, in which the method of characteristics (MOC) is used to solve purely absorbing fixed-source problems. Recent efforts incorporating multigroup kernels to the MOC solvers in MPACT have reduced runtime by roughly 2×. Applying the same concepts for self-shielding and developing a novel lumped parameter approach to MOC, substantial improvements have also been made to the self-shielding computational efficiency without sacrificing any accuracy. These new multigroup and lumped parameter capabilities have been demonstrated on two test cases: (1) a single lattice with quarter symmetry known as VERA (Virtual Environment for Reactor Applications) Progression Problem 2a and (2) a two-dimensional quarter-core slice known as Problem 5a-2D. From these cases, self-shielding computational time was reduced by roughly 3–4×, with a corresponding 15–20% increase in overall memory burden. An azimuthal angle sensitivity study also shows that only half as many angles are needed, yielding an additional speedup of 2×. In total, the improvements yield roughly a 7–8× speedup. Given these performance benefits, these approaches have been adopted as the default in MPACT.
International Nuclear Information System (INIS)
Stimpson, Shane G.; Liu, Yuxuan; Collins, Benjamin S.; Clarno, Kevin T.
2017-01-01
An essential component of the neutron transport solver is the resonance self-shielding calculation used to determine equivalence cross sections. The neutron transport code, MPACT, is currently using the subgroup self-shielding method, in which the method of characteristics (MOC) is used to solve purely absorbing fixed-source problems. Recent efforts incorporating multigroup kernels to the MOC solvers in MPACT have reduced runtime by roughly 2×. Applying the same concepts for self-shielding and developing a novel lumped parameter approach to MOC, substantial improvements have also been made to the self-shielding computational efficiency without sacrificing any accuracy. These new multigroup and lumped parameter capabilities have been demonstrated on two test cases: (1) a single lattice with quarter symmetry known as VERA (Virtual Environment for Reactor Applications) Progression Problem 2a and (2) a two-dimensional quarter-core slice known as Problem 5a-2D. From these cases, self-shielding computational time was reduced by roughly 3–4×, with a corresponding 15–20% increase in overall memory burden. An azimuthal angle sensitivity study also shows that only half as many angles are needed, yielding an additional speedup of 2×. In total, the improvements yield roughly a 7–8× speedup. Furthermore, given these performance benefits, these approaches have been adopted as the default in MPACT.
APPLIED ORGANIZATION OF CONSTRUCTION
Directory of Open Access Journals (Sweden)
Kievskiy Leonid Vladimirovich
2017-03-01
Full Text Available Applied disciplines in the sphere of construction that are engaged in the solution of vital macroeconomic problems are considered (the general trend in the development of these disciplines is the expansion of their problematics and mutual integration). A characterization of construction organization at the present stage, as a systems engineering discipline covering the investment process of creating real estate items, is given. The main source of current research topics for the applied sciences (socio-economic development forecasts, regional and local programs) is identified. The interpenetration and integration of various fields of knowledge is demonstrated, exemplified by the current interindustry problem of organizing the renovation of blocks of existing development. A mathematical model of wave construction (for the deployment period) is proposed. The nature of the dependence of the total duration of renovation on the limit of annual input and on the coefficient of renovation is established. The overall structure of the Moscow region housing market is presented, and approaches to the definition of effective demand are proposed.
Climer, Sharlee; Yang, Wei; de las Fuentes, Lisa; Dávila-Román, Victor G; Gu, C Charles
2014-11-01
Complex diseases are often associated with sets of multiple interacting genetic factors and possibly with unique sets of the genetic factors in different groups of individuals (genetic heterogeneity). We introduce a novel concept of custom correlation coefficient (CCC) between single nucleotide polymorphisms (SNPs) that addresses genetic heterogeneity by measuring subset correlations autonomously. It is used to develop a 3-step process to identify candidate multi-SNP patterns: (1) pairwise (SNP-SNP) correlations are computed using CCC; (2) clusters of so-correlated SNPs are identified; and (3) the frequencies of these clusters in disease cases and controls are compared to identify disease-associated multi-SNP patterns. This method identified 42 candidate multi-SNP associations with hypertensive heart disease (HHD), among which one cluster of 22 SNPs (six genes) included 13 in SLC8A1 (aka NCX1, an essential component of cardiac excitation-contraction coupling) and another of 32 SNPs had 29 from a different segment of SLC8A1. While allele frequencies show little difference between cases and controls, the cluster of 22 associated alleles was found in 20% of controls but no cases, and the other in 3% of controls but 20% of cases. These results suggest that both protective and risk effects on HHD could be exerted by combinations of variants in different regions of SLC8A1, modified by variants from other genes. The results demonstrate that this new correlation metric identifies disease-associated multi-SNP patterns overlooked by commonly used correlation measures. Furthermore, computation time using CCC is a small fraction of that required by other methods, thereby enabling the analyses of large GWAS datasets. © 2014 WILEY PERIODICALS, INC.
Directory of Open Access Journals (Sweden)
Evangelia Triantafyllou
2016-05-01
Full Text Available One of the recent developments in teaching that heavily relies on current technology is the “flipped classroom” approach. In a flipped classroom the traditional lecture and homework sessions are inverted. Students are provided with online material in order to gain the necessary knowledge before class, while class time is devoted to clarifications and application of this knowledge. The hypothesis is that there can be deep and creative discussions when teacher and students physically meet. This paper discusses how the learning design methodology can be applied to represent, share and guide educators through flipped classroom designs. In order to discuss the opportunities arising from this approach, the different components of the Learning Design – Conceptual Map (LD-CM) are presented and examined in the context of the flipped classroom. It is shown that viewing the flipped classroom through the lens of learning design can promote the use of theories and methods to evaluate its effect on the achievement of learning objectives, and that it may draw attention to the employment of methods to gather learner responses. Moreover, a learning design approach can enforce the detailed description of activities, tools and resources used in specific flipped classroom models, and it can make educators more aware of the decisions that have to be taken and the people who have to be involved when designing a flipped classroom. By using the LD-CM, this paper also draws attention to the importance of the characteristics and values of different stakeholders (i.e. institutions, educators, learners, and external agents), which influence the design and success of flipped classrooms. Moreover, it looks at the teaching cycle from a flipped instruction model perspective and adjusts it to cater for the reflection loops educators are involved in when designing, implementing and re-designing a flipped classroom. Finally, it highlights the effect of learning design on the guidance
Kobayashi, Hideyuki; Takemura, Yukie; Kanda, Katsuya
2011-09-01
Nursing is a labour-intensive field, and an extensive amount of latent information exists to aid in evaluating the quality of nursing service, with patients' experiences being the primary focus of such evaluations. To effect further improvement in nursing as well as medical care, Donabedian's structure-process-outcome approach has been applied. To classify and confirm patients' specific experiences with regard to nursing service based on Donabedian's structure-process-outcome model for improving the quality of nursing care. Items were compiled from existing scales and assigned to structure, process or outcome in Donabedian's model through discussion among expert nurses and pilot data collection. With regard to comfort, surroundings were classified as structure (e.g. accessibility to nurses, disturbance); with regard to patient-practitioner interaction, patient participation was classified as process (e.g. expertise and skill, patient decision-making); and with regard to changes in patients, satisfaction was classified as outcome (e.g. information support, overall satisfaction). Patient inquiry was carried out using the finalized questionnaire at general wards in Japanese hospitals in 2005-2006. Reliability and validity were tested using psychometric methods. Data from 1,810 patients (mean age: 59.7 years; mean length of stay: 23.7 days) were analysed. Internal consistency reliability was supported (α = 0.69-0.96); in factor analysis, the structure items aggregated to one factor and overall satisfaction under outcome aggregated to another, while the remaining outcome and process items were distributed together across two factors. Inter-scale correlations (r = 0.442-0.807) supported the construct validity of the structure-process-outcome approach. All structure items were negatively worded, as they dealt with basic conditions under the Japanese universal health care system, and were regarded as representative of concepts of dissatisfaction and no
Transfer coefficients in ultracold strongly coupled plasma
Bobrov, A. A.; Vorob'ev, V. S.; Zelener, B. V.
2018-03-01
We use both analytical and molecular dynamics methods to obtain electron transfer coefficients in an ultracold plasma when its temperature is small and the coupling parameter characterizing the interaction of electrons and ions exceeds unity. For these conditions, we use the nearest-neighbor approach to determine the average electron (ion) diffusion coefficient and to calculate the other electron transfer coefficients (viscosity and electrical and thermal conductivities). Molecular dynamics simulations produce electronic and ionic diffusion coefficients, confirming the reliability of these results. The results compare favorably with experimental and numerical data from earlier studies.
Comparing linear probability model coefficients across groups
DEFF Research Database (Denmark)
Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt
2015-01-01
This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.
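The scale-parameter component of this identification problem can be illustrated with a small Monte Carlo sketch (a hypothetical setup, not the article's simulations): two groups share an identical structural effect in a latent-variable model, yet their estimated LPM slopes differ simply because the residual scale differs.

```python
import random

random.seed(0)

def lpm_slope(xs, ys):
    # OLS slope of the linear probability model y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def simulate_group(sigma, n=50_000):
    # latent-variable model: y* = 1.0*x + e,  y = 1 if y* > 0;
    # the structural effect (1.0) is identical in both groups,
    # only the residual scale sigma differs
    xs, ys = [], []
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        y = 1 if x + random.gauss(0.0, sigma) > 0 else 0
        xs.append(x)
        ys.append(y)
    return lpm_slope(xs, ys)

b_group1 = simulate_group(sigma=1.0)
b_group2 = simulate_group(sigma=2.0)
# identical structural effects, yet different LPM coefficients
# (roughly 0.28 vs 0.18)
print(round(b_group1, 2), round(b_group2, 2))
```

The gap between the two slopes here is driven entirely by the residual scale, the kind of artifact the article warns can masquerade as a genuine group difference.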
Peterson, Kathryn M; Piazza, Cathleen C; Volkert, Valerie M
2016-09-01
Treatments of pediatric feeding disorders based on applied behavior analysis (ABA) have the most empirical support in the research literature (Volkert & Piazza, 2012); however, professionals often recommend, and caregivers often use, treatments that have limited empirical support. In the current investigation, we compared a modified sequential oral sensory approach (M-SOS; Benson, Parke, Gannon, & Muñoz, 2013) to an ABA approach for the treatment of the food selectivity of 6 children with autism. We randomly assigned 3 children to ABA and 3 children to M-SOS and compared the effects of treatment in a multiple baseline design across novel, healthy target foods. We used a multielement design to assess treatment generalization. Consumption of target foods increased for children who received ABA, but not for children who received M-SOS. We subsequently implemented ABA with the children for whom M-SOS was not effective and observed a potential treatment generalization effect during ABA when M-SOS preceded ABA. © 2016 Society for the Experimental Analysis of Behavior.
Moncayo, Roy; Moncayo, Helga; Ulmer, Hanno; Kainz, Hartmann
2004-08-01
To investigate pathogenetic mechanisms related to the lacrimal and lymphatic glands in patients with thyroid-associated orbitopathy (TAO), and the potential of applied kinesiology diagnosis and homeopathic therapeutic measures. Prospective. Thyroid outpatient unit and a specialized center for complementary medicine (WOMED, Innsbruck; R.M. and H.M.). Thirty-two (32) patients with TAO, 23 with long-standing disease, and 9 showing discrete initial changes. All patients were euthyroid at the time of the investigation. Clinical investigation was done using applied kinesiology methods. Starting from normally reacting muscles, both target organs and therapeutic measures were tested. Affected organs will produce a therapy localization (TL) that turns a normal muscle tone weak. Using the same approach, specific counteracting therapies (i.e., tonsillitis nosode and lymph-mobilizing agents) were tested. Change of lid swelling, of ocular movement discomfort, ocular lock, tonsil reactivity and Traditional Chinese Medicine criteria including tenderness of San Yin Jiao (SP6) and tongue diagnosis were recorded in a graded fashion. Positive TL reactions were found in the submandibular tonsillar structures, the pharyngeal tonsils, the San Yin Jiao point, the lacrimal gland, and with the functional ocular lock test. Both Lymphdiaral (Pascoe, Giessen, Germany) and the homeopathic preparation chronic tonsillitis nosode at a C3 potency (Spagyra, Grödig, Austria) counteracted these changes. Both agents were used therapeutically over 3-6 months, after which all relevant parameters showed improvement. Our study demonstrates the involvement of lymphatic structures and flow in the pathogenesis of TAO. The tenderness of the San Yin Jiao point correlates with the above-mentioned changes and should be included in the clinical evaluation of these patients.
Directory of Open Access Journals (Sweden)
Namhee Kim
Full Text Available Group-wise analyses of DTI in mTBI have demonstrated evidence of traumatic axonal injury (TAI), associated with adverse clinical outcomes. Although mTBI is likely to have a unique spatial pattern in each patient, group analyses implicitly assume that the location of injury will be the same across patients. The purpose of this study was to optimize and validate a procedure for analysis of DTI images acquired in individual patients, which could detect inter-individual differences and be applied in the clinical setting, where patients must be assessed as individuals. After informed consent and in compliance with HIPAA, 34 mTBI patients and 42 normal subjects underwent 3.0 Tesla DTI. Four voxelwise assessment methods for use in individual patients (standard Z-score, "one vs. many" t-test, Family-Wise Error Rate (FWER) control using a pseudo t-distribution, and EZ-MAP) were applied to each patient's fractional anisotropy (FA) maps and tested for their ability to discriminate patients from controls. Receiver Operating Characteristic (ROC) analyses were used to define optimal thresholds (voxel-level significance and spatial extent) for reliable and robust detection of mTBI pathology. ROC analyses showed that EZ-MAP (specificity 71%, sensitivity 71%), the "one vs. many" t-test and the standard Z-score (sensitivity 65%, specificity 76% for both methods) resulted in a significant area under the curve (AUC) score for discriminating mTBI patients from controls in terms of the total number of abnormal white matter voxels detected, while the FWER test was not significant. EZ-MAP is demonstrated to be robust to assumptions of Gaussian behavior and may serve as an alternative to methods that require strict Gaussian assumptions. EZ-MAP provides a robust approach for delineation of regional abnormal anisotropy in individual mTBI patients.
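The standard Z-score method in the list above is simple enough to sketch: each voxel of the individual patient is compared against the control-group distribution at that voxel, and voxels exceeding a threshold are flagged. The FA values below are toy numbers, not the study's data.

```python
import statistics

def zscore_map(patient, controls):
    """Standard Z-score of one subject against a control group,
    computed independently at each voxel."""
    zmap = []
    for v, patient_val in enumerate(patient):
        vals = [c[v] for c in controls]
        mu = statistics.mean(vals)
        sd = statistics.stdev(vals)
        zmap.append((patient_val - mu) / sd)
    return zmap

# toy FA values at 5 "voxels": 6 controls and 1 patient
controls = [
    [0.48, 0.51, 0.47, 0.52, 0.49],
    [0.50, 0.49, 0.48, 0.51, 0.50],
    [0.47, 0.50, 0.49, 0.50, 0.48],
    [0.49, 0.52, 0.50, 0.49, 0.51],
    [0.51, 0.48, 0.47, 0.52, 0.49],
    [0.50, 0.50, 0.48, 0.50, 0.50],
]
patient = [0.49, 0.50, 0.30, 0.51, 0.49]  # voxel 2 is abnormally low

zmap = zscore_map(patient, controls)
abnormal = [v for v, z in enumerate(zmap) if abs(z) > 2.0]
print(abnormal)  # → [2]
```

In the study, the voxel-level threshold and the minimum spatial extent of flagged clusters were the quantities tuned by the ROC analysis.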
Papež, Václav; Mouček, Roman
2017-01-01
The purpose of this study is to investigate the feasibility of applying openEHR (an archetype-based approach for electronic health records representation) to modeling data stored in EEGBase, a portal for experimental electroencephalography/event-related potential (EEG/ERP) data management. The study evaluates re-usage of existing openEHR archetypes and proposes a set of new archetypes together with the openEHR templates covering the domain. The main goals of the study are to (i) link existing EEGBase data/metadata and openEHR archetype structures and (ii) propose a new openEHR archetype set describing the EEG/ERP domain since this set of archetypes currently does not exist in public repositories. The main methodology is based on the determination of the concepts obtained from EEGBase experimental data and metadata that are expressible structurally by the openEHR reference model and semantically by openEHR archetypes. In addition, templates as the third openEHR resource allow us to define constraints over archetypes. Clinical Knowledge Manager (CKM), a public openEHR archetype repository, was searched for the archetypes matching the determined concepts. According to the search results, the archetypes already existing in CKM were applied and the archetypes not existing in the CKM were newly developed. openEHR archetypes support linkage to external terminologies. To increase semantic interoperability of the new archetypes, binding with the existing odML electrophysiological terminology was assured. Further, to increase structural interoperability, also other current solutions besides EEGBase were considered during the development phase. Finally, a set of templates using the selected archetypes was created to meet EEGBase requirements. A set of eleven archetypes that encompassed the domain of experimental EEG/ERP measurements were identified. Of these, six were reused without changes, one was extended, and four were newly created. All archetypes were arranged in the
International Nuclear Information System (INIS)
Bachet, Martin; Jauberty, Loic; De Windt, Laurent; Dieuleveult, Caroline de; Tevissen, Etienne
2014-01-01
Experiments performed under chemical and flow conditions representative of pressurized water reactors (PWR) primary fluid purification by ion exchange resins (Amberlite IRN9882) are modeled with the OPTIPUR code, considering 1D reactive transport in the mixed-bed column with convective/dispersive transport between beads and electro-diffusive transport within the boundary film around the beads. The effectiveness of the purification in these dilute conditions is highly related to film mass transfer restrictions, which are accounted for by adjustment of a common mass transfer coefficient (MTC) on the experimental initial leakage or modeling of species diffusion through the bead film by the Nernst-Planck equation. A detailed analysis of the modeling against experimental data shows that the Nernst-Planck approach with no adjustable parameters performs as well as, or better than, the MTC approach, particularly to simulate the chromatographic elution of silver by nickel and the subsequent enrichment of the solution in the former metal. (authors)
Benmarhnia, Tarik; Grenier, Patrick; Brand, Allan; Fournier, Michel; Deguen, Séverine; Smargiassi, Audrey
2015-09-22
We propose a novel approach to examine vulnerability in the relationship between heat and years of life lost, and apply it to neighborhood social disparities in Montreal and Paris. We used historical data from the summers of 1990 through 2007 for Montreal and from 2004 through 2009 for Paris to estimate daily years of life lost social disparities (DYLLD), summarizing social inequalities across groups. We used Generalized Linear Models to separately estimate relative risks (RR) for DYLLD in association with daily mean temperatures in both cities. We used 30 climate scenarios of daily mean temperature to estimate future temperature distributions (2021-2050). We performed random-effect meta-analyses to assess the impact of climate change by climate scenario for each city and compared the impact of climate change for the two cities using a meta-regression analysis. We show that an increase in ambient temperature leads to an increase in social disparities in daily years of life lost. The impact of climate change on DYLLD attributable to temperature was 2.06 (95% CI: 1.90, 2.25) in Montreal and 1.77 (95% CI: 1.61, 1.94) in Paris. The city explained a difference of 0.31 (95% CI: 0.14, 0.49) in the impact of climate change. We propose a new analytical approach for estimating vulnerability in the relationship between heat and health. Our results suggest that in Paris and Montreal, health disparities related to heat impacts exist today and will increase in the future.
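The scenario-pooling step can be sketched with the standard DerSimonian-Laird random-effects estimator (a common choice; the abstract does not name the exact estimator, and the log-RR values and variances below are hypothetical):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the method-of-moments between-study variance
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # re-weight with between-scenario variance added
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# hypothetical scenario-specific log relative risks and their variances
log_rr = [0.72, 0.55, 0.60, 0.68, 0.50]
var = [0.004, 0.006, 0.005, 0.004, 0.007]

pooled, ci = dersimonian_laird(log_rr, var)
print(round(math.exp(pooled), 2))  # pooled RR across scenarios
```

Pooling on the log scale and exponentiating at the end mirrors how relative risks from multiple climate scenarios would typically be combined.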
Doummar, Joanna; Kassem, Assaad
2017-04-01
In the framework of a three-year PEER (USAID/NSF) funded project, flow in a karst system in Lebanon (Assal) dominated by snow and semi-arid conditions was simulated and successfully calibrated using an integrated numerical model (MIKE SHE 2016) based on high-resolution input data and detailed catchment characterization. Point-source infiltration and fast flow pathways were simulated by a bypass function and a highly conductive lens, respectively. The approach consisted of identifying all the factors used in qualitative vulnerability methods (COP, EPIK, PI, DRASTIC, GOD) applied in karst systems and assessing their influence on recharge signals in the different hydrological karst compartments (atmosphere, unsaturated zone and saturated zone) based on the integrated numerical model. These parameters are usually attributed different weights according to their estimated impact on groundwater vulnerability. The aim of this work is to quantify the importance of each of these parameters and outline parameters that are not accounted for in standard methods but that might play a role in the vulnerability of a system. The spatial distribution of the detailed evapotranspiration, infiltration and recharge signals from atmosphere to unsaturated zone to saturated zone was compared and contrasted among different surface settings and under varying flow conditions (e.g., varying slopes, land cover, precipitation intensity and soil properties, as well as point-source infiltration). Furthermore, a sensitivity analysis of individual or coupled major parameters allows quantifying their impact on recharge and, indirectly, on vulnerability. The preliminary analysis yields a new methodology that accounts for most of the factors influencing vulnerability while refining the weights attributed to each one of them, based on a quantitative approach.
Adaptive Finite Element Methods for Elliptic Problems with Discontinuous Coefficients
Bonito, Andrea; DeVore, Ronald A.; Nochetto, Ricardo H.
2013-01-01
Elliptic PDEs with discontinuous diffusion coefficients occur in application domains such as diffusion through porous media, electromagnetic field propagation in heterogeneous media, and diffusion processes on rough surfaces. The standard approach to numerically treating such problems with finite element methods is to assume that the discontinuities lie on the boundaries of the cells in the initial triangulation. However, this does not match applications where discontinuities occur on curves, surfaces, or manifolds, and could even be unknown beforehand. One of the obstacles to treating such discontinuity problems is that the usual perturbation theory for elliptic PDEs assumes bounds for the distortion of the coefficients in the L∞ norm, and this in turn requires that the discontinuities be matched exactly when the coefficients are approximated. We present a new approach based on distortion of the coefficients in an Lq norm with q < ∞, which therefore does not require exact matching of the discontinuities. We then use this new distortion theory to formulate new adaptive finite element methods (AFEMs) for such discontinuity problems. We show that such AFEMs are optimal in the sense of distortion versus number of computations, and report insightful numerical results supporting our analysis. © 2013 Society for Industrial and Applied Mathematics.
Directory of Open Access Journals (Sweden)
Eldon Glen Caldwell Marin
2015-01-01
Full Text Available The Markov chain model was proposed to analyze stochastic events when recursive cycles occur; for example, when rework in a continuous-flow production affects the overall performance. Typically, the analysis of rework and scrap is done from a wasted-material cost perspective and not from the perspective of wasted capacity that reduces throughput and economic value added (EVA). Also, we cannot find many cases of this application in agro-industrial production in Latin America, given the complexity of the calculations and the need for robust applications. This scientific work presents the results of a quasi-experimental research approach in order to explain how to apply DOE methods and Markov analysis to a rice production process located in Central America, evaluating the global effects of a single reduction in rework and scrap in one part of the whole line. The results show that in this case it is possible to evaluate benefits from a global throughput and EVA perspective and not only from a cost-savings perspective, finding a relationship between operational indicators and corporate performance. However, it was found that it is necessary to analyze the Markov chain configuration with many rework points, and it is still relevant to take into account the effects on takt time and not only scrap costs.
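A rework loop of this kind is naturally modeled as an absorbing Markov chain: the station and the rework queue are transient states, while "good unit" and "scrap" are absorbing. The absorption probability into the good state is the effective yield, which is what the throughput analysis builds on. A minimal sketch with hypothetical transition probabilities (not the paper's rice-line data):

```python
def eventual_yield(p_pass, p_rework, p_repair, n_iter=200):
    """Probability that a unit entering the station is eventually
    absorbed in the 'good' state of the chain.

    Transient states: Station, Rework.  Absorbing states: Good, Scrap.
    From Station: pass with p_pass, go to Rework with p_rework,
    otherwise scrap.  From Rework: return to Station with p_repair,
    otherwise scrap.  Solved by fixed-point iteration on the
    absorption-probability equations."""
    g_station = g_rework = 0.0
    for _ in range(n_iter):
        g_station = p_pass + p_rework * g_rework
        g_rework = p_repair * g_station
    return g_station

# hypothetical transition probabilities for one processing station
good = eventual_yield(p_pass=0.80, p_rework=0.15, p_repair=0.90)
print(round(good, 3))  # → 0.925  (vs. 0.80 if reworked units were scrapped)
```

The same fixed-point system extends to lines with several rework points by adding one equation per transient state, which is where the configuration analysis the authors mention becomes necessary.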
Lim, Hayoung A; Draper, Ellary
2011-01-01
This study compared a common form of the Applied Behavior Analysis Verbal Behavior (ABA VB) approach and music incorporated with the ABA VB method as part of developmental speech-language training in the speech production of children with Autism Spectrum Disorders (ASD). This study explored how the perception of musical patterns incorporated in ABA VB operants impacted the production of speech in children with ASD. Participants were 22 children with ASD, age range 3 to 5 years, who were verbal or preverbal with presence of immediate echolalia. They were randomly assigned a set of target words for each of the 3 training conditions: (a) music-incorporated ABA VB, (b) speech (ABA VB), and (c) no training. Results showed both music and speech trainings were effective for production of the four ABA verbal operants; however, the difference between music and speech training was not statistically significant. Results also indicated that music-incorporated ABA VB training was most effective in echoic production, and speech training was most effective in tact production. Music can be incorporated into the ABA VB training method, and musical stimuli can be used as successfully as ABA VB speech training to enhance functional verbal production in children with ASD.
Jordan, Nika; Zakrajšek, Jure; Bohanec, Simona; Roškar, Robert; Grabnar, Iztok
2018-05-01
The aim of the present research is to show that the methodology of Design of Experiments can be applied to stability data evaluation, as stability studies can be seen as multi-factor and multi-level experimental designs. Linear regression analysis is the usual approach for analyzing stability data, but multivariate statistical methods could also be used to assess drug stability during the development phase. Data from a stability study of a pharmaceutical product with hydrochlorothiazide (HCTZ) as an unstable drug substance was used as a case example in this paper. The design space of the stability study was modeled using Umetrics MODDE 10.1 software. We showed that a Partial Least Squares model could be used for a multi-dimensional presentation of all data generated in a stability study and for determination of the relationships among factors that influence drug stability. It might also be used for stability predictions and potentially for optimization of the extent of stability testing needed to determine shelf life and storage conditions, which would be time- and cost-effective for the pharmaceutical industry.
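The conventional linear-regression approach the abstract contrasts against can be sketched in a few lines: fit assay versus time and find where the fitted trend crosses the specification limit. The HCTZ assay values below are hypothetical, and a real ICH-style evaluation would intersect the limit with the one-sided confidence bound of the regression line, not the mean trend used here.

```python
def ols(xs, ys):
    # ordinary least squares fit y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# hypothetical assay results (% of label claim) at one storage condition
months = [0, 3, 6, 9, 12, 18]
assay = [100.1, 99.4, 98.9, 98.2, 97.6, 96.3]

a, b = ols(months, assay)
limit = 95.0  # lower specification limit for assay
shelf_life = (limit - a) / b  # months until the fitted trend hits the limit
print(round(shelf_life, 1))  # → 24.3
```

The multivariate PLS treatment described in the paper generalizes this by modeling all stability factors and responses jointly rather than one regression per condition.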
DEFF Research Database (Denmark)
Samuelsson, Jerker; Delre, Antonio; Tumlin, Susanne
2018-01-01
Plant-integrated and on-site gas emissions were quantified from a Swedish wastewater treatment plant by applying several optical analytical techniques and measurement methods. Plant-integrated CH4 emission rates, measured using mobile ground-based remote sensing methods, varied between 28.5 and 33.5 kg CH4 h−1, corresponding to an average emission factor of 5.9% as kg CH4 (kg CH4 production)−1, whereas N2O emissions varied between 4.0 and 6.4 kg h−1, corresponding to an average emission factor of 1.5% as kg N2O-N (kg TN influent)−1. Plant-integrated NH3 emissions were around 0.4 kg h−1 … quantifications were approximately two-thirds of the plant-integrated emission quantifications, which may be explained by the different timeframes of the approaches and that not all emission sources were identified during on-site investigation. Off-site gas emission quantifications, using ground-based remote…
Energy Technology Data Exchange (ETDEWEB)
Gómez-Zarzuela, C.; Miró, R.; Verdú, G. [Institute for Industrial Safety, Radiology and Environmental (ISIRYM), Universitat Politècnica de València (Spain); Peña-Monferrer, C.; Chiva, S. [Department of Mechanical Engineering and Construction, Universitat Jaume I, Castellón de la Plana (Spain); Muñoz-Cobo, J.L., E-mail: congoque@iqn.upv.es, E-mail: cpena@uji.es [Institute for Energy Engineering, Universitat Politècnica de València (Spain)
2017-07-01
Two-phase flow simulation has been an extended research topic over the years due to the importance of predicting with accuracy the flow behavior within different installations, including nuclear power plants. Some events occur at low pressure, like low-pressure water injection, nuclear refueling or natural circulation. This work investigates the level of accuracy of the results when a two-phase flow experiment carried out at low pressure is reproduced with a one-dimensional simulation code. In particular, the codes selected to represent the experiment are the best-estimate system codes RELAP5/MOD3 and TRACE v5.0 patch4. The experiment consists of a long vertical pipe along which an air-water mixture in the bubbly regime moves upwards in adiabatic conditions at atmospheric pressure. The simulations were first performed in both codes with their original correlations, which are based on the drift flux model for the case of the bubbly regime in vertical pipes. Then, a different implementation of the drag force was undertaken, in order to perform a simulation with a bubble diameter equivalent to the experiment. Results show that the calculations obtained from the codes are within the ranges of validity of the experiment with some discrepancies, which leads to the conclusion that the drag-correlation approach is more realistic than the drift flux model. (author)
Böhning, Dankmar; Karasek, Sarah; Terschüren, Claudia; Annuß, Rolf; Fehr, Rainer
2013-03-09
Life expectancy is of increasing prime interest for a variety of reasons. In many countries, life expectancy is growing linearly, without any indication of reaching a limit. The state of North Rhine-Westphalia (NRW) in Germany, with its 54 districts, is considered here, where the above-mentioned growth in life expectancy is occurring as well. However, there is also empirical evidence that life expectancy is not growing linearly at the same level in different regions. To explore this situation further, a likelihood-based cluster analysis is suggested and performed. The modelling uses a nonparametric mixture approach for the latent random effect. Maximum likelihood estimates are determined by means of the EM algorithm, and the number of components in the mixture model is found on the basis of the Bayesian Information Criterion. Regions are classified into the mixture components (clusters) using the maximum posterior allocation rule. For the data analyzed here, 7 components are found, with a spatial concentration of lower life expectancy levels in the centre of NRW, formerly an enormous conglomerate of heavy industry and still the most densely populated area, with Gelsenkirchen having the lowest level of life expectancy growth for both genders. The paper offers some explanations for this fact, including demographic and socio-economic sources. This case study shows that life expectancy growth is widely linear, but it might occur at different levels.
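The EM-plus-BIC machinery used in the paper can be illustrated with a simplified sketch: a one-dimensional Gaussian mixture (the paper fits a nonparametric mixture over a latent random effect, which is richer than this) on hypothetical district-level life-expectancy values, with BIC choosing the number of components.

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def em_gauss_mixture(xs, k, n_iter=300):
    # deterministic init: means spread evenly over the data range
    lo, hi = min(xs), max(xs)
    mus = [lo + (hi - lo) * (j + 0.5) / k for j in range(k)]
    sds = [max((hi - lo) / (2.0 * k), 1e-3)] * k
    pis = [1.0 / k] * k
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            dens = [pis[j] * normal_pdf(x, mus[j], sds[j]) for j in range(k)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means and standard deviations
        for j in range(k):
            nj = sum(r[j] for r in resp)
            pis[j] = nj / len(xs)
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sds[j] = max(math.sqrt(var), 1e-3)
    ll = sum(math.log(sum(pis[j] * normal_pdf(x, mus[j], sds[j])
                          for j in range(k))) for x in xs)
    return ll, sorted(mus)

def bic(ll, k, n):
    # free parameters: (k-1) weights + k means + k standard deviations
    return (3 * k - 1) * math.log(n) - 2.0 * ll

# hypothetical life-expectancy levels for 12 districts, two latent clusters
xs = [76.8, 77.1, 77.4, 76.9, 77.2, 77.0,
      80.9, 81.2, 81.0, 81.3, 80.8, 81.1]

ll1, _ = em_gauss_mixture(xs, k=1)
ll2, mus2 = em_gauss_mixture(xs, k=2)
best_k = 1 if bic(ll1, 1, len(xs)) < bic(ll2, 2, len(xs)) else 2
print(best_k, [round(m) for m in mus2])  # → 2 [77, 81]
```

After fitting, districts would be assigned to the component with the largest responsibility, which is exactly the maximum posterior allocation rule mentioned in the abstract.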
Genome-scale cluster analysis of replicated microarrays using shrinkage correlation coefficient.
Yao, Jianchao; Chang, Chunqi; Salmi, Mari L; Hung, Yeung Sam; Loraine, Ann; Roux, Stanley J
2008-06-18
Currently, clustering with some form of correlation coefficient as the gene similarity metric has become a popular method for profiling genomic data. The Pearson correlation coefficient and the standard deviation (SD)-weighted correlation coefficient are the two most widely used correlations as similarity metrics in clustering microarray data. However, these two correlations are not optimal for analyzing replicated microarray data generated by most laboratories. An effective correlation coefficient is needed to provide a statistically sufficient analysis of replicated microarray data. In this study, we describe a novel correlation coefficient, the shrinkage correlation coefficient (SCC), that fully exploits the similarity between replicated microarray experimental samples. The methodology considers both the number of replicates and the variance within each experimental group in clustering expression data, and provides a robust statistical estimation of the error of replicated microarray data. The value of SCC is revealed by comparison with the two correlation coefficients that are currently the most widely used (the Pearson correlation coefficient and the SD-weighted correlation coefficient) using statistical measures on both synthetic expression data and real gene expression data from Saccharomyces cerevisiae. Two leading clustering methods, hierarchical and k-means clustering, were applied for the comparison. The comparison indicated that using SCC achieves better clustering performance. Applying SCC-based hierarchical clustering to the replicated microarray data obtained from germinating spores of the fern Ceratopteris richardii, we discovered two clusters of genes with shared expression patterns during spore germination. Functional analysis suggested that some of the genetic mechanisms that control germination in such diverse plant lineages as mosses and angiosperms are also conserved among ferns. This study shows that SCC is an alternative to the Pearson
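The baseline the paper compares against can be sketched: the common practice collapses replicates to their means and correlates the resulting profiles with the Pearson coefficient, discarding the within-group variance and replicate count that SCC exploits (the SCC shrinkage estimator itself is not reproduced here; the expression values are toy numbers).

```python
import statistics

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

# two genes measured in 4 conditions with 3 replicates each (toy log-ratios)
gene_a = [[1.1, 0.9, 1.0], [2.1, 1.9, 2.0], [0.1, -0.1, 0.0], [1.0, 1.2, 0.8]]
gene_b = [[0.9, 1.1, 1.0], [1.8, 2.2, 2.0], [0.2, 0.0, -0.2], [1.2, 1.0, 1.1]]

# the common practice: collapse replicates to means, then correlate
mean_a = [statistics.mean(r) for r in gene_a]
mean_b = [statistics.mean(r) for r in gene_b]
r_means = pearson(mean_a, mean_b)
print(round(r_means, 3))  # → 0.998
```

A replicate-aware metric such as SCC would additionally ask whether this near-perfect similarity of the mean profiles is credible given the replicate scatter within each condition.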
Probabilistic calibration of safety coefficients for flawed components in nuclear engineering
International Nuclear Information System (INIS)
Ardillon, E.; Pitner, P.; Barthelet, B.; Remond, A.
1995-01-01
The current rules applied to verify flaw acceptance in nuclear components rely on deterministic criteria supposed to ensure safe plant operation. The interest in having a precise and reliable method to evaluate the safety margins and the integrity of components led Electricite de France to launch an approach linking safety coefficients directly with safety levels. This paper presents a probabilistic methodology to calibrate safety coefficients in relation to reliability target values. The proposed calibration procedure is applied to the case of a flawed ferritic pipe, using the R6 procedure for assessing the structural integrity. (author). 5 refs., 5 figs., 1 tab
Coury, Jennifer; Schneider, Jennifer L; Rivelli, Jennifer S; Petrik, Amanda F; Seibel, Evelyn; D'Agostini, Brieshon; Taplin, Stephen H; Green, Beverly B; Coronado, Gloria D
2017-06-19
The Plan-Do-Study-Act (PDSA) cycle is a commonly used improvement process in health care settings, although its documented use in pragmatic clinical research is rare. A recent pragmatic clinical research study, called the Strategies and Opportunities to STOP Colon Cancer in Priority Populations (STOP CRC), used this process to optimize the research implementation of an automated colon cancer screening outreach program in intervention clinics. We describe the process of using this PDSA approach, the selection of PDSA topics by clinic leaders, and project leaders' reactions to using PDSA in pragmatic research. STOP CRC is a cluster-randomized pragmatic study that aims to test the effectiveness of a direct-mail fecal immunochemical testing (FIT) program involving eight Federally Qualified Health Centers in Oregon and California. We and a practice improvement specialist trained in the PDSA process delivered structured presentations to leaders of these centers; the presentations addressed how to apply the PDSA process to improve implementation of a mailed outreach program offering colorectal cancer screening through FIT tests. Center leaders submitted PDSA plans and delivered reports via webinar at quarterly meetings of the project's advisory board. Project staff conducted one-on-one, 45-min interviews with project leads from each health center to assess the reaction to and value of the PDSA process in supporting the implementation of STOP CRC. Clinic-selected PDSA activities included refining the intervention staffing model, improving outreach materials, and changing workflow steps. Common benefits of using PDSA cycles in pragmatic research were that it provided a structure for staff to focus on improving the program and it allowed staff to test the change they wanted to see. A commonly reported challenge was measuring the success of the PDSA process with the available electronic medical record tools. Understanding how the PDSA process can be applied to pragmatic
Calibration factor or calibration coefficient?
International Nuclear Information System (INIS)
Meghzifene, A.; Shortt, K.R.
2002-01-01
Full text: The IAEA/WHO network of SSDLs was set up in order to establish links between SSDL members and the international measurement system. At the end of 2001, there were 73 network members in 63 Member States. The SSDL network members provide calibration services to end-users at the national or regional level. The results of the calibrations are summarized in a document called a calibration report or calibration certificate. The IAEA has been using the term calibration certificate and will continue using the same terminology. The most important information in a calibration certificate is a list of calibration factors and their related uncertainties that apply to the calibrated instrument under well-defined irradiation and ambient conditions. The IAEA has recently decided to change the term calibration factor to calibration coefficient, to be fully in line with ISO [ISO 31-0], which recommends the use of the term coefficient for k when it links two quantities A and B of different dimensions in a relation of the form A = k.B. The term factor should be reserved for k when the linked quantities A and B have the same dimensions. However, in a typical calibration, an ion chamber is calibrated in terms of a physical quantity such as air kerma, dose to water, ambient dose equivalent, etc. If the chamber is calibrated together with its electrometer, then the calibration refers to the physical quantity to be measured per electrometer unit reading. In either case, the linked quantities have different dimensions. The adoption by the Agency of the term coefficient to express the results of calibrations is consistent with the 'International vocabulary of basic and general terms in metrology' prepared jointly by the BIPM, IEC, ISO, OIML and other organizations. The BIPM has changed from factor to coefficient. The authors believe that this is more than just a matter of semantics and recommend that the SSDL network members adopt this change in terminology. (author)
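The ISO distinction can be made concrete in a short sketch; all names and numbers below are illustrative, not taken from any calibration certificate:

```python
def calibration_coefficient(air_kerma_gy, reading_nc):
    """Ratio linking two quantities of *different* dimensions
    (Gy per nC) -- ISO 31-0 terminology: a coefficient."""
    return air_kerma_gy / reading_nc

def correction_factor(corrected_reading_nc, raw_reading_nc):
    """Ratio of two quantities of the *same* dimension, as in A = k.B
    with [A] = [B] -- ISO 31-0 terminology: a (dimensionless) factor."""
    return corrected_reading_nc / raw_reading_nc

# Hypothetical calibration point: 15 mGy delivered, 0.300 nC read.
n_k = calibration_coefficient(1.5e-2, 0.300)  # 0.05 Gy/nC (a coefficient)
k = correction_factor(0.306, 0.300)           # 1.02 (a factor)
```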
International Nuclear Information System (INIS)
Koohkan, Mohammad Reza
2012-01-01
Data assimilation in geophysical sciences aims at optimally estimating the state of the system or some parameters of the system's physical model. To do so, data assimilation needs three types of information: observations and background information, a physical/numerical model, and some statistical description that prescribes uncertainties to each component of the system. In my dissertation, new methodologies of data assimilation are used in atmospheric chemistry and physics: the joint use of a 4D-Var with a sub-grid statistical model to consistently account for representativeness errors, accounting for multiple scales in the BLUE estimation principle, and a better estimation of prior errors using objective estimation of hyper-parameters. These three approaches are applied specifically to inverse modelling problems focusing on the emission fields of tracers or pollutants. First, in order to estimate the emission inventories of carbon monoxide over France, measurements from in-situ stations, which are impacted by representativeness errors, are used. A sub-grid model is introduced and coupled with a 4D-Var to reduce the representativeness error. Indeed, the results of inverse modelling showed that the 4D-Var routine alone was not fit to handle the representativeness issues. The coupled data assimilation system led to a much better representation of the CO concentration variability, with a significant improvement of statistical indicators, and a more consistent estimation of the CO emission inventory. Second, the evaluation of the potential of the IMS (International Monitoring System) radionuclide network is performed for the inversion of an accidental source. In order to assess the performance of the global network, a multi-scale adaptive grid is optimised using a criterion based on degrees of freedom for the signal (DFS). The results show that several specific regions remain poorly observed by the IMS network. Finally, the inversion of the surface fluxes of Volatile Organic Compounds
Computation of Clebsch-Gordan and Gaunt coefficients using binomial coefficients
International Nuclear Information System (INIS)
Guseinov, I.I.; Oezmen, A.; Atav, Ue
1995-01-01
Using binomial coefficients, the Clebsch-Gordan and Gaunt coefficients were calculated for extremely large quantum numbers. The main advantage of this approach is that it calculates these coefficients directly, instead of using recursion relations. Accuracy of the results is quite high for quantum numbers l{sub 1} and l{sub 2} up to 100. Despite the direct calculation, the CPU times are found comparable with those given in the related literature. 11 refs., 1 fig., 2 tabs
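For orientation, the direct (non-recursive) evaluation idea can be sketched with Racah's closed-form factorial sum, whose terms are products of binomial-type factors; this is a generic sketch for integer quantum numbers, not the authors' exact binomial-coefficient formulation:

```python
from math import factorial, sqrt

def clebsch_gordan(j1, m1, j2, m2, J, M):
    """<j1 m1; j2 m2 | J M> evaluated directly from Racah's closed-form
    factorial sum (no recursion). Integer quantum numbers only, to keep
    the sketch short; exact work at large j would use rational arithmetic."""
    if M != m1 + m2:
        return 0.0
    # Triangle/normalisation prefactor.
    pref = (2 * J + 1) * factorial(j1 + j2 - J) * factorial(j1 - j2 + J) \
        * factorial(-j1 + j2 + J) / factorial(j1 + j2 + J + 1)
    pref *= factorial(j1 + m1) * factorial(j1 - m1) * factorial(j2 + m2) \
        * factorial(j2 - m2) * factorial(J + M) * factorial(J - M)
    # Alternating sum over all k keeping every factorial argument >= 0.
    total = 0.0
    for k in range(j1 + j2 - J + 1):
        args = (k, j1 + j2 - J - k, j1 - m1 - k, j2 + m2 - k,
                J - j2 + m1 + k, J - j1 - m2 + k)
        if any(a < 0 for a in args):
            continue
        term = (-1.0) ** k
        for a in args:
            term /= factorial(a)
        total += term
    return sqrt(pref) * total
```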
Energy Technology Data Exchange (ETDEWEB)
Abbassi, Yasser, E-mail: y.abbassi@mihanmail.ir [Department of Engineering, University of Shahid Beheshti, Tehran (Iran, Islamic Republic of); Asgarian, Shahla [Department of Chemical Engineering, Isfahan University, Tehran (Iran, Islamic Republic of); Ghahremani, Esmaeel; Abbasi, Mohammad [Department of Engineering, University of Shahid Beheshti, Tehran (Iran, Islamic Republic of)
2016-12-01
Highlights: • We carried out a CFD study to investigate transient natural convection in MNSR. • We applied a porous media approach to simplify the complex core of MNSR. • The method has been verified with experimental data. • The temperature difference between the core inlet and outlet has been obtained. • The flow pattern and temperature distribution have been presented. - Abstract: The small and complex core of the Isfahan Miniature Neutron Source Reactor (MNSR), in addition to its large tank, makes a parametric study of natural convection difficult to perform in terms of time and computational resources. In this study, in order to overcome this obstacle, the porous media approximation has been used. This numerical technique includes two steps: (a) calculation of porous media variables such as porosity and pressure drops in the core region, and (b) simulation of natural convection in the reactor tank by treating the core region as a porous medium. The simulation has been carried out with ANSYS FLUENT® Academic Research, Release 16.2. The core porous medium resistance factors have been estimated to be D{sub ij} = 1850 [1/m] and C{sub ij} = 415 [1/m{sup 2}]. Natural convection simulations with the Boussinesq approximation and with the variable-property assumption have been performed. The method has been verified against the experimental data and nuclear codes available in the literature. The average temperature difference between the experimental data and the results of this study was less than 0.5 °C for the variable-property technique and less than 2.0 °C for the Boussinesq approximation. The temperature distribution and flow pattern in the entire reactor have been obtained. Results show that the temperature difference between core outlet and inlet is about 18 °C, at a flow rate of about 0.004 kg/s. A full parametric study could be the topic of future investigations.
Larner, A J
2016-01-01
Calculation of correlation coefficients is often undertaken as a way of comparing different cognitive screening instruments (CSIs). However, test scores may correlate but not agree, and high correlation may mask lack of agreement between scores. The aim of this study was to use the methodology of Bland and Altman to calculate limits of agreement between the scores of selected CSIs and contrast the findings with Pearson's product moment correlation coefficients between the test scores of the same instruments. Datasets from three pragmatic diagnostic accuracy studies which examined the Mini-Mental State Examination (MMSE) vs. the Montreal Cognitive Assessment (MoCA), the MMSE vs. the Mini-Addenbrooke's Cognitive Examination (M-ACE), and the M-ACE vs. the MoCA were analysed to calculate correlation coefficients and limits of agreement between test scores. Although test scores were highly correlated (all >0.8), calculated limits of agreement were broad (all >10 points), and in one case, MMSE vs. M-ACE, >15 points. Correlation is not agreement. Highly correlated test scores may conceal broad limits of agreement, consistent with the different emphases of different tests with respect to the cognitive domains examined. Routine incorporation of limits of agreement into diagnostic accuracy studies which compare different tests merits consideration, to enable clinicians to judge whether or not their agreement is close. © 2016 S. Karger AG, Basel.
Apuani, Tiziana; Corazzato, Claudia
2015-04-01
instability-related numerical ratings are assigned to classes. An instability index map is then produced by assigning, to each areal elementary cell (in our case a 10 m pixel), the sum of the products of each weight factor and the normalized parameter rating coming from each input zonation map. This map is then suitably classified into landslide susceptibility classes (expressed as a percentage), enabling discrimination of areas prone to instability. Overall, the study area is characterized by a low propensity to slope instability. Few areas have an instability index of more than 45% of the theoretical maximum imposed by the matrix. These are located on the few steep slopes associated with active faults, and depend strongly on the seismic activity. Some other areas correspond to limited outcrops characterized by significantly reduced lithotechnical properties (low shear strength). The produced susceptibility map combines the application of the RES with the parameter zonation, following a methodology that had never before been applied in active volcanic environments. The comparison of the results with the ground deformation evidence coming from monitoring networks suggests the validity of the approach.
Determination of the surface drag coefficient
DEFF Research Database (Denmark)
Mahrt, L.; Vickers, D.; Sun, J.L.
2001-01-01
This study examines the dependence of the surface drag coefficient on stability, wind speed, mesoscale modulation of the turbulent flux and method of calculation of the drag coefficient. Data sets over grassland, sparse grass, heather and two forest sites are analyzed. For significantly unstable...... conditions, the drag coefficient does not depend systematically on z/L but decreases with wind speed for fixed intervals of z/L, where L is the Obukhov length. Even though the drag coefficient for weak wind conditions is sensitive to the exact method of calculation and choice of averaging time, the decrease...... of the drag coefficient with wind speed occurs for all of the calculation methods. A classification of flux calculation methods is constructed, which unifies the most common previous approaches. The roughness length corresponding to the usual Monin-Obukhov stability functions decreases with increasing wind...
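For reference, the two standard definitions underlying such an analysis can be sketched as follows (a minimal illustration of the bulk and neutral log-law forms, not the flux-calculation classification developed in the paper):

```python
from math import log

KAPPA = 0.4  # von Karman constant

def drag_coefficient(u_star, wind_speed):
    """Bulk definition C_D = (u*/U)^2 from the friction velocity u*."""
    return (u_star / wind_speed) ** 2

def neutral_drag_coefficient(z, z0):
    """C_DN = [kappa / ln(z/z0)]^2: the neutral-stability drag coefficient
    implied by the logarithmic wind profile at measurement height z over
    roughness length z0."""
    return (KAPPA / log(z / z0)) ** 2

# Illustrative numbers: 10 m measurement height, 1 cm roughness (grass):
c_dn = neutral_drag_coefficient(10.0, 0.01)   # ~3.4e-3
```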
Attenuation coefficients of soils
International Nuclear Information System (INIS)
Martini, E.; Naziry, M.J.
1989-01-01
As a prerequisite to the interpretation of gamma-spectrometric in situ measurements of activity concentrations of soil radionuclides, the attenuation of 60 to 1332 keV gamma radiation by soil samples varying in water content and density has been investigated. A useful empirical equation could be set up to describe the dependence of the mass attenuation coefficient upon photon energy for soil with a mean water content of 10%, with the results comparing well with data in the literature. The mean density of soil in the GDR was estimated at 1.6 g/cm³. This value was used to derive the linear attenuation coefficients, their range of variation being 10%. 7 figs., 5 tabs. (author)
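The conversion such work relies on, from a tabulated mass attenuation coefficient to a linear one, can be sketched in a few lines (the numerical values are illustrative, not the paper's):

```python
from math import exp

def linear_attenuation(mu_over_rho_cm2_g, density_g_cm3):
    """mu = (mu/rho) * rho: converts a tabulated mass attenuation
    coefficient (cm^2/g) to a linear one (1/cm) via the bulk density."""
    return mu_over_rho_cm2_g * density_g_cm3

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Beer-Lambert law: I/I0 = exp(-mu * x)."""
    return exp(-mu_per_cm * thickness_cm)

# Illustrative values (not the paper's): mu/rho ~ 0.057 cm^2/g for soil
# near 1332 keV, at a mean density of 1.6 g/cm^3.
mu = linear_attenuation(0.057, 1.6)     # ~0.091 cm^-1
frac = transmitted_fraction(mu, 10.0)   # fraction surviving 10 cm of soil
```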
Bayesian Meta-Analysis of Coefficient Alpha
Brannick, Michael T.; Zhang, Nanhua
2013-01-01
The current paper describes and illustrates a Bayesian approach to the meta-analysis of coefficient alpha. Alpha is the most commonly used estimate of the reliability or consistency (freedom from measurement error) for educational and psychological measures. The conventional approach to meta-analysis uses inverse variance weights to combine…
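The conventional inverse-variance pooling that the Bayesian approach is contrasted with can be sketched in a few lines (the alpha estimates and variances below are invented for illustration):

```python
def inverse_variance_pool(estimates, variances):
    """Fixed-effect pooling with weights w_i = 1/v_i: returns the
    weighted mean and its variance 1/sum(w_i)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Hypothetical coefficient-alpha estimates and sampling variances
# from three studies:
alpha_hat = [0.82, 0.78, 0.88]
var_hat = [0.001, 0.002, 0.004]
pooled_alpha, pooled_var = inverse_variance_pool(alpha_hat, var_hat)
# Precise studies (small variance) dominate the pooled estimate.
```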
DEFF Research Database (Denmark)
You, Shi; Hu, Junjie; Ziras, Charalampos
2016-01-01
The design and implementation of management policies for plug-in electric vehicles (PEVs) need to be supported by a holistic understanding of the functional processes, their complex interactions, and their response to various changes. Models developed to represent different functional processes...... and systems are seen as useful tools to support the related studies for different stakeholders in a tangible way. This paper presents an overview of modeling approaches applied to support aggregation-based management and integration of PEVs from the perspective of fleet operators and grid operators......, respectively. We start by explaining a structured modeling approach, i.e., a flexible combination of process models and system models, applied to different management and integration studies. A state-of-the-art overview of modeling approaches applied to represent several key processes, such as charging...
Virial Coefficients for the Liquid Argon
Korth, Micheal; Kim, Saesun
2014-03-01
We begin with a geometric model of hard colliding spheres and calculate probability densities in an iterative sequence of calculations that lead to the pair correlation function. The model is based on a kinetic theory approach developed by Shinomoto, to which we added an interatomic potential for argon based on the model from Aziz. From values of the pair correlation function at various values of density, we were able to find virial coefficients of liquid argon. The low order coefficients are in good agreement with theoretical hard sphere coefficients, but appropriate data for argon to which these results might be compared is difficult to find.
Energy Technology Data Exchange (ETDEWEB)
Stojanovic, B.; Hallberg, D.; Akander, J. [Building Materials Technology, KTH Research School, Centre for Built Environment, University of Gaevle, SE-801 76 Gaevle (Sweden)
2010-10-15
This paper presents the thermal modelling of an unglazed solar collector (USC) flat panel, with the aim of producing a detailed yet swift thermal steady-state model. The model is analytical, one-dimensional (1D) and derived by a fin-theory approach. It represents the thermal performance of an arbitrary duct with applied boundary conditions equal to those of a flat panel collector. The derived model is meant to be used for efficient optimisation and design of USC flat panels (or similar applications), as well as detailed thermal analysis of temperature fields and heat transfer distributions/variations at steady-state conditions, without requiring a large amount of computational power and time. Detailed surface temperatures are necessary features for durability studies of the surface coating, hence the effect of coating degradation on USC and system performance. The model accuracy and proficiency have been benchmarked against a detailed three-dimensional Finite Difference Model (3D FDM) and two simpler 1D analytical models. Results from the benchmarking test show that the fin-theory model has excellent capabilities of calculating energy performances and fluid temperature profiles, as well as detailed material temperature fields and heat transfer distributions/variations (at steady-state conditions), while still being suitable for component analysis in conjunction with system simulations, as the model is analytical. The accuracy of the model is high in comparison to the 3D FDM (the prime benchmark), as long as the fin-theory assumption prevails (no, or negligible, temperature gradient in the fin perpendicular to the fin length). Comparison with the other models also shows that when the USC duct material has a high thermal conductivity, the cross-sectional material temperature adopts an isothermal state (for the assessed USC duct geometry), which makes the 1D isothermal model valid. When the USC duct material has a low thermal conductivity, the heat transfer
Relativistic neoclassical transport coefficients with momentum correction
International Nuclear Information System (INIS)
Marushchenko, I.; Azarenkov, N.A.
2016-01-01
The parallel momentum correction technique is generalized for relativistic approach. It is required for proper calculation of the parallel neoclassical flows and, in particular, for the bootstrap current at fusion temperatures. It is shown that the obtained system of linear algebraic equations for parallel fluxes can be solved directly without calculation of the distribution function if the relativistic mono-energetic transport coefficients are already known. The first relativistic correction terms for Braginskii matrix coefficients are calculated.
International Nuclear Information System (INIS)
Zhang Yi; Wei Wei-Wei; Cheng Teng-Fei; Song Yang
2011-01-01
In this paper, we apply the binary Bell polynomial approach to high-dimensional variable-coefficient nonlinear evolution equations. Taking the generalized (2+1)-dimensional KdV equation with variable coefficients as an illustrative example, the bilinear formalism, the bilinear Bäcklund transformation and the Lax pair are obtained in a quick and natural manner. Moreover, the infinite conservation laws are also derived. (general)
ANL results for LMFR reactivity coefficients benchmark
International Nuclear Information System (INIS)
Hill, Robert
2000-01-01
The fast reactor analysis methods developed at ANL were extensively tested in ZPR and ZPPR experiments and applied to the EBR-2 and FFTF test reactors. The basic nuclear data libraries used were ENDF/B-V.2, processed with the ETOE-2 code, and ENDF/B-VI. Multigroup constants were generated by the MC{sup 2}-2 code. Neutron flux calculations were done with the DIF3D code, applying neutron diffusion theory and the finite-difference method. The results obtained include basic parameters; fuel and structure regional Doppler coefficients; fuel geometry expansion coefficients; and kinetics parameters. In general, agreement between phase 1 and phase 2 results was excellent
The Truth About Ballistic Coefficients
Courtney, Michael; Courtney, Amy
2007-01-01
The ballistic coefficient of a bullet describes how it slows in flight due to air resistance. This article presents experimental determinations of ballistic coefficients showing that the majority of bullets tested have their previously published ballistic coefficients exaggerated by 5-25% by the bullet manufacturers. These exaggerated ballistic coefficients lead to inaccurate predictions of long range bullet drop, retained energy and wind drift.
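One way to see how a BC exaggeration arises: BC is sectional density over form factor, so understating the form factor inflates BC proportionally (the bullet numbers below are illustrative, not the article's measurements):

```python
GRAINS_PER_POUND = 7000.0

def sectional_density(mass_grains, diameter_in):
    """SD = m / d^2 in lb/in^2."""
    return (mass_grains / GRAINS_PER_POUND) / diameter_in ** 2

def ballistic_coefficient(mass_grains, diameter_in, form_factor):
    """BC = SD / i, so understating the form factor i inflates BC."""
    return sectional_density(mass_grains, diameter_in) / form_factor

# Hypothetical .30-cal, 168-grain bullet:
bc_published = ballistic_coefficient(168.0, 0.308, 0.90)  # optimistic i
bc_measured = ballistic_coefficient(168.0, 0.308, 1.08)   # measured i
exaggeration = bc_published / bc_measured - 1.0           # 1.08/0.90 - 1 = 0.20
```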
Confidence bounds for normal and lognormal distribution coefficients of variation
Steve Verrill
2003-01-01
This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
Anikushina, V.; Taratukhin, V.; Stutterheim, C. v.; Gushin, V.
2018-02-01
A new psycholinguistic view on the crew communication, combined with biochemical and psychological data, contributes to noninvasive methods for stress appraisal and proposes alternative approaches to improve in-group communication and cohesion.
National Research Council Canada - National Science Library
Shafer, Deborah
2002-01-01
... in a region The approach was initially designed to be used in the context of the Clean Water Act Section 404 Regulatory Program permit review sequence to consider alternatives, minimize impacts, assess...
Robertson, Peter J.
2015-01-01
Amartya Sen's capability approach characterizes an individual's well-being in terms of what they are able to be, and what they are able to do. This framework for thinking has many commonalities with the core ideas in career guidance. Sen's approach is abstract and not in itself a complete or explanatory theory, but a case can be…
A Simple Measure of Price Adjustment Coefficients.
Damodaran, Aswath
1993-01-01
One measure of market efficiency is the speed with which prices adjust to new information. The author develops a simple approach to estimating these price adjustment coefficients by using the information in return processes. This approach is used to estimate the price adjustment coefficients for firms listed on the NYSE and the AMEX as well as for over-the-counter stocks. The author finds evidence of a lagged adjustment to new information in shorter return intervals for firms in all market ...
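The underlying partial-adjustment idea can be sketched in a toy form: each period the price closes a fraction g of the gap to the new intrinsic value, so the gap decays as (1 - g)^t (an illustration of the concept, not the author's return-based estimator):

```python
# Partial price adjustment: g = 1 means full, immediate adjustment;
# g < 1 means a lagged response to new information.
def adjust(price, value, g):
    return price + g * (value - price)

price, value, g = 100.0, 110.0, 0.4  # illustrative numbers
path = []
for _ in range(5):
    price = adjust(price, value, g)
    path.append(round(price, 2))
# The remaining gap after t periods is (value - 100) * (1 - g)**t.
```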
Multiphoton absorption coefficients in solids: a universal curve
International Nuclear Information System (INIS)
Brandi, H.S.; Araujo, C.B. de
1983-04-01
A universal curve for the frequency dependence of the multiphoton absorption coefficient is proposed, based on a 'non-perturbative' approach. Specific applications have been made to obtain the two-, three-, four- and five-photon absorption coefficients in different materials. Proper scaling of the two-photon absorption coefficient and the use of the universal curve yield results for the higher order absorption coefficients in good agreement with the experimental data. (Author) [pt
Schneider, Claudia; Arnot, Madeleine
2018-01-01
This article explores the modes of school communication associated with language and cultural diversity, demonstrating how organisational communication theory can be applied to the analysis of schools' communication responses to the presence of pupils who have English as an additional language (EAL). The article highlights three analytical…
Yonker, Julie E.
2011-01-01
With the advent of online test banks and large introductory classes, instructors have often turned to textbook publisher-generated multiple-choice question (MCQ) exams in their courses. Multiple-choice questions are often divided into categories of factual or applied, thereby implicating levels of cognitive processing. This investigation examined…
Errami, Youssef; Obbadi, Abdellatif; Sahnoun, Smail; Ouassaid, Mohammed; Maaroufi, Mohamed
2018-05-01
This paper proposes a Direct Torque Control (DTC) method, combined with a Backstepping approach, for a Wind Power System (WPS) based on a Permanent Magnet Synchronous Generator (PMSG). In this work, a generator-side converter and a grid-side converter with filter are used as the interface between the wind turbine and the grid. The Backstepping approach demonstrates great performance in the control of complicated nonlinear systems such as the WPS. The control method therefore combines DTC, to achieve Maximum Power Point Tracking (MPPT), with the Backstepping approach, to sustain the DC-bus voltage and to regulate the grid-side power factor. In addition, the control strategy is developed in the sense of the Lyapunov stability theorem for the WPS. Simulation results using MATLAB/Simulink validate the effectiveness of the proposed controllers.
Read, B.; Blok, H.E.; de Vries, A.P.; Blanken, Henk; Apers, Peter M.G.
Data abstraction and query processing techniques are usually studied in the domain of administrative applications. We present a case-study in the non-standard domain of (multimedia) information retrieval, mainly intended as a feasibility study in favor of the `database approach' to data management.
Goldingay, S.; Dieppe, P.; Mangan, M.; Marsden, D.
2014-01-01
This critical reflection is based on the belief that creative practitioners should be using their own well-established approaches to trouble dominant paradigms in health and care provision to both form and inform the future of healing provision and well-being creation. It describes work by a transdisciplinary team (drama and medicine) that is…
Zabinski, Joseph W; Garcia-Vargas, Gonzalo; Rubio-Andrade, Marisela; Fry, Rebecca C; Gibson, Jacqueline MacDonald
2016-05-10
Dose-response functions used in regulatory risk assessment are based on studies of whole organisms and fail to incorporate genetic and metabolomic data. Bayesian belief networks (BBNs) could provide a powerful framework for incorporating such data, but no prior research has examined this possibility. To address this gap, we develop a BBN-based model predicting birthweight at gestational age from arsenic exposure via drinking water and maternal metabolic indicators using a cohort of 200 pregnant women from an arsenic-endemic region of Mexico. We compare BBN predictions to those of prevailing slope-factor and reference-dose approaches. The BBN outperforms prevailing approaches in balancing false-positive and false-negative rates. Whereas the slope-factor approach had 2% sensitivity and 99% specificity and the reference-dose approach had 100% sensitivity and 0% specificity, the BBN's sensitivity and specificity were 71% and 30%, respectively. BBNs offer a promising opportunity to advance health risk assessment by incorporating modern genetic and metabolomic data.
Applying DOE's Graded Approach for assessing radiation impacts to non-human biota at the INL
International Nuclear Information System (INIS)
Morris, Randall C.
2006-01-01
In July 2002, the US Department of Energy (DOE) released a new technical standard entitled A Graded Approach for Evaluating Radiation Doses to Aquatic and Terrestrial Biota. DOE facilities are annually required to demonstrate that routine radioactive releases from their sites are protective of non-human receptors, and sites are encouraged to use the Graded Approach for this purpose. Use of the Graded Approach requires completion of several preliminary steps to evaluate the degree to which the site environmental monitoring program is appropriate for evaluating impacts to non-human biota. We completed these necessary activities at the Idaho National Laboratory (INL) using the following four tasks: (1) develop conceptual models and evaluate exposure pathways; (2) define INL evaluation areas; (3) evaluate sampling locations and media; (4) evaluate data gaps. All of the information developed in the four steps was incorporated, data sources were identified, departures from the Graded Approach were justified, and a step-by-step procedure for biota dose assessment at the INL was specified. Finally, we completed a site-wide biota dose assessment using the 2002 environmental surveillance data and an offsite assessment using soil and surface water data collected since 1996. These assessments demonstrated that the environmental concentrations of radionuclides measured on and near the INL do not present significant risks to populations of non-human biota
Directory of Open Access Journals (Sweden)
Amir Saffari
2013-12-01
Full Text Available Analysis of the potential for spontaneous combustion in coal layers with analytical and numerical methods has always been considered a difficult task because of the complexity of coal behavior and the number of factors influencing it. Empirical methods, because they account only for certain specific factors, are not accurate and efficient in all situations. The Rock Engineering Systems (RES) approach, a systematic method for analysis and classification, is proposed for engineering projects. The present study is concerned with employing the RES approach to categorize coal spontaneous combustion in coal regions. Using this approach, the interactions of the parameters affecting each other, on an equal scale, in coal spontaneous combustion were evaluated. The intrinsic, geological and mining characteristics of coal seams were studied in order to identify the important parameters. Then, the main stages of the RES method, i.e. forming the interaction matrix, coding the matrix and forming the category list, were performed. A Coal Spontaneous Combustion Potential index (CSCPi) was then determined to formulate the mathematical equation. The data related to the intrinsic, geological and mining parameters, and the special index, were calculated for each layer in the case study (Pashkalat coal region, Iran). The study thus offers a complete and comprehensive classification of the layers. Finally, by using the spontaneous combustion events that have occurred in the Pashkalat coal region, an initial validation of this systematic approach in the study area was conducted, which suggested relatively good concordance for the Pashkalat coal region.
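The RES interaction-matrix bookkeeping behind such a classification can be sketched as follows (the 3×3 coding matrix is invented for illustration; real matrices are expert-coded over the identified parameters):

```python
# Off-diagonal entry matrix[i][j] codes how strongly parameter i
# influences parameter j; the diagonal is unused.
matrix = [
    [0, 2, 1],
    [1, 0, 3],
    [2, 1, 0],
]

n = len(matrix)
cause = [sum(row) for row in matrix]                               # C_i: row sums
effect = [sum(matrix[i][j] for i in range(n)) for j in range(n)]   # E_j: column sums
total = sum(cause)                                                 # == sum(effect)
# Percentage weight of each parameter in the combustion-potential index:
weights = [100.0 * (cause[i] + effect[i]) / (2 * total) for i in range(n)]
```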
Gholam, Alain
2017-01-01
Visual thinking routines are principles based on several theories, approaches, and strategies. Such routines promote thinking skills, call for collaboration and sharing of ideas, and above all, make thinking and learning visible. Visual thinking routines were implemented in the teaching methodology graduate course at the American University in…
Directory of Open Access Journals (Sweden)
Shi You
2016-11-01
Full Text Available The design and implementation of management policies for plug-in electric vehicles (PEVs) need to be supported by a holistic understanding of the functional processes, their complex interactions, and their response to various changes. Models developed to represent different functional processes and systems are seen as useful tools to support the related studies for different stakeholders in a tangible way. This paper presents an overview of modeling approaches applied to support aggregation-based management and integration of PEVs from the perspective of fleet operators and grid operators, respectively. We start by explaining a structured modeling approach, i.e., a flexible combination of process models and system models, applied to different management and integration studies. A state-of-the-art overview of modeling approaches applied to represent several key processes, such as charging management, and key systems, such as the PEV fleet, is then presented, along with a detailed description of different approaches. Finally, we discuss several considerations that need to be well understood during the modeling process in order to assist modelers and model users in the appropriate decisions of using existing, or developing their own, solutions for further applications.
Probabilistic calibration of safety coefficients for flawed components in nuclear engineering
International Nuclear Information System (INIS)
Ardillon, E.; Pitner, P.; Barthelet, B.; Remond, A.
1996-01-01
The rules currently applied to verify the acceptance of flaws in nuclear components rely on deterministic criteria intended to ensure the safe operation of plants. The interest of having a precise and reliable method to evaluate the safety margins and the integrity of components led Electricite de France to launch an approach linking safety coefficients directly with safety levels. This paper presents a probabilistic methodology to calibrate safety coefficients in relation to reliability target values. The proposed calibration procedure is applied to the case of a flawed ferritic pipe, using the R6 procedure for assessing the integrity of the structure. (authors). 5 refs., 5 figs
Drag Coefficient Estimation in Orbit Determination
McLaughlin, Craig A.; Manee, Steve; Lichtenberg, Travis
2011-07-01
Drag modeling is the greatest uncertainty in the dynamics of low Earth satellite orbits where ballistic coefficient and density errors dominate drag errors. This paper examines fitted drag coefficients found as part of a precision orbit determination process for Stella, Starlette, and the GEOSAT Follow-On satellites from 2000 to 2005. The drag coefficients for the spherical Stella and Starlette satellites are assumed to be highly correlated with density model error. The results using MSIS-86, NRLMSISE-00, and NRLMSISE-00 with dynamic calibration of the atmosphere (DCA) density corrections are compared. The DCA corrections were formulated for altitudes of 200-600 km and are found to be inappropriate when applied at 800 km. The yearly mean fitted drag coefficients are calculated for each satellite for each year studied. The yearly mean drag coefficients are higher for Starlette than Stella, where Starlette is at a higher altitude. The yearly mean fitted drag coefficients for all three satellites decrease as solar activity decreases after solar maximum.
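The reason fitted drag coefficients soak up density-model error is visible in the standard cannonball drag model itself, where density and drag coefficient enter only as a product (a sketch with illustrative numbers, not the paper's orbit-determination setup):

```python
def drag_acceleration(rho, cd, area_m2, mass_kg, v_m_s):
    """a_drag = -(1/2) * rho * (Cd * A / m) * v^2 along the velocity
    vector. rho and Cd appear only as a product, so any density model
    error maps directly into the fitted Cd."""
    return -0.5 * rho * cd * area_m2 / mass_kg * v_m_s ** 2

# Illustrative spherical satellite near 800 km altitude:
a1 = drag_acceleration(1.0e-14, 2.2, 0.05, 50.0, 7450.0)
a2 = drag_acceleration(2.0e-14, 1.1, 0.05, 50.0, 7450.0)
# Doubling the density while halving Cd gives the identical acceleration:
# the two errors are indistinguishable in the fit.
```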
Åstrøm, Anne N; Lie, Stein Atle; Gülcan, Ferda
2018-05-31
Understanding factors that affect dental attendance behavior helps in constructing effective oral health campaigns. A socio-cognitive model that adequately explains variance in regular dental attendance has yet to be validated among younger adults in Norway. Focusing on a representative sample of younger Norwegian adults, this cross-sectional study provided an empirical test of the Theory of Planned Behavior (TPB) augmented with descriptive norm and action planning, and estimated direct and indirect effects of attitudes, subjective norms, descriptive norms, perceived behavioral control and action planning on intended and self-reported regular dental attendance. Self-administered questionnaires completed by 2551 25-35 year olds, randomly selected from the Norwegian national population registry, were used to assess socio-demographic factors, dental attendance, and the constructs of the augmented TPB model (attitudes, subjective norms, descriptive norms, intention, action planning). A two-stage process of structural equation modelling (SEM) was used to test the augmented TPB model. Confirmatory factor analysis (CFA) confirmed the proposed correlated 6-factor measurement model after re-specification. SEM revealed that attitudes (β = 0.70), perceived behavioral control (β = 0.18), subjective norms (β = -0.17) and descriptive norms (β = 0.11) explained intention. Intention and action planning (β = 0.19) in turn predicted dental attendance behavior, with indirect effects on behavior operating through action planning and through intention and action planning, respectively. The final model explained 64 and 41% of the total variance in intention and dental attendance behavior. The findings support the utility of the TPB, the expanded normative component and action planning in predicting younger adults' intended and self-reported dental attendance. Interventions targeting young adults' dental
On the Kendall Correlation Coefficient
Stepanov, Alexei
2015-01-01
In the present paper, we first discuss the Kendall rank correlation coefficient. In the continuous case, we define the Kendall rank correlation coefficient in terms of the concomitants of order statistics, find the expected value of the Kendall rank correlation coefficient and show that the latter is free of n. We also prove that in the continuous case the Kendall correlation coefficient converges in probability to its expected value. We then propose to consider the expected value of the Kendall rank ...
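As a concrete companion to the coefficient discussed above, here is a minimal stdlib-Python sketch of the sample Kendall coefficient (concordant minus discordant pairs over the total number of pairs), with a small Monte Carlo check that its mean under independence stays near zero for different sample sizes n. This only illustrates the "free of n" property of the expectation; it is not the paper's concomitant-based derivation:

```python
from itertools import combinations
import random

def kendall_tau(x, y):
    """Kendall rank correlation: (concordant - discordant) / (n choose 2)."""
    pairs = list(combinations(range(len(x)), 2))
    c = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) > 0)
    d = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) < 0)
    return (c - d) / len(pairs)

print(kendall_tau([1, 2, 3], [1, 3, 2]))   # one discordant pair out of three

# Under independence, the Monte Carlo mean of tau is near 0 for any n.
random.seed(0)
for n in (5, 20):
    mean_tau = sum(kendall_tau([random.random() for _ in range(n)],
                               [random.random() for _ in range(n)])
                   for _ in range(2000)) / 2000
    print(n, round(mean_tau, 2))
```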
International Nuclear Information System (INIS)
Solares, G.R.; Zamenhof, R.G.
1995-01-01
A novel approach to the microdosimetry of neutron capture therapy has been developed using high-resolution quantitative autoradiography (HRQAR) and two-dimensional Monte Carlo simulation. This approach has been applied using actual cell morphology (nuclear and cytoplasmic cell structures) and the measured microdistribution of boron-10 in a transplanted murine brain tumor (GL261) containing p-boronophenylalanine (BPA) as the boron compound. The 2D Monte Carlo transport code for the α and 7Li charged particles from the 10B(n,α)7Li reaction has been developed as a surrogate for a full 3D approach to calculate a variety of different microdosimetric parameters. The HRQAR method and the surrogate 2D Monte Carlo approach are described in detail and examples of their use are presented. 27 refs., 11 figs., 1 tab
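A toy version of the 2D Monte Carlo geometry conveys the basic idea. The sketch below (a hypothetical geometry, not the authors' transport code) estimates the fraction of straight tracks, emitted isotropically from a boron site at distance d from a circular cell nucleus of radius r, that intersect the nucleus, and compares it with the closed-form 2D answer arcsin(r/d)/π:

```python
import math, random

def hit_fraction(d, r, n_tracks=200_000, seed=1):
    """Fraction of isotropic 2D tracks from a point source at distance d
    from the centre of a circular nucleus of radius r that intersect it
    (track range assumed larger than d + r)."""
    random.seed(seed)
    hits = 0
    for _ in range(n_tracks):
        theta = random.uniform(0.0, 2.0 * math.pi)
        # Nucleus centre at (d, 0); the ray from the origin in direction
        # theta hits the circle iff |d*sin(theta)| <= r and cos(theta) > 0.
        if math.cos(theta) > 0 and abs(d * math.sin(theta)) <= r:
            hits += 1
    return hits / n_tracks

d, r = 8.0, 3.0            # micrometre-scale values, purely illustrative
mc = hit_fraction(d, r)
exact = math.asin(r / d) / math.pi
print(round(mc, 3), round(exact, 3))
```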
Verveer, P. J; Gemkow, M. J; Jovin, T. M
1999-01-01
We have compared different image restoration approaches for fluorescence microscopy. The most widely used algorithms were classified within a Bayesian framework according to the assumed noise model and the type of regularization imposed. We considered both Gaussian and Poisson models for the noise in combination with Tikhonov regularization, entropy regularization, Good's roughness, and no regularization (maximum likelihood estimation). Simulations of fluorescence confocal imaging were used to examine the different noise models and regularization approaches using the mean squared error criterion. The assumption of a Gaussian noise model yielded only slightly higher errors than the Poisson model. Good's roughness was the best choice for the regularization. Furthermore, we compared simulated confocal and wide-field data. In general, restored confocal data are superior to restored wide-field data, but given a sufficiently higher signal level for the wide-field data, the restoration result may rival confocal data in quality. Finally, a visual comparison of experimental confocal and wide-field data is presented.
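For the Poisson noise model without regularization, maximum likelihood estimation leads to the classic Richardson-Lucy iteration. The stdlib-Python sketch below applies it to a 1D toy signal; the paper works with 2D/3D microscopy images and also considers Gaussian models and several regularizers, none of which are included here:

```python
def convolve(signal, kernel):
    """'Same'-size 1D convolution with a symmetric kernel (zero padding)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(signal):
                s += w * signal[j]
        out.append(s)
    return out

def richardson_lucy(data, kernel, n_iter=200):
    """Unregularized Poisson ML deconvolution (Richardson-Lucy):
    o <- o * K^T(d / (K o)); the kernel is symmetric here, so K^T = K."""
    est = [sum(data) / len(data)] * len(data)   # flat initial estimate
    for _ in range(n_iter):
        blurred = convolve(est, kernel)
        ratio = [d / max(b, 1e-12) for d, b in zip(data, blurred)]
        corr = convolve(ratio, kernel)
        est = [e * c for e, c in zip(est, corr)]
    return est

psf = [0.25, 0.5, 0.25]                 # illustrative blur kernel
truth = [0, 0, 0, 10, 0, 0, 0, 6, 0, 0]
data = convolve(truth, psf)             # noise-free blurred observation
restored = richardson_lucy(data, psf)
print([round(v, 1) for v in restored])
```

On this noiseless example the iteration progressively re-concentrates intensity at the two original spike positions.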
Hattori, Y.; Ushiki, H.; Engl, W.; Courbin, L.; Panizza, P.
2005-08-01
Within the framework of an effective medium approach and a mean-field approximation, we present a simple lattice model to treat electrical percolation in the presence of attractive interactions. We show that the percolation line depends on the magnitude of interactions. In 2 dimensions, the percolation line meets the binodal line at the critical point. A good qualitative agreement is observed with experimental results on a ternary AOT-based water-in-oil microemulsion system.
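The paper's effective-medium treatment of interacting droplets does not condense into a few lines, but the underlying percolation phenomenon, connectivity switching on sharply as the occupation probability crosses a threshold, can be illustrated with plain (non-interacting) 2D site percolation:

```python
import random

def spans(p, n, rng):
    """True if occupied sites form a top-to-bottom path on an n x n lattice."""
    grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    frontier = [(0, j) for j in range(n) if grid[0][j]]
    seen = set(frontier)
    while frontier:
        i, j = frontier.pop()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] \
                    and (ni, nj) not in seen:
                seen.add((ni, nj))
                frontier.append((ni, nj))
    return False

def spanning_probability(p, n=20, trials=300, seed=42):
    rng = random.Random(seed)
    return sum(spans(p, n, rng) for _ in range(trials)) / trials

low, high = spanning_probability(0.45), spanning_probability(0.70)
print(low, high)   # spanning becomes likely above the threshold (~0.593)
```

In the paper's system, attractive interactions effectively shift where this connectivity transition occurs, which is why the percolation line depends on the interaction strength.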
Directory of Open Access Journals (Sweden)
E. Larour
2016-11-01
Full Text Available Within the framework of sea-level rise projections, there is a strong need for hindcast validation of the evolution of polar ice sheets in a way that tightly matches observational records (mainly from radar, gravity, and altimetry observations). However, the computational requirements for making hindcast reconstructions possible are severe and rely mainly on the evaluation of the adjoint state of transient ice-flow models. Here, we look at the computation of adjoints in the context of the NASA/JPL/UCI Ice Sheet System Model (ISSM), written in C++ and designed for parallel execution with MPI. We present the adaptations required in the way the software is designed and written, but also generic adaptations in the tools facilitating the adjoint computations. We concentrate on the use of operator overloading coupled with the AdjoinableMPI library to achieve the adjoint computation of the ISSM. We present a comprehensive approach to (1) carry out type changing through the ISSM, hence facilitating operator overloading, (2) bind to external solvers such as MUMPS and GSL-LU, and (3) handle MPI-based parallelism to scale the capability. We demonstrate the success of the approach by computing sensitivities of hindcast metrics such as the misfit to observed records of surface altimetry on the northeastern Greenland Ice Stream, or the misfit to observed records of surface velocities on Upernavik Glacier, central West Greenland. We also provide metrics for the scalability of the approach, and the expected performance. This approach has the potential to enable a new generation of hindcast-validated projections that make full use of the wealth of datasets currently being collected, or already collected, in Greenland and Antarctica.
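The type change that enables operator overloading can be illustrated with a toy forward-mode AD class. The ISSM work uses reverse-mode differentiation with the AdjoinableMPI library, so this sketch only shows the flavor of the technique: once a code's scalars are retyped, unchanged numeric routines propagate derivatives automatically.

```python
class Dual:
    """Minimal forward-mode AD value: tracks (value, derivative).
    Overloaded operators let existing numeric code propagate derivatives
    once its scalars are retyped, mirroring the type change described above."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)
    def __add__(self, other):
        o = self._lift(other)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, other):
        o = self._lift(other)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__
    def __sub__(self, other):
        o = self._lift(other)
        return Dual(self.val - o.val, self.dot - o.dot)

def model(x):
    # Any plain numeric routine works unchanged after the type change.
    return 3 * x * x - 2 * x + 5

seed = Dual(2.0, 1.0)          # differentiate with respect to x at x = 2
out = model(seed)
print(out.val, out.dot)        # f(2) = 13, f'(2) = 10
```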
Genser, Bernd; Fischer, Joachim E; Figueiredo, Camila A; Alcântara-Neves, Neuza; Barreto, Mauricio L; Cooper, Philip J; Amorim, Leila D; Saemann, Marcus D; Weichhart, Thomas; Rodrigues, Laura C
2016-05-20
Immunologists often measure several correlated immunological markers, such as concentrations of different cytokines produced by different immune cells and/or measured under different conditions, to draw insights from complex immunological mechanisms. Although there have been recent methodological efforts to improve the statistical analysis of immunological data, a framework is still needed for the simultaneous analysis of multiple, often correlated, immune markers. This framework would allow the immunologists' hypotheses about the underlying biological mechanisms to be integrated. We present an analytical approach for statistical analysis of correlated immune markers, such as those commonly collected in modern immuno-epidemiological studies. We demonstrate i) how to deal with interdependencies among multiple measurements of the same immune marker, ii) how to analyse association patterns among different markers, iii) how to aggregate different measures and/or markers to immunological summary scores, iv) how to model the inter-relationships among these scores, and v) how to use these scores in epidemiological association analyses. We illustrate the application of our approach to multiple cytokine measurements from 818 children enrolled in a large immuno-epidemiological study (SCAALA Salvador), which aimed to quantify the major immunological mechanisms underlying atopic diseases or asthma. We demonstrate how to aggregate systematically the information captured in multiple cytokine measurements to immunological summary scores aimed at reflecting the presumed underlying immunological mechanisms (Th1/Th2 balance and immune regulatory network). We show how these aggregated immune scores can be used as predictors in regression models with outcomes of immunological studies (e.g. specific IgE) and compare the results to those obtained by a traditional multivariate regression approach. The proposed analytical approach may be especially useful to quantify complex immune
Pol, Rafel; Hristovski, Robert; Medina, Daniel; Balague, Natalia
2018-04-19
A better understanding of how sports injuries occur is needed, for medical, economic, scientific and sporting reasons, in order to improve their prevention. This narrative review aims to explain the mechanisms that underlie the occurrence of sports injuries and an innovative approach for their prevention on the basis of a complex dynamic systems approach. First, we explain the multilevel organisation of living systems and how the function of the musculoskeletal system may be impaired. Second, we use both a constraints approach and a connectivity hypothesis to explain why and how the susceptibility to sports injuries may suddenly increase. Constraints acting at multiple levels and timescales replace the static and linear concept of risk factors, and the connectivity hypothesis brings an understanding of how the accumulation of microinjuries creates a macroscopic non-linear effect, that is, how a common motor action may trigger a severe injury. Finally, a recap of practical examples and challenges for the future illustrates how the complex dynamic systems standpoint, by changing the way of thinking about sports injuries, offers innovative ideas for improving sports injury prevention. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Eweys, Omar Ali; Elwan, Abeer A.; Borham, Taha I.
2017-12-01
This manuscript proposes an approach for estimating soil moisture content over corn fields using C-band SAR data acquired by the RADARSAT-2 satellite. An image-based approach is employed to remove the vegetation contribution to the satellite signals. In particular, the absolute difference between like- and cross-polarized signals (ADLC) is employed to segment the canopy growth cycle into short stages. Each stage is represented by a cumulative distribution function (CDF) of the like-polarized signals. CDFs for periods of bare soil and vegetation cover are compared, and the vegetation contribution is quantified. The portions representing the soil contribution (σ°HH,soil) to the satellite signals are then used to invert the Oh model and the water cloud model, estimating soil moisture, canopy water content and canopy height, respectively. The proposed approach shows satisfactory performance, with high coefficients of determination (R²) between the field observations and the corresponding retrievals of soil moisture, canopy water content and canopy height (R² = 0.64, 0.97 and 0.98, respectively). Soil moisture retrieval is associated with a root mean square error (RMSE) of 0.03 m³ m⁻³, while the estimates of canopy water content and canopy height have RMSEs of 0.38 kg m⁻² and 0.166 m, respectively.
Schaarup, Clara; Hejlesen, Ole Kristian
2016-01-01
Objective. The aim of the present study is to evaluate the usability of the telehealth system, coined Telekit, by using an iterative, mixed usability approach. Materials and Methods. Ten double experts participated in two heuristic evaluations (HE1, HE2), and 11 COPD patients attended two think-aloud tests. The double experts identified usability violations and classified them into Jakob Nielsen's heuristics. These violations were then translated into measurable values on a scale of 0 to 4 indicating degree of severity. In the think-aloud tests, COPD participants were invited to verbalise their thoughts. Results. The double experts identified 86 usability violations in HE1 and 101 usability violations in HE2. The majority of the violations were rated in the 0–2 range. The findings from the think-aloud tests resulted in 12 themes and associated examples regarding the usability of the Telekit system. The use of the iterative, mixed usability approach produced both quantitative and qualitative results. Conclusion. The iterative, mixed usability approach yields a strong result owing to the high number of problems identified in the tests because the double experts and the COPD participants focus on different aspects of Telekit's usability. This trial is registered with Clinicaltrials.gov, NCT01984840, November 14, 2013. PMID:27974888
International Nuclear Information System (INIS)
Sacco, Wagner F.; Machado, Marcelo D.; Pereira, Claudio M.N.A.; Schirru, Roberto
2004-01-01
This article extends previous efforts on genetic algorithms (GAs) applied to a core design optimization problem. We introduce the application of a new Niching Genetic Algorithm (NGA) to this problem and compare its performance to these previous works. The optimization problem consists of adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak factor in a three-enrichment-zone reactor, subject to restrictions on the average thermal flux, criticality and sub-moderation. After exhaustive experiments, we observed that our new niching method performs better than the conventional GA due to a greater exploration of the search space
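The abstract does not specify which niching mechanism the NGA uses, so as a hypothetical illustration here is classic fitness sharing, one standard niching scheme: each individual's fitness is divided by a niche count that grows with the number of nearby individuals, so crowded peaks are penalized and multiple optima can coexist in the population.

```python
def shared_fitness(population, raw_fitness, sigma_share=0.1, alpha=1.0):
    """Classic fitness sharing: divide each raw fitness by a niche count
    that grows with the number of nearby individuals (1D genotypes here)."""
    shared = []
    for i, xi in enumerate(population):
        niche = 0.0
        for xj in population:
            d = abs(xi - xj)
            if d < sigma_share:
                niche += 1.0 - (d / sigma_share) ** alpha
        shared.append(raw_fitness[i] / niche)
    return shared

# Two equally fit peaks: one crowded, one sparse.
pop = [0.20, 0.21, 0.22, 0.80]
raw = [1.0, 1.0, 1.0, 1.0]
print([round(f, 2) for f in shared_fitness(pop, raw)])
```

The lone individual at 0.80 keeps its full fitness while the three crowded ones near 0.21 are discounted, so selection pressure maintains both niches instead of collapsing onto one, the greater search-space exploration the abstract credits for the NGA's advantage.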
International Nuclear Information System (INIS)
Clergeau, Jean-Francois; Ferraton, Matthieu; Guerard, Bruno; Khaplanov, Anton; Piscitelli, Francesco; Platz, Martin; Rigal, Jean-Marie; Van Esch, Patrick; Daulle, Thibault
2013-06-01
1D or 2D neutron imaging detectors with individual wire or strip readout using discriminators have the advantage of being able to treat several neutron impacts partially overlapping in time, hence reducing global dead time. A single neutron impact usually gives rise to several discriminator signals. In this paper, we introduce an information-theoretical definition of image resolution. Two point-like spots of neutron impacts with a given distance between them act as a source of information (each neutron hit belongs to one spot or the other), and the detector plus signal treatment is regarded as an imperfect communication channel that transmits this information. The maximal mutual information obtained from this channel as a function of the distance between the spots allows one to define a calibration-independent measure of resolution. We then apply this measure to quantify the resolving power of different algorithms treating these individual discriminator signals, which can be implemented in firmware. The method is then applied to different detectors existing at the ILL. Center-of-gravity methods usually improve the resolution over best-wire algorithms, which are the standard way of treating these signals. (authors)
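The channel picture can be made concrete with a simplified version of the measure. Assuming Gaussian position noise of width σ and a hard maximum-likelihood decision between the two spots (the paper maximizes mutual information over the full channel, so this hard-decision sketch is only a lower bound), the information transmitted rises from 0 toward 1 bit as the spot separation d grows:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def gauss_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def resolution_information(d, sigma):
    """Mutual information (bits) between the spot label and the
    maximum-likelihood decision, for spots d apart and Gaussian position
    noise of width sigma: I = 1 - H2(Pe), with Pe = Phi(-d / (2 sigma))."""
    pe = gauss_cdf(-d / (2.0 * sigma))
    return 1.0 - h2(pe)

for d in (0.5, 1.0, 2.0, 4.0):
    print(d, round(resolution_information(d, sigma=1.0), 3))
```

The distance at which this curve reaches a chosen fraction of 1 bit gives a resolution figure that does not depend on any intensity calibration, which is the paper's central point.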
Directory of Open Access Journals (Sweden)
Jeanny Liu, PhD
2011-08-01
Full Text Available Students often struggle with how to translate textbook concepts into real-world applications that allow them to personally experience the importance of these concepts. This is an ongoing challenge within all disciplines in higher education. To address this, faculty design their courses using methods beyond traditional classroom lectures to facilitate and reinforce student learning. The authors believe that students who are given hands-on problem-solving opportunities are more likely to retain such knowledge and apply it outside the classroom, in the workplace, volunteer activities, and other personal pursuits. In an attempt to engage students and provide them with meaningful opportunities to apply course concepts, the authors have initiated a number of experiential learning methods in the classroom. Since fall of 2008, elements of problem-based learning were integrated in the authors’ business courses. Specifically, real-world consulting projects were introduced into their classrooms. This paper focuses on the authors’ experiences implementing problem-based learning processes and practical project assignments that actively engage students in the learning process. The experiences and the feedback gathered from students and executives who participated in the “real-world” project are reported in this paper.
Energy Technology Data Exchange (ETDEWEB)
Clergeau, Jean-Francois; Ferraton, Matthieu; Guerard, Bruno; Khaplanov, Anton; Piscitelli, Francesco; Platz, Martin; Rigal, Jean-Marie; Van Esch, Patrick [Institut Laue Langevin, Neutron Detector Service, Grenoble (France); Daulle, Thibault [PHELMA Grenoble - INP Grenoble (France)
2013-06-15
1D or 2D neutron imaging detectors with individual wire or strip readout using discriminators have the advantage of being able to treat several neutron impacts partially overlapping in time, hence reducing global dead time. A single neutron impact usually gives rise to several discriminator signals. In this paper, we introduce an information-theoretical definition of image resolution. Two point-like spots of neutron impacts with a given distance between them act as a source of information (each neutron hit belongs to one spot or the other), and the detector plus signal treatment is regarded as an imperfect communication channel that transmits this information. The maximal mutual information obtained from this channel as a function of the distance between the spots allows one to define a calibration-independent measure of resolution. We then apply this measure to quantify the resolving power of different algorithms treating these individual discriminator signals, which can be implemented in firmware. The method is then applied to different detectors existing at the ILL. Center-of-gravity methods usually improve the resolution over best-wire algorithms, which are the standard way of treating these signals. (authors)
International Nuclear Information System (INIS)
Silva, Marcio H.; Schirru, Roberto; Medeiros, Jose A.C.C.
2009-01-01
Using concepts and principles of quantum computation, such as the quantum bit and the superposition of states, coupled with the biological metaphor of a colony of ants used in the Ant Colony Optimization (ACO) algorithm, Wang et al. developed the Quantum Ant Colony Optimization (QACO). In this paper we present a modification of the algorithm proposed by Wang et al. While the original QACO was used only for simple benchmark functions with at most two dimensions, QACOα was developed for applications where the original QACO, due to its tendency to converge prematurely, does not obtain good results, such as complex multidimensional functions. Furthermore, to evaluate their behavior, both algorithms are applied to the real problem of identification of accidents in PWR nuclear power plants. (author)
Directory of Open Access Journals (Sweden)
Paulo Elias Carneiro Pereira
Full Text Available Abstract The mineral exploration activity consists of a set of successive stages that are interdependent on each other, in which the main goal is to discover and subsequently evaluate a mineral deposit for the feasibility of its extraction. This process involves setting the shape, dimensions and grades for eventual production. Geological modeling determines the orebody's possible format in the subsoil, which can be done by two approaches: vertical sections (deterministic methods) or geostatistical methods. The latter approach is currently preferred, as it is a more accurate and therefore more reliable alternative for establishing the physical format of orebodies, especially in instances where geologic boundaries are soft and/or sample information is widely spaced. This study uses the concept of indicator kriging (IK) to model the geologic boundaries of a limestone deposit located at Indiara city, Goiás State, Brazil. In general, the results indicated a good adherence in relation to the samples. However, there are reasonable differences, particularly in lithological domains with a small number of samples in relation to the total amount sampled. Therefore, the results showed that there is a need for additional sampling to better delineate the geological contacts, especially between carbonate and non-carbonate rocks. Uncertainty maps confirmed this necessity and also indicated potential sites for future sampling; information that would not be obtained by usage of deterministic methods.
International Nuclear Information System (INIS)
Asplund, Erik; Kluener, Thorsten
2012-01-01
In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997)]. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998); Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004)]. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method, a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ℏ = mₑ = e = a₀ = 1, have been used unless otherwise stated.
Directory of Open Access Journals (Sweden)
Angela Shapiro
2014-08-01
Full Text Available The aim of the Gathering the Voices project is to gather testimonies from Holocaust survivors who have made their home in Scotland and to make these testimonies available on the World Wide Web. The project commenced in 2012, and a key outcome of the project is to educate current and future generations about the resilience of these survivors. Volunteers from the Jewish community are collaborating with staff and undergraduate students at Glasgow Caledonian University to develop innovative approaches to engage with school children. These multimedia approaches are essential, as future generations will be unable to interact in person with Holocaust survivors. By being active participants in the project, students will learn more about the Holocaust and recognize the relevance of these testimonies in today's society. Although some of the survivors have been interviewed about their journeys in fleeing from the Nazi atrocities, for all of the interviewees this is the first time that they have been asked about their lives once they arrived in the United Kingdom. The interviews have also focused on citizenship and integration into society. The project is not yet completed, and an evaluation will take place to measure the effectiveness of the project in communicating its message to the public.
Asplund, Erik; Klüner, Thorsten
2012-03-28
In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997)]. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998); Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004)]. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method, a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ℏ = mₑ = e = a₀ = 1, have been used unless otherwise stated.
Directory of Open Access Journals (Sweden)
Cesare Biserni
2015-11-01
Full Text Available Especially in the last decade, efforts have been made to develop sustainable building assessment tools, which are usually based on the First Law of Thermodynamics. However, this approach does not provide a faithful thermodynamic evaluation of the overall energy conversion processes that occur in buildings, and a more robust approach should be followed. The relevance of Second Law analysis is highlighted here: in addition to the calculation of energy balances, the concept of exergy is used to evaluate the quality of energy sources, resulting in greater flexibility of strategies for optimizing a building design. Reviews of the progress being made with the constructal law show that diverse phenomena can be considered manifestations of the tendency towards optimization captured by the constructal law. Studies based on the First and Second Laws of Thermodynamics are limited by the extreme generality of the two laws, a consequence of the fact that in thermodynamics “any system” is a black box with no information about design, organization and evolution. In this context, an exploratory analysis of the potential of constructal theory, which can be considered a law of thermodynamics, is finally outlined in order to assess energy performance in building design.
Directory of Open Access Journals (Sweden)
Gray Joanna
2010-11-01
Full Text Available Abstract Background We report an attempt to extend the previously successful approach of combining SNP (single nucleotide polymorphism) microarrays and DNA pooling (SNP-MaP) employing high-density microarrays. Whereas earlier studies employed a range of Affymetrix SNP microarrays comprising 10 K to 500 K SNPs, this most recent investigation used the 6.0 chip, which displays 906,600 SNP probes and 946,000 probes for the interrogation of CNVs (copy number variations). The genotyping assay using the Affymetrix SNP 6.0 array is highly demanding on sample quality due to the small feature size, low redundancy, and lack of mismatch probes. Findings In the first study published so far using this microarray on pooled DNA, we found that pooled cheek swab DNA could not accurately predict real allele frequencies of the samples that comprised the pools. In contrast, the allele frequency estimates using blood DNA pools were reasonable, although inferior compared to those obtained with previously employed Affymetrix microarrays. However, it might be possible to improve performance by developing improved analysis methods. Conclusions Despite the decreasing costs of genome-wide individual genotyping, the pooling approach may have applications in very large-scale case-control association studies. In such cases, our study suggests that high-quality DNA preparations and lower density platforms should be preferred.
Suparno, Sudomo, Rahardjo, Boedi
2017-09-01
Experts and practitioners agree that the quality of vocational high schools needs to be greatly improved. Many construction services have voiced their dissatisfaction with today's low-quality vocational high school graduates. The low quality of graduates is closely related to the quality of the teaching and learning process, particularly teaching materials. In their efforts to improve the quality of vocational high school education, the government has implemented Curriculum 2013 (K13) and supplied teaching materials. However, according to the monitoring and evaluation carried out by the Directorate of Vocational High School, Directorate General of Secondary Education (2014), the provision of tasks for students in the teaching materials was totally inadequate. Therefore, to enhance the quality and results of the instructional process, students' worksheets should be provided that can stimulate and improve students' problem-solving skills and soft skills. In order to develop worksheets that meet the academic requirements, the development needs to be in accordance with an innovative learning approach, namely the soft-skill-based scientific approach.
Directory of Open Access Journals (Sweden)
Julia Chernova
2016-07-01
Full Text Available Abstract Background Within-person variation in dietary records can lead to biased estimates of the distribution of food intake. Quantile estimation is especially relevant in the case of skewed distributions and in the estimation of under- or over-consumption. The analysis of the intake distributions of occasionally-consumed foods presents further challenges due to the high frequency of zero records. Two-part mixed-effects models account for excess zeros, daily variation and correlation arising from repeated individual dietary records. In practice, the application of the two-part model with random effects involves Monte Carlo (MC) simulations. However, these can be time-consuming, and the precision of MC estimates depends on the size of the simulated data, which can hinder reproducibility of results. Methods We propose a new approach based on numerical integration as an alternative to MC simulations to estimate the distribution of occasionally-consumed foods in sub-populations. The proposed approach and MC methods are compared by analysing the alcohol intake distribution in a sub-population of individuals at risk of developing metabolic syndrome. Results The rate of convergence of the results of MC simulations to the results of our proposed method is model-specific, depends on the number of draws from the target distribution, and is relatively slower at the tails of the distribution. Our data analyses also show that model misspecification can lead to incorrect model parameter estimates. For example, under the wrong model assumption of zero correlation between the components, one of the predictors turned out to be non-significant at the 5 % significance level (p-value 0.062), but it was estimated as significant in the correctly specified model (p-value 0.016). Conclusions The proposed approach for the analysis of the intake distributions of occasionally-consumed foods provides a quicker and more precise alternative to MC simulation methods, particularly in the
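The trade-off between MC simulation and numerical integration can be demonstrated on a toy expectation over a normal random effect. In the sketch below (a generic illustration, not the paper's two-part model), Simpson-rule quadrature of E[g(Z)] for Z ~ N(0,1) is compared with plain Monte Carlo for g = exp, where the exact answer is e^{1/2}:

```python
import math, random

def expect_quadrature(g, lo=-8.0, hi=8.0, n=400):
    """E[g(Z)] for Z ~ N(0,1) via Simpson's rule (n must be even)."""
    h = (hi - lo) / n
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    total = g(lo) * phi(lo) + g(hi) * phi(hi)
    for k in range(1, n):
        z = lo + k * h
        total += (4 if k % 2 else 2) * g(z) * phi(z)
    return total * h / 3.0

def expect_mc(g, draws, seed=7):
    rng = random.Random(seed)
    return sum(g(rng.gauss(0.0, 1.0)) for _ in range(draws)) / draws

g = math.exp                     # E[exp(Z)] = exp(1/2) exactly
exact = math.exp(0.5)
print(abs(expect_quadrature(g) - exact))
print(abs(expect_mc(g, 10_000) - exact))
```

The deterministic quadrature result is also exactly reproducible, which addresses the reproducibility concern the abstract raises about MC estimates.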
von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2016-04-01
Handling high-dimensional data sets, such as those occurring e.g. in turbulent flows or in certain types of multiscale behaviour in the Geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid-scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I
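A full TT-SVD needs a singular value decomposition, but the format itself can be shown with a tensor whose TT cores are known in closed form. The sketch below (illustrative only, not the authors' turbulence pipeline) builds the rank-(1,2,2,1) TT representation of a sum of two separable 3D tensors and verifies entry-wise reconstruction; storing the cores takes 8n numbers instead of n³ for the full tensor:

```python
import random

def tt_entry(cores, idx):
    """Evaluate a TT-format tensor entry: product of the index-selected
    core slices (each an r_prev x r_next matrix), collapsed left to right."""
    row = [1.0]                           # 1 x r0 boundary vector
    for core, i in zip(cores, idx):
        slc = core[i]                     # r_prev x r_next matrix
        row = [sum(row[p] * slc[p][q] for p in range(len(row)))
               for q in range(len(slc[0]))]
    return row[0]                         # r_last = 1

random.seed(3)
n = 6
a, b, c, d, e, f = ([random.random() for _ in range(n)] for _ in range(6))

# TT cores of T[i,j,k] = a_i*b_j*c_k + d_i*e_j*f_k (TT ranks 1, 2, 2, 1).
core1 = [[[a[i], d[i]]] for i in range(n)]                 # 1 x 2 slices
core2 = [[[b[j], 0.0], [0.0, e[j]]] for j in range(n)]     # 2 x 2 slices
core3 = [[[c[k]], [f[k]]] for k in range(n)]               # 2 x 1 slices

err = max(abs(tt_entry([core1, core2, core3], (i, j, k))
              - (a[i] * b[j] * c[k] + d[i] * e[j] * f[k]))
          for i in range(n) for j in range(n) for k in range(n))
print(err)   # exact reconstruction up to floating-point error

# Storage: full tensor n**3 = 216 numbers; TT cores 2n + 4n + 2n = 48.
```

For data with low TT ranks this linear-in-dimension storage is exactly the compactness the abstract exploits.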
Elbanna, Hesham M.; Carlson, Leland A.
1992-01-01
The quasi-analytical approach is applied to the three-dimensional full potential equation to compute wing aerodynamic sensitivity coefficients in the transonic regime. Symbolic manipulation is used to reduce the effort associated with obtaining the sensitivity equations, and the large sensitivity system is solved using state-of-the-art routines. Results are compared to those obtained by the direct finite difference approach, and both methods are evaluated to determine their computational accuracy and efficiency. The quasi-analytical approach is shown to be accurate and efficient for large aerodynamic systems.
International Nuclear Information System (INIS)
Moinereau, D.; Brochard, J.; Guichard, D.; Bhandari, S.; Sherry, A.; France, C.
1996-10-01
A benchmark on the computational simulation of a cladded vessel with a 6.2 mm sub-clad flaw submitted to a thermal transient has been conducted. Two-dimensional elastic and elastic-plastic finite element computations of the vessel have been performed by the different partners with respective finite element codes ASTER (EDF), CASTEM 2000 (CEA), SYSTUS (Framatome) and ABAQUS (AEA Technology). Main results have been compared: temperature field in the vessel, crack opening, opening stress at crack tips, stress intensity factor in cladding and base metal, Weibull stress σ_w and probability of failure in base metal, void growth rate R/R_0 in cladding. This comparison shows an excellent agreement on main results, in particular on results obtained with local approach. (K.A.)
Graf, Urs
2004-01-01
The theory of Laplace transformation is an important part of the mathematical background required for engineers, physicists and mathematicians. Laplace transformation methods provide easy and effective techniques for solving many problems arising in various fields of science and engineering, especially for solving differential equations. What the Laplace transformation does in the field of differential equations, the z-transformation achieves for difference equations. The two theories are parallel and have many analogies. Laplace and z transformations are also referred to as operational calculus, but this notion is also used in a more restricted sense to denote the operational calculus of Mikusinski. This book does not use the operational calculus of Mikusinski, whose approach is based on abstract algebra and is not readily accessible to engineers and scientists. The symbolic computation capability of Mathematica can now be applied to the Laplace and z-transformations. The first version of the Mathema...
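In the same spirit as the book's use of Mathematica, symbolic Laplace transforms can be computed in Python with sympy; the following is a minimal sketch, not the book's own code, and the example functions are arbitrary.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Forward transform: L{e^{-2t}} = 1/(s + 2)
F = sp.laplace_transform(sp.exp(-2 * t), t, s, noconds=True)

# Inverse transform recovers the original signal (times the Heaviside step,
# since Laplace transforms only see t >= 0)
f = sp.inverse_laplace_transform(1 / (s + 2), s, t)
```

The `noconds=True` flag suppresses the region-of-convergence information that `laplace_transform` returns by default, which is convenient for quick checks like this one.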
Directory of Open Access Journals (Sweden)
Mariany W Lidia
2012-09-01
Full Text Available The fashion industry is the biggest contributor among the 14 creative industries in Indonesia. Nowadays many apparel companies are shifting toward vertical integration. Since speed is everything for success in the apparel industry, fast fashion retailers must respond quickly to market demand. This paper aims to develop a model of the supply chain of a small and medium scale enterprise (SME) apparel company in Indonesia and to propose a decision support system using System Dynamics (SD) that helps management identify the best business strategy. Simulated scenarios can help management identify the most appropriate policy to apply in the future. A case study method was used in this research, with data collected from a typical fast fashion firm in Indonesia that produces its own wares from raw materials to ready-to-wear clothes, has three stores and a warehouse, and runs an online sales system. We analyse the results of many simulations of a fashion company from an operational point of view and derive from them suggestions about the future business strategy of a small and medium fashion company in Indonesia. Keywords: system dynamics, fast fashion, supply chain management, SME, Indonesia
Duval, J; Coyette, F; Seron, X
2008-08-01
This paper describes and evaluates a programme of neuropsychological rehabilitation which aims to improve three sub-components of the working memory central executive (processing load, updating and dual-task monitoring) through the acquisition of three re-organisation strategies (double coding, serial processing and speed reduction). Our programme has two stages: cognitive rehabilitation (graduated exercises subdivided into three sub-programmes, each corresponding to a sub-component), which enables the patient to acquire the three specific strategies; and ecological rehabilitation, including analyses of scenarios and simulations of real-life situations, which aims to transfer the strategies learned to everyday life. The programme also includes information meetings. It was applied to a single case with working memory deficits after surgery for a ganglioglioma in the left internal temporal region. Multiple baseline tests were used to measure the effectiveness of the rehabilitation. The programme proved effective for all three working memory components; a generalisation of its effects to everyday life was observed, and the effects were undiminished three months later.
Christensen, Vibeke T; Carpiano, Richard M
2014-07-01
Research on social class differences in obesity and weight-related outcomes has highlighted the need to consider how such class differences reflect the unequally distributed constellations of economic, cultural, and social resources that enable and constrain health-related habits and practices or health lifestyles. Motivated by this need, the present study applies a theoretical perspective that integrates Cockerham's (2005) health lifestyles theory with Bourdieu's (1984) theoretical scholarship on social class, lifestyles, and the body to the analysis of class-based differences in body mass index (BMI) among adult female respondents of a 2007 Danish national survey (n = 1376). We test hypotheses concerning how respective levels of economic, cultural, and social capital that constitute women's social class membership are associated with BMI directly and via their influence on respondents' dietary-related values, preferences, behaviors, and exercise activities. Our analyses indicate that cultural and economic capital were both directly associated with BMI. Mediation analyses revealed that greater cultural and social capital were linked to higher BMI via interest in cooking, while all three forms of capital were associated with lower BMI via greater frequency of exercise. These findings provide evidence for the many, and sometimes contradictory, ways that social class can influence body weight. Identifying such patterns can inform the design of more effective population health interventions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Hess, R; Neubert, H; Seifert, A; Bierbaum, S; Hart, D A; Scharnweber, D
2012-12-01
The purpose of this study was to develop a new apparatus for in vitro studies applying low frequency electrical fields to cells without interfering side effects, such as biochemical reactions or magnetic fields, which occur in currently available systems. We developed a non-invasive method by means of the principle of transformer-like coupling, in which the magnetic field is concentrated in a toroid and, therefore, does not affect the cell culture. Following an extensive characterization of the electrical field parameters, initial cell culture studies focused on examining the response of bone marrow-derived human mesenchymal stem cells (MSCs) to pulsed electrical fields. While no significant differences in the proliferation of human MSCs could be detected, significant increases in ALP activity as well as in gene expression of other osteogenic markers were observed. The results indicate that transformer-like coupled electrical fields can be used to influence osteogenic differentiation of human MSCs in vitro and can be a useful tool for understanding the influence of electrical fields at the cellular and molecular level.
Barki, Anum; Kendricks, Kimberly; Tuttle, Ronald F.; Bunker, David J.; Borel, Christoph C.
2013-05-01
This research highlights the results obtained from applying the method of inverse kinematics, using Groebner basis theory, to the human gait cycle to extract and identify lower extremity gait signatures. The increased threat from suicide bombers and the force protection issues of today have motivated a team at the Air Force Institute of Technology (AFIT) to research pattern recognition in the human gait cycle. The purpose of this research is to identify gait signatures of human subjects and distinguish subjects carrying a load from those without a load. These signatures were investigated via a model of the lower extremities based on motion capture observations, in particular foot placement and joint angles for subjects carrying extra load on the body. The human gait cycle was captured and analyzed using a developed toolkit consisting of an inverse kinematic motion model of the lower extremity and a graphical user interface. Hip, knee, and ankle angles were analyzed to identify gait angle variance and range of motion. Female subjects exhibited the most knee angle variance and produced a proportional correlation between knee flexion and load carriage.
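The Groebner-basis idea behind the toolkit can be illustrated on a much simpler system than the paper's lower-extremity model: a planar two-link limb. Encoding each joint angle by its cosine and sine turns the kinematic equations into polynomials, which sympy solves via Groebner bases (its `solve` routine dispatches polynomial systems to `solve_poly_system`). The function name and link-length parameterization below are illustrative assumptions.

```python
import sympy as sp

def two_link_ik(l1, l2, x, y):
    """Planar two-link inverse kinematics posed as a polynomial system.

    Unknowns are the cosines/sines of the two joint angles; the trig
    identities close the system so it is purely polynomial.
    """
    c1, s1, c2, s2 = sp.symbols('c1 s1 c2 s2')
    eqs = [
        l1 * c1 + l2 * (c1 * c2 - s1 * s2) - x,  # end-effector x position
        l1 * s1 + l2 * (s1 * c2 + c1 * s2) - y,  # end-effector y position
        c1**2 + s1**2 - 1,                       # trig identity, joint 1
        c2**2 + s2**2 - 1,                       # trig identity, joint 2
    ]
    return sp.solve(eqs, [c1, s1, c2, s2], dict=True)
```

For a reachable target the solver returns the familiar elbow-up and elbow-down configurations; the actual joint angles follow from `atan2` of each sine/cosine pair.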
Pauer, Frédéric; Schmidt, Katharina; Babac, Ana; Damm, Kathrin; Frank, Martin; von der Schulenburg, J-Matthias Graf
2016-09-09
The Analytic Hierarchy Process (AHP) is increasingly used to measure patient priorities. Studies have shown that there are several different approaches to data acquisition and data aggregation. The aim of this study was to measure the information needs of patients having a rare disease and to analyze the effects of these different AHP approaches. The ranking of information needs is then used to display information categories on a web-based information portal about rare diseases according to the patient's priorities. The information needs of patients suffering from rare diseases were identified by an Internet research study and a preliminary qualitative study. Hence, we designed a three-level hierarchy containing 13 criteria. For data acquisition, the differences in outcomes were investigated using individual versus group judgements separately. Furthermore, we analyzed the different effects when using the median and arithmetic and geometric means for data aggregation. A consistency ratio ≤0.2 was determined to represent an acceptable consistency level. Forty individual and three group judgements were collected from patients suffering from a rare disease and their close relatives. The consistency ratio of 31 individual and three group judgements was acceptable and thus these judgements were included in the study. To a large extent, the local ranks for individual and group judgements were similar. Interestingly, group judgements were in a significantly smaller range than individual judgements. According to our data, the ranks of the criteria differed slightly according to the data aggregation method used. It is important to explain and justify the choice of an appropriate method for data acquisition because response behaviors differ according to the method. We conclude that researchers should select a suitable method based on the thematic perspective or investigated topics in the study. Because the arithmetic mean is very vulnerable to outliers, the geometric mean appears to be the more robust choice for aggregation.
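The aggregation and consistency computations the abstract compares can be sketched in a few lines of numpy. The following uses the row geometric-mean method for deriving priority weights and the standard Saaty random indices for the consistency ratio; the pairwise comparison matrix in the usage test is illustrative, not the study's data.

```python
import numpy as np

def ahp_weights(M):
    """Priority weights and consistency ratio for an AHP pairwise matrix.

    Uses the row geometric-mean method for the weights and approximates the
    principal eigenvalue to compute Saaty's consistency ratio (CR).
    """
    n = M.shape[0]
    g = np.prod(M, axis=1) ** (1.0 / n)   # row geometric means
    w = g / g.sum()                       # normalized priority vector
    lam_max = (M @ w / w).mean()          # principal eigenvalue estimate
    # Saaty's random consistency indices for small matrices
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    CI = (lam_max - n) / (n - 1) if n > 1 else 0.0
    CR = CI / RI if RI else 0.0
    return w, CR
```

A perfectly consistent matrix (every entry M[i][j] equal to w[i]/w[j]) yields CR = 0; the study's threshold of CR ≤ 0.2 would then be applied to decide whether a judgement is usable.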
Vouillamoz, J.-M.; Hoareau, J.; Grammare, M.; Caron, D.; Nandagiri, L.; Legchenko, A.
2012-11-01
Many human communities living in coastal areas in Africa and Asia rely on thin freshwater lenses for their domestic supply. Population growth together with change in rainfall patterns and sea level will probably impact these vulnerable groundwater resources. Spatial knowledge of the aquifer properties and creation of a groundwater model are required for achieving a sustainable management of the resource. This paper presents a ready-to-use methodology for estimating the key aquifer properties and the freshwater resource based on the joint use of two non-invasive geophysical tools together with common hydrological measurements. We applied the proposed methodology in an unconfined aquifer of a coastal sandy barrier in South-Western India. We jointly used magnetic resonance and transient electromagnetic soundings, and we monitored rainfall, groundwater level and groundwater electrical conductivity. The combined interpretation of geophysical and hydrological results allowed us to estimate the aquifer properties and to map the freshwater lens. Depending on the location and season, we estimate the freshwater reserve to range between 400 and 700 L m⁻² of surface area (±50%). We also estimated the recharge by combining time-lapse geophysical measurements with hydrological monitoring. After a rainy event, close to 100% of the rain reaches the water table, but the net recharge at the end of the monsoon is less than 10% of the rain. Thus, we conclude that a change in rainfall patterns will probably not impact the groundwater resource, since most of the rain water recharging the aquifer is flowing towards the sea and the river. However, a change in sea level will impact both the groundwater reserve and net recharge.
International Nuclear Information System (INIS)
Khoshnevisan, Benyamin; Rafiee, Shahin; Omid, Mahmoud; Mousazadeh, Hossein
2013-01-01
In this study, DEA (data envelopment analysis) was applied to analyze the energy efficiency of wheat farms in order to separate efficient and inefficient growers and to calculate the wasteful uses of energy. Additionally, the degrees of TE (technical efficiency), PTE (pure technical efficiency) and SE (scale efficiency) were determined. Furthermore, the effect of energy optimization on GHG (greenhouse gas) emission was investigated and the total amount of GHG emission of efficient farms was compared with inefficient ones. Based on the results it was revealed that 18% of producers were technically efficient and the average of TE was calculated as 0.82. Based on the BCC (Banker–Charnes–Cooper) model 154 growers (59%) were identified efficient and the mean PTE of these farmers was found to be 0.99. Also, it was concluded that 2075.8 MJ ha⁻¹ of energy inputs can be saved if the performance of inefficient farms rises to a high level. Additionally, it was observed that the total GHG emission from efficient and inefficient producers was 2713.3 and 2740.8 kg CO₂eq ha⁻¹, respectively. By energy optimization the total GHG emission can be reduced to the value of 2684.29 kg CO₂eq ha⁻¹. - Highlights: • 18% of producers were technically efficient and the average of TE was 0.82. • An average 2075.8 MJ ha⁻¹ from energy input could be saved without reducing the yield. • GHG emission of efficient and inefficient producers was 2713.3 and 2740.8 kg CO₂eq ha⁻¹. • Total GHG emission can be reduced to the value of 2684.29 kg CO₂eq ha⁻¹
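The DEA efficiency scores the abstract reports generally require solving a linear program per farm, but the underlying idea is easy to see in the special case of a single aggregated energy input and a single output, where the CCR (constant returns to scale) efficiency reduces to each farm's output/input ratio relative to the best ratio in the sample. The function and the sample numbers below are an illustrative sketch, not the study's model or data.

```python
def dea_ccr_efficiency(inputs, outputs):
    """CCR technical efficiency for the single-input, single-output case.

    Each unit's efficiency is its output/input ratio divided by the best
    ratio observed, so efficient units score 1.0 and the rest score less.
    """
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical farms: energy input (GJ/ha) and wheat yield (t/ha)
efficiencies = dea_ccr_efficiency([10, 20, 20], [5, 10, 5])
```

With multiple inputs (fuel, fertilizer, labour, etc.), the same comparison needs an optimal input weighting per farm, which is what the full DEA linear program computes.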
Törrönen, Jukka; Tigerstedt, Christoffer
2018-04-01
The article applies actor network theory (ANT) to autobiographical data on alcohol dependence to explore what ANT can offer to the analysis of 'addiction stories'. By defining 'addiction' as a relational achievement, as the effect of elements acting together as a configuration of human and non-human actors, the article demonstrates how the moving and changing attachments of addiction can be dynamically analyzed with concepts of 'assemblage', 'mediator', 'tendency', 'translation', 'trajectory', 'immutable mobile', 'fluid' and 'bush fire'. The article shows how the reduction of alcohol dependence simply to genetic factors, neurobiological causes, personality disorders and self-medication constitutes an inadequate explanation. As 'meta theories', they illuminate addiction one-sidedly. Instead, as ANT pays attention to multiple heterogeneous mediators, it specifies in what way the causes identified in 'meta theories' may, together with other actors, participate in addiction assemblages. When following the development of addiction assemblages, we focus on situational sequences of action, in which human and non-human elements are linked to each other, and we trace how the relational shape of addiction changes from one sequence to another as a transforming assemblage of heterogeneous attachments that either maintain healthy subjectivities or destabilize them. The more flexible and durable attachments an assemblage of addiction is able to make from one event to another, the stronger the addiction-based subjectivities become. Similarly, the fewer attachments an assemblage of addiction is able to keep in its various translations, the weaker the addiction-based subjectivities become. An ANT-inspired analysis has a number of implications for the prevention and treatment of addiction: it suggests that the aim should hardly be to get rid of dependencies. Rather, the ambition should be the identification of attachments
Abedi Gheshlaghi, Hassan; Feizizadeh, Bakhtiar
2017-09-01
Landslides in mountainous areas cause major damage to residential areas, roads, and farmlands. Hence, one of the basic measures to reduce the possible damage is identifying landslide-prone areas through landslide mapping using different models and methods. The purpose of this study is to evaluate the efficacy of a combination of two models, the analytical network process (ANP) and fuzzy logic, in landslide risk mapping in the Azarshahr Chay basin in northwest Iran. After field investigations and a review of the research literature, factors affecting the occurrence of landslides, including slope, slope aspect, altitude, lithology, land use, vegetation density, rainfall, distance to fault, distance to roads, and distance to rivers, along with a map of the distribution of occurred landslides, were prepared in a GIS environment. Then, fuzzy logic was used for weighting sub-criteria, and the ANP was applied to weight the criteria. Next, they were integrated based on GIS spatial analysis methods and the landslide risk map was produced. Evaluating the results of this study using receiver operating characteristic curves shows that the hybrid model, with an area under the curve of 0.815, has good accuracy. Also, according to the prepared map, a total of 23.22% of the area, amounting to 105.38 km², is in the high and very high-risk classes. The results of this research are of great importance for regional planning, and the landslide prediction map can be used for spatial planning tasks and for the mitigation of future hazards in the study area.
Guan, Yajing; Wang, Jianchen; Tian, Yixin; Hu, Weimin; Zhu, Liwei; Zhu, Shuijin; Hu, Jin
2013-01-01
Seed security is of prime importance for agriculture. To protect true seeds from being faked, more secure dual anti-counterfeiting technologies for tobacco (Nicotiana tabacum L.) pelleted seed were developed in this paper. Fluorescein (FR), rhodamine B (RB), and magnetic powder (MP) were used as anti-counterfeiting labels. According to their different properties and the special seed pelleting process, four dual-labeling treatments were conducted for two tobacco varieties, MS Yunyan85 (MSYY85) and Honghua Dajinyuan (HHDJY). Then the seed germination and seedling growth status were investigated, and the fluorescence in cracked pellets and developing seedlings was observed under different excitation lights. The results showed that FR, RB, and MP had no negative effects on the germination, seedling growth, and MDA content of the pelleted seeds; some treatments even significantly enhanced seedling dry weight, vigor index, and shoot height in MSYY85, and increased SOD activity and chlorophyll content in HHDJY as compared to the control. In addition, the cotyledon tip of seedlings treated with FR and MP together showed bright green fluorescence under illumination with blue light (478 nm), and the seedling cotyledon vein treated with RB and MP together showed red fluorescence under green light (546 nm). All seeds pelleted with magnetic powder of proper concentration could be attracted by a magnet. Thus, these new dual-labeling methods, in which a fluorescent compound and magnetic powder are applied simultaneously in the same seed pellets, definitely improved anti-counterfeiting technology and enhanced seed security. This technology will help ensure that high quality seed is used in crop production.
Mee, Jonathan A; Bernatchez, Louis; Reist, Jim D; Rogers, Sean M; Taylor, Eric B
2015-06-01
The concept of the designatable unit (DU) affords a practical approach to identifying diversity below the species level for conservation prioritization. However, its suitability for defining conservation units in ecologically diverse, geographically widespread and taxonomically challenging species complexes has not been broadly evaluated. The lake whitefish species complex (Coregonus spp.) is geographically widespread in the Northern Hemisphere, and it contains a great deal of variability in ecology and evolutionary legacy within and among populations, as well as a great deal of taxonomic ambiguity. Here, we employ a set of hierarchical criteria to identify DUs within the Canadian distribution of the lake whitefish species complex. We identified 36 DUs based on (i) reproductive isolation, (ii) phylogeographic groupings, (iii) local adaptation and (iv) biogeographic regions. The identification of DUs is required for clear discussion regarding the conservation prioritization of lake whitefish populations. We suggest conservation priorities among lake whitefish DUs based on biological consequences of extinction, risk of extinction and distinctiveness. Our results exemplify the need for extensive genetic and biogeographic analyses for any species with broad geographic distributions and the need for detailed evaluation of evolutionary history and adaptive ecological divergence when defining intraspecific conservation units.
Houston, Eric; Tatum, Alexander K; Guy, Arryn; Mikrut, Cassandra; Yoder, Wren
2015-10-26
Poor treatment adherence is a major problem among individuals with chronic illness. Research indicates that adherence is worsened when accompanied by depressive symptoms. In this preliminary study, we aimed to describe how a patient-centered approach could be employed to aid patients with depressive symptoms in following their treatment regimens. The sample consisted of 14 patients undergoing antiretroviral therapy (ART) for HIV who reported clinically-significant depressive symptoms. Participant ratings of 23 treatment-related statements were examined using two assessment and analytic techniques. Interviews were conducted with participants to determine their views of information based on the technique. Results indicate that while participants with optimal adherence focused on views of treatment associated with side effects to a greater extent than participants with poor adherence, they tended to relate these side effects to sources of intrinsic motivation. The study provides examples of how practitioners could employ the assessment techniques outlined to better understand how patients think about treatment and aid them in effectively framing their health-related goals.
International Nuclear Information System (INIS)
Tarancon Moran, Miguel Angel; Albinana, Fernando Callejas; Del Rio, Pablo
2008-01-01
This paper analyses the factors leading to CO₂ emissions in the Spanish electricity generation sector in order to propose effective mitigation policies aimed at tackling those emissions. Traditionally, two broad categories of those factors have been considered in the literature: those related to the supply of electricity (technological features of the sector) and those related to the level of economic activity (demand factors). This paper focuses on an additional element, which has usually been neglected, the structural factor, which refers to the set of intersectoral transactions (related to the technologies used in other productive sectors) which connect, in either a direct or an indirect way, the general economic activity with the supply of electricity and, thus, with the emissions of the electricity generation sector. This analysis allows us to identify the so-called 'sectors structurally responsible for emissions' (SSER), whose production functions involve transactions which connect the demand for goods and services with the emissions of the electricity generation sector. The methodology is based on an input-output approach and a sensitivity analysis. The paper shows that there are structural rigidities, deeply ingrained within the economic system, which lead to emissions from the electricity generation sector for which this sector cannot be held responsible. These rigidities limit the effectiveness of policies aimed at emissions mitigation in this sector. (author)
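The input-output attribution idea at the heart of the paper can be sketched with numpy: the Leontief inverse links final demand to the gross output (and hence emissions) that every sector induces, directly or indirectly, in the economy. The three-sector coefficient matrix, final demand vector, and emission intensities below are hypothetical illustrations, not the paper's Spanish data.

```python
import numpy as np

# Hypothetical 3-sector economy: technical coefficients A (input per unit of
# output), final demand y, and direct CO2 emission intensities e.
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.2],
              [0.0, 0.1, 0.1]])
y = np.array([100.0, 50.0, 80.0])
e = np.array([0.05, 0.60, 0.10])  # sector 2 plays the electricity role here

L = np.linalg.inv(np.eye(3) - A)  # Leontief inverse: gross output per unit demand
x = L @ y                         # gross output needed to meet final demand
direct = e * x                    # emissions occurring in each sector
# Emissions embodied in each sector's final demand: this reallocates the
# electricity sector's emissions to the demands that structurally cause them.
embodied = (e @ L) * y
```

The two attributions necessarily total the same emissions; comparing `direct` with `embodied` is what identifies sectors that are "structurally responsible" for emissions occurring elsewhere.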
Directory of Open Access Journals (Sweden)
Hamid Balali
2015-09-01
Full Text Available In recent decades, due to many different factors, including climate change effects (warming and lower precipitation) as well as structural policies such as more intensive harvesting of groundwater and the low price of irrigation water, the groundwater level has decreased in most plains of Iran. The objective of this study is to model groundwater dynamics towards depletion under different economic policies and climate change by using a system dynamics approach. For this purpose, a dynamic hydro-economic model, which simultaneously simulates the farmers' economic behavior, groundwater aquifer dynamics, the studied area's climatological factors and the government's economic policies related to groundwater, is developed using STELLA 10.0.6. The vulnerability of the groundwater balance is forecasted under three climate scenarios (Dry, Normal and Wet) and different scenarios of irrigation water and energy pricing policies. Results show that implementation of some economic policies on irrigation water and energy pricing can significantly affect groundwater exploitation and its volume balance. By increasing the irrigation water price along with the energy price, groundwater exploitation improves, in so far as in scenarios S15 and S16 the studied area's aquifer groundwater balance is positive at the end of the planning horizon, even under Dry precipitation conditions. Also, results indicate that climate change can affect groundwater recharge. It can generally be expected that increases in precipitation would produce greater aquifer recharge rates.
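The stock-flow logic of such a STELLA model can be sketched in a few lines of plain Python: groundwater storage is a stock, recharge is an inflow, and pumping is an outflow that responds to the irrigation-water price. The constant-elasticity demand response and all parameter values here are illustrative assumptions, not the paper's calibrated model.

```python
def simulate_aquifer(storage0, years, recharge, base_pumping, water_price,
                     price_elasticity=-0.3, ref_price=1.0):
    """Minimal stock-flow sketch of an aquifer under a water-pricing policy.

    Pumping falls as the water price rises (constant price elasticity);
    storage accumulates the recharge-minus-pumping balance each year.
    """
    pumping = base_pumping * (water_price / ref_price) ** price_elasticity
    storage = [storage0]
    for _ in range(years):
        storage.append(storage[-1] + recharge - pumping)
    return storage

# Same aquifer under the reference price and under a doubled price
baseline = simulate_aquifer(1000.0, 10, recharge=50.0, base_pumping=60.0,
                            water_price=1.0)
priced = simulate_aquifer(1000.0, 10, recharge=50.0, base_pumping=60.0,
                          water_price=2.0)
```

Even this toy version reproduces the paper's qualitative finding: raising the water price reduces pumping and improves the end-of-horizon groundwater balance.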
Ruiz-Cordell, Karyn D; Joubin, Kathy; Haimowitz, Steven
2016-01-01
The goal of this study was to add a predictive modeling approach to the meta-analysis of continuing medical education curricula to determine whether this technique can be used to better understand clinical decision making. Using the education of rheumatologists on rheumatoid arthritis management as a model, this study demonstrates how the combined methodology has the ability to not only characterize learning gaps but also identify those proficiency areas that have the greatest impact on clinical behavior. The meta-analysis included seven curricula with 25 activities. Learners who identified as rheumatologists were evaluated across multiple learning domains, using a uniform methodology to characterize learning gains and gaps. A performance composite variable (called the treatment individualization and optimization score) was then established as a target upon which predictive analytics were conducted. Significant predictors of the target included items related to the knowledge of rheumatologists and confidence concerning 1) treatment guidelines and 2) tests that measure disease activity. In addition, a striking demographic predictor related to geographic practice setting was also identified. The results demonstrate the power of advanced analytics to identify key predictors that influence clinical behaviors. Furthermore, the ability to provide an expected magnitude of change if these predictors are addressed has the potential to substantially refine educational priorities to those drivers that, if targeted, will most effectively overcome clinical barriers and lead to the greatest success in achieving treatment goals.
Directory of Open Access Journals (Sweden)
M. Khalilzadeh
2016-12-01
Full Text Available In this paper, a stochastic approach is proposed for reliability assessment of bidirectional DC-DC converters, including fault-tolerant ones. This type of converter can be used in a smart DC grid, feeding DC loads such as home appliances and plug-in hybrid electric vehicles (PHEVs). The reliability of bidirectional DC-DC converters is important because of the expected increasing utilization of DC grids in the modern Smart Grid. Markov processes are suggested for reliability modeling and, consequently, for calculating the expected effective lifetime of bidirectional converters. A three-leg bidirectional interleaved converter using data from the Toyota Prius 2012 hybrid electric vehicle is used as a case study. In addition, the influence of the environment and ambient temperature on converter lifetime is studied. The impact of modeling the converter's reliability and of adding reliability constraints to its technical design procedure is also investigated. In order to investigate the effect of increasing the number of legs on converter lifetime, single-leg to five-leg interleaved DC-DC converters are studied, taking economic aspects into account, and the results are extrapolated to six- and seven-leg converters. The proposed method can be generalized so that the number of legs and of input and output capacitors may be arbitrary.
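A minimal sketch of the kind of Markov lifetime calculation suggested above: the mean time to failure (MTTF) of an absorbing continuous-time Markov chain is obtained from the transient part of its generator matrix. The three-state chain and the failure rates below are hypothetical illustrations, not the converter model from the paper:

```python
import numpy as np

# Hypothetical 3-state chain for a fault-tolerant converter:
# state 0 = healthy, state 1 = degraded (one leg failed),
# state 2 = failed (absorbing, omitted from the transient generator).
lam, lam_d = 1e-5, 2e-5  # assumed failure rates per hour (illustrative)

# Generator restricted to the transient states {0, 1}:
Q_T = np.array([[-lam,  lam],
                [ 0.0, -lam_d]])

# Expected times to absorption solve (-Q_T) t = 1; MTTF is t from state 0.
# Here MTTF = 1/lam + 1/lam_d = 150,000 h.
t = np.linalg.solve(-Q_T, np.ones(2))
mttf = t[0]
```

The same linear-system structure scales to chains with more legs or repair transitions; only `Q_T` changes.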
Directory of Open Access Journals (Sweden)
N.S. Khalifa
2013-12-01
Full Text Available In light of the use of laser power in space applications, the motivation of this paper is to use a space-based solar-pumped laser to produce a torque on LEO satellites of various shapes. It is assumed that a space station fires a laser beam toward the satellite, so beam spreading due to diffraction is considered the dominant effect on laser beam propagation. The laser torque is calculated at the point of closest approach between the space station and some sun-synchronous low Earth orbit cubesats. The numerical application shows that the space-based laser torque has a significant contribution for LEO cubesats. It has a maximum value on the order of 10−8 Nm, which is comparable with the residual magnetic moment, and a minimum value on the order of 10−11 Nm, which is comparable with the aerodynamic and gravity gradient torques. Consequently, a space-based laser torque can be used as an active attitude control system.
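The order of magnitude of a radiation-pressure torque can be sketched as follows. The function and its arguments are illustrative assumptions; the paper's diffraction-spreading model of the beam is not reproduced here:

```python
C = 299_792_458.0  # speed of light, m/s

def radiation_torque(power_received_w, moment_arm_m, reflective=False):
    """Torque from radiation pressure on a flat surface:
    force = P/c for a fully absorbing surface, 2P/c for a perfectly
    reflecting one, multiplied by the moment arm about the centre
    of mass.  Illustrative sketch only."""
    force = power_received_w / C * (2.0 if reflective else 1.0)
    return force * moment_arm_m
```

With a few watts received and a centimetre-scale moment arm typical of a cubesat, this expression lands in the 10−11 to 10−8 Nm range the abstract quotes, which makes the comparison with magnetic and gravity-gradient torques plausible.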
Valotto, Gabrio; Rampazzo, Giancarlo; Visin, Flavia; Gonella, Francesco; Cattaruzza, Elti; Glisenti, Antonella; Formenton, Gianni; Tieppo, Paulo
2015-12-01
Road dust is a non-exhaust source of atmospheric particulate matter via re-suspension. It is composed of particles originating from natural sources as well as from other non-exhaust sources such as tire, brake, and asphalt wear. The discrimination between atmospheric particles directly emitted by abrasion processes and those related to re-suspension is therefore an open issue, as the percentage contribution of non-exhaust emissions is becoming more considerable, due also to recent policy actions and technological upgrades in the automotive field focused on the reduction of exhaust emissions. In this paper, road dust collected along the bridge that connects Venice (Italy) to the mainland is characterized with a multi-technique approach in order to determine its composition depending on environmental as well as traffic-related conditions. Six pollutant sources of road dust particles were identified by cluster analysis: brake, railway, tire, asphalt, soil + marine, and mixed combustion. Considering the lack of information on this matrix in this area, this study is intended to provide useful information for the future identification of the road dust re-suspension source in atmospheric particulate matter.
Radionuclides distribution coefficient of soil to soil-solution
International Nuclear Information System (INIS)
1990-06-01
The present book addresses various issues related to the coefficient of radionuclide distribution between soil and soil solution. It consists of six sections and two appendices. The second section, following an introductory one, gives the definition of the coefficient and the procedure for its calculation. The third section deals with the application of the distribution coefficient to the prediction of radionuclide movement through soil. Various methods for measuring the coefficient are described in the fourth section. The next section discusses a variety of physical and chemical factors that can affect the distribution coefficient. Measurements of the coefficient for different types of soils are listed in the sixth section. An appendix shows various models that can be helpful in applying the distribution coefficient to radionuclides moving from soil into agricultural plants. (N.K.)
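The batch-method definition of the distribution coefficient, Kd = (activity sorbed per kg of soil) / (activity concentration remaining in solution), can be sketched directly. The function name and the example numbers are illustrative, not from the book:

```python
def kd_batch(c0, c, volume_l, mass_kg):
    """Distribution coefficient Kd [L/kg] from a batch experiment:
    c0, c  -- initial and equilibrium activity concentration in
              solution [Bq/L]
    volume_l, mass_kg -- solution volume [L] and soil mass [kg]."""
    sorbed_per_kg = (c0 - c) * volume_l / mass_kg  # Bq per kg of soil
    return sorbed_per_kg / c                        # divide by Bq/L left
```

For example, if a 0.2 L solution drops from 100 to 20 Bq/L over 10 g of soil, Kd comes out to 80 L/kg; a large Kd means the nuclide sorbs strongly and moves slowly through the soil column.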
Olson, Michael J; Faria, Ellen C; Hayes, Eileen P; Jolly, Robert A; Barle, Ester Lovsin; Molnar, Lance R; Naumann, Bruce D; Pecquet, Alison M; Shipp, Bryan K; Sussman, Robert G; Weideman, Patricia A
2016-08-01
This manuscript centers on communication with key stakeholders of the concepts and program goals involved in the application of health-based pharmaceutical cleaning limits. Implementation of health-based cleaning limits, as distinct from other standards such as 1/1000th of the lowest clinical dose, is a concept recently introduced into regulatory domains. While there is a great deal of technical detail in the written framework underpinning the use of Acceptable Daily Exposures (ADEs) in cleaning (for example ISPE, 2010; Sargent et al., 2013), little is available to explain how to practically create a program which meets regulatory needs while also fulfilling good manufacturing practice (GMP) and other expectations. The lack of a harmonized approach for program implementation and communication across stakeholders can ultimately foster inappropriate application of these concepts. Thus, this period in time (2014-2017) could be considered transitional with respect to influencing best practice related to establishing health-based cleaning limits. Suggestions offered in this manuscript are intended to encourage full and accurate communication regarding both scientific and administrative elements of health-based ADE values used in pharmaceutical cleaning practice. This is a large and complex effort that requires: 1) clearly explaining key terms and definitions, 2) identification of stakeholders, 3) assessment of stakeholders' subject matter knowledge, 4) formulation of key messages fit to stakeholder needs, 5) identification of effective and timely means for communication, and 6) allocation of time, energy, and motivation for initiating and carrying through with communications. Copyright © 2016 Elsevier Inc. All rights reserved.
Jazebizadeh, Hooman; Tabeshian, Maryam; Taheran Vernoosfaderani, Mahsa
2010-11-01
Although more than half a century has passed since space technology was first developed, developing countries are just beginning to enter the arena, focusing mainly on educating professionals. Space technology is by nature interdisciplinary, costly, and fast-developing; moreover, a fruitful education system needs to remain dynamic if the quality of education is the main concern, which makes it a complicated system. This paper makes use of the systems engineering approach and the experiences of developed countries in this area, while incorporating the needs of the developing countries, to devise a comprehensive program in space engineering at the Master's level. The needs of the developing countries with regard to space technology education may broadly be put into two categories: to raise their knowledge of space technology, which requires hard work and teamwork skills, and to transfer and domesticate space technology while minimizing costs and maximizing effectiveness. The requirements of such a space education system, which include research facilities, courses, and student projects, are then defined using a model drawn from the space education systems of universities in North America and Europe, modified to include the above-mentioned needs. Three design concepts have been considered and synthesized through functional analysis. The first is Modular and Detail Study, which helps students specialize in a particular area of space technology. The second is Integrated and Interdisciplinary Study, which focuses on the understanding and development of space systems. The third concept, which has been chosen for the purpose of this study, is a combination of the other two, categorizing the required curriculum into seven modules, setting aside space applications. This helps students not only to specialize in one of these modules but also to get hands-on experience in a real space project through participation in summer group
A Total-Evidence Approach to Dating with Fossils, Applied to the Early Radiation of the Hymenoptera
Ronquist, Fredrik; Klopfstein, Seraina; Vilhelmsen, Lars; Schulmeister, Susanne; Murray, Debra L.; Rasnitsyn, Alexandr P.
2012-01-01
Abstract Phylogenies are usually dated by calibrating interior nodes against the fossil record. This relies on indirect methods that, in the worst case, misrepresent the fossil information. Here, we contrast such node dating with an approach that includes fossils along with the extant taxa in a Bayesian total-evidence analysis. As a test case, we focus on the early radiation of the Hymenoptera, mostly documented by poorly preserved impression fossils that are difficult to place phylogenetically. Specifically, we compare node dating using nine calibration points derived from the fossil record with total-evidence dating based on 343 morphological characters scored for 45 fossil (4-20% complete) and 68 extant taxa. In both cases we use molecular data from seven markers (∼5 kb) for the extant taxa. Because it is difficult to model speciation, extinction, sampling, and fossil preservation realistically, we develop a simple uniform prior for clock trees with fossils, and we use relaxed clock models to accommodate rate variation across the tree. Despite considerable uncertainty in the placement of most fossils, we find that they contribute significantly to the estimation of divergence times in the total-evidence analysis. In particular, the posterior distributions on divergence times are less sensitive to prior assumptions and tend to be more precise than in node dating. The total-evidence analysis also shows that four of the seven Hymenoptera calibration points used in node dating are likely to be based on erroneous or doubtful assumptions about the fossil placement. With respect to the early radiation of Hymenoptera, our results suggest that the crown group dates back to the Carboniferous, ∼309 Ma (95% interval: 291-347 Ma), and diversified into major extant lineages much earlier than previously thought, well before the Triassic. [Bayesian inference; fossil dating; morphological evolution; relaxed clock; statistical phylogenetics.] PMID:22723471
Hussain, Lal
2018-06-01
Epilepsy is a neurological disorder caused by abnormal excitability of neurons in the brain. Research shows that brain activity is monitored through the electroencephalogram (EEG) of patients suffering from seizures in order to detect epileptic seizures. The performance of EEG-based epilepsy detection depends on the feature extraction strategy. In this research, we extracted features using various strategies based on time- and frequency-domain characteristics, nonlinear measures, wavelet-based entropy, and a few statistical features. A deeper study was undertaken using novel machine learning classifiers, considering multiple factors. The support vector machine kernels were evaluated based on multiclass kernel and box constraint level. Likewise, for K-nearest neighbors (KNN), we varied the distance metrics, neighbor weights, and number of neighbors. Similarly, for decision trees we tuned the parameters based on maximum splits and split criteria, and ensemble classifiers were evaluated with different ensemble methods and learning rates. Tenfold cross-validation was employed for training/testing, and performance was evaluated in terms of TPR, NPR, PPV, accuracy, and AUC. In this research, a deeper analysis was performed using diverse feature extraction strategies and robust machine learning classifiers with more advanced optimization options. The support vector machine with linear kernel and KNN with city-block distance metric gave the overall highest accuracy of 99.5%, which was higher than using the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM. Additionally, K-nearest neighbors with inverse squared distance weighting gave higher performance for different numbers of neighbors. Finally, in distinguishing postictal heart rate oscillations from epileptic ictal subjects, the highest performance of 100% was obtained using different machine learning classifiers.
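One of the tuned ingredients above, a k-nearest-neighbour vote under the city-block (Manhattan) distance, can be sketched in a few lines. This is an illustrative re-implementation with assumed names, not the study's code:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-nearest-neighbour classifier using the city-block
    (Manhattan) distance metric; majority vote over the k nearest
    training points."""
    preds = []
    for x in X_test:
        d = np.abs(X_train - x).sum(axis=1)    # city-block distances
        nearest = y_train[np.argsort(d)[:k]]   # labels of k nearest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])  # majority vote
    return np.array(preds)
```

In practice one would use a library implementation with efficient neighbour search and cross-validated choices of `k`, metric, and weighting, which is what the abstract's tuning refers to.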
de Juan, Silvia; Gelcich, Stefan; Ospina-Alvarez, Andres; Perez-Matus, Alejandro; Fernandez, Miriam
2015-11-15
Ecosystem-based management implies understanding feedbacks between ecosystems and society. Such understanding can be approached with the Drivers-Pressures-State change-Impacts-Response (DPSIR) framework, incorporating stakeholders' preferences for ecosystem services to assess impacts on society. This framework was adapted to six locations on the central coast of Chile, where artisanal fisheries coexist with an increasing influx of tourists, and a set of fisheries management areas alternate with open access areas and a no-take Marine Protected Area (MPA). The ecosystem services in the study area were quantified using biomass and species richness in intertidal and subtidal areas as biological indicators. The demand for ecosystem services was elicited through interviews with the principal groups of users. Our results evidenced decreasing landings and a negative perception among fishermen of temporal trends in catches. The occurrence of recreational fishing was negligible, although the consumption of seafood by tourists was relatively high. Nevertheless, the consumption of organisms associated with the study system was low, which could be linked, amongst other factors, to decreasing catches. The comparison of biological indicators between management regimes yielded variable results, but a positive effect of the management areas and the MPA on some of the metrics was observed. The prioritising of ecosystem attributes by tourists was highly homogeneous across the six locations, with "scenic beauty" consistently selected as the preferred attribute, followed by "diversity". The DPSIR framework illustrated the complex interactions existing in these locations, with weak linkages between society's priorities, existing management objectives, and the state of biological communities. Overall, this work improved our knowledge of relations between components of coastal areas in central Chile, of paramount importance to advance towards ecosystem-based management in the area. Copyright © 2015
Quadrature formulas for Fourier coefficients
Bojanov, Borislav
2009-09-01
We consider quadrature formulas of high degree of precision for the computation of the Fourier coefficients in expansions of functions with respect to a system of orthogonal polynomials. In particular, we show the uniqueness of a multiple node formula for the Fourier-Tchebycheff coefficients given by Micchelli and Sharma and construct new Gaussian formulas for the Fourier coefficients of a function, based on the values of the function and its derivatives. © 2009 Elsevier B.V. All rights reserved.
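For orientation, the simplest Gaussian rule for Fourier coefficients in a Chebyshev expansion (the classical Gauss-Chebyshev rule, not the multiple-node or derivative-based formulas the paper constructs) can be sketched as:

```python
import numpy as np

def chebyshev_fourier_coeff(f, j, n=32):
    """Approximate the Fourier-Chebyshev coefficient
        a_j = (2/pi) * integral_{-1}^{1} f(x) T_j(x) / sqrt(1-x^2) dx
    with the n-point Gauss-Chebyshev rule: nodes x_k = cos((2k-1)pi/(2n)),
    equal weights pi/n.  Exact when f*T_j is a polynomial of degree
    < 2n.  (For j = 0 this convention returns twice the mean.)"""
    k = np.arange(1, n + 1)
    theta = (2 * k - 1) * np.pi / (2 * n)
    # T_j(cos(theta)) = cos(j*theta), so no polynomial evaluation needed
    return (2.0 / n) * np.sum(f(np.cos(theta)) * np.cos(j * theta))
```

Feeding it f = T_2 recovers a_2 = 1 and a_1 = 0 to machine precision, which is the orthogonality property the higher-precision formulas in the paper build on.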
Measuring of heat transfer coefficient
DEFF Research Database (Denmark)
Henningsen, Poul; Lindegren, Maria
Subtask 3.4 Measuring of heat transfer coefficient. Subtask 3.4.1 Design and setting up of tests to measure heat transfer coefficient. Objective: Complementary testing methods, together with the relevant experimental equipment, are to be designed by the two partners involved in order to measure the heat transfer coefficient for a wide range of interface conditions in hot and warm forging processes. Subtask 3.4.2 Measurement of heat transfer coefficient. The objective of subtask 3.4.2 is to determine heat transfer values for different interface conditions reflecting those typically operating in hot…
Directory of Open Access Journals (Sweden)
Benjamin Sauer
2014-09-01
the onset of primary breakup. The qualitative characterization of the breakup for OP 1 and OP 2 yields the distinction of stretched-ligament breakup for the former and torn-sheet breakup for the latter OP. The breakup time for OP 1 is longer than for OP 2. This study proves the applicability of the eDNS concept for investigating breakup processes, as the transient nature of the phase interface behavior can be captured. The approach offers the potential of simulating realistic annular, highly swirled airblast atomizer geometries under realistic conditions.
Olguin, Marcela; Wayson, Craig; Fellows, Max; Birdsey, Richard; Smyth, Carolyn E.; Magnan, Michael; Dugan, Alexa J.; Mascorro, Vanessa S.; Alanís, Armando; Serrano, Enrique; Kurz, Werner A.
2018-03-01
The Paris Agreement of the United Nations Framework Convention on Climate Change calls for a balance of anthropogenic greenhouse gas emissions and removals in the latter part of this century. Mexico indicated in its Intended Nationally Determined Contribution and its Climate Change Mid-Century Strategy that the land sector will contribute to meeting GHG emission reduction goals. Since 2012, the Mexican government, through its National Forestry Commission and with international financial and technical support, has been developing carbon dynamics models to explore climate change mitigation options in the forest sector. Following a systems approach, here we assess the biophysical mitigation potential of forest ecosystems, harvested wood products, and their substitution benefits (i.e. the change in emissions resulting from substituting wood for more emissions-intensive products and fossil fuels) for policy alternatives considered by the Mexican government, such as a net zero deforestation rate and sustainable forest management. We used available analytical frameworks (the Carbon Budget Model of the Canadian Forest Sector and a harvested wood products model), parameterized with local input data in two contrasting Mexican states. Using information from the National Forest Monitoring System (e.g. forest inventories, remote sensing, disturbance data), we demonstrate that activities aimed at reaching a net-zero deforestation rate can yield significant CO2e mitigation benefits by 2030 and 2050 relative to a baseline ('business as usual') scenario; if combined with increasing forest harvest to produce long-lived products and substitute more energy-intensive materials, emissions reductions could also provide other co-benefits (e.g. jobs, illegal logging reduction). We concluded that the relative impact of mitigation activities is locally dependent, suggesting that mitigation strategies should be designed and implemented at sub-national scales. We were also encouraged about the
Lee, Kil Yong; Burnett, William C
A simple method for the direct determination of the air-loop volume in a RAD7 system as well as the radon partition coefficient was developed allowing for an accurate measurement of the radon activity in any type of water. The air-loop volume may be measured directly using an external radon source and an empty bottle with a precisely measured volume. The partition coefficient and activity of radon in the water sample may then be determined via the RAD7 using the determined air-loop volume. Activity ratios instead of absolute activities were used to measure the air-loop volume and the radon partition coefficient. In order to verify this approach, we measured the radon partition coefficient in deionized water in the temperature range of 10-30 °C and compared the values to those calculated from the well-known Weigel equation. The results were within 5 % variance throughout the temperature range. We also applied the approach for measurement of the radon partition coefficient in synthetic saline water (0-75 ppt salinity) as well as tap water. The radon activity of the tap water sample was determined by this method as well as the standard RAD-H2O and BigBottle RAD-H2O. The results have shown good agreement between this method and the standard methods.
Determination of air-loop volume and radon partition coefficient for measuring radon in water sample
International Nuclear Information System (INIS)
Kil Yong Lee; Burnett, W.C.
2013-01-01
A simple method for the direct determination of the air-loop volume in a RAD7 system as well as the radon partition coefficient was developed allowing for an accurate measurement of the radon activity in any type of water. The air-loop volume may be measured directly using an external radon source and an empty bottle with a precisely measured volume. The partition coefficient and activity of radon in the water sample may then be determined via the RAD7 using the determined air-loop volume. Activity ratios instead of absolute activities were used to measure the air-loop volume and the radon partition coefficient. In order to verify this approach, we measured the radon partition coefficient in deionized water in the temperature range of 10-30 °C and compared the values to those calculated from the well-known Weigel equation. The results were within 5 % variance throughout the temperature range. We also applied the approach for measurement of the radon partition coefficient in synthetic saline water (0-75 ppt salinity) as well as tap water. The radon activity of the tap water sample was determined by this method as well as the standard RAD-H2O and BigBottle RAD-H2O. The results have shown good agreement between this method and the standard methods. (author)
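The Weigel equation referred to in the records above gives the radon water/air partition (Ostwald) coefficient as a function of temperature; a direct transcription (to the best of my knowledge of the published form, with temperature in degrees Celsius):

```python
import math

def weigel_partition_coefficient(temp_c):
    """Radon water/air partition (Ostwald) coefficient from the
    Weigel equation, k = 0.105 + 0.405 * exp(-0.0502 * T),
    with T the water temperature in deg C."""
    return 0.105 + 0.405 * math.exp(-0.0502 * temp_c)
```

The coefficient decreases with temperature (radon is less soluble in warm water), which is why the verification above sweeps the 10-30 °C range.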
Lattice cell diffusion coefficients. Definitions and comparisons
International Nuclear Information System (INIS)
Hughes, R.P.
1980-01-01
Definitions of equivalent diffusion coefficients for regular lattices of heterogeneous cells have been given by several authors. The paper begins by reviewing these different definitions and unifying their derivation. This unification makes clear how accurately each definition (together with appropriate cross-section definitions to preserve the eigenvalue) represents the individual reaction rates within the cell. The approach can be extended to include asymmetric cells; whereas before the buckling describing the macroscopic flux shape was real, here it is found to be complex, and a neutron ''drift'' coefficient as well as a diffusion coefficient is necessary to reproduce the macroscopic flux shape. The numerical calculation of the various diffusion coefficients requires the solution of equations similar to the ordinary transport equation for an infinite lattice. Traditional reactor physics codes are not sufficiently flexible to solve these equations in general; however, calculations in certain simple cases are presented and the theoretical results quantified. In difficult geometries, Monte Carlo techniques can be used to calculate an effective diffusion coefficient. These methods relate to those already described provided that correlation effects between different generations of neutrons are included. Again, these effects are quantified in certain simple cases. (author)
The Determinants of Gini Coefficient in Iran Based on Bayesian Model Averaging
Directory of Open Access Journals (Sweden)
Mohsen Mehrara
2015-03-01
Full Text Available This paper applies the BMA approach to investigate the important variables influencing the Gini coefficient in Iran over the period 1976-2010. The results indicate that GDP growth is the most important variable affecting the Gini coefficient and has a positive influence on it. The second and third most effective variables are, respectively, the ratio of government current expenditure to GDP and the ratio of oil revenue to GDP, both of which lead to an increase in inequality. This result is consistent with rentier state theory in the Iranian economy. Injection of massive oil revenue into Iran's economy, and its high share of the state budget, leads to inefficient government spending and an increase in rent-seeking activities in the country. Economic growth is possibly a result of oil revenue in the Iranian economy, which has caused inequality in the distribution of income.
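For reference, the Gini coefficient itself (the dependent variable of the study, not part of the BMA estimation) can be computed from an income sample with the mean-difference formula; a small sketch:

```python
import numpy as np

def gini(incomes):
    """Gini coefficient of a sample of incomes: 0 = perfect equality,
    (n-1)/n = maximal inequality for a sample of size n.
    Uses the sorted-sample identity
        G = sum_i (2i - n - 1) x_(i) / (n * sum x),  i = 1..n."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * x.sum())
```

A uniform income vector gives 0; concentrating all income on one person of four gives 0.75, the sample maximum for n = 4.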
Mayer coefficients in two-dimensional Coulomb systems
International Nuclear Information System (INIS)
Speer, E.R.
1986-01-01
It is shown that, for neutral systems of particles of arbitrary charges in two dimensions, with hard cores, coefficients of the Mayer series for the pressure exist in the thermodynamic limit below certain thresholds in the temperature. The methods used here apply also to correlation functions and yield bounds on the asymptotic behavior of their Mayer coefficients
Scaling the Raman gain coefficient: Applications to Germanosilicate fibers
DEFF Research Database (Denmark)
Rottwitt, Karsten; Bromage, J.; Stentz, A.J.
2003-01-01
This paper presents a comprehensive analysis of the temperature dependence of a Raman amplifier and the scaling of the Raman gain coefficient with wavelength, modal overlap, and material composition. The temperature dependence is derived by applying a quantum theoretical description, whereas the scaling of the Raman gain coefficient is derived using a classical electromagnetic model. We also present experimental verification of our theoretical findings.
Directory of Open Access Journals (Sweden)
Hossein Ghamari-Givi
2012-10-01
Full Text Available Objective: The aim of this study was to examine the effectiveness of Applied Behavioral Analysis (ABA) therapy and the Treatment and Education of Autistic and related Communication handicapped children (TEACCH) approach on stereotyped behavior and interactional and communicational problems in autistic children. Materials & Methods: Subjects of this experimental study were all children in the Tabriz autism school in the second half of the year 1388. The sample comprised 29 children (21 boys and 8 girls) in the age range of 6-14 who were selected using a random sampling method and placed in an Applied Behavioral Analysis group (8 boys and 2 girls), a Treatment-Education group (9 boys and 1 girl), and a control group (4 boys and 5 girls). The two scales applied were the Modified Checklist for Autism in Toddlers and the Gilliam Autism Rating Scale. The data were analyzed using analysis of covariance. Results: The results showed that the means of the behavioral problem indicators under both the ABA and TEACCH methods were reduced significantly in comparison with the control group (P<0.01). In comparing ABA therapy with the TEACCH method, the decline in the mean scores of communication problems was significant and in favour of ABA therapy (P<0.05). Conclusion: Although both ABA therapy and the TEACCH method were effective in reducing symptoms of behavioral problems, Applied Behavioral Analysis, being more effective, is suggested as the therapeutic approach of choice.
Correlation Coefficients: Appropriate Use and Interpretation.
Schober, Patrick; Boer, Christa; Schwarte, Lothar A
2018-05-01
Correlation in the broadest sense is a measure of an association between variables. In correlated data, the change in the magnitude of 1 variable is associated with a change in the magnitude of another variable, either in the same (positive correlation) or in the opposite (negative correlation) direction. Most often, the term correlation is used in the context of a linear relationship between 2 continuous variables and expressed as Pearson product-moment correlation. The Pearson correlation coefficient is typically used for jointly normally distributed data (data that follow a bivariate normal distribution). For nonnormally distributed continuous data, for ordinal data, or for data with relevant outliers, a Spearman rank correlation can be used as a measure of a monotonic association. Both correlation coefficients are scaled such that they range from -1 to +1, where 0 indicates that there is no linear or monotonic association, and the relationship gets stronger and ultimately approaches a straight line (Pearson correlation) or a constantly increasing or decreasing curve (Spearman correlation) as the coefficient approaches an absolute value of 1. Hypothesis tests and confidence intervals can be used to address the statistical significance of the results and to estimate the strength of the relationship in the population from which the data were sampled. The aim of this tutorial is to guide researchers and clinicians in the appropriate use and interpretation of correlation coefficients.
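The two coefficients described above can be sketched in a few lines of numpy; this illustrative implementation ignores tie handling in the Spearman ranks (library routines average tied ranks):

```python
import numpy as np

def pearson(x, y):
    """Pearson product-moment correlation: linear association,
    appropriate for (roughly) jointly normally distributed data."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum())

def spearman(x, y):
    """Spearman rank correlation: the Pearson correlation of the
    ranks, a measure of monotonic association.  No tie handling."""
    rank = lambda v: np.argsort(np.argsort(v)) + 1
    return pearson(rank(np.asarray(x)), rank(np.asarray(y)))
```

A strictly monotonic but nonlinear relationship yields a Spearman coefficient of exactly 1 while the Pearson coefficient falls below 1, which is the distinction the tutorial draws.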
Object detection by correlation coefficients using azimuthally averaged reference projections.
Nicholson, William V
2004-11-01
A method of computing correlation coefficients for object detection that takes advantage of azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology, involving the detection of projection views of biological macromolecules in electron micrographs, are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms the cross-correlation function and the local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in the detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
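Azimuthal averaging of a reference projection, the preprocessing step the method relies on, amounts to averaging pixel values in rings around the image centre. A simple nearest-bin version, assuming a square input (illustrative only, not the paper's implementation):

```python
import numpy as np

def azimuthal_average(img):
    """Average a square image over azimuth around its centre,
    returning a 1-D radial profile (nearest-integer radius bins)."""
    n = img.shape[0]
    y, x = np.indices(img.shape)
    r = np.hypot(x - (n - 1) / 2, y - (n - 1) / 2).astype(int)
    counts = np.bincount(r.ravel())                      # pixels per ring
    sums = np.bincount(r.ravel(), weights=img.ravel())   # sum per ring
    return sums / counts
```

Because the averaged reference is rotationally symmetric, correlating against it removes the need to search over in-plane rotations, which is what makes the azimuthally averaged approaches computationally attractive.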
Transport coefficients of strongly interacting matter
International Nuclear Information System (INIS)
Heckmann, Klaus
2011-01-01
In this thesis, we investigate the dissipative transport phenomena of strongly interacting matter. The special interest is in the shear viscosity and its value divided by entropy density. The performed calculations are based on effective models for Quantum Chromodynamics, mostly focused on the 2-flavor Nambu-Jona-Lasinio model. This allows us to study the hadronic sector as well as the quark sector within one single model. We expand the models up to next-to-leading order in inverse numbers of colors. We present different possibilities of calculating linear transport coefficients and give an overview over qualitative properties as well as over recent ideas concerning ideal fluids. As present methods are not able to calculate the quark two-point function in Minkowski space-time in the self-consistent approximation scheme of the Nambu-Jona-Lasinio model, a new method for this purpose is developed. This self-energy parametrization method is applied to the expansion scheme, yielding the quark spectral function with meson back-coupling effects. The usage of this spectral function in the transport calculation is only one result of this work. We also test the application of different transport approaches in the NJL model, and find an interesting behavior of the shear viscosity at the critical end point of the phase diagram. We also use the NJL model to calculate the viscosity of a pion gas in the dilute regime. After an analysis of other models for pions and their interaction, we find that the NJL-result leads to an important modification of transport properties in comparison with the calculations which purely rely on pion properties in the vacuum. (orig.)
Directory of Open Access Journals (Sweden)
Seppo Väyrynen
2006-01-01
A research and development (R&D) approach has been applied to video telephony (VT) in northern Finland since 1994 by broad consortia. The focus has been on the considerable involvement of ergonomics within the engineering and implementation of VT. This multidisciplinary participatory ergonomic R&D approach (PERDA) is described briefly, in general and through two cases. The user-centeredness should be discernible in this sociotechnical systemic entity. A consortium—comprising mainly manufacturers, individual and organizational users of technological products, and R&D organizations—serves as a natural context for product development. VT has been considered to have much potential for enhancing (multimedia) interaction and effective multimodal communication, thereby facilitating many activities of everyday life and work. An assessment of the VT system, called HomeHelper, involved older citizens, as clients or customers, and the staff of social, health, and other services.
Coefficient of restitution of model repaired car body parts
D. Hadryś; M. Miros
2008-01-01
Purpose: The qualification of the influence of model repaired car body parts on the value of the coefficient of restitution, and the evaluation of the impact energy absorption of model repaired car body parts. Design/methodology/approach: Investigation of the plastic strain and coefficient of restitution of new and repaired model car body parts using an impact test machine at different impact energies. Findings: The results of the investigations show that the value of the coefficient of restitution changes with speed (ene...
Boot, Walter R; Sumner, Anna; Towne, Tyler J; Rodriguez, Paola; Anders Ericsson, K
2017-04-01
Video games are ideal platforms for the study of skill acquisition for a variety of reasons. However, our understanding of the development of skill and the cognitive representations that support skilled performance can be limited by a focus on game scores. We present an alternative approach to the study of skill acquisition in video games based on the tools of the Expert Performance Approach. Our investigation was motivated by a detailed analysis of the behaviors responsible for the superior performance of one of the highest scoring players of the video game Space Fortress (Towne, Boot, & Ericsson, ). This analysis revealed how certain behaviors contributed to his exceptional performance. In this study, we recruited a participant for a similar training regimen, but we collected concurrent and retrospective verbal protocol data throughout training. Protocol analysis revealed insights into strategies, errors, mental representations, and shifting game priorities. We argue that these insights into the developing representations that guided skilled performance could only have been derived easily with the tools of the Expert Performance Approach. We propose that the described approach could be applied to understand performance and skill acquisition in many different video games (and other short- to medium-term skill acquisition paradigms) and help reveal mechanisms of transfer from gameplay to other measures of laboratory and real-world performance. Copyright © 2016 Cognitive Science Society, Inc.
International Nuclear Information System (INIS)
Wu, Y.T.; Gureghian, A.B.; Sagar, B.; Codell, R.B.
1992-12-01
The Limit State approach is based on partitioning the parameter space into two parts: one in which the performance measure is smaller than a chosen value (called the limit state), and the other in which it is larger. Through a Taylor expansion at a suitable point, the partitioning surface (called the limit state surface) is approximated as either a linear or quadratic function. The success and efficiency of the limit state method depend upon choosing an optimum point for the Taylor expansion. The point in the parameter space that has the highest probability of producing the value chosen as the limit state is optimal for expansion. When the parameter space is transformed into a standard Gaussian space, the optimal expansion point, known as the Most Probable Point (MPP), has the property that its location on the Limit State surface is closest to the origin. Additionally, the projections onto the parameter axes of the vector from the origin to the MPP are the sensitivity coefficients. Once the MPP is determined and the Limit State surface approximated, formulas (see Equations 4-7 and 4-8) are available for determining the probability of the performance measure being less than the limit state. By choosing a succession of limit states, the entire cumulative distribution of the performance measure can be determined. Methods for determining the MPP and also for improving the estimate of the probability are discussed in this report.
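For a linear limit state, the basic computation (locate the MPP, take the reliability index as its distance from the origin, evaluate a Gaussian tail) can be illustrated with toy numbers; the weights and limit state below are hypothetical, and Equations 4-7 and 4-8 of the report are not reproduced here:

```python
import math
import numpy as np

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def linear_limit_state(w, c):
    """P[g(U) < 0] for a linear limit state g(u) = c + w.u, with U standard
    Gaussian. The MPP is the point on g = 0 closest to the origin; the
    reliability index beta is its distance from the origin."""
    w = np.asarray(w, dtype=float)
    beta = c / np.linalg.norm(w)     # assumes c > 0 (origin in the safe region)
    u_mpp = -c * w / (w @ w)         # Most Probable Point
    return norm_cdf(-beta), beta, u_mpp

# toy performance measure in standard Gaussian space: g(u) = 3 - u1 - u2
p_fail, beta, u_mpp = linear_limit_state([-1.0, -1.0], 3.0)
print(beta, u_mpp, round(p_fail, 4))
# the components of u_mpp are the sensitivity coefficients described above
```

Repeating this for a succession of limit-state values traces out the cumulative distribution of the performance measure, exactly as the abstract describes.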
Briggs, Martin A.; Day-Lewis, Frederick D.; Ong, John B.; Harvey, Judson W.; Lane, John W.
2014-01-01
Models of dual-domain mass transfer (DDMT) are used to explain anomalous aquifer transport behavior such as the slow release of contamination and solute tracer tailing. Traditional tracer experiments to characterize DDMT are performed at the flow path scale (meters), which inherently incorporates heterogeneous exchange processes; hence, estimated “effective” parameters are sensitive to experimental design (i.e., duration and injection velocity). Recently, electrical geophysical methods have been used to aid in the inference of DDMT parameters because, unlike traditional fluid sampling, electrical methods can directly sense less-mobile solute dynamics and can target specific points along subsurface flow paths. Here we propose an analytical framework for graphical parameter inference based on a simple petrophysical model explaining the hysteretic relation between measurements of bulk and fluid conductivity arising in the presence of DDMT at the local scale. Analysis is graphical and involves visual inspection of hysteresis patterns to (1) determine the size of paired mobile and less-mobile porosities and (2) identify the exchange rate coefficient through simple curve fitting. We demonstrate the approach using laboratory column experimental data, synthetic streambed experimental data, and field tracer-test data. Results from the analytical approach compare favorably with results from calibration of numerical models and also independent measurements of mobile and less-mobile porosity. We show that localized electrical hysteresis patterns resulting from diffusive exchange are independent of injection velocity, indicating that repeatable parameters can be extracted under varied experimental designs, and these parameters represent the true intrinsic properties of specific volumes of porous media of aquifers and hyporheic zones.
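The hysteresis the authors exploit can be reproduced with a minimal single-rate DDMT sketch: a less-mobile concentration that lags the mobile one makes bulk conductivity trace a loop against fluid conductivity. The porosities, rate coefficient, and pulse shape below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Single-rate dual-domain mass transfer: the less-mobile concentration lags
# the mobile one, so bulk vs. fluid conductivity shows hysteresis.
theta_m, theta_im = 0.25, 0.10   # mobile / less-mobile porosity (illustrative)
alpha = 0.05                     # exchange rate coefficient [1/s] (illustrative)
dt, n = 1.0, 4000

t = np.arange(n) * dt
c_m = np.exp(-0.5 * ((t - 600.0) / 150.0) ** 2)   # imposed mobile-domain pulse
c_im = np.zeros(n)
for k in range(1, n):
    # explicit Euler step of dc_im/dt = alpha * (c_m - c_im)
    c_im[k] = c_im[k - 1] + dt * alpha * (c_m[k - 1] - c_im[k - 1])

sigma_fluid = c_m                              # proportional to mobile solute
sigma_bulk = theta_m * c_m + theta_im * c_im   # proportional to total solute

# hysteresis: at equal fluid conductivity, bulk conductivity is higher on the
# falling limb (less-mobile domain still holds mass) than on the rising limb
rise = np.argmin(np.abs(sigma_fluid[:600] - 0.5))
fall = 600 + np.argmin(np.abs(sigma_fluid[600:] - 0.5))
print(sigma_bulk[fall] > sigma_bulk[rise])  # True
```

The width of the simulated loop grows with the less-mobile porosity and shrinks as the exchange rate coefficient increases, which is the qualitative basis of the graphical inference described in the abstract.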
Teruel, Jose R; Goa, Pål E; Sjøbakk, Torill E; Østlie, Agnes; Fjøsne, Hans E; Bathen, Tone F
2016-11-01
Purpose: To evaluate the relative change of the apparent diffusion coefficient (ADC) at low- and medium-b-value regimens as a surrogate marker of microcirculation, to study its correlation with dynamic contrast agent-enhanced (DCE) magnetic resonance (MR) imaging-derived parameters, and to assess its potential for differentiation between malignant and benign breast tumors. Materials and Methods: Ethics approval and informed consent were obtained. From May 2013 to June 2015, 61 patients diagnosed with either malignant or benign breast tumors were prospectively recruited. All patients were scanned with a 3-T MR imager, including diffusion-weighted imaging (DWI) and DCE MR imaging. Parametric analysis of DWI and DCE MR imaging was performed, including a proposed marker, relative enhanced diffusivity (RED). Spearman correlation was calculated between DCE MR imaging and DWI parameters, and the potential of the different DWI-derived parameters for differentiation between malignant and benign breast tumors was analyzed by dividing the sample into equally sized training and test sets. Optimal cut-off values were determined with receiver operating characteristic curve analysis in the training set, which were then used to evaluate the independent test set. Results: RED had a Spearman rank correlation of 0.61 with the initial area under the curve calculated from DCE MR imaging. Furthermore, RED differentiated cancers from benign tumors with an overall accuracy of 90% (27 of 30) on the test set, with 88.2% (15 of 17) sensitivity and 92.3% (12 of 13) specificity. Conclusion: This study presents promising results introducing a simplified approach to assess results from a DWI protocol sensitive to the intravoxel incoherent motion effect by using only three b values. This approach could potentially aid in the differentiation, characterization, and monitoring of breast pathologies. © RSNA, 2016 Online supplemental material is available for this article.
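The exact definition of RED is given in the article; the general idea of comparing a perfusion-sensitive low-b ADC against a medium-b ADC from only three b values can be sketched as follows. The b values, IVIM-like tissue parameters, and the ratio form of RED used here are assumptions for illustration only:

```python
import numpy as np

def adc(s1, s2, b1, b2):
    """Monoexponential apparent diffusion coefficient from two signal values."""
    return np.log(s1 / s2) / (b2 - b1)

# synthetic IVIM-like signal: a perfusion fraction f decays fast, tissue slowly
f, d_tissue, d_pseudo = 0.10, 1.0e-3, 10.0e-3   # mm^2/s, illustrative values
b = np.array([0.0, 200.0, 700.0])               # s/mm^2, assumed three b values
s = (1 - f) * np.exp(-b * d_tissue) + f * np.exp(-b * d_pseudo)

adc_low = adc(s[0], s[1], b[0], b[1])   # low-b regime, perfusion-sensitive
adc_med = adc(s[1], s[2], b[1], b[2])   # medium-b regime, mostly true diffusion
red = 100.0 * (adc_low - adc_med) / adc_low   # assumed ratio form, in percent
print(adc_low > adc_med)   # perfusion inflates the low-b ADC
```

The perfusion compartment decays away between the first two b values, so the low-b ADC exceeds the medium-b ADC; a relative measure built from that drop tracks microcirculation without a full IVIM fit.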
DEFF Research Database (Denmark)
Liu, Yuanrong; Chen, Weimin; Zhong, Jing
2017-01-01
The previously developed numerical inverse method was applied to determine the composition-dependent interdiffusion coefficients in single-phase finite diffusion couples. The numerical inverse method was first validated in a fictitious binary finite diffusion couple by pre-assuming four standard sets of interdiffusion coefficients. After that, the numerical inverse method was adopted in a ternary Al-Cu-Ni finite diffusion couple. Based on the measured composition profiles, the ternary interdiffusion coefficients along the entire diffusion path of the target ternary diffusion couple were obtained by using the numerical inverse approach. The comprehensive comparisons between the computations and the experiments indicate that the numerical inverse method is also applicable to high-throughput determination of the composition-dependent interdiffusion coefficients in finite diffusion couples.
Sabine absorption coefficients to random incidence absorption coefficients
DEFF Research Database (Denmark)
Jeong, Cheol-Ho
2014-01-01
Methods for converting Sabine absorption coefficients into random incidence absorption coefficients for porous absorbers are investigated. Two optimization-based conversion methods are suggested: the surface impedance estimation for locally reacting absorbers and the flow resistivity estimation for extendedly reacting absorbers. The suggested conversion methods...
Security planning an applied approach
Lincke, Susan
2015-01-01
This book guides readers through building an IT security plan. Offering a template, it helps readers to prioritize risks, conform to regulation, plan their defense and secure proprietary/confidential information. The process is documented in the supplemental online security workbook. Security Planning is designed for the busy IT practitioner, who does not have time to become a security expert, but needs a security plan now. It also serves to educate the reader about a broader set of concepts related to the security environment through the Introductory Concepts and Advanced sections. The book serv
On finding algebraic expressions for genealogical coefficients
International Nuclear Information System (INIS)
Kanyauskas, J.M.; Shimonis, V.Ch.; Rudzikas, Z.B.
1979-01-01
Analytical expressions are obtained for genealogical coefficients with one detached electron in the case of L-S coupling. A method of second quantization and the tensorial properties of the quasi-spin operator are applied. The treatment is restricted to states whose classification requires only the seniority quantum number v. Three ways of obtaining these expressions are discussed: 1. Recurrently building the wave functions of N and N-1 electrons, expressing these functions in terms of the creation-annihilation operators. 2. Recurrent summation with the use of evident, simple genealogical coefficients. 3. Using the ratios connecting the genealogical coefficients with the normalized multiplier. The data are presented in formulae and discussions. A generalization of Redmond's formula is obtained, and relatively simple algebraic expressions are given for the genealogical coefficients of equivalent electron configurations whose recurrent terms can be distinguished by introducing the seniority quantum number v.
Graphical Solution of the Monic Quadratic Equation with Complex Coefficients
Laine, A. D.
2015-01-01
There are many geometrical approaches to the solution of the quadratic equation with real coefficients. In this article it is shown that the monic quadratic equation with complex coefficients can also be solved graphically, by the intersection of two hyperbolas; one hyperbola being derived from the real part of the quadratic equation and one from…
Probabilistic optimization of safety coefficients
International Nuclear Information System (INIS)
Marques, M.; Devictor, N.; Magistris, F. de
1999-01-01
This article describes a reliability-based method for the optimization of safety coefficients defined and used in design codes. The purpose of the optimization is to determine the partial safety coefficients which minimize an objective function for sets of components and loading situations covered by a design rule. This objective function is a sum of distances between the reliability of the components designed using the safety coefficients and a target reliability. The advantage of this method is shown on the examples of the reactor vessel, a vapour pipe and the safety injection circuit. (authors)
Directory of Open Access Journals (Sweden)
Beatrice Scholtes
2015-12-01
Aim: Risk factors for child injury are multi-faceted. Social, environmental and economic factors place responsibility for prevention upon many stakeholders across traditional sectors such as health, justice, environment and education. Multi-sectoral collaboration for injury prevention is thus essential. In addition, co-benefits due to injury prevention initiatives exist. However, multi-sectoral collaboration is often difficult to establish and maintain. We present an applied approach that practitioners and policy makers at the local level can use to explore and address the multi-sectoral nature of child injury. Methods: We combined elements of the Haddon Matrix and the Lens and Telescope model to develop a new approach for practitioners and policy makers at the local level. Results: The approach offers the opportunity for diverse sectors at the local level to work together to identify their role in child injury prevention. Based on ecological injury prevention and life-course epidemiology, it encourages multi-disciplinary team building from the outset. The process has three phases: first, visualising the multi-sectoral responsibilities for child injury prevention in the local area; second, demonstrating the need for multi-sectoral collaboration and helping plan prevention activities together; and third, visualising potential co-benefits to other sectors and age groups that may arise from child injury prevention initiatives. Conclusion: The approach and process encourage inter-sectoral collaboration for child injury prevention at the local level. It is a useful addition for child injury prevention at the local level; however, testing the practicality of the approach in a real-world setting and refinement of the process would improve it further.
Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan
2016-04-01
Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith
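A sketch of the Smith-based Wald interval on toy clustered binary data follows. The variance formula shown is the commonly cited large-sample approximation for the one-way ANOVA estimator with equal cluster sizes and is an assumption here, as are the data:

```python
import math

def anova_icc(clusters):
    """One-way ANOVA estimator of the intraclass correlation for equal-size
    clusters, applied to 0/1 outcomes."""
    k = len(clusters)
    m = len(clusters[0])
    grand = sum(sum(c) for c in clusters) / (k * m)
    msb = m * sum((sum(c) / m - grand) ** 2 for c in clusters) / (k - 1)
    msw = sum(sum((x - sum(c) / m) ** 2 for x in c) for c in clusters) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw), k, m

def smith_ci(rho, k, m, z=1.96):
    """Wald interval using Smith's large-sample variance approximation
    (assumed form: 2(1-rho)^2 (1+(m-1)rho)^2 / (m(m-1)(k-1)))."""
    se = math.sqrt(2 * (1 - rho) ** 2 * (1 + (m - 1) * rho) ** 2
                   / (m * (m - 1) * (k - 1)))
    return rho - z * se, rho + z * se

# four clusters of six binary outcomes (toy data)
data = [[1, 1, 1, 0, 1, 1], [0, 0, 1, 0, 0, 0],
        [1, 0, 1, 1, 1, 0], [0, 1, 0, 0, 0, 1]]
rho, k, m = anova_icc(data)
lo, hi = smith_ci(rho, k, m)
print(round(rho, 3), (round(lo, 3), round(hi, 3)))
```

With only four clusters the interval is very wide, which illustrates the abstract's point: point estimates of the ICC from small numbers of large clusters carry substantial uncertainty that should inform sample-size planning.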
Widowati, A.; Anjarsari, P.; Zuhdan, K. P.; Dita, A.
2018-03-01
The challenges of the 21st century require innovative solutions. Education must be able to build an understanding of science learning that leads to the formation of scientific literacy in learners. This research was conducted to produce a prototype science worksheet based on the Nature of Science (NoS) within an inquiry approach and to evaluate its effectiveness for developing scientific literacy. This research followed a research and development design, based on the Four D model and the Borg & Gall model. There were 4 main phases (define, design, develop, disseminate) and additional phases (preliminary field testing, main product revision, main field testing, and operational product revision). Research subjects were junior high school students in Yogyakarta. The instruments used included a product validation questionnaire sheet and a scientific literacy test. The validation data were analyzed descriptively. The test results were analyzed by N-gain score. The results showed that the worksheet applying NoS within an inquiry-based learning approach is appropriate, assessed as excellent by experts and teachers, and that students' scientific literacy improved, with a high-category N-gain score of 0.71.
Quadrature formulas for Fourier coefficients
Bojanov, Borislav; Petrova, Guergana
2009-01-01
We consider quadrature formulas of high degree of precision for the computation of the Fourier coefficients in expansions of functions with respect to a system of orthogonal polynomials. In particular, we show the uniqueness of a multiple node
Diffusion coefficient for anomalous transport
International Nuclear Information System (INIS)
1986-01-01
A report is given on progress towards the goal of estimating the diffusion coefficient for anomalous transport. The gyrokinetic theory is used to identify the different time and length scales inherent in plasmas which exhibit anomalous transport.
Fuel Temperature Coefficient of Reactivity
Energy Technology Data Exchange (ETDEWEB)
Loewe, W.E.
2001-07-31
A method for measuring the fuel temperature coefficient of reactivity in a heterogeneous nuclear reactor is presented. The method, which is used during normal operation, requires that calibrated control rods be oscillated in a special way at a high reactor power level. The value of the fuel temperature coefficient of reactivity is found from the measured flux responses to these oscillations. Application of the method in a Savannah River reactor charged with natural uranium is discussed.
Properties of Traffic Risk Coefficient
Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan; Xue, Yu
2009-10-01
We use the model that takes into account the traffic interruption probability (Physica A 387 (2008) 6845) to study the relationship between the traffic risk coefficient and the traffic interruption probability. The analytical and numerical results show that the traffic interruption probability reduces the traffic risk coefficient and that the reduction is related to the density, which shows that this model can improve traffic security.
Directory of Open Access Journals (Sweden)
T. Aly Saandy
2015-08-01
Abstract: This article presents an analytical methodology for calculating the Steinmetz coefficient applied to the prediction of eddy current loss in a single-phase transformer. Based on electrical circuit theory, the active power consumed by the core is expressed analytically as a function of the electrical parameters, such as resistivity, and the geometrical dimensions of the core. The proposed modeling approach is established with the parallel-series duality. The required coefficient is identified from the empirical Steinmetz data based on the experimental active power expression. To verify the relevance of the model, validations by both simulations, at two different frequencies, and measurements were carried out. The obtained results are in good agreement with the theoretical approach and the practical results.
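A minimal numerical sketch of the identification idea: under the classical thin-lamination eddy-loss formula (an assumption here, standing in for the article's circuit-based expression), the coefficient recovered from losses "measured" at two frequencies matches the one used to generate them. All material values are illustrative:

```python
import numpy as np

# Classical eddy-current loss per unit volume: p = k_e * (f * B_m)^2, where
# k_e depends on resistivity and lamination thickness. Values illustrative.
rho_core = 4.5e-7      # core resistivity [ohm*m]
d = 0.35e-3            # lamination thickness [m]
k_e = (np.pi * d) ** 2 / (6.0 * rho_core)   # thin-lamination coefficient

def eddy_loss(f, b_max):
    """Eddy loss density [W/m^3] at frequency f [Hz], peak induction b_max [T]."""
    return k_e * (f * b_max) ** 2

# identify the coefficient back from losses at two frequencies, mirroring
# the article's two-frequency validation
f = np.array([50.0, 60.0])
b = 1.2
p_meas = eddy_loss(f, b)
k_fit = np.mean(p_meas / (f * b) ** 2)
print(bool(np.isclose(k_fit, k_e)))  # True
```

In practice the two measured powers would contain hysteresis and copper losses as well, so the fit would be done on the separated eddy term; the quadratic frequency dependence is what makes two frequencies sufficient.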
Diffusion coefficient calculations for cylindrical cells
International Nuclear Information System (INIS)
Lam-Hime, M.
1983-03-01
An accurate and general diffusion coefficient calculation for cylindrical cells is described using isotropic scattering integral transport theory. This method has been applied in particular to large regular lattices of graphite-moderated reactors with annular coolant channels. The cells are divided into homogeneous zones, and a zone-wise flux expansion is used to formulate a collision probability problem. The reflection of neutrons at the cell boundary is accounted for by the conservation of the neutron momentum. Benoist's definition of the uncorrected diffusion coefficient is used, and the described formulation does not neglect any effect: angular correlation terms, energy coupling non-uniformity and the anisotropy of the classical flux are taken into account exactly. Results for typical gas-graphite cells are given, showing the importance of these effects.
Nonlinear optical rectification in semiparabolic quantum wells with an applied electric field
International Nuclear Information System (INIS)
Karabulut, Ibrahim; Safak, Haluk
2005-01-01
The optical rectification (OR) in a semiparabolic quantum well with an applied electric field has been theoretically investigated. The electronic states in a semiparabolic quantum well with an applied electric field are calculated exactly, within the envelope function and displaced harmonic oscillator approach. Numerical results are presented for a typical Al{sub x}Ga{sub 1-x}As/GaAs quantum well. These results show that the applied electric field and the confining potential frequency of the semiparabolic quantum well have a great influence on the OR coefficient. Moreover, the OR coefficient also depends sensitively on the relaxation rate of the semiparabolic quantum well system.
Spiegel, Jerry M; Breilh, Jaime; Yassi, Annalee
2015-02-27
Focus on "social determinants of health" provides a welcome alternative to the bio-medical illness paradigm. However, the tendency to concentrate on the influence of "risk factors" related to the living and working conditions of individuals, rather than to examine more broadly the dynamics of the social processes that affect population health, has triggered critical reaction not only from the Global North but especially from voices in the Global South, where there is a long history of addressing questions of health equity. In this article, we elaborate on how focusing instead on the language of "social determination of health" has prompted us to attempt to apply more equity-sensitive approaches to research and related policy and praxis. In this debate, we briefly explore the epistemological and historical roots of the epidemiological approaches to health and health equity that have emerged in Latin America to consider their relevance to global discourse. In this region marked by pronounced inequity, context-sensitive concepts such as "collective health" and "critical epidemiology" have been prominent, albeit with limited acknowledgement by the Global North. We illustrate our attempts to apply a social determination approach (and the "4 S" elements of bio-Security, Sovereignty, Solidarity and Sustainability) in five projects within our research collaboration linking researchers and knowledge users in Ecuador and Canada, in diverse settings (health of healthcare workers; food systems; antibiotic resistance; vector borne disease [dengue]; and social circus with street youth). We argue that the language of social determinants lends itself to research that is more reductionist and beckons the development of different skills than would be applied when adopting the language of social determination. We conclude that this language leads to more direct analysis of the systemic factors that drive, promote and reinforce disparities, while at the same time directly considering the emancipatory
Symmetry properties of the transport coefficients of charged particles in disordered materials
International Nuclear Information System (INIS)
Baird, J.K.
1979-01-01
The transport coefficients of a charged particle in an isotropic material are shown to be even functions of the applied electric field. We discuss the limitation which this result and its consequences place upon formulae used to represent these coefficients
Gibson, Grant
2017-12-01
Within contemporary medical practice, Parkinson's disease (PD) is treated using a biomedical, neurological approach, which although bringing numerous benefits can struggle to engage with how people with PD experience the disease. A bio-psycho-social approach has not yet been established in PD; however, bio-psycho-social approaches adopted within dementia care practice could bring significant benefit to PD care. This paper summarises existing bio-psycho-social models of dementia care and explores how these models could also usefully be applied to care for PD. Specifically, drawing on the bio-psycho-social model for dementia developed by Spector and Orrell (), this paper suggests a bio-psycho-social model which could be used to inform routine care in PD. This model conceptualises PD as a trajectory, in which several interrelated fixed and tractable factors influence both PD's symptomology and the various biological and psychosocial challenges individuals will face as their disease progresses. Using an individual case study, this paper then illustrates how such a model can assist clinicians in identifying suitable interventions for people living with PD. The paper concludes by discussing how a bio-psycho-social model could be used as a tool in PD's routine care. The model also encourages the development of a theoretical and practical framework for the future development of the role of the PD specialist nurse within routine practice. A bio-psycho-social approach to Parkinson's disease provides an opportunity to move towards a holistic model of care practice which addresses a wider range of factors affecting people living with PD. The paper puts forward a framework through which PD care practice can move towards a bio-psycho-social perspective. PD specialist nurses are particularly well placed to adopt such a model.
Miyamoto, Shuichi; Atsuyama, Kenji; Ekino, Keisuke; Shin, Takashi
2018-01-01
The isolation of useful microbes is one of the traditional approaches to lead generation in drug discovery. As an effective technique for microbe isolation, we recently developed a multidimensional diffusion-based gradient culture system for microbes. To enhance the utility of the system, it is favorable to know beforehand the diffusion coefficients of nutrients, such as sugars, in the culture medium. We have, therefore, built a simple and convenient experimental system that uses agar gel to observe diffusion. Next, we performed computer simulations, based on random-walk concepts, of the experimental diffusion system and derived correlation formulas that relate observable diffusion data to diffusion coefficients. Finally, we applied these correlation formulas to our experimentally determined diffusion data to estimate the diffusion coefficients of sugars. Our values for these coefficients agree reasonably well with values published in the literature. The effectiveness of our simple technique, which has elucidated the diffusion coefficients of some molecules that are rarely reported (e.g., galactose, trehalose, and glycerol), is demonstrated by the strong correspondence between the literature values and those obtained in our experiments.
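The random-walk idea behind such simulations can be sketched in one dimension: simulate unbiased walkers, then recover the diffusion coefficient from the Einstein relation MSD = 2Dt. The step length and walker counts below are illustrative, not the study's simulation parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
n_walkers, n_steps, step = 20000, 500, 1.0e-3   # illustrative values

# unbiased 1-D random walk: each walker moves +/- step per unit time
steps = rng.choice([-step, step], size=(n_walkers, n_steps))
final = steps.sum(axis=1)

msd = np.mean(final ** 2)        # mean squared displacement at t = n_steps
d_est = msd / (2.0 * n_steps)    # Einstein relation in 1-D: MSD = 2 * D * t
d_true = step ** 2 / 2.0         # exact value for this walk
print(d_est, d_true)             # estimate agrees with theory to a few percent
```

The same relation, with MSD = 2dDt for d dimensions, links an observed spreading profile in agar to the diffusion coefficient, which is the essence of the correlation formulas the authors derive.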
Converting Sabine absorption coefficients to random incidence absorption coefficients
DEFF Research Database (Denmark)
Jeong, Cheol-Ho
2013-01-01
Conversion methods from Sabine absorption coefficients to random incidence absorption coefficients are proposed. The overestimations of the Sabine absorption coefficient are investigated theoretically based on Miki's model for porous absorbers backed by a rigid wall or an air cavity, resulting in conversion factors. Additionally, three optimizations are suggested: an optimization method for the surface impedances for locally reacting absorbers, the flow resistivity for extendedly reacting absorbers, and the flow resistance for fabrics. With four porous type absorbers, the conversion methods are validated. For absorbers backed by a rigid wall, the surface impedance optimization produces the best results, while the flow resistivity optimization also yields reasonable results. The flow resistivity and flow resistance optimization for extendedly reacting absorbers are also found to be successful. However, the theoretical conversion factors based on Miki's model...
A numerical model for boiling heat transfer coefficient of zeotropic mixtures
Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo
2017-12-01
Zeotropic mixtures never have the same liquid and vapor composition in the liquid-vapor equilibrium. Also, the bubble and dew points are separated; this gap is called the glide temperature (Tglide). These characteristics have made such mixtures suitable for cryogenic Joule-Thomson (JT) refrigeration cycles. Zeotropic mixtures as working fluids in JT cycles improve their performance by an order of magnitude. Optimization of JT cycles has earned substantial importance for cryogenic applications (e.g., gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). Heat exchanger design in those cycles is a critical point; consequently, the heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, a methodology is applied to calculate the local convective heat transfer coefficients based on the law-of-the-wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically. The temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model has been developed and locally applied to a fully developed, constant-wall-temperature, two-phase annular flow in a duct. Numerical results have been obtained using this model taking into account the continuity, momentum, and energy equations. Local heat transfer coefficient results are compared with available experimental data published by Barraza et al. (2016), and they show good agreement.
Directory of Open Access Journals (Sweden)
Tsymbaliuk Svitlana O.
2017-10-01
Full Text Available The publication is aimed at developing the applied scientific foundations and instrumentarium for implementing new tariff terms in the remuneration of employees in the budgetary sphere. On the basis of the identified problems of remuneration policy in the budgetary sphere, directions for its reform have been outlined. The importance of improving the tariff terms of remuneration, together with developing new approaches to designing a single tariff grid, has been substantiated. Scientific-methodical recommendations on how to assess the complexity of tasks and responsibilities and categorize them by wage group have been provided. It has been suggested that the complexity of tasks and responsibilities be assessed using the point-factor evaluation method, which is applied in setting main wages through grades. The factors for assessing the complexity of tasks and responsibilities for different positions and jobs in the education, science, and science-and-technology spheres have been defined, and descriptive levels for certain factors have been developed. The assignment of posts and jobs to qualifying groups based on the results of an assessment of the complexity of tasks and responsibilities has been determined. It has been concluded that use of the elaborated scientific-methodical recommendations would ensure decent remuneration, objective differentiation, transparency, and individualization of wages.
Atomic rate coefficients in a degenerate plasma
Aslanyan, Valentin; Tallents, Greg
2015-11-01
The electrons in a dense, degenerate plasma follow Fermi-Dirac statistics, which deviate significantly in this regime from the usual Maxwell-Boltzmann approach used by many models. We present methods to calculate the atomic rate coefficients for the Fermi-Dirac distribution and present a comparison of the ionization fraction of carbon calculated using both models. We have found that for densities close to solid, although the discrepancy is small for LTE conditions, there is a large divergence from the ionization fraction by using classical rate coefficients in the presence of strong photoionizing radiation. We have found that using these modified rates and the degenerate heat capacity may affect the time evolution of a plasma subject to extreme ultraviolet and x-ray radiation such as produced in free electron laser irradiation of solid targets.
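The rate coefficients themselves require integrating cross sections over the electron distribution; the sketch below only contrasts the two occupation functions the abstract compares, in dimensionless units (energies in multiples of kT, chemical potential mu), to show where the classical approach breaks down:

```python
import math

def fermi_dirac(E, mu, kT):
    """Fermi-Dirac occupation probability of a state at energy E."""
    return 1.0 / (math.exp((E - mu) / kT) + 1.0)

def maxwell_boltzmann(E, mu, kT):
    """Classical (Maxwell-Boltzmann) occupation, valid only when E - mu >> kT."""
    return math.exp(-(E - mu) / kT)

# Near the chemical potential the two distributions differ strongly
# (degenerate regime); far above it they converge.
for E in [0.5, 1.0, 2.0, 5.0]:  # energies in units of kT, with mu = 1.0
    fd = fermi_dirac(E, 1.0, 1.0)
    mb = maxwell_boltzmann(E, 1.0, 1.0)
    print(f"E = {E:>4} kT: FD = {fd:.3f}, MB = {mb:.3f}")
```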
Yoneoka, Daisuke; Henmi, Masayuki
2017-11-30
Recently, the number of clinical prediction models sharing the same regression task has increased in the medical literature. However, evidence synthesis methodologies that use the results of these regression models have not been sufficiently studied, particularly in meta-analysis settings where only regression coefficients are available. One of the difficulties lies in the differences between the categorization schemes of continuous covariates across different studies. In general, categorization methods using cutoff values are study specific across available models, even if they focus on the same covariates of interest. Differences in the categorization of covariates could lead to serious bias in the estimated regression coefficients and thus in subsequent syntheses. To tackle this issue, we developed synthesis methods for linear regression models with different categorization schemes of covariates. A 2-step approach to aggregate the regression coefficient estimates is proposed. The first step is to estimate the joint distribution of covariates by introducing a latent sampling distribution, which uses one set of individual participant data to estimate the marginal distribution of covariates with categorization. The second step is to use a nonlinear mixed-effects model with correction terms for the bias due to categorization to estimate the overall regression coefficients. Especially in terms of precision, numerical simulations show that our approach outperforms conventional methods, which only use studies with common covariates or ignore the differences between categorization schemes. The method developed in this study is also applied to a series of WHO epidemiologic studies on white blood cell counts. Copyright © 2017 John Wiley & Sons, Ltd.
Power coefficient anomaly in JOYO
Energy Technology Data Exchange (ETDEWEB)
Yamamoto, H
1980-12-15
Operation of the JOYO experimental fast reactor with the MK-I core has been divided into two phases: (1) 50 MWt power ascension and operation; and (2) 75 MWt power ascension and operation. The 50 MWt power-up tests were conducted in August 1978. In these tests, the measured reactivity loss due to power increases from 15 MWt to 50 MWt was 0.28% Δk/k, and agreed well with the predicted value of 0.27% Δk/k. The 75 MWt power ascension tests were conducted in July-August 1979. In the process of the first power increase above 50 MWt to 65 MWt, conducted on July 11, 1979, an anomalously large negative power coefficient was observed. The value was about twice the power coefficient values measured in the tests below 50 MWt. In order to reproduce the anomaly, the reactor power was decreased and again increased up to the maximum power of 65 MWt. However, the large negative power coefficient was not observed at this time. In the succeeding power increase from 65 MWt to 75 MWt, a similar anomalous power coefficient was again observed. This anomaly disappeared in the subsequent power ascensions to 75 MWt, and the magnitude of the power coefficient gradually decreased with power cycles above the 50 MWt level.
International Nuclear Information System (INIS)
Graham, Margaret C.; Oliver, Ian W.; MacKenzie, Angus B.; Ellam, Robert M.; Farmer, John G.
2008-01-01
Methods for the fractionation of aquatic colloids require careful application to ensure efficient, accurate and reproducible separations. This paper describes the novel combination of mild colloidal fractionation and characterisation methods, namely centrifugal ultrafiltration, gel electrophoresis and gel filtration along with spectroscopic (UV-visible) and elemental (Inductively Coupled Plasma-Optical Emission Spectroscopy, Inductively Coupled Plasma-Mass Spectrometry) analysis, an approach which produced highly consistent results, providing improved confidence in these methods. Application to the study of the colloidal and dissolved components of soil porewaters from one soil at a depleted uranium (DU)-contaminated site revealed uranium (U) associations with both large (100 kDa-0.2 μm) and small (3-30 kDa) humic colloids. For a nearby soil with lower organic matter content, however, association with large (100 kDa-0.2 μm) iron (Fe)-aluminium (Al) colloids in addition to an association with small (3-30 kDa) humic colloids was observed. The integrated colloid fractionation approach presented herein can now be applied with confidence to investigate U and indeed other trace metal migration in soil and aquatic systems
Pachhai, S.; Masters, G.; Laske, G.
2017-12-01
Earth's normal-mode spectra are crucial to studying the long wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients which requires that the earthquake source is known. However, it is challenging to know the source details, particularly for the large events used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle and core sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core sensitive mode (13S2). This approach explores the parameter space efficiently without any need of regularization and finds the structure coefficients which best fit the observed strips. Here, we implement a Bayesian approach to data collected for earthquakes from early 2000 onward. This approach combines the data (through likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips which allows us to retrieve proper uncertainties. Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic
Analysis of internal conversion coefficients
International Nuclear Information System (INIS)
Coursol, N.; Gorozhankin, V.M.; Yakushev, E.A.; Briancon, C.; Vylov, Ts.
2000-01-01
An extensive database has been assembled that contains the three most widely used sets of calculated internal conversion coefficients (ICC): [Hager R.S., Seltzer E.C., 1968. Internal conversion tables. K-, L-, M-shell Conversion coefficients for Z=30 to Z=103, Nucl. Data Tables A4, 1-237; Band I.M., Trzhaskovskaya M.B., 1978. Tables of gamma-ray internal conversion coefficients for the K-, L- and M-shells, 10≤Z≤104, Special Report of Leningrad Nuclear Physics Institute; Roesel F., Fries H.M., Alder K., Pauli H.C., 1978. Internal conversion coefficients for all atomic shells, At. Data Nucl. Data Tables 21, 91-289] and also includes new Dirac-Fock calculations [Band I.M. and Trzhaskovskaya M.B., 1993. Internal conversion coefficients for low-energy nuclear transitions, At. Data Nucl. Data Tables 55, 43-61]. This database is linked to a computer program to plot ICCs and their combinations (sums and ratios) as a function of Z and energy, as well as relative deviations of ICC or their combinations for any pair of tabulated data. Examples of these analyses are presented for the K-shell and total ICCs of the gamma-ray standards [Hansen H.H., 1985. Evaluation of K-shell and total internal conversion coefficients for some selected nuclear transitions, Eur. Appl. Res. Rept. Nucl. Sci. Tech. 11.6 (4) 777-816] and for the K-shell and total ICCs of high multipolarity transitions (total, K-, L-, M-shells of E3 and M3 and K-shell of M4). Experimental data sets are also compared with the theoretical values of these specific calculations
Algebraic polynomials with random coefficients
Directory of Open Access Journals (Sweden)
K. Farahmand
2002-01-01
Full Text Available This paper provides an asymptotic value for the mathematical expected number of points of inflection of a random polynomial of the form a_0(ω) + a_1(ω)(n choose 1)^{1/2} x + a_2(ω)(n choose 2)^{1/2} x^2 + … + a_n(ω)(n choose n)^{1/2} x^n when n is large. The coefficients {a_j(ω)}, j = 0, …, n, ω ∈ Ω, are assumed to be a sequence of independent normally distributed random variables with mean zero and variance one, each defined on a fixed probability space (Ω, A, Pr). A special case of dependent coefficients is also studied.
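The ensemble above can be explored empirically. The following is a minimal Monte Carlo sketch, not the paper's analytic result: it draws the binomially weighted coefficients and counts inflection points as sign changes of the second derivative on a grid (the interval [-1, 1] and the grid resolution are assumptions for illustration):

```python
import math
import random

def inflection_count(n, trials=200, grid=2000, seed=1):
    """Monte Carlo estimate of the mean number of inflection points of
    p(x) = sum_j a_j * C(n, j)**0.5 * x**j with a_j ~ N(0, 1),
    counted as sign changes of p''(x) on a grid over [-1, 1]."""
    random.seed(seed)
    total = 0
    for _ in range(trials):
        a = [random.gauss(0.0, 1.0) * math.comb(n, j) ** 0.5
             for j in range(n + 1)]
        # coefficients of p''(x): j*(j-1)*a_j for x**(j-2)
        dd = [j * (j - 1) * a[j] for j in range(2, n + 1)]
        prev = None
        for i in range(grid + 1):
            x = -1.0 + 2.0 * i / grid
            val = sum(c * x ** k for k, c in enumerate(dd))
            s = val > 0
            if prev is not None and s != prev:
                total += 1
            prev = s
    return total / trials

print(inflection_count(8))
```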
Analysis of flow coefficient in chair manufacture
Directory of Open Access Journals (Sweden)
Ivković Dragoljub
2005-01-01
Full Text Available On-time delivery is not possible without sound planning of deadlines, i.e. planning of the duration of the manufacturing process. The study of the flow coefficient enables realistic forecasting of the manufacturing process duration. This paper points to the significance of studying the flow coefficient on a scientific basis so as to determine completion dates for the manufacture of chairs made of sawn timber. Chairs are products of complex construction, often made almost completely of sawn timber as the basic material. They belong to the group of export products, so it is especially important to analyze the duration of the production cycle and the type and degree of stoppages in this type of production. A parallel method of production is applied in chair manufacture. The study shows that the value of the flow coefficient is close to one or higher in most cases. The results indicate that the percentage of interoperational stoppage is unjustifiably high, and it is proposed how to decrease the percentage of stoppages in the manufacturing process.
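The abstract does not spell out the definition; a common one treats the flow coefficient as the ratio of the actual production-cycle duration (operations plus interoperational stoppages) to the pure technological time, so that values near one indicate little waiting. A minimal sketch under that assumption, with hypothetical hours:

```python
def flow_coefficient(operation_times_h, stoppage_times_h):
    """Flow coefficient as assumed here: total cycle duration
    (operations + interoperational stoppages) divided by the pure
    technological (operation) time."""
    tech_time = sum(operation_times_h)
    cycle_time = tech_time + sum(stoppage_times_h)
    return cycle_time / tech_time

# hypothetical chair-production data (hours)
ops = [2.5, 1.0, 3.0, 1.5]   # machining, sanding, assembly, finishing
stops = [0.5, 2.0, 1.0]      # interoperational waiting
k = flow_coefficient(ops, stops)
print(f"flow coefficient = {k:.2f}")  # values above 1 reflect stoppages
```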
International Nuclear Information System (INIS)
Anon.
1980-01-01
The Physics Division research program that is dedicated primarily to applied research goals involves the interaction of energetic particles with solids. This applied research is carried out in conjunction with the basic research studies from which it evolved
Irrational "Coefficients" in Renaissance Algebra.
Oaks, Jeffrey A
2017-06-01
Argument From the time of al-Khwārizmī in the ninth century to the beginning of the sixteenth century algebraists did not allow irrational numbers to serve as coefficients. To multiply by x, for instance, the result was expressed as the rhetorical equivalent of . The reason for this practice has to do with the premodern concept of a monomial. The coefficient, or "number," of a term was thought of as how many of that term are present, and not as the scalar multiple that we work with today. Then, in sixteenth-century Europe, a few algebraists began to allow for irrational coefficients in their notation. Christoff Rudolff (1525) was the first to admit them in special cases, and subsequently they appear more liberally in Cardano (1539), Scheubel (1550), Bombelli (1572), and others, though most algebraists continued to ban them. We survey this development by examining the texts that show irrational coefficients and those that argue against them. We show that the debate took place entirely in the conceptual context of premodern, "cossic" algebra, and persisted in the sixteenth century independent of the development of the new algebra of Viète, Descartes, and Fermat. This was a formal innovation violating prevailing concepts that we propose could only be introduced because of the growing autonomy of notation from rhetorical text.
Integer Solutions of Binomial Coefficients
Gilbertson, Nicholas J.
2016-01-01
A good formula is like a good story, rich in description, powerful in communication, and eye-opening to readers. The formula presented in this article for determining the coefficients of the binomial expansion of (x + y)n is one such "good read." The beauty of this formula is in its simplicity--both describing a quantitative situation…
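The abstract does not reproduce the article's formula, but the coefficients it describes are the standard binomial coefficients C(n, k), available directly in Python's standard library:

```python
from math import comb

def expansion_coefficients(n):
    """Coefficients of the binomial expansion of (x + y)**n,
    i.e. C(n, k) for k = 0..n."""
    return [comb(n, k) for k in range(n + 1)]

print(expansion_coefficients(4))  # [1, 4, 6, 4, 1]
```

Setting x = y = 1 gives the familiar check that the coefficients of (x + y)**n sum to 2**n.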
Simulating WTP Values from Random-Coefficient Models
Maurus Rischatsch
2009-01-01
Discrete Choice Experiments (DCEs) designed to estimate willingness-to-pay (WTP) values are very popular in health economics. With increased computation power and advanced simulation techniques, random-coefficient models have gained an increasing importance in applied work as they allow for taste heterogeneity. This paper discusses the parametrical derivation of WTP values from estimated random-coefficient models and shows how these values can be simulated in cases where they do not have a kn...
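A minimal sketch of the simulation idea: draw random coefficients from their estimated distributions and form the WTP ratio for each draw. The distributions and parameter values below are hypothetical, and independent normals are assumed for simplicity (applied work often uses a log-normal price coefficient to keep its sign fixed):

```python
import random

def simulate_wtp(beta_attr_mean, beta_attr_sd, beta_price_mean, beta_price_sd,
                 draws=10000, seed=42):
    """Simulate WTP = -beta_attr / beta_price from independent normal
    random coefficients (a simplified sketch, not the paper's method)."""
    random.seed(seed)
    wtp = []
    for _ in range(draws):
        b_a = random.gauss(beta_attr_mean, beta_attr_sd)
        b_p = random.gauss(beta_price_mean, beta_price_sd)
        if abs(b_p) > 1e-6:  # guard against near-zero denominators
            wtp.append(-b_a / b_p)
    return wtp

values = simulate_wtp(0.8, 0.2, -0.4, 0.05)
values.sort()
print("median WTP:", values[len(values) // 2])  # roughly 0.8 / 0.4 = 2
```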
Sizochenko, Natalia; Rasulev, Bakhtiyor; Gajewicz, Agnieszka; Kuz'min, Victor; Puzyn, Tomasz; Leszczynski, Jerzy
2014-10-01
Many metal oxide nanoparticles are able to cause persistent stress to live organisms, including humans, when discharged to the environment. To understand the mechanism of metal oxide nanoparticles' toxicity and reduce the number of experiments, the development of predictive toxicity models is important. In this study, performed on a series of nanoparticles, the comparative quantitative-structure activity relationship (nano-QSAR) analyses of their toxicity towards E. coli and HaCaT cells were established. A new approach for representation of nanoparticles' structure is presented. For description of the supramolecular structure of nanoparticles the "liquid drop" model was applied. It is expected that a novel, proposed approach could be of general use for predictions related to nanomaterials. In addition, in our study fragmental simplex descriptors and several ligand-metal binding characteristics were calculated. The developed nano-QSAR models were validated and reliably predict the toxicity of all studied metal oxide nanoparticles. Based on the comparative analysis of contributed properties in both models the LDM-based descriptors were revealed to have an almost similar level of contribution to toxicity in both cases, while other parameters (van der Waals interactions, electronegativity and metal-ligand binding characteristics) have unequal contribution levels. In addition, the models developed here suggest different mechanisms of nanotoxicity for these two types of cells.
Non-linear Bayesian update of PCE coefficients
Litvinenko, Alexander
2014-01-06
Given: a physical system modeled by a PDE or ODE with uncertain coefficient q(ω), and a measurement operator Y(u(q), q), where u(q, ω) is the uncertain solution. Aim: to identify q(ω). The mapping from parameters to observations is usually not invertible, hence this inverse identification problem is generally ill-posed. To identify q(ω) we derive a non-linear Bayesian update from the variational problem associated with conditional expectation. To reduce the cost of the Bayesian update we offer a functional approximation, e.g. a polynomial chaos expansion (PCE). New: we apply the Bayesian update to the PCE coefficients of the random coefficient q(ω) (not to the probability density function of q).
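For readers unfamiliar with PCE, the following sketch shows the representation the update acts on: a one-dimensional expansion in probabilists' Hermite polynomials of a standard normal germ, whose mean and variance are read off the coefficients directly. The coefficient values are hypothetical; the update itself is not reproduced here:

```python
import math

def hermite_eval(coeffs, xi):
    """Evaluate u(xi) = sum_i c_i He_i(xi) with probabilists' Hermite
    polynomials, using the recurrence He_{i+1} = xi*He_i - i*He_{i-1}."""
    he_prev, he = 1.0, xi       # He_0 = 1, He_1 = xi
    total = coeffs[0] * he_prev
    if len(coeffs) > 1:
        total += coeffs[1] * he
    for i in range(1, len(coeffs) - 1):
        he_prev, he = he, xi * he - i * he_prev
        total += coeffs[i + 1] * he
    return total

coeffs = [2.0, 0.5, 0.1]  # hypothetical PCE coefficients c_0, c_1, c_2
mean = coeffs[0]          # E[He_0] = 1 and E[He_i] = 0 for i >= 1
variance = sum(c * c * math.factorial(i)
               for i, c in enumerate(coeffs) if i >= 1)  # sum c_i^2 * i!
print(mean, variance)  # 2.0 0.27
```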
A practical relation between atomic numbers and alpha coefficients
International Nuclear Information System (INIS)
Lachance, G.R.
1980-01-01
A first approximation indicates that fundamental alpha coefficients for a given analyte vary as a function of the ratio of their respective atomic numbers raised to a power. This simple rule applies mainly at the limits (i.e., when the weight fraction of analyte i, W_i, is of the order of 0.0 or 1.0) in cases of absorption and weak enhancement. The relation thus provides a means of generating coefficients for the system i-k from experimental data obtained on the system i-j, and a means of verifying experimental alphas, since arrays of coefficients must show a high degree of concordance. (author)
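For context, alpha (influence) coefficients enter XRF analysis through the Lachance-Traill correction, in which the analyte weight fraction is the measured relative intensity corrected for matrix effects. A minimal sketch with hypothetical numbers (the values of R_i, alpha_ij, and W_j below are illustrative only):

```python
def lachance_traill(R_i, alphas, W):
    """Lachance-Traill matrix correction:
    W_i = R_i * (1 + sum_j alpha_ij * W_j),
    where R_i is the measured relative intensity of analyte i,
    alphas are the influence coefficients, and W are the weight
    fractions of the matrix elements."""
    return R_i * (1.0 + sum(a * w for a, w in zip(alphas, W)))

# hypothetical binary system i-j
R_i = 0.40        # measured intensity ratio for analyte i
alpha_ij = 0.55   # assumed influence coefficient of matrix element j
W_j = 0.60
W_i = lachance_traill(R_i, [alpha_ij], [W_j])
print(f"W_i = {W_i:.3f}")
```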
Non-linear Bayesian update of PCE coefficients
Litvinenko, Alexander; Matthies, Hermann G.; Pojonk, Oliver; Rosic, Bojana V.; Zander, Elmar
2014-01-01
Given: a physical system modeled by a PDE or ODE with uncertain coefficient q(ω), and a measurement operator Y(u(q), q), where u(q, ω) is the uncertain solution. Aim: to identify q(ω). The mapping from parameters to observations is usually not invertible, hence this inverse identification problem is generally ill-posed. To identify q(ω) we derive a non-linear Bayesian update from the variational problem associated with conditional expectation. To reduce the cost of the Bayesian update we offer a functional approximation, e.g. a polynomial chaos expansion (PCE). New: we apply the Bayesian update to the PCE coefficients of the random coefficient q(ω) (not to the probability density function of q).
Sáez, Carlos; Zurriaga, Oscar; Pérez-Panadés, Jordi; Melchor, Inma; Robles, Montserrat; García-Gómez, Juan M
2016-11-01
To assess the variability in data distributions among data sources and over time through a case study of a large multisite repository as a systematic approach to data quality (DQ). Novel probabilistic DQ control methods based on information theory and geometry are applied to the Public Health Mortality Registry of the Region of Valencia, Spain, with 512 143 entries from 2000 to 2012, disaggregated into 24 health departments. The methods provide DQ metrics and exploratory visualizations for (1) assessing the variability among multiple sources and (2) monitoring and exploring changes with time. The methods are suited to big data and multitype, multivariate, and multimodal data. The repository was partitioned into 2 probabilistically separated temporal subgroups following a change in the Spanish National Death Certificate in 2009. Punctual temporal anomalies were noticed due to a punctual increment in the missing data, along with outlying and clustered health departments due to differences in populations or in practices. Changes in protocols, differences in populations, biased practices, or other systematic DQ problems affected data variability. Even if semantic and integration aspects are addressed in data sharing infrastructures, probabilistic variability may still be present. Solutions include fixing or excluding data and analyzing different sites or time periods separately. A systematic approach to assessing temporal and multisite variability is proposed. Multisite and temporal variability in data distributions affects DQ, hindering data reuse, and an assessment of such variability should be a part of systematic DQ procedures. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
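The paper's exact metrics are not reproduced in the abstract; one standard information-theoretic measure of variability between the data distributions of two sources (or two time periods) is the Jensen-Shannon divergence, sketched here for discrete distributions:

```python
import math

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (base 2, so bounded by 1) between two
    discrete probability distributions given as equal-length lists."""
    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability bins
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# identical distributions -> 0; disjoint support -> 1
print(jensen_shannon([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(jensen_shannon([1.0, 0.0], [0.0, 1.0]))  # 1.0
```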
Regional water coefficients for U.S. industrial sectors
Directory of Open Access Journals (Sweden)
Riccardo Boero
2017-12-01
Full Text Available Designing policies for water systems management requires the capability to assess the economic impacts of water availability and to effectively couple water withdrawals by human activities with natural hydrologic dynamics. At the core of any scientific approach to these issues there is the estimation of water withdrawals by industrial sectors in the form of water coefficients, which are measurements of the quantity of water withdrawn per dollar of GDP or output. In this work we focus on the contiguous United States and on the estimation of water coefficients for regional scale analyses. We first compare an established methodology for the estimation of national water coefficients with a parametric one we propose. Second, we introduce a method to estimate water coefficients at the level of ecological regions and we discuss how they reduce possible biases in regional analyses of water systems. We conclude discussing advantages and limits of regional water coefficients.
Bolton, Matthew; Moore, Imogen; Ferreira, Ana; Day, Crispin; Bolton, Derek
2016-03-01
The importance of community engagement in health is widely recognized, and key themes in UK National Institute for Health and Clinical Excellence (NICE) recommendations for enhancing community engagement are co-production and community control. This study reports an innovative approach to community engagement using the community-organizing methodology, applied in an intervention of social support to increase social capital, reduce stress and improve well-being in mothers who were pregnant and/or with infants aged 0-2 years. Professional community organizers in Citizens-UK worked with local member civic institutions in south London to facilitate social support to a group of 15 new mothers. Acceptability of the programme, adherence to principles of co-production and community control, and changes in the outcomes of interest were assessed quantitatively in a quasi-experimental design. The programme was found to be feasible and acceptable to participating mothers, and perceived by them to involve co-production and community control. There were no detected changes in subjective well-being, but there were important reductions in distress on a standard self-report measure (GHQ-12). There were increases in social capital of a circumscribed kind associated with the project. Community organizing provides a promising model and method of facilitating community engagement in health. © The Author 2015. Published by Oxford University Press on behalf of Faculty of Public Health.
Mondal, Arobendo; Kaupp, Martin
2018-04-05
A novel protocol to compute and analyze NMR chemical shifts for extended paramagnetic solids, accounting comprehensively for Fermi-contact (FC), pseudocontact (PC), and orbital shifts, is reported and applied to the important lithium-ion battery cathode materials LiFePO4 and LiCoPO4. Using an EPR-parameter-based ansatz, the approach combines periodic (hybrid) DFT computation of hyperfine and orbital-shielding tensors with an incremental cluster model for g- and zero-field-splitting (ZFS) D-tensors. The cluster model allows the use of advanced multireference wave function methods (such as CASSCF or NEVPT2). Application of this protocol shows that the 7Li shifts in the high-voltage cathode material LiCoPO4 are dominated by spin-orbit-induced PC contributions, in contrast with previous assumptions, fundamentally changing interpretations of the shifts in terms of covalency. PC contributions are smaller for the 7Li shifts of the related LiFePO4, where FC and orbital shifts dominate. The 31P shifts of both materials, finally, are almost pure FC shifts. Nevertheless, large ZFS contributions can give rise to non-Curie temperature dependences for both 7Li and 31P shifts.
Ponterotto, Joseph G; Ruckdeschel, Daniel E
2007-12-01
The present article addresses issues in reliability assessment that are often neglected in psychological research such as acceptable levels of internal consistency for research purposes, factors affecting the magnitude of coefficient alpha (alpha), and considerations for interpreting alpha within the research context. A new reliability matrix anchored in classical test theory is introduced to help researchers judge adequacy of internal consistency coefficients with research measures. Guidelines and cautions in applying the matrix are provided.
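Coefficient alpha itself is computed from the item variances and the variance of the total score; a minimal sketch using the standard classical-test-theory formula alpha = k/(k-1) * (1 - sum of item variances / variance of totals):

```python
def cronbach_alpha(items):
    """Coefficient alpha from a list of item-score columns, each column
    holding one item's scores across the same respondents."""
    k = len(items)           # number of items
    n = len(items[0])        # number of respondents

    def var(xs):
        # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var = sum(var(col) for col in items)
    return k / (k - 1) * (1 - item_var / var(totals))

# two perfectly correlated hypothetical items -> alpha approaches 1
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```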
Scheibler, Robin; Hurley, Paul
2012-03-01
We present a novel, accurate and fast algorithm to obtain Fourier series coefficients from an IC layer whose description consists of rectilinear polygons on a plane, and show how to implement it using off-the-shelf hardware components. Based on properties of Fourier calculus, we derive a relationship between the discrete Fourier transform of the sampled mask transmission function and its continuous Fourier series coefficients. The relationship leads to a straightforward algorithm for computing the continuous Fourier series coefficients: one samples the mask transmission function, computes its discrete Fourier transform, and applies a frequency-dependent multiplicative factor. The algorithm is guaranteed to yield the exact continuous Fourier series coefficients for any sampling representing the mask function exactly. Computationally, this leads to significant savings by allowing one to choose the maximal such pixel size, reducing the fast Fourier transform size by as much without compromising accuracy. In addition, the continuous Fourier series is free from aliasing and follows closely the physical model of Fourier optics. We show that in some cases this can make a significant difference, especially in modern very low pitch technology nodes.
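A one-dimensional analogue of the described DFT-plus-correction idea can be sketched as follows, under stated assumptions: a period-1 piecewise-constant "mask" represented exactly by N equal sample-and-hold pixels, for which the per-frequency factor works out to sinc(k/N) * exp(-i*pi*k/N) / N (the 2-D rectilinear-polygon case and the hardware mapping are not reproduced here):

```python
import cmath
import math

def continuous_fourier_coeff(samples, k):
    """Continuous Fourier series coefficient c_k of a piecewise-constant
    1-D mask of period 1, given as N equal pixels (value held constant
    over each pixel): c_k = DFT_k * sinc(k/N) * exp(-i*pi*k/N) / N."""
    N = len(samples)
    dft_k = sum(f * cmath.exp(-2j * math.pi * k * n / N)
                for n, f in enumerate(samples))
    if k == 0:
        sinc = 1.0
    else:
        t = math.pi * k / N
        sinc = math.sin(t) / t
    return dft_k * sinc * cmath.exp(-1j * math.pi * k / N) / N

# 1-D test mask: transmission 1 on [0, 0.5), 0 on [0.5, 1)
mask = [1, 1, 1, 1, 0, 0, 0, 0]
print(continuous_fourier_coeff(mask, 0))  # exact value: 0.5
print(continuous_fourier_coeff(mask, 1))  # exact value: -i/pi
```

Because this mask is represented exactly by 8 pixels, the corrected DFT reproduces the exact continuous coefficients, illustrating the paper's claim that the pixel size can be maximized without loss of accuracy.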