WorldWideScience

Sample records for conditions measurement methodology

  1. Site-conditions map for Portugal based on VS measurements: methodology and final model

    Science.gov (United States)

    Vilanova, Susana; Narciso, João; Carvalho, João; Lopes, Isabel; Quinta Ferreira, Mario; Moura, Rui; Borges, José; Nemser, Eliza; Pinto, Carlos

    2017-04-01

    In this paper we present a statistically significant site-condition model for Portugal based on shear-wave velocity (Vs) data and surface geology. We also evaluate the performance of commonly used Vs30 proxies based on exogenous data and analyze the implications of using those proxies for calculating site amplification in seismic hazard assessment. The dataset contains 161 Vs profiles acquired in Portugal in the context of research projects, technical reports, academic theses and academic papers. The methodologies used to characterize the Vs structure at the sites in the database include seismic refraction, multichannel analysis of surface waves and refraction microtremor. Invasive measurements were performed at selected locations in order to compare the Vs profiles obtained from invasive and non-invasive techniques. In general there was good agreement between the Vs30 values obtained from the different methodologies. The database flat-file includes information on Vs30, surface geology at 1:50.000 and 1:500.000 scales, and elevation and topographic slope derived from the SRTM30 topographic dataset. The procedure used to develop the site-conditions map is a three-step process: defining a preliminary set of geological units based on the literature, performing statistical tests to assess whether the differences in the distributions of Vs30 are statistically significant, and merging the geological units accordingly. The dataset was, to some extent, affected by clustering and/or preferential sampling, and therefore a declustering algorithm was applied. The final model includes three geological units: 1) igneous, metamorphic and old (Paleogene and Mesozoic) sedimentary rocks; 2) Neogene and Pleistocene formations; and 3) Holocene formations.
The evaluation of proxies indicates that although geological analogues and topographic slope are in general unbiased, the latter shows significant bias for particular geological units and
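The statistical-merging step described in the abstract above (test whether two candidate geological units share a Vs30 distribution, and merge them when the difference is not significant) can be sketched with a two-sample Kolmogorov-Smirnov test. The unit names, the Vs30 values and the choice of test below are illustrative assumptions; the abstract does not state which statistical test the authors used.

```python
import math

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for v in sorted(set(a) | set(b)):
        cdf_a = sum(x <= v for x in a) / len(a)
        cdf_b = sum(x <= v for x in b) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

def should_merge(a, b, alpha=0.05):
    """Merge two candidate units when the KS test cannot reject the
    hypothesis that their Vs30 samples come from one distribution
    (asymptotic critical value, adequate for moderate sample sizes)."""
    n, m = len(a), len(b)
    critical = math.sqrt(-0.5 * math.log(alpha / 2)) * math.sqrt((n + m) / (n * m))
    return ks_statistic(a, b) <= critical

# Hypothetical Vs30 samples (m/s) for two units:
holocene = [180, 190, 200, 205, 210, 220, 230, 240]
igneous = [760, 780, 800, 820, 830, 850, 870, 900]
print(should_merge(holocene, igneous))  # clearly distinct units stay separate
```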

  2. Methodology for measurement of diesel particle size distributions from a city bus working in real traffic conditions

    International Nuclear Information System (INIS)

    Armas, O; Gómez, A; Mata, C

    2011-01-01

    The study of particulate matter (PM) and nitrogen oxides emissions of diesel engines is nowadays a necessary step towards pollutant emission reduction. For a complete evaluation of PM emissions and its size characterization, one of the most challenging goals is to adapt the available techniques and the data acquisition procedures to the measurement and to propose a methodology for the interpretation of instantaneous particle size distributions (PSD) of combustion-derived particles produced by a vehicle during real driving conditions. In this work, PSD from the exhaust gas of a city bus operated in real driving conditions with passengers have been measured. For the study, the bus was equipped with a rotating disk diluter coupled to an air supply thermal conditioner (with an evaporating tube), the latter being connected to a TSI Engine Exhaust Particle Sizer spectrometer. The main objective of this work has been to propose an alternative procedure for evaluating the influence of several transient sequences on PSD emitted by a city bus used in real driving conditions with passengers. The transitions studied were those derived from the combination of four possible sequences or categories during real driving conditions: idle, acceleration, deceleration with fuel consumption and deceleration without fuel consumption. The analysis methodology used in this work proved to be a useful tool for a better understanding of the phenomena related to the determination of PSD emitted by a city bus during real driving conditions with passengers.

  3. Methodology for measurement of diesel particle size distributions from a city bus working in real traffic conditions

    Science.gov (United States)

    Armas, O.; Gómez, A.; Mata, C.

    2011-10-01

    The study of particulate matter (PM) and nitrogen oxides emissions of diesel engines is nowadays a necessary step towards pollutant emission reduction. For a complete evaluation of PM emissions and its size characterization, one of the most challenging goals is to adapt the available techniques and the data acquisition procedures to the measurement and to propose a methodology for the interpretation of instantaneous particle size distributions (PSD) of combustion-derived particles produced by a vehicle during real driving conditions. In this work, PSD from the exhaust gas of a city bus operated in real driving conditions with passengers have been measured. For the study, the bus was equipped with a rotating disk diluter coupled to an air supply thermal conditioner (with an evaporating tube), the latter being connected to a TSI Engine Exhaust Particle Sizer spectrometer. The main objective of this work has been to propose an alternative procedure for evaluating the influence of several transient sequences on PSD emitted by a city bus used in real driving conditions with passengers. The transitions studied were those derived from the combination of four possible sequences or categories during real driving conditions: idle, acceleration, deceleration with fuel consumption and deceleration without fuel consumption. The analysis methodology used in this work proved to be a useful tool for a better understanding of the phenomena related to the determination of PSD emitted by a city bus during real driving conditions with passengers.

  4. Radon flux measurement methodologies

    International Nuclear Information System (INIS)

    Nielson, K.K.; Rogers, V.C.

    1984-01-01

    Five methods for measuring radon fluxes are evaluated: the accumulator can, a small charcoal sampler, a large-area charcoal sampler, the "Big Louie" charcoal sampler, and the charcoal tent sampler. An experimental comparison of the five flux measurement techniques was also conducted. Excellent agreement was obtained between the measured radon fluxes and fluxes predicted from radium and emanation measurements.

  5. Development of the methodology of exhaust emissions measurement under RDE (Real Driving Emissions) conditions for non-road mobile machinery (NRMM) vehicles

    Science.gov (United States)

    Merkisz, J.; Lijewski, P.; Fuc, P.; Siedlecki, M.; Ziolkowski, A.

    2016-09-01

    The paper analyzes the exhaust emissions from farm vehicles based on research performed under field conditions (RDE) according to the NTE procedure. This analysis has shown that it is hard to meet the NTE requirements under field conditions (engine operation in the NTE zone for at least 30 seconds). Due to the very high variability of engine operating conditions, the share of valid NTE windows in the field test is small throughout the entire test. For this reason, a modification of the measurement and exhaust emissions calculation methodology has been proposed for farm vehicles of the NRMM group. A test has been developed, composed of the following phases: a trip to the operation site (paved roads) and field operations (including U-turns and maneuvering). The range of the operation time share in the individual test phases has been determined. A change in the method of calculating the real exhaust emissions has also been implemented in relation to the NTE procedure.
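The NTE validity criterion quoted above (continuous engine operation in the NTE zone for at least 30 seconds) can be sketched as a simple run-length counter over a per-sample in-zone flag. This is an illustrative reconstruction under assumed names and sampling rate, not the calculation procedure from the paper; it counts maximal in-zone runs rather than overlapping windows.

```python
def valid_nte_windows(in_zone_flags, hz=1, min_duration_s=30):
    """Count maximal runs during which the engine stays inside the NTE
    zone for at least `min_duration_s`. `in_zone_flags` is a per-sample
    boolean series (True = operating inside the NTE zone) at `hz` Hz."""
    min_samples = min_duration_s * hz
    windows, run = 0, 0
    for flag in in_zone_flags:
        if flag:
            run += 1
        else:
            if run >= min_samples:
                windows += 1
            run = 0
    if run >= min_samples:  # run that extends to the end of the test
        windows += 1
    return windows

# 35 s in zone, 5 s out, 10 s in: only the first run qualifies.
print(valid_nte_windows([True] * 35 + [False] * 5 + [True] * 10))
```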

  6. Methodology for building confidence measures

    Science.gov (United States)

    Bramson, Aaron L.

    2004-04-01

    This paper presents a generalized methodology for propagating known or estimated levels of individual source document truth reliability to determine the confidence level of a combined output. Initial document certainty levels are augmented by (i) combining the reliability measures of multiple sources, (ii) incorporating the truth reinforcement of related elements, and (iii) incorporating the importance of the individual elements for determining the probability of truth for the whole. The result is a measure of confidence in system output based on establishing links among the truth values of inputs. This methodology was developed for application to a multi-component situation awareness tool under development at the Air Force Research Laboratory in Rome, New York. Determining how improvements in data quality and the variety of documents collected affect the probability of a correct situational detection helps optimize the performance of the tool overall.
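One common way to realize step (i) above, combining the reliability measures of multiple sources, is noisy-OR fusion, paired with an importance-weighted average for the whole output (step iii). These formulas are a plausible stdlib-only sketch, not the paper's actual equations, and the element names are invented.

```python
def combine_sources(reliabilities):
    """Noisy-OR fusion: probability that at least one of several
    independent sources reporting an element is truthful."""
    p_all_wrong = 1.0
    for r in reliabilities:
        p_all_wrong *= (1.0 - r)
    return 1.0 - p_all_wrong

def output_confidence(elements):
    """Importance-weighted confidence for the combined output.
    `elements` maps element name -> (confidence, importance weight)."""
    total_weight = sum(w for _, w in elements.values())
    return sum(c * w for c, w in elements.values()) / total_weight

# Two sources (reliabilities 0.8 and 0.5) report the same element:
print(combine_sources([0.8, 0.5]))  # 0.9
```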

  7. The impact of methodology in innovation measurement

    Energy Technology Data Exchange (ETDEWEB)

    Wilhelmsen, L.; Bugge, M.; Solberg, E.

    2016-07-01

    Innovation surveys and rankings such as the Community Innovation Survey (CIS) and Innovation Union Scoreboard (IUS) have developed into influential diagnostic tools that are often used to categorize countries according to their innovation performance and to legitimise innovation policies. Although a number of ongoing processes are seeking to improve existing frameworks for measuring innovation, there are large methodological differences across countries in the way innovation is measured. This causes great uncertainty regarding a) the coherence between data from innovation surveys, b) the actual innovativeness of the economy, and c) the validity of research based on innovation data. Against this background we explore empirically how different survey methods for measuring innovation affect reported innovation performance. The analysis is based on a statistical exercise comparing the results from three different methodological versions of the same survey for measuring innovation in the business enterprise sector in Norway. We find striking differences in reported innovation performance depending on how the surveys are carried out methodologically. The paper concludes that reported innovation performance is highly sensitive to and strongly conditioned by methodological context. This points to a need for increased caution and awareness in data collection and in research based on innovation data, not least in terms of aggregation of data and cross-country comparison. (Author)

  8. Methodology for reliability based condition assessment

    International Nuclear Information System (INIS)

    Mori, Y.; Ellingwood, B.

    1993-08-01

    Structures in nuclear power plants may be exposed to aggressive environmental effects that cause their strength to decrease over an extended period of service. A major concern in evaluating the continued service of such structures is to ensure that in their current condition they are able to withstand future extreme load events during the intended service life with a level of reliability sufficient for public safety. This report describes a methodology to facilitate quantitative assessments of current and future structural reliability and performance of structures in nuclear power plants. This methodology takes into account the nature of past and future loads, and randomness in strength and in degradation resulting from environmental factors. An adaptive Monte Carlo simulation procedure is used to evaluate time-dependent system reliability. The time-dependent reliability is sensitive to the time-varying load characteristics and to the choice of initial strength and strength degradation models, but not to correlation in component strengths within a system. Inspection/maintenance strategies are identified that minimize the expected future costs of keeping the failure probability of a structure at or below an established target failure probability during its anticipated service period.
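The adaptive Monte Carlo procedure is not spelled out in the abstract above; a minimal, non-adaptive sketch of time-dependent reliability estimation with degrading strength and random annual extreme loads might look like the following. All distributions, parameters and the linear degradation model are invented for illustration and do not come from the report.

```python
import random

def failure_probability(n_sim=20000, years=40, seed=1):
    """Plain Monte Carlo estimate of the probability that a structure
    with linearly degrading strength fails under annual extreme loads
    at some point during its service period."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_sim):
        strength0 = rng.gauss(100.0, 10.0)   # initial strength (arbitrary units)
        rate = rng.uniform(0.2, 0.6)         # strength loss per year
        for t in range(1, years + 1):
            load = rng.gauss(55.0, 12.0)     # annual extreme load
            if load > strength0 - rate * t:  # load exceeds degraded strength
                failures += 1
                break
    return failures / n_sim

print(failure_probability())
```

A real adaptive scheme would concentrate samples in the failure region to reduce variance; this sketch only shows the time-dependent structure of the estimate.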

  9. Methodological Comparison between a Novel Automatic Sampling System for Gas Chromatography versus Photoacoustic Spectroscopy for Measuring Greenhouse Gas Emissions under Field Conditions

    Directory of Open Access Journals (Sweden)

    Alexander J. Schmithausen

    2016-10-01

    Trace gases such as nitrous oxide (N2O), methane (CH4), and carbon dioxide (CO2) are climate-related gases, and their emissions from agricultural livestock barns are not negligible. Conventional measurement systems in the field (Fourier transform infrared spectroscopy, FTIR; photoacoustic system, PAS) are not sufficiently sensitive to N2O. Laser-based measurement systems are highly accurate, but they are very expensive to purchase and maintain. One cost-effective alternative is gas chromatography (GC) with electron capture detection (ECD), but this is not suitable for field applications due to its radioactive source. Measuring samples collected automatically under field conditions in the laboratory at a subsequent time presents many challenges. This study presents a sampling system designed to permit laboratory analysis of N2O concentrations sampled under field conditions. Analyses were carried out using PAS in the field (online system) and GC in the laboratory (offline system). Both measurement systems showed a good correlation for CH4 and CO2 concentrations. Measured N2O concentrations were near the detection limit for PAS. GC achieved more reliable results for N2O in very low concentration ranges.
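The method comparison above ("a good correlation for CH4 and CO2") rests on correlating paired online and offline concentration measurements. A plain Pearson correlation over such pairs can be computed as follows; the series values are invented placeholders, not data from the study.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired concentration series, e.g.
    from the online (PAS) and offline (GC) systems."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly proportional series correlate at r = 1:
print(pearson_r([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]))
```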

  10. Structural health monitoring methodology for aircraft condition-based maintenance

    Science.gov (United States)

    Saniger, Jordi; Reithler, Livier; Guedra-Degeorges, Didier; Takeda, Nobuo; Dupuis, Jean Pierre

    2001-06-01

    Reducing maintenance costs while keeping a constant level of safety is a major issue for air forces and airlines. The long-term perspective is to implement condition-based maintenance to guarantee a constant safety level while decreasing maintenance costs. For this purpose, the development of a generalized Structural Health Monitoring System (SHMS) is needed. The objective of such a system is to localize damage and to assess its severity, with enough accuracy to allow low-cost corrective actions. The present paper describes an SHMS based on acoustic emission technology. This choice was driven by its reliability and wide use in the aerospace industry. The described SHMS uses a new learning methodology which relies on the generation of artificial acoustic emission events on the structure and an acoustic emission sensor network. The calibrated acoustic emission events picked up by the sensors constitute the knowledge set that the system relies on. With this methodology, the anisotropy of composite structures is taken into account, thus avoiding the major cause of errors of classical localization methods. Moreover, it is adaptive to different structures as it does not rely on any particular model but on measured data. The acquired data are processed and the event's location and corrected amplitude are computed. The methodology has been demonstrated, and experimental tests on elementary samples showed an accuracy of 1 cm.
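The learning methodology above builds a knowledge set of calibrated artificial events; a new event can then be located by finding the calibrated event whose sensor signature is closest. The nearest-neighbour matching, the signature format (e.g. per-sensor arrival-time differences) and the coordinates below are assumptions made for illustration, not the paper's algorithm.

```python
import math

def locate(event_signature, knowledge_set):
    """Return the known source position whose calibrated signature is
    closest (Euclidean distance) to the observed event signature.
    `knowledge_set` maps (x, y) position -> signature tuple."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return min(knowledge_set, key=lambda pos: dist(knowledge_set[pos], event_signature))

# Three calibrated artificial events with hypothetical 3-sensor signatures:
knowledge = {
    (0, 0): (0.0, 1.0, 1.0),
    (10, 0): (1.0, 0.0, 1.0),
    (0, 10): (1.0, 1.0, 0.0),
}
print(locate((0.9, 0.1, 1.1), knowledge))  # nearest calibrated source: (10, 0)
```

Because the mapping is learned from measured signatures rather than a wave-propagation model, the anisotropy of the composite is implicitly captured by the calibration data.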

  11. A methodology to measure the degree of managerial innovation

    Directory of Open Access Journals (Sweden)

    Mustafa Batuhan Ayhan

    2014-01-01

    Purpose: The main objective of this study is to introduce the concept of managerial innovation and to propose a quantitative methodology to measure the degree of managerial innovation capability by analyzing the evolution of the techniques used for management functions. Design/methodology/approach: The methodology mainly focuses on the different techniques used for each management function, namely Planning, Organizing, Leading, Controlling and Coordinating. These functions are studied and the different techniques used for them are listed. Since the techniques used for these management functions evolve in time due to technological and social changes, a methodology is required to measure the degree of managerial innovation capability. This capability is measured through an analysis that points out which techniques are used for each of these functions. Findings: To check the validity and applicability of this methodology, it is implemented in a manufacturing company. Based on the results of the implementation, enhancements are suggested to the company for each function so that it can survive in changing managerial conditions. Research limitations/implications: The primary limitation of this study is the implementation area. Although the study is implemented in just a single manufacturing company, the same methodology can be applied to measure the managerial innovation capabilities of other manufacturing companies. Moreover, the model is ready to be adapted to different sectors, although it is mainly prepared for the manufacturing sector. Originality/value: Although innovation management is widely studied, managerial innovation is a new concept, introduced to measure the capability to respond to the changes that occur in managerial functions. In brief, this methodology aims to be a pioneer in the field of managerial innovation regarding the evolution of management functions. Therefore it is expected to lead more studies to inspect the progress of

  12. A novelty detection diagnostic methodology for gearboxes operating under fluctuating operating conditions using probabilistic techniques

    Science.gov (United States)

    Schmidt, S.; Heyns, P. S.; de Villiers, J. P.

    2018-02-01

    In this paper, a fault diagnostic methodology is developed which is able to detect, locate and trend gear faults under fluctuating operating conditions when only vibration data from a single transducer, measured on a healthy gearbox, are available. A two-phase feature extraction and modelling process is proposed to infer the operating condition and, based on the operating condition, to detect changes in the machine condition. Information from optimised machine and operating condition hidden Markov models is statistically combined to generate a discrepancy signal, which is post-processed to infer the condition of the gearbox. The discrepancy signal is processed and combined with statistical methods for automatic fault detection and localisation and to perform fault trending over time. The proposed methodology is validated on experimental data, and a tacholess order tracking methodology is used to enhance the cost-effectiveness of the diagnostic methodology.
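The discrepancy-signal idea above can be illustrated without the hidden Markov machinery: score each vibration feature window under a model fitted to healthy data, and flag windows whose score exceeds a threshold. This sketch substitutes a single Gaussian healthy model for the paper's optimised HMMs; the feature values and threshold are invented.

```python
import math

def discrepancy_signal(windows, healthy_mean, healthy_std):
    """Negative log-likelihood of each scalar feature window under a
    Gaussian healthy model; larger values indicate a larger deviation
    from healthy behaviour."""
    def nll(x):
        z = (x - healthy_mean) / healthy_std
        return 0.5 * z * z + math.log(healthy_std * math.sqrt(2 * math.pi))
    return [nll(w) for w in windows]

def detect_fault(signal, threshold):
    """Flag windows whose discrepancy exceeds the chosen threshold."""
    return [s > threshold for s in signal]

# Two healthy-looking windows and one outlier:
sig = discrepancy_signal([0.1, -0.2, 5.0], healthy_mean=0.0, healthy_std=1.0)
print(detect_fault(sig, threshold=3.0))  # [False, False, True]
```

Trending over time then amounts to tracking the discrepancy level of successive windows rather than a single binary flag.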

  13. Methodology for Multileaf Collimator Quality Assurance in clinical conditions

    International Nuclear Information System (INIS)

    Diaz M, R. M.; Rodriguez Z, M.; Juarez D, A.; Romero R, R.

    2013-01-01

    Multileaf Collimators (MLCs) have become an important technological advance as part of clinical linear accelerators (linacs) for radiotherapy. Treatment planning and delivery were substantially modified by these devices. However, it was necessary to develop Quality Assurance (QA) methodologies for the performance of these devices. The most common methods for QA of MLCs operate in basic conditions that hardly cover all possible difficulties of clinical practice. Diaz et al. developed a methodology based upon two-dimensional arrays of volumetric detectors that can be extended to more demanding situations. In this work, the Auril methodology of Diaz et al. was implemented for irradiation with the linac gantry in the horizontal position. A mathematical procedure was developed to ease the dosimetric centering of the device with the Auril centering tool. System calibration was performed as in the standard Auril methodology. Patterns with leaf misplacements at known positions were irradiated. The method allowed the detection of leaf misplacements with a minimum number of false positives. We conclude that the Auril methodology can be applied in clinical conditions. (Author)

  14. Relative Hazard and Risk Measure Calculation Methodology

    International Nuclear Information System (INIS)

    Stenner, Robert D.; Strenge, Dennis L.; Elder, Matthew S.; Andrews, William B.; Walton, Terry L.

    2003-01-01

    The RHRM equations, as represented in the methodology and code presented in this report, are primarily a collection of key factors normally used in risk assessment that are relevant to understanding the hazards and risks associated with projected mitigation, cleanup, and risk management activities. The RHRM code has broad application potential. For example, it can be used to compare one mitigation, cleanup, or risk management activity with another, instead of comparing it only to the fixed baseline. If the appropriate source term data are available, it can be used in its non-ratio form to estimate absolute values of the associated controlling hazards and risks. These estimated values of controlling hazards and risks can then be examined to help understand which mitigation, cleanup, or risk management activities are addressing the higher hazard conditions and risk reduction potential at a site. Graphics can be generated from these absolute controlling hazard and risk values to compare these high hazard and risk reduction potential conditions graphically. If the RHRM code is used in this manner, care must be taken to specifically define and qualify (e.g., identify which factors were considered and which ones tended to drive the hazard and risk estimates) the resultant absolute controlling hazard and risk values.

  15. Thermotactile perception thresholds measurement conditions.

    Science.gov (United States)

    Maeda, Setsuo; Sakakibara, Hisataka

    2002-10-01

    The purpose of this paper is to investigate the effects of posture, push force and rate of temperature change on thermotactile thresholds and to clarify suitable measuring conditions for Japanese people. Thermotactile (warm and cold) thresholds on the right middle finger were measured with an HVLab thermal aesthesiometer. The subjects were eight healthy male Japanese students. The effects of posture were examined with the straight hand and forearm placed on a support, the same posture without a support, and the fingers and hand flexed at the wrist with the elbow placed on a desk. The finger push force applied to the applicator of the thermal aesthesiometer was controlled at 0.5, 1.0, 2.0 and 3.0 N. The applicator temperature was changed at rates of 0.5, 1.0, 1.5, 2.0 and 2.5 degrees C/s. After each measurement, subjects were asked about comfort under the measuring conditions. Three series of experiments were conducted on different days to evaluate repeatability. Repeated-measures ANOVA showed that warm thresholds were affected by the push force and the rate of temperature change, and that cold thresholds were influenced by posture and push force. The comfort assessment indicated that the measurement posture with the straight hand and forearm laid on a support was the most comfortable for the subjects. Relatively high repeatability was obtained under measurement conditions of a 1.0 degrees C/s rate of temperature change and a 0.5 N push force. Measurement posture, push force and rate of temperature change can affect the thermal threshold. Judging from the repeatability, a push force of 0.5 N and a temperature change rate of 1.0 degrees C/s in the posture with the straight hand and forearm laid on a support are recommended for warm and cold threshold measurements.

  16. Methodology for quantitative assessment of technical condition in industrial systems

    Energy Technology Data Exchange (ETDEWEB)

    Steinbach, C. [Marintek AS (Norway)]; Soerli, A. [Statoil (Norway)]

    1998-12-31

    As part of the Eureka project Ageing Management a methodology has been developed to assess the technical condition of industrial systems. The first part of the presentation argues for the use of technical condition parameters in the context of maintenance strategies. Thereafter the term 'technical condition' is defined more thoroughly as it is used within the project. It is claimed that the technical condition of a system - such as a feed water system of a nuclear power plant, or a water injection system on an oil platform - may be determined by aggregating the condition of its smaller components using a hierarchic approach. The hierarchy has to be defined in co-operation with experienced personnel and reflects the impact of degradation of elements on a lower level to nodes higher in the hierarchy. The impact is divided into five categories with respect to safety, environment, availability, costs and man-hours. To determine the technical condition of the bottom elements of the hierarchy, available data is used from both an on-line condition monitoring system and maintenance history. The second part of the presentation introduces the prototype software tool TeCoMan which utilises the theory and applies it to installations of the participating companies. First results and gained experiences with the method and tool are discussed. (orig.)
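The hierarchic aggregation described above (component conditions rolled up to system level) can be sketched as a weighted recursive roll-up. The data structure, the weights and the index scale (0 = failed, 1 = as new) are illustrative assumptions; the project's actual aggregation rules and impact categories are more detailed.

```python
def aggregate_condition(node):
    """Roll component condition indices (0 = failed, 1 = as new) up a
    hierarchy as a weighted mean. A node is either a leaf of the form
    {'condition': c} or an inner node {'children': [(weight, subnode), ...]},
    where the weight reflects the component's impact on its parent."""
    if 'condition' in node:
        return node['condition']
    total = sum(w for w, _ in node['children'])
    return sum(w * aggregate_condition(child) for w, child in node['children']) / total

# Hypothetical water-injection system: a pump (high impact) and a valve group.
system = {'children': [
    (2.0, {'condition': 0.9}),                                   # pump
    (1.0, {'children': [(1.0, {'condition': 0.5}),               # valve 1
                        (1.0, {'condition': 0.7})]}),            # valve 2
]}
print(aggregate_condition(system))  # 0.8
```

In practice one such hierarchy (or one set of weights) would be maintained per impact category: safety, environment, availability, costs and man-hours.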

  17. Methodology for quantitative assessment of technical condition in industrial systems

    Energy Technology Data Exchange (ETDEWEB)

    Steinbach, C. [Marintek AS (Norway)]; Soerli, A. [Statoil (Norway)]

    1999-12-31

    As part of the Eureka project Ageing Management a methodology has been developed to assess the technical condition of industrial systems. The first part of the presentation argues for the use of technical condition parameters in the context of maintenance strategies. Thereafter the term 'technical condition' is defined more thoroughly as it is used within the project. It is claimed that the technical condition of a system - such as a feed water system of a nuclear power plant, or a water injection system on an oil platform - may be determined by aggregating the condition of its smaller components using a hierarchic approach. The hierarchy has to be defined in co-operation with experienced personnel and reflects the impact of degradation of elements on a lower level to nodes higher in the hierarchy. The impact is divided into five categories with respect to safety, environment, availability, costs and man-hours. To determine the technical condition of the bottom elements of the hierarchy, available data is used from both an on-line condition monitoring system and maintenance history. The second part of the presentation introduces the prototype software tool TeCoMan which utilises the theory and applies it to installations of the participating companies. First results and gained experiences with the method and tool are discussed. (orig.)

  18. Methodological Challenges in Measuring Child Maltreatment

    Science.gov (United States)

    Fallon, Barbara; Trocme, Nico; Fluke, John; MacLaurin, Bruce; Tonmyr, Lil; Yuan, Ying-Ying

    2010-01-01

    Objective: This article reviewed the different surveillance systems used to monitor the extent of reported child maltreatment in North America. Methods: Key measurement and definitional differences between the surveillance systems are detailed, along with their potential impact on the measured rate of victimization. The infrastructure…

  19. LDA measurements under plasma conditions

    International Nuclear Information System (INIS)

    Lesinski, J.; Mizera-Lesinska, B.; Fanton, J.C.; Boulos, M.I.

    1979-01-01

    A study was made of the application of Laser Doppler Anemometry (LDA) for the measurement of fluid and particle velocities under plasma conditions. The flow configuration is that of a dc plasma jet, called the principal jet, in which an alumina powder with a mean particle diameter of 115 μm and a standard deviation of 11.3 μm was injected using a secondary jet. The plasma jet emerged from a 7.1 mm ID nozzle, while that of the secondary jet was 2 mm in diameter. The secondary jet was introduced at the nozzle level of the plasma jet, directed at 90° to its axis. Details of the nozzle and the gas flow system are shown in Figure 2.

  20. A methodology to measure the degree of managerial innovation

    OpenAIRE

    Ayhan, Mustafa Batuhan; Oztemel, Ercan

    2014-01-01

    Purpose: The main objective of this study is to introduce the concept of managerial innovation and to propose a quantitative methodology to measure the degree of managerial innovation capability by analyzing the evolution of the techniques used for management functions. Design/methodology/approach: The methodology mainly focuses on the different techniques used for each management function, namely Planning, Organizing, Leading, Controlling and Coordinating. These functions are studied and the...

  1. Methodological aspects of EEG and Body dynamics measurements during motion.

    Directory of Open Access Journals (Sweden)

    Pedro eReis

    2014-03-01

    EEG involves the recording, analysis, and interpretation of voltages recorded on the human scalp originating from brain grey matter. EEG is one of the favorite methods to study and understand the processes that underlie behavior. This is because EEG is relatively cheap, easy to wear, lightweight, and has high temporal resolution. In terms of behavior, this encompasses actions, such as movements, that are performed in response to the environment. However, there are methodological difficulties when recording EEG during movement, such as movement artifacts. Thus, most studies of the human brain have examined activations during static conditions. This article attempts to compile and describe relevant methodological solutions that have emerged for measuring body and brain dynamics during motion. These descriptions cover suggestions on how to avoid and reduce motion artifacts, and hardware, software and techniques for synchronously recording EEG, EMG, kinematics, kinetics and eye movements during motion. Additionally, we present various recording systems, EEG electrodes, caps and methods for the determination of real/custom electrode positions. We conclude that it is possible to record and analyze synchronized brain and body dynamics related to movement or exercise tasks.

  2. Methodology of external exposure calculation for reuse of conditional released materials from decommissioning - 59138

    International Nuclear Information System (INIS)

    Ondra, Frantisek; Vasko, Marek; Necas, Vladimir

    2012-01-01

    The article presents a methodology of external exposure calculation for the reuse of conditionally released materials from decommissioning, using the VISIPLAN 3D ALARA planning tool. The production of rails has been used as an example application of the proposed methodology within the CONRELMAT project. The article presents a methodology for the determination of radiological, material, organizational and other conditions for the reuse of conditionally released materials, to ensure that worker and public exposure does not breach the exposure limits during the scenario's life cycle (preparation, construction and operation). The methodology comprises a proposal of the following conditions with regard to worker and public exposure: - limit concentrations of radionuclides in conditionally released materials for specific scenarios and nuclide vectors, - specific deployment of conditionally released materials and, where needed, shielding materials, workers and the public during the scenario's life cycle, - organizational measures concerning the time workers or the public spend in the vicinity of conditionally released materials for the individual scenarios and nuclide vectors. The above-mentioned steps of the proposed methodology have been applied within the CONRELMAT project. The exposure evaluation of workers for rail production is introduced in the article as an example of this application. The exposure calculation using the VISIPLAN 3D ALARA planning tool was done for several models, and the most exposed profession for the scenario was identified. On the basis of this result, an increase of the radionuclide concentration in conditionally released material of more than a factor of two, to 681 Bq/kg, was proposed with no additional safety or organizational measures being applied. After application of the proposed safety and organizational measures (additional shielding, geometry changes and limitation of work duration) it is possible to increase the radionuclide concentration in conditionally released material more than tenfold, to 3092 Bq/kg.

  3. Performance Evaluation and Measurement of the Organization in Strategic Analysis and Control: Methodological Aspects

    OpenAIRE

    Živan Ristić; Neđo Balaban

    2006-01-01

    Information acquired by measurement and evaluation is a necessary condition for good decision-making in strategic management. This work deals with: (a) methodological aspects of evaluation (kinds of evaluation, metaevaluation) and measurement (the supposition of isomorphism in measurement, kinds and levels of measurement, errors in measurement, and the basic characteristics of measurement); (b) evaluation and measurement of the potential and accomplishments of the organization in the Kaplan-Norton perspect...

  4. Holdup measurements under realistic conditions

    International Nuclear Information System (INIS)

    Sprinkel, J.K. Jr.; Marshall, R.; Russo, P.A.; Siebelist, R.

    1997-01-01

    This paper reviews the documentation of the precision and bias of holdup (residual nuclear material remaining in processing equipment) measurements and presents previously unreported results. Precision and bias results for holdup measurements from training seminars with simulated holdup, which represent the best possible results, are reported and compared with actual plutonium processing facility measurements. Holdup measurements for plutonium and uranium processing plants are also compared to reference values. Recommendations for measuring holdup are provided for highly enriched uranium facilities and for low enriched uranium facilities. The random error component of holdup measurements is smaller than the systematic error component. The most likely source of measurement error is incorrect assumptions about the measurement, such as the background, measurement geometry, or signal attenuation. Measurement precision on the order of 10% can be achieved with some difficulty, and the bias of poor-quality holdup measurements can also be improved. However, for most facilities, holdup measurement errors have no significant impact on inventory difference, sigma, or safety (criticality, radiation, or environmental); it is therefore difficult to justify allocating more resources to improving holdup measurements. 25 refs., 10 tabs

  5. Measurements of Gluconeogenesis and Glycogenolysis: A Methodological Review.

    Science.gov (United States)

    Chung, Stephanie T; Chacko, Shaji K; Sunehag, Agneta L; Haymond, Morey W

    2015-12-01

    Gluconeogenesis is a complex metabolic process that involves multiple enzymatic steps regulated by myriad factors, including substrate concentrations, the redox state, activation and inhibition of specific enzyme steps, and hormonal modulation. At present, the most widely accepted technique for determining gluconeogenesis is to measure the incorporation of deuterium from the body water pool into newly formed glucose. However, several techniques using radioactive and stable-labeled isotopes have been used to quantitate the contribution and regulation of gluconeogenesis in humans. Each method has its advantages, methodological assumptions, and set of propagated errors. In this review, we examine the strengths and weaknesses of the most commonly used stable isotope methods to measure gluconeogenesis in vivo. We discuss the advantages and limitations of each method and summarize the applicability of these measurements in understanding normal and pathophysiological conditions. © 2015 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.

  6. Methodology for performing measurements to release material from radiological control

    International Nuclear Information System (INIS)

    Durham, J.S.; Gardner, D.L.

    1993-09-01

    This report describes the existing and proposed methodologies for performing measurements of contamination prior to releasing material for uncontrolled use at the Hanford Site. The technical basis for the proposed methodology, a modification to the existing contamination survey protocol, is also described. The modified methodology, which includes a large-area swipe followed by a statistical survey, can be used to survey material that is unlikely to be contaminated for release to controlled and uncontrolled areas. The material evaluation procedure that is used to determine the likelihood of contamination is also described

  7. Defining and Measuring Chronic Conditions

    Centers for Disease Control (CDC) Podcasts

    This podcast is an interview with Dr. Anand Parekh, U.S. Department of Health and Human Services Deputy Assistant Secretary for Health, and Dr. Samuel Posner, Preventing Chronic Disease Editor in Chief, about the definition and burden of multiple chronic conditions in the United States.

  8. Defining and Measuring Chronic Conditions

    Centers for Disease Control (CDC) Podcasts

    2013-05-20

    This podcast is an interview with Dr. Anand Parekh, U.S. Department of Health and Human Services Deputy Assistant Secretary for Health, and Dr. Samuel Posner, Preventing Chronic Disease Editor in Chief, about the definition and burden of multiple chronic conditions in the United States.  Created: 5/20/2013 by Preventing Chronic Disease (PCD), National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP).   Date Released: 5/20/2013.

  9. The Piston Compressor: The Methodology of the Real-Time Condition Monitoring

    International Nuclear Information System (INIS)

    Naumenko, A P; Kostyukov, V N

    2012-01-01

    The article considers the methodology of diagnostic signal processing and a function chart of the monitoring system. The monitoring and diagnosis methodology is based on measuring the parameters of indirect processes (vibroacoustic oscillations), so no more than five sensors are installed on each cylinder; measurement of direct structural and thermodynamic parameters is envisioned as well. The structure and operating principle of the decision-making expert system are given. The algorithm of the automatic expert system includes calculating diagnostic attribute values, comparing them with their normative values, forming sets of diagnostic attributes that correspond to individual malfunction classes, and generating expert-system messages. The scheme of a real-time condition monitoring system for piston compressors is considered. The system has a serial-parallel structure of information-measuring equipment, which allows the vibroacoustic signal to be measured for condition monitoring of reciprocating compressors and their operating modes. In addition, the system can measure parameters of other physical processes and use them for monitoring and diagnosis: the pressure in the working chambers (the indicator diagram), the suction and discharge pressure of each cylinder, the inlet and delivery gas temperatures, valve temperatures, rod position, leakage through the compression packing, and others.
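The expert-system decision step described above (compare attributes with normative values, match the exceedance set to a malfunction class, emit a message) can be sketched as follows. The attribute names, thresholds, and malfunction classes are illustrative inventions, not values from the actual system:

```python
# Minimal sketch of the rule-based decision step: diagnostic attributes
# are compared with normative values, and the set of exceeded attributes
# is matched against malfunction classes. All names/thresholds invented.

NORMATIVE = {"vibro_rms": 4.5, "valve_temp": 90.0, "rod_drop": 0.25}

MALFUNCTION_CLASSES = {
    frozenset({"valve_temp"}): "valve leakage",
    frozenset({"vibro_rms", "rod_drop"}): "crosshead/rod wear",
    frozenset({"vibro_rms"}): "loose clearance in drive mechanism",
}

def diagnose(measured: dict) -> str:
    """Return the expert-system message for one set of measurements."""
    exceeded = frozenset(k for k, v in measured.items() if v > NORMATIVE[k])
    if not exceeded:
        return "normal"
    return MALFUNCTION_CLASSES.get(exceeded, "unclassified combination")

print(diagnose({"vibro_rms": 6.1, "valve_temp": 70.0, "rod_drop": 0.31}))
# → crosshead/rod wear
```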

  10. Advanced quantitative measurement methodology in physics education research

    Science.gov (United States)

    Wang, Jing

    The dissertation consists of three parts. The first part compares item response theory (IRT) and classical test theory (CTT). Both theories provide test item statistics for educational inferences and decisions, and both are applied to Force Concept Inventory data obtained from students enrolled at The Ohio State University. Effort was made to examine the similarities and differences between the two theories, and the possible explanations for the differences. The study suggests that item response theory is more sensitive to the context and conceptual features of the test items than classical test theory, and that the IRT parameters provide a better measure than CTT parameters for the educational audience to investigate item features. The second part of the dissertation concerns measures of association for binary data. In quantitative assessment, binary data are often encountered because of their simplicity, yet the currently popular measures of association fail under some extremely unbalanced conditions, and the occurrence of such conditions is not rare in educational data. Two popular association measures, the Pearson correlation and the tetrachoric correlation, are examined. A new method, model-based association, is introduced, and an educational testing constraint is discussed. The existing popular methods are compared with the model-based association measure with and without the constraint. Connections between the value of association and the context and conceptual features of questions are discussed in detail. Results show that all the methods have their advantages and disadvantages, and special attention to the test and data conditions is necessary. The last part of the dissertation focuses on exploratory factor analysis (EFA). The theoretical advantages of EFA are discussed, and typical misunderstandings and misuses of EFA are explored. 
The EFA is performed on Lawson's Classroom Test of Scientific Reasoning (LCTSR), a widely used assessment of scientific reasoning skills. The
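The failure mode mentioned in the second part can be illustrated with the phi coefficient (the Pearson correlation applied to binary items): phi cannot reach 1 unless the two items have identical marginal proportions, so extremely unbalanced data compress its attainable range. A small stdlib-only sketch:

```python
# Sketch: for binary items the Pearson correlation (phi coefficient)
# is bounded away from 1 when the marginal proportions differ, so
# extremely unbalanced data understate the association.
import math

def phi(n11, n10, n01, n00):
    """Phi coefficient from a 2x2 table of binary responses."""
    num = n11 * n00 - n10 * n01
    den = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return num / den

# Perfect agreement with balanced margins: phi = 1
print(phi(50, 0, 0, 50))            # 1.0
# Near-perfect agreement, but one item is answered "1" by almost everyone:
print(round(phi(95, 4, 0, 1), 3))   # far below 1 despite strong association
```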

  11. Methodology applied in Cuba for siting, designing, and building a radioactive waste repository under safety conditions

    International Nuclear Information System (INIS)

    Orbera, L.; Peralta, J.L.; Franklin, R.; Gil, R.; Chales, G.; Rodriguez, A.

    1993-01-01

    The work presents the methodology used in Cuba for siting, designing, and building a radioactive waste repository safely. This methodology covers technical and socio-economic factors, as well as design and construction factors, so as to ensure a safe site for this kind of repository under Cuba's special conditions. Applying this methodology will result in a safe repository.

  12. Measuring the Quality of Publications : New Methodology and Case Study

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; van Groenendaal, W.J.H.

    2000-01-01

    In practice, it is important to evaluate the quality of research, in order to make decisions on tenure, funding, and so on. This article develops a methodology using citations to measure the quality of journals, proceedings, and book publishers. (Citations are also used by the Science and Social

  13. Methodology of high-resolution photography for mural condition database

    Science.gov (United States)

    Higuchi, R.; Suzuki, T.; Shibata, M.; Taniguchi, Y.

    2015-08-01

    Digital documentation is one of the most useful techniques for recording the condition of cultural heritage. Recently, high-resolution images have become increasingly useful because it is possible to show a general view of a mural painting together with detailed mural conditions in a single image. As mural paintings are damaged by environmental stresses, it is necessary to record the details of the painting condition on high-resolution base maps. Unfortunately, the cost of high-resolution photography and the difficulty of operating its instruments and software have commonly been an impediment for researchers and conservators. However, the recent development of graphic software makes its operation simpler and less expensive. In this paper, we suggest a new approach to making digital heritage inventories without special instruments, based on our recent research project at the Üzümlü church in Cappadocia, Turkey. This method enables us to build a high-resolution image database at low cost, in a short time, and with limited human resources.

  14. Risk importance measures in the dynamic flowgraph methodology

    International Nuclear Information System (INIS)

    Tyrväinen, T.

    2013-01-01

    This paper presents new risk importance measures applicable to a dynamic reliability analysis approach with multi-state components. Dynamic reliability analysis methods are needed because traditional methods, such as fault tree analysis, can describe a system's dynamical behaviour only in a limited manner. The dynamic flowgraph methodology (DFM) is an approach used for analysing systems with time dependencies and feedback loops. The aim of DFM is to identify the root causes of a top event, usually representing the system's failure. Components of DFM models are analysed at discrete time points and can have multiple states. Traditional risk importance measures developed for static and binary logic are not applicable to DFM as such. Some importance measures have previously been developed for DFM, but their ability to describe how components contribute to the top event is fairly limited. This paper formulates dynamic risk importance measures that quantify the importance of the states of components and take the time aspect of DFM into account in a logical way that supports the interpretation of results. The dynamic risk importance measures are developed as generalisations of the Fussell-Vesely importance and the risk increase factor. -- Highlights: • New risk importance measures are developed for the dynamic flowgraph methodology. • Dynamic risk importance measures are formulated for states of components. • An approach to handle failure modes of a component in DFM is presented. • Dynamic risk importance measures take failure times into account. • A component's influence on the system's reliability can be analysed in detail
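The two static measures that the paper generalises can be sketched from minimal cut sets under the rare-event approximation. The cut sets and probabilities below are illustrative, not from the paper:

```python
# Sketch of the Fussell-Vesely importance and the risk increase factor,
# computed from minimal cut sets with the rare-event approximation.
# Cut sets and component probabilities are invented for illustration.

def top_probability(cut_sets, p):
    """Rare-event approximation: sum of cut-set probabilities."""
    total = 0.0
    for cs in cut_sets:
        prob = 1.0
        for comp in cs:
            prob *= p[comp]
        total += prob
    return total

def fussell_vesely(comp, cut_sets, p):
    """Fraction of top-event probability from cut sets containing comp."""
    contrib = top_probability([cs for cs in cut_sets if comp in cs], p)
    return contrib / top_probability(cut_sets, p)

def risk_increase_factor(comp, cut_sets, p):
    """Ratio of top-event probability with comp failed (p=1) to nominal."""
    p_failed = dict(p, **{comp: 1.0})
    return top_probability(cut_sets, p_failed) / top_probability(cut_sets, p)

cut_sets = [{"A", "B"}, {"C"}]
p = {"A": 0.1, "B": 0.2, "C": 0.01}
print(round(fussell_vesely("C", cut_sets, p), 3))       # 0.333
print(round(risk_increase_factor("C", cut_sets, p), 1))  # 34.0
```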

  15. Telemetric measurement system of beehive environment conditions

    Science.gov (United States)

    Walendziuk, Wojciech; Sawicki, Aleksander

    2014-11-01

    This work presents a measurement system for beehive environmental conditions. The purpose of the device is to measure parameters such as ambient temperature, atmospheric pressure, internal temperature, humidity and sound level. The measured values are transferred over GPRS to a MySQL database located on an external server. A website presents the measurement data in the form of tables and graphs. The study also shows exemplary results of environmental condition measurements recorded in the beehive in an hourly cycle.
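The logging step described above (one measurement record serialised into a parameterised INSERT for the remote MySQL database) might look like the following sketch. The table and column names are hypothetical; a real deployment would execute the statement over the GPRS link with a MySQL driver such as PyMySQL:

```python
# Minimal sketch: build a parameterised INSERT for one beehive reading.
# Table/column names are hypothetical, not from the described system.
from datetime import datetime, timezone

def make_insert(reading: dict):
    """Return (sql, params) for a parameterised MySQL INSERT."""
    cols = sorted(reading)  # deterministic column order
    sql = "INSERT INTO hive_log ({}) VALUES ({})".format(
        ", ".join(cols), ", ".join(["%s"] * len(cols)))
    return sql, tuple(reading[c] for c in cols)

reading = {
    "ts": datetime(2014, 7, 1, 12, 0, tzinfo=timezone.utc).isoformat(),
    "ambient_temp_c": 24.6, "internal_temp_c": 34.9,
    "pressure_hpa": 1013.2, "humidity_pct": 61.0, "sound_db": 52.3,
}
sql, params = make_insert(reading)
print(sql)
```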

  16. Methodology of modeling and measuring computer architectures for plasma simulations

    Science.gov (United States)

    Wang, L. P. T.

    1977-01-01

    A brief introduction is given to plasma simulation using computers and the difficulties encountered on currently available computers. Through the use of an analyzing and measuring methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements, the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization can be configured to achieve the computation bound of an application problem. A sequential simulation model, an array/pipeline simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.

  17. A statistical methodology for the estimation of extreme wave conditions for offshore renewable applications

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Kalogeri, Christina; Galanis, George

    2015-01-01

    The paper presents a statistical methodology to post-process outputs from a high-resolution numerical wave modeling system for extreme wave estimation based on the significant wave height. This approach is demonstrated through data analysis at a relatively deep water site, FINO 1, as well as at a relatively shallow, coastal site, Horns Rev, which is located in the North Sea, west of Denmark. The post-processing targets correcting the modeled time series of the significant wave height in order to match the statistics of the corresponding measurements, including not only conventional parameters such as the mean and standard deviation but also quantities used as characteristic indices of extreme wave conditions. The results from the proposed methodology seem to be in good agreement with the measurements at both the relatively deep, open water site and the shallow, coastal water site, providing a potentially useful tool for offshore renewable energy applications. © 2015.
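The statistical-matching idea can be sketched with the simplest such correction: rescale the modelled series so its mean and standard deviation match the measurements (the paper's actual procedure may be more elaborate). Toy data, stdlib only:

```python
# Sketch of a simple statistical post-processing step: adjust a modelled
# significant-wave-height series so its mean and standard deviation match
# the observed statistics. Toy data; the paper's procedure may differ.
import statistics as st

def match_mean_std(modelled, obs_mean, obs_std):
    """Linear (variance-scaling) correction of a modelled series."""
    m_mean = st.mean(modelled)
    m_std = st.pstdev(modelled)
    return [obs_mean + (x - m_mean) * obs_std / m_std for x in modelled]

modelled = [1.0, 1.5, 2.0, 2.5, 3.0]        # toy Hs series (m)
corrected = match_mean_std(modelled, obs_mean=2.2, obs_std=0.9)
print(round(st.mean(corrected), 2), round(st.pstdev(corrected), 2))  # 2.2 0.9
```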

  18. Defining Multiple Chronic Conditions for Quality Measurement.

    Science.gov (United States)

    Drye, Elizabeth E; Altaf, Faseeha K; Lipska, Kasia J; Spatz, Erica S; Montague, Julia A; Bao, Haikun; Parzynski, Craig S; Ross, Joseph S; Bernheim, Susannah M; Krumholz, Harlan M; Lin, Zhenqiu

    2018-02-01

    Patients with multiple chronic conditions (MCCs) are a critical but undefined group for quality measurement. We present a generally applicable systematic approach to defining an MCC cohort of Medicare fee-for-service beneficiaries that we developed for a national quality measure, risk-standardized rates of unplanned admissions for Accountable Care Organizations. To define the MCC cohort we: (1) identified potential chronic conditions; (2) set criteria for cohort conditions based on MCC framework and measure concept; (3) applied the criteria informed by empirical analysis, experts, and the public; (4) described "broader" and "narrower" cohorts; and (5) selected final cohort with stakeholder input. Subjects were patients with chronic conditions. Participants included 21.8 million Medicare fee-for-service beneficiaries in 2012 aged 65 years and above with ≥1 of 27 Medicare Chronic Condition Warehouse condition(s). In total, 10 chronic conditions were identified based on our criteria; 8 of these 10 were associated with notably increased admission risk when co-occurring. A broader cohort (2+ of the 8 conditions) included 4.9 million beneficiaries (23% of total cohort) with an admission rate of 70 per 100 person-years. It captured 53% of total admissions. The narrower cohort (3+ conditions) had 2.2 million beneficiaries (10%) with 100 admissions per 100 person-years and captured 32% of admissions. Most stakeholders viewed the broader cohort as best aligned with the measure concept. By systematically narrowing chronic conditions to those most relevant to the outcome and incorporating stakeholder input, we defined an MCC admission measure cohort supported by stakeholders. This approach can be used as a model for other MCC outcome measures.
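The cohort rule described above (2+ of the 8 qualifying conditions for the broader cohort, 3+ for the narrower) can be sketched as follows. The condition labels are illustrative stand-ins, not the measure's actual Chronic Condition Warehouse categories:

```python
# Sketch of the MCC cohort rule: 2+ of 8 qualifying chronic conditions
# puts a beneficiary in the broader cohort, 3+ in the narrower cohort.
# Condition labels are illustrative placeholders.

QUALIFYING = {"CHF", "COPD", "CKD", "diabetes", "depression",
              "afib", "stroke", "alzheimers"}  # placeholder labels

def cohort(conditions: set) -> str:
    """Classify a beneficiary by count of qualifying conditions."""
    n = len(conditions & QUALIFYING)
    if n >= 3:
        return "narrower (3+)"
    if n >= 2:
        return "broader (2+)"
    return "not in MCC cohort"

print(cohort({"CHF", "COPD", "hypertension"}))     # broader (2+)
print(cohort({"CHF", "COPD", "CKD", "diabetes"}))  # narrower (3+)
```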

  19. Methodology for interpretation of fissile mass flow measurements

    International Nuclear Information System (INIS)

    March-Leuba, J.; Mattingly, J.K.; Mullens, J.A.

    1997-01-01

    This paper describes a non-intrusive measurement technique to monitor the mass flow rate of fissile material in gaseous or liquid streams. The fissile mass flow monitoring system determines the fissile mass flow rate from two independent measurements: (1) a time delay along a given length of pipe, which is inversely proportional to the fissile material flow velocity, and (2) an amplitude measurement, which is proportional to the fissile concentration (e.g., grams of 235U per length of pipe). The development of this flow monitor was first funded by DOE/NE in September 1995, and the initial experimental demonstration by ORNL was described at the 37th INMM meeting held in July 1996. The methodology was chosen by DOE/NE for implementation in November 1996; it has been implemented in hardware and software and is ready for installation. This paper describes the methodology used to interpret the data measured by the fissile mass flow monitoring system and the models used to simulate the transport of fission fragments from the source location to the detectors
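The two-measurement principle can be sketched directly: velocity from the time delay over a known pipe length, linear density from the amplitude calibration, and their product as the fissile mass flow rate. The numbers below are illustrative, not calibration data from the system:

```python
# Sketch of the two-measurement principle: flow velocity from the transit
# delay over a known pipe length, times the calibrated linear density
# (g of 235U per metre), gives the fissile mass flow rate. Numbers invented.

def mass_flow_g_per_s(pipe_length_m: float, delay_s: float,
                      grams_per_m: float) -> float:
    velocity = pipe_length_m / delay_s   # flow velocity (m/s)
    return velocity * grams_per_m        # fissile mass flow (g/s)

# 2 m between detector stations, 4 s transit delay, 0.5 g 235U per metre:
print(mass_flow_g_per_s(2.0, 4.0, 0.5))  # 0.25
```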

  20. Methodology and boundary conditions applied to the analysis on internal flooding for Kozloduy NPP units 5 and 6

    International Nuclear Information System (INIS)

    Demireva, E.; Goranov, S.; Horstmann, R.

    2004-01-01

    Within the Modernization Program of Units 5 and 6 of Kozloduy NPP, a comprehensive analysis of internal flooding has been carried out for the reactor building outside the containment and for the turbine hall by FRAMATOME ANP and ENPRO Consult. The objective of this presentation is to provide information on the applied methodology and boundary conditions. A separate report called 'Methodology and boundary conditions' has been elaborated to provide the foundation for the study. The methodology report provides definitions and guidance for the following topics: scope of the study; safety objectives; basic assumptions and postulates (plant conditions, grace periods for manual actions, the single-failure postulate, etc.); sources of flooding (postulated piping leaks and ruptures, malfunctions and personnel error); the main activities of the flooding analysis; and the study's conclusions and suggestions of remedial measures. (authors)

  1. THE MEASUREMENT METHODOLOGY IMPROVEMENT OF THE HORIZONTAL IRREGULARITIES IN PLAN

    Directory of Open Access Journals (Sweden)

    O. M. Patlasov

    2015-08-01

    Purpose. Within the track superstructure (TSS) there are structures for which the standard approach to deciding on their future operation is not entirely correct or applicable. In particular, this concerns track sections that change their geometric parameters (the radius of curvature, the angle of rotation, and the like) quite quickly. Examples include crossovers, whose connecting parts substantially change curvature over a rather short length. Estimating the position of such a structure in plan on the basis of the existing technique (by the difference in adjacent bending versines) is virtually impossible. It is therefore proposed to supplement and improve the methodology for assessing the position of a curve in plan based on the differences in adjacent versines. Methodology. The possible options for measuring horizontal curves in plan were analyzed, and the most adequate method, which does not contradict the established standards, was determined. Ease of measurement and calculation was taken into account. Findings. Qualitative and quantitative verification of the proposed and existing methods showed very good agreement of the measurement results. This gives grounds to assert that the methodology can be recommended to track maintenance staff for the assessment of horizontal irregularities in plan not only of curves but also within the connecting parts of crossovers. Originality. The existing method for evaluating the geometric position of curves in plan was improved; it creates no new regulations, and all results are evaluated by existing norms. Practical value. The proposed technique makes it possible, without creating a new regulatory framework, to build on the existing one while expanding the boundaries of its application. 
This method can be used not only for ordinary curves
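The versine-based assessment discussed above rests on simple chord geometry: for a chord of length c, the mid-chord versine f of a circular curve of radius R is f = c²/(8R), so R ≈ c²/(8f), and differences between adjacent versines flag local irregularities. A sketch with illustrative values (not the paper's standards):

```python
# Sketch of the chord/versine geometry behind the assessment method:
# R ≈ c^2 / (8 f) for mid-chord versine f on chord c, and differences
# between adjacent versines indicate local irregularities. Values invented.

def radius_from_versine(chord_m: float, versine_m: float) -> float:
    """Circular-curve radius from the mid-chord versine."""
    return chord_m ** 2 / (8.0 * versine_m)

def adjacent_versine_diffs(versines):
    """Absolute differences between adjacent versines (rounded to mm)."""
    return [round(abs(b - a), 3) for a, b in zip(versines, versines[1:])]

print(round(radius_from_versine(20.0, 0.05), 1))   # 1000.0
print(adjacent_versine_diffs([0.050, 0.052, 0.060]))
```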

  2. Indoor radon measurements and methodologies in Latin American countries

    International Nuclear Information System (INIS)

    Canoba, A.; Lopez, F.O.; Arnaud, M.I.; Oliveira, A.A.; Neman, R.S.; Hadler, J.C.; Iunes, P.J.; Paulo, S.R.; Osorio, A.M.; Aparecido, R.; Rodriguez, C.; Moreno, V.; Vasquez, R.; Espinosa, G.; Golzarri, J.I.; Martinez, T.; Navarrete, M.; Cabrera, I.; Segovia, N.; Pena, P.; Tamez, E.; Pereyra, P.; Lopez-Herrera, M.E.; Sajo-Bohus, L.

    2001-01-01

    According to current international guidelines concerning environmental problems, it is necessary to evaluate and know indoor radon levels, especially since most of the natural radiation dose to man comes from radon gas and its progeny. Several countries have established national institutions and national programs for the study of radon and its connection with lung cancer risk and public health. The aim of this work is to present the indoor radon measurements and detection methods used in different regions of Latin America (LA), in countries such as Argentina, Brazil, Ecuador, Mexico, Peru and Venezuela. This study shows that passive radon devices based on alpha-particle nuclear track methodology (NTM) are among the most widely used methods in LA for long-term indoor radon measurements, with CR-39, LR-115 and Makrofol being the most commonly used detector materials. The participating institutions and the radon level measurements in the different countries are presented in this contribution

  3. Methodology for measurement in schools and kindergartens: experiences

    International Nuclear Information System (INIS)

    Fotjikova, I.; Navratilova Rovenska, K.

    2015-01-01

    In more than 1500 schools and preschool facilities, long-term radon measurements were carried out in the last 3 years. The negative effect of thermal retrofitting on the resulting long-term radon averages is evident. In some of the facilities, low ventilation rates and correspondingly high radon levels were found, so it was recommended to change ventilation habits. However, some of the facilities had high radon levels due to radon ingress from soil gas; in this case, technical measures should be undertaken to reduce radon exposure. The paper presents long-term experience with the two-stage measurement methodology for investigating radon levels in school and preschool facilities and its possible improvements. (authors)

  4. Determination of Critical Conditions for Puncturing Almonds Using Coupled Response Surface Methodology and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Mahmood Mahmoodi-Eshkaftaki

    2013-01-01

    In this study, the effect of seed moisture content, probe diameter and loading velocity (the puncture conditions) on some mechanical properties of almond kernels and peeled almond kernels is considered in order to model the relationship between the puncture conditions and rupture energy. Furthermore, the distribution of the mechanical properties is determined. The main objective is to determine the critical values of the mechanical properties significant for peeling machines. Response surface methodology was used to find the relationship between the input parameters and the output responses, and a fitness function was applied to find the optimal values using a genetic algorithm. The two-parameter Weibull function was used to describe the distribution of the mechanical properties. Based on the Weibull parameter values, i.e. the shape parameter (β) and scale parameter (η) calculated for each property, the variation of the mechanical distributions was completely described, and it was confirmed that the mechanical properties are rule-governed, which makes the Weibull function suitable for estimating their distributions. The energy model estimated using response surface methodology shows that the mechanical properties relate exponentially to moisture, and polynomially to loading velocity and probe diameter, which enabled successful estimation of the rupture energy (R² = 0.94). The genetic algorithm calculated the critical values of seed moisture, probe diameter, and loading velocity to be 18.11% on a dry mass basis, 0.79 mm, and 0.15 mm/min, respectively, with an optimum rupture energy of 1.97·10⁻³ J. These conditions were used for comparison with new samples, for which the rupture energy was experimentally measured to be 2.68·10⁻³ and 2.21·10⁻³ J for kernel and peeled kernel, respectively, which was nearly in agreement with the model results.
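The two-parameter Weibull description used above is easy to sketch: with shape β and scale η, the CDF gives the fraction of kernels whose rupture energy falls below a given value. The parameter values here are illustrative, not the fitted values from the study:

```python
# Sketch of the two-parameter Weibull CDF used to describe the
# distribution of mechanical properties. Parameter values are invented.
import math

def weibull_cdf(x: float, beta: float, eta: float) -> float:
    """F(x) = 1 - exp(-(x/eta)^beta) for x >= 0."""
    return 1.0 - math.exp(-((x / eta) ** beta))

# Fraction of kernels rupturing below 2.0e-3 J for illustrative
# shape beta = 2.5 and scale eta = 2.2e-3 J:
print(round(weibull_cdf(2.0e-3, beta=2.5, eta=2.2e-3), 3))
```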

  5. Relative Hazard and Risk Measure Calculation Methodology Rev 1

    International Nuclear Information System (INIS)

    Stenner, Robert D.; White, Michael K.; Strenge, Dennis L.; Aaberg, Rosanne L.; Andrews, William B.

    2000-01-01

    Documentation of the methodology used to calculate relative hazard and risk measure results for the DOE complex-wide risk profiles. This methodology is used on major site risk profiles. In February 1997, the Center for Risk Excellence (CRE) was created and charged as a technical, field-based partner to the Office of Science and Risk Policy (EM-52). One of the initial charges to the CRE is to assist the sites in the development of "site risk profiles." These profiles are to be relatively short summaries (periodically updated) that present a broad perspective on the major risk-related challenges that face the respective site. The risk profiles are intended to serve as a high-level communication tool for interested internal and external parties to enhance the understanding of these risk-related challenges. The risk profiles for each site have been designed to qualitatively present the following information: (1) a brief overview of the site, (2) a brief discussion of the historical mission of the site, (3) a quote from the site manager indicating the site's commitment to risk management, (4) a listing of the site's top risk-related challenges, (5) a brief discussion and detailed table presenting the site's current risk picture, (6) a brief discussion and detailed table presenting the site's future risk reduction picture, and (7) graphic illustrations of the projected management of the relative hazards at the site. The graphic illustrations were included to provide the reader of the risk profiles with a high-level mental picture to associate with all the qualitative information presented in the risk profile. Inclusion of these graphic illustrations presented the CRE with the challenge of how to fold this high-level qualitative risk information into a system producing a numeric result that would depict the relative change in hazard associated with each major risk management action, so it could be presented graphically. 
This report presents the methodology developed

  6. Methodological considerations for measuring glucocorticoid metabolites in feathers

    Science.gov (United States)

    Berk, Sara A.; McGettrick, Julie R.; Hansen, Warren K.; Breuner, Creagh W.

    2016-01-01

    In recent years, researchers have begun to use corticosteroid metabolites in feathers (fCORT) as a metric of stress physiology in birds. However, there remain substantial questions about how to measure fCORT most accurately. Notably, small samples contain artificially high amounts of fCORT per millimetre of feather (the small sample artefact). Furthermore, it appears that fCORT is correlated with circulating plasma corticosterone only when levels are artificially elevated by the use of corticosterone implants. Here, we used several approaches to address current methodological issues with the measurement of fCORT. First, we verified that the small sample artefact exists across species and feather types. Second, we attempted to correct for this effect by increasing the amount of methanol relative to the amount of feather during extraction. We consistently detected more fCORT per millimetre or per milligram of feather in small samples than in large samples even when we adjusted methanol:feather concentrations. We also used high-performance liquid chromatography to identify hormone metabolites present in feathers and measured the reactivity of these metabolites against the most commonly used antibody for measuring fCORT. We verified that our antibody is mainly identifying corticosterone (CORT) in feathers, but other metabolites have significant cross-reactivity. Lastly, we measured faecal glucocorticoid metabolites in house sparrows and correlated these measurements with corticosteroid metabolites deposited in concurrently grown feathers; we found no correlation between faecal glucocorticoid metabolites and fCORT. We suggest that researchers should be cautious in their interpretation of fCORT in wild birds and should seek alternative validation methods to examine species-specific relationships between environmental challenges and fCORT. PMID:27335650

  7. Optimization of deposition conditions of CdS thin films using response surface methodology

    Energy Technology Data Exchange (ETDEWEB)

    Yücel, Ersin, E-mail: dr.ersinyucel@gmail.com [Department of Physics, Faculty of Arts and Sciences, Mustafa Kemal University, 31034 Hatay (Turkey); Güler, Nuray [Department of Physics, Faculty of Arts and Sciences, Mustafa Kemal University, 31034 Hatay (Turkey); Yücel, Yasin [Department of Chemistry, Faculty of Arts and Sciences, Mustafa Kemal University, 31034 Hatay (Turkey)

    2014-03-15

    Highlights: • Statistical methods were used to optimize the CdS deposition parameters. • The morphology of the films was smooth, homogeneous and continuous. • Optimal conditions were found to be pH 11, stirring speed 361 rpm and deposition time 55 min. • The CdS thin film band gap value was 2.72 eV under the optimum conditions. -- Abstract: Cadmium sulfide (CdS) thin films were prepared on glass substrates by the chemical bath deposition (CBD) technique under different pH, stirring speed and deposition time conditions. Response Surface Methodology (RSM) and Central Composite Design (CCD) were used to optimize the deposition parameters of the CdS thin films. RSM and CCD were also used to understand the significance and interaction of the factors affecting film quality. The variables were pH, stirring speed and deposition time, and the band gap was chosen as the response. The influences of the variables on the band gap and film quality were investigated. A 5-level, 3-factor central composite design was employed to evaluate the effects of the deposition parameters, namely pH (10.2–11.8), stirring speed (132–468 rpm) and deposition time (33–67 min), on the band gap of the films. The samples were characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM) and ultraviolet–visible spectroscopy (UV–vis) measurements. The optimal deposition parameters for the CdS thin films were found to be pH 11, a stirring speed of 361 rpm and a deposition time of 55 min. Under the optimal conditions, the theoretical (predicted) band gap of CdS (2.66 eV) was calculated using the optimal coded values from the model, and this theoretical value is in good agreement with the value (2.72 eV) obtained by the verification experiment.
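The RSM workflow described here, fitting a second-order model to designed experimental runs and then solving for the stationary point of the fitted surface, can be sketched as follows. This is a minimal two-factor illustration in coded units with synthetic response values, not the paper's CdS data:

```python
import numpy as np

def quadratic_design_matrix(X):
    """Model terms for a two-factor second-order (RSM) fit:
    1, x1, x2, x1^2, x2^2, x1*x2 (coded units)."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

def fit_response_surface(X, y):
    """Least-squares fit of the quadratic response surface."""
    A = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def stationary_point(beta):
    """Solve grad(y) = 0 for the fitted surface
    y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
    b0, b1, b2, b3, b4, b5 = beta
    H = np.array([[2 * b3, b5], [b5, 2 * b4]])  # Hessian of the surface
    return np.linalg.solve(H, -np.array([b1, b2]))

# Synthetic response with a known optimum at (0.5, -0.25) in coded units
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
y = 2.7 - (X[:, 0] - 0.5) ** 2 - 2 * (X[:, 1] + 0.25) ** 2
beta = fit_response_surface(X, y)
opt = stationary_point(beta)  # recovers (0.5, -0.25)
```

Decoding the stationary point from coded units back to the physical factor ranges (pH, rpm, minutes) then yields optimal settings of the kind reported above.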

  8. Measurement of Quality of Life I. A Methodological Framework

    Directory of Open Access Journals (Sweden)

    Soren Ventegodt

    2003-01-01

    Full Text Available Despite the widespread acceptance of quality of life (QOL) as the ideal guideline in healthcare and clinical research, serious conceptual and methodological problems continue to plague this area. In an attempt to remedy this situation, we propose seven criteria that a quality-of-life concept must meet to provide a sound basis for investigation by questionnaire. The seven criteria or desiderata are: (1) an explicit definition of quality of life; (2) a coherent philosophy of human life from which the definition is derived; (3) a theory that operationalizes the philosophy by specifying unambiguous, nonoverlapping, and jointly exhaustive questionnaire items; (4) response alternatives that permit a fraction-scale interpretation; (5) technical checks of reproducibility; (6) meaningfulness to investigators, respondents, and users; and (7) an overall aesthetic appeal of the questionnaire. These criteria have guided the design of a validated 5-item generic, global quality-of-life questionnaire (QOL5), and a validated 317-item generic, global quality-of-life questionnaire (SEQOL), administered to a well-documented birth cohort of 7,400 Danes born in 1959–1961, as well as to a reference sample of 2,500 Danes. Presented in outline, the underlying integrative quality-of-life (IQOL) theory is a meta-theory. To illustrate the seven criteria at work, we show the extent to which they are satisfied by one of the eight component theories. Next, two sample results of our investigation are presented: satisfaction with one's sex life has the expected covariation with one's quality of life, and so does mother's smoking during pregnancy, albeit to a much smaller extent. It is concluded that the methodological framework presented has proved helpful in designing a questionnaire that is capable of yielding acceptably valid and reliable measurements of global and generic quality of life.

  9. Measurement of testosterone in human sexuality research: methodological considerations.

    Science.gov (United States)

    van Anders, Sari M; Goldey, Katherine L; Bell, Sarah N

    2014-02-01

    Testosterone (T) and other androgens are incorporated into an increasingly wide array of human sexuality research, but there are a number of issues that can affect or confound research outcomes. This review addresses various methodological issues relevant to research design in human studies with T; unaddressed, these issues may introduce unwanted noise, error, or conceptual barriers to interpreting results. Topics covered are (1) social and demographic factors (gender and sex; sexual orientations and sexual diversity; social/familial connections and processes; social location variables), (2) biological rhythms (diurnal variation; seasonality; menstrual cycles; aging and menopause), (3) sample collection, handling, and storage (saliva vs. blood; sialogogues, saliva, and tubes; sampling frequency, timing, and context; shipping samples), (4) health, medical issues, and the body (hormonal contraceptives; medications and nicotine; health conditions and stress; body composition, weight, and exercise), and (5) incorporating multiple hormones. Detailing a comprehensive set of important issues and relevant empirical evidence, this review provides a starting point for best practices in human sexuality research with T and other androgens that may be especially useful for those new to hormone research.

  10. Adaptability of laser diffraction measurement technique in soil physics methodology

    Science.gov (United States)

    Barna, Gyöngyi; Szabó, József; Rajkai, Kálmán; Bakacsi, Zsófia; Koós, Sándor; László, Péter; Hauk, Gabriella; Makó, András

    2016-04-01

    Efforts are underway worldwide to harmonize soil particle size distribution (PSD) data obtained by laser diffractometer measurements (LDM) with those of the sedimentation techniques (pipette or hydrometer methods). Unfortunately, depending on the applied methodology (e.g., type of pre-treatment, kind of dispersant, etc.), the PSDs of the sedimentation methods (which follow different standards) are themselves dissimilar and can hardly be harmonized with each other. A need therefore arose to build up a database containing PSD values measured by the pipette method according to the Hungarian standard (MSZ-08. 0205: 1978) and by LDM according to a widespread and widely used procedure. In this publication, the first results of a statistical analysis of the new and growing PSD database are presented: 204 soil samples measured with the pipette method and LDM (Malvern Mastersizer 2000, HydroG dispersion unit) were compared. Applying the usual size limits to the LDM data, the clay fraction was strongly underestimated and the silt fraction overestimated compared to the pipette method. Consequently, soil texture classes determined from the LDM measurements differed significantly from the results of the pipette method. Following previous surveys, the clay/silt boundary for the LDM was therefore adjusted to optimize the agreement between the two datasets. With the modified size limits, the clay and silt fractions determined by LDM showed higher similarity to those of the pipette method. Extending the upper size limit of the clay fraction from 0.002 to 0.0066 mm (and correspondingly shifting the lower size limit of the silt fraction) makes the pipette method and LDM more readily comparable. With the modified limit, higher correlations were also found between clay content and water vapor adsorption as well as specific surface area, and the texture classes were less dissimilar.
The difference between the results of the two kinds of PSD measurement methods could be further reduced by taking other routinely analyzed soil parameters into account.
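The effect of shifting the clay/silt boundary can be illustrated by re-reading a cumulative PSD curve at both limits. The curve below is hypothetical, for illustration only; interpolation is done log-linearly in particle size, as PSD curves are conventionally plotted:

```python
import numpy as np

def percent_finer(sizes_mm, cum_percent, boundary_mm):
    """Cumulative % finer at a given size boundary, interpolated
    log-linearly in particle size."""
    return float(np.interp(np.log10(boundary_mm),
                           np.log10(sizes_mm), cum_percent))

# Hypothetical laser-diffraction cumulative curve (% finer than size)
sizes = np.array([0.001, 0.002, 0.0066, 0.02, 0.063, 0.2])  # mm
cum = np.array([8.0, 14.0, 27.0, 45.0, 70.0, 95.0])         # % finer

clay_002 = percent_finer(sizes, cum, 0.002)    # conventional clay limit
clay_0066 = percent_finer(sizes, cum, 0.0066)  # modified limit discussed above
silt_002 = percent_finer(sizes, cum, 0.063) - clay_002
silt_0066 = percent_finer(sizes, cum, 0.063) - clay_0066
# Raising the boundary transfers mass from the silt to the clay fraction,
# counteracting the underestimation of clay by LDM.
```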

  11. Speciated arsenic in air: measurement methodology and risk assessment considerations.

    Science.gov (United States)

    Lewis, Ari S; Reid, Kim R; Pollock, Margaret C; Campleman, Sharan L

    2012-01-01

    Accurate measurement of arsenic (As) in air is critical to providing a more robust understanding of arsenic exposures and associated human health risks. Although there is extensive information available on total arsenic in air, less is known on the relative contribution of each arsenic species. To address this data gap, the authors conducted an in-depth review of available information on speciated arsenic in air. The evaluation included the type of species measured and the relative abundance, as well as an analysis of the limitations of current analytical methods. Despite inherent differences in the procedures, most techniques effectively separated arsenic species in the air samples. Common analytical techniques such as inductively coupled plasma mass spectrometry (ICP-MS) and/or hydride generation (HG)- or quartz furnace (GF)-atomic absorption spectrometry (AAS) were used for arsenic measurement in the extracts, and provided some of the most sensitive detection limits. The current analysis demonstrated that, despite limited comparability among studies due to differences in seasonal factors, study duration, sample collection methods, and analytical methods, research conducted to date is adequate to show that arsenic in air is mainly in the inorganic form. Reported average concentrations of As(III) and As(V) ranged up to 7.4 and 10.4 ng/m³, respectively, with As(V) being more prevalent than As(III) in most studies. Concentrations of the organic methylated arsenic compounds are negligible (in the pg/m³ range). However, because of the variability in study methods and measurement methodology, the authors were unable to determine the variation in arsenic composition as a function of source or particulate matter (PM) fraction. In this work, the authors include the implications of arsenic speciation in air on potential exposure and risks. 
The authors conclude that it is important to synchronize sample collection, preparation, and analytical techniques in order to generate

  12. Phasor Measurement Unit under Interference Conditions

    DEFF Research Database (Denmark)

    Ghiga, Radu; Martin, Kenneth E.; Wu, Qiuwei

    2017-01-01

    interference condition scenarios. In the first scenario, noise is added to the PMU input signal. The test runs a sweep of Signal-to-Noise Ratios (SNR) and the accuracy versus the noise level is obtained. The second scenario injects multiple harmonics into the input to test their influence on accuracy. The last...... scenario focuses on instrument transformer saturation which leads to a modified waveform injected in the PMU. This test goes through different levels of Current Transformer (CT) saturation and analyzes the effect of saturation on the accuracy of PMUs. The test results show PMU measurements will be degraded......
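PMU accuracy in tests like these is conventionally summarized by the Total Vector Error (TVE) defined in IEEE C37.118: the magnitude of the complex difference between the measured and reference phasors, relative to the reference magnitude. A minimal sketch:

```python
import cmath

def total_vector_error(measured, reference):
    """TVE per IEEE C37.118, with phasors as complex numbers."""
    return abs(measured - reference) / abs(reference)

reference = cmath.rect(100.0, 0.0)  # 100 V at 0 rad

# A 1% magnitude error alone gives TVE = 0.01 (the usual 1% limit)
tve_mag = total_vector_error(cmath.rect(101.0, 0.0), reference)

# A pure phase error of 0.01 rad (~0.573 deg) also gives TVE of about 0.01
tve_phase = total_vector_error(cmath.rect(100.0, 0.01), reference)
```

Sweeping SNR or CT-saturation levels and plotting TVE against them reproduces the accuracy-versus-interference curves described above.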

  13. Prediction of selectivity from morphological conditions: Methodology and a case study on cod (Gadus morhua)

    DEFF Research Database (Denmark)

    Herrmann, Bent; Krag, Ludvig Ahm; Frandsen, Rikke

    2009-01-01

    The FISHSELECT methodology, tools, and software were developed and used to measure the morphological parameters that determine the ability of cod to penetrate different mesh types, sizes, and openings. The shape of one cross-section at the cod's head was found to explain 97.6% of the mesh...

  14. Methodology for Measuring the Complexity of Enterprise Information Systems

    Directory of Open Access Journals (Sweden)

    Ilja Holub

    2016-07-01

    Full Text Available The complexity of enterprise information systems is currently a challenge faced not only by IT professionals and project managers, but also by the users of such systems. Current methodologies and frameworks used to design and implement information systems do not specifically deal with the issue of their complexity and, apart from a few exceptions, do not at all attempt to simplify the complexity. This article presents the author's own methodology for managing complexity, which can be used to complement any other methodology and which helps limit the growth of complexity. It introduces its own definition and metric of complexity, which it defines as the sum of entities of the individual UML models of the given system, which are selected according to the MMDIS methodology so as to consistently describe all relevant content dimensions of the system. The main objective is to propose a methodology to manage information system complexity and to verify it in practice on a real-life SAP implementation project.

  15. Characteristic Rain Events: A Methodology for Improving the Amenity Value of Stormwater Control Measures

    DEFF Research Database (Denmark)

    Smit Andersen, Jonas; Lerer, Sara Maria; Backhaus, Antje

    2017-01-01

    Local management of rainwater using stormwater control measures (SCMs) is gaining increased attention as a sustainable alternative and supplement to traditional sewer systems. Besides offering added utility values, many SCMs also offer a great potential for added amenity values. One way...... of achieving amenity value is to stage the rainwater and thus bring it to the attention of the public. We present here a methodology for creating a selection of rain events that can help bridge between engineering and landscape architecture when dealing with staging of rainwater. The methodology uses......; here we show its use for Danish conditions. We illustrate with a case study how CREs can be used in combination with a simple hydrological model to visualize where, how deep and for how long water is visible in a landscape designed to manage rainwater....

  16. Eddy correlation measurements in wet environmental conditions

    Science.gov (United States)

    Cuenca, R. H.; Migliori, L.; O Kane, J. P.

    2003-04-01

    The lower Feale catchment is a low-lying peaty area of 200 km² situated in southwest Ireland that is subject to inundation by flooding. The catchment lies adjacent to the Feale River and is subject to tidal signals as well as runoff processes. Various mitigation strategies are being investigated to reduce the damage due to flooding. Part of the effort has required development of a detailed hydrologic balance for the study area, which is a wet pasture environment with local field drains that are typically flooded. An eddy correlation system was installed in the summer of 2002 to measure components of the energy balance, including evapotranspiration, along with special sensors to measure other hydrologic variables particular to this study. Data collected will be essential for validation of surface flux models to be developed for this site. Data filtering is performed using a combination of software developed by the Boundary-Layer Group (BLG) at Oregon State University together with modifications made to this system for conditions at this site. This automated procedure greatly reduces the tedious inspection of individual records. The package of tests, developed by the BLG for both tower and aircraft high frequency data, checks for electronic spiking, signal dropout, unrealistic magnitudes, extreme higher moment statistics, as well as other error scenarios not covered by the instrumentation diagnostics built into the system. Critical parameter values for each potential error were developed by applying the tests to real fast response turbulent time series. Records with potential instrumentation problems, flux sampling problems, or unusual physical situations are flagged for removal or further analysis. A final visual inspection step is required to minimize rejection of physically unusual but real behavior in the time series. The problems of data management, data quality control, individual instrumentation sensitivity, potential underestimation of latent and sensible heat
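Automated screening of this kind, flagging electronic spikes and signal dropout before fluxes are computed, can be sketched as below. This is a simplified illustration of the idea, not the BLG software itself:

```python
import numpy as np

def flag_spikes(x, k=5.0):
    """Flag points deviating more than k robust standard deviations
    (estimated via the median absolute deviation) from the median."""
    med = np.median(x)
    sigma = 1.4826 * np.median(np.abs(x - med))  # MAD -> std for Gaussian data
    return np.abs(x - med) > k * sigma

def flag_dropout(x, min_run=10):
    """Flag signal dropout: long runs of an identical repeated value."""
    flags = np.zeros(len(x), dtype=bool)
    start = 0
    for i in range(1, len(x) + 1):
        if i == len(x) or x[i] != x[start]:
            if i - start >= min_run:
                flags[start:i] = True
            start = i
    return flags

# A smooth signal with one injected spike and one stuck-sensor segment
x = np.sin(np.linspace(0.0, 10.0, 1000))
x[100] = 50.0                                  # electronic spike
spikes = flag_spikes(x)                        # flags only index 100
y = np.concatenate([np.arange(20.0), np.full(15, 7.0), np.arange(20.0)])
dropout = flag_dropout(y)                      # flags the 15 stuck samples
```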

  17. Measuring Instruments Control Methodology Performance for Analog Electronics Remote Labs

    Directory of Open Access Journals (Sweden)

    Unai Hernandez-Jayo

    2012-12-01

    Full Text Available This paper presents the work that has been developed in parallel to the VISIR project. The objective of this paper is to present the results of the validation processes that have been carried out to check the control methodology. This method has been developed with the aim of being independent of the instruments of the labs.

  18. Scanner image methodology (SIM) to measure dimensions of leaves ...

    African Journals Online (AJOL)

    A scanner image methodology was used to determine plant dimensions, such as leaf area, length and width. The values obtained using SIM were compared with those recorded by the LI-COR leaf area meter. Bias, linearity, reproducibility and repeatability (R&R) were evaluated for SIM. Different groups of leaves were ...

  19. Development and Attestation of Gamma-Ray Measurement Methodologies for use by Rostekhnadzor Inspectors in the Russian Federation

    International Nuclear Information System (INIS)

    Jeff Sanders

    2006-01-01

    Development and attestation of gamma-ray non-destructive assay measurement methodologies for use by inspectors of the Russian Federal Service for Environmental, Technological, and Nuclear Oversight (Rostekhnadzor, formerly Gosatomnadzor or GAN), as well as for use by Russian nuclear facilities, has been completed. Specifically, a methodology utilizing the gamma-ray multi-group analysis (MGA) method for determining plutonium isotopic composition has been developed, while existing methodologies for determining uranium enrichment and isotopic composition have been revised to make them more appropriate to the material types and conditions present in nuclear facilities in the Russian Federation. This paper will discuss the development and revision of these methodologies, the metrological characteristics of the final methodologies, as well as the limitations and concerns specific to the utilization of these analysis methods in the Russian Federation.

  20. Conditional stability for a single interior measurement

    International Nuclear Information System (INIS)

    Honda, Naofumi; McLaughlin, Joyce; Nakamura, Gen

    2014-01-01

    An inverse problem to identify unknown coefficients of a partial differential equation by a single interior measurement is considered. The equation considered in this paper is a strongly elliptic second order scalar equation which can have complex coefficients in a bounded domain with C² boundary. We are given a single interior measurement. This means that we know a given solution of the forward equation in this domain. The equation includes some model equations arising from acoustics, viscoelasticity and hydrology. We assume that the coefficients are piecewise analytic. Our major result is the local Hölder stability estimate for identifying the unknown coefficients. If the unknown coefficient is a complex coefficient in the principal part of the equation, we assume a condition which we call the admissibility assumption for the real part and imaginary part of the difference of two complex coefficients. This admissibility assumption is automatically satisfied if the complex coefficients are real valued. For identifying either the real coefficient in the principal part or the coefficient of the 0th order term of the equation, the major result implies global uniqueness for the identification. (paper)

  1. Theoretical and methodological reasoning of correction technologies of the physical conditions of students of music speciality

    Directory of Open Access Journals (Sweden)

    Petro Marynchuk

    2017-08-01

    Full Text Available The article emphasizes the underdeveloped methodological basis for the physical education of students of Music Arts. Professionally relevant indicators of physical condition were taken into account. The article also outlines the main theoretical and methodological provisions that underlie the development of a technology for correcting the physical condition of students of Music Arts: actualizing students' motivation to raise their level of physical condition, regular physical exercise, the need to develop professionally important physical qualities, and differentiation of physical activity according to the level of physical state and physical condition of the students. The structure of the technology for correcting the physical condition of students of Music Arts is considered. The technology comprises a purpose, tasks, principles, implementation stages, a program employing physical culture, and performance criteria. The main stages of implementation (preparatory, main, final) are analyzed. Innovative means of motor activity are described for use in the practice of higher educational institutions, taking into account the characteristics of the student body and their schedule of educational activity.

  2. Measuring the human psychophysiological conditions without contact

    Science.gov (United States)

    Scalise, L.; Casacanditella, L.; Cosoli, G.

    2017-08-01

    Heart Rate Variability (HRV) describes the variations in cardiac rhythm caused by autonomic regulation. HRV analysis can be applied to the study of the effects of mental or physical stressors on psychophysiological condition. The present work is a pilot study performed on a 23-year-old healthy subject. The measurement of HRV was performed by means of two sensors: an electrocardiograph and a Laser Doppler Vibrometer (LDV), a non-contact device able to detect the skin vibrations related to the cardiac activity. The present study aims to evaluate the effects of a physical task on HRV parameters (in both the time and frequency domains), and consequently on autonomic regulation, and the capability of Laser Doppler Vibrometry to correctly detect the effects of stress on Heart Rate Variability. The results show a significant reduction of HRV parameters caused by the execution of the physical task (i.e. variations of 25-40% for time-domain parameters, and even larger in the frequency domain); this is consistent with the fact that stress causes a reduced capability of the organism to vary the Heart Rate (and, consequently, a limited HRV). LDV was able to correctly detect this phenomenon in the time domain, while the parameters in the frequency domain show significant deviations with respect to the gold standard technique (i.e. ECG). This may be due to movement artefacts that considerably modified the shape of the vibration signal measured by means of LDV after the physical task was performed. In the future, in order to avoid this drawback, the LDV technique could be used to evaluate the effects of a mental task on HRV signals (i.e. the evaluation of mental stress).
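The time-domain HRV indices in question can be computed directly from an RR-interval series; a minimal sketch with hypothetical interval values (not the study's recordings):

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Two standard time-domain HRV indices from RR intervals (ms):
    SDNN (standard deviation of intervals) and RMSSD (root mean
    square of successive differences)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd

# Hypothetical RR series (ms): at rest vs. after a physical task
rest = [820, 850, 790, 880, 810, 860, 800, 840]
task = [600, 605, 598, 602, 601, 599, 603, 600]
sdnn_rest, rmssd_rest = hrv_time_domain(rest)
sdnn_task, rmssd_task = hrv_time_domain(task)
# The reduced variability after the task mirrors the 25-40% reductions
# reported above.
```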

  3. Methodology for determining time-dependent mechanical properties of tuff subjected to near-field repository conditions

    International Nuclear Information System (INIS)

    Blacic, J.D.; Andersen, R.

    1983-01-01

    We have established a methodology to determine the time dependence of strength and transport properties of tuff under conditions appropriate to a nuclear waste repository. Exploratory tests to determine the approximate magnitudes of thermomechanical property changes are nearly complete. In this report we describe the capabilities of an apparatus designed to precisely measure the time-dependent deformation and permeability of tuff at simulated repository conditions. Preliminary tests with this new apparatus indicate that microclastic creep failure of tuff occurs over a narrow strain range with little precursory Tertiary creep behavior. In one test, deformation under conditions of slowly decreasing effective pressure resulted in failure, whereas some strain indicators showed a decreasing rate of strain

  4. Methodology for determining time-dependent mechanical properties of tuff subjected to near-field repository conditions

    Energy Technology Data Exchange (ETDEWEB)

    Blacic, J.D.; Andersen, R.

    1983-01-01

    We have established a methodology to determine the time dependence of strength and transport properties of tuff under conditions appropriate to a nuclear waste repository. Exploratory tests to determine the approximate magnitudes of thermomechanical property changes are nearly complete. In this report we describe the capabilities of an apparatus designed to precisely measure the time-dependent deformation and permeability of tuff at simulated repository conditions. Preliminary tests with this new apparatus indicate that microclastic creep failure of tuff occurs over a narrow strain range with little precursory Tertiary creep behavior. In one test, deformation under conditions of slowly decreasing effective pressure resulted in failure, whereas some strain indicators showed a decreasing rate of strain.

  5. Improvement of the Assignment Methodology of the Approach Embankment Design to Highway Structures in Difficult Conditions

    Science.gov (United States)

    Chistyy, Y.; Kuzakhmetova, E.; Fazilova, Z.; Tsukanova, O.

    2018-03-01

    Design issues at the junction of bridges and overpasses with the approach embankment are studied. The reasons for the formation of deformations in the road structure are indicated. Measures to ensure the stability of the approach embankment and to accelerate settlement of a weak subgrade are listed. The necessity of taking into account the man-made impact of the approach embankment on subgrade behavior is demonstrated. Modern stabilizing agents to improve the properties of the soils used in the embankment and the subgrade are suggested. A refined methodology for determining the active zone of compression in the subgrade under the load from the weight of the embankment is described. As an additional condition to the existing methodology for establishing the lower bound of the active zone of compression, it is proposed to take into account the accuracy with which soil compressibility can be evaluated and settlement determined.

  6. Response Surface Methodology: An Extensive Potential to Optimize in vivo Photodynamic Therapy Conditions

    International Nuclear Information System (INIS)

    Tirand, Loraine; Bastogne, Thierry; Bechet, Denise M.Sc.; Linder, Michel; Thomas, Noemie; Frochot, Celine; Guillemin, Francois; Barberi-Heyob, Muriel

    2009-01-01

    Purpose: Photodynamic therapy (PDT) is based on the interaction of a photosensitizing (PS) agent, light, and oxygen. Few new PS agents are being developed to the in vivo stage, partly because of the difficulty in finding the right treatment conditions. Response surface methodology, an empirical modeling approach based on data resulting from a set of designed experiments, was suggested as a rational solution with which to select in vivo PDT conditions by using a new peptide-conjugated PS agent targeting neuropilin-1. Methods and Materials: A Doehlert experimental design was selected to model effects and interactions of the PS dose, fluence, and fluence rate on the growth of U87 human malignant glioma cell xenografts in nude mice, using a fixed drug-light interval. All experimental results were computed by Nemrod-W software and Matlab. Results: Intrinsic diameter growth rate, a tumor growth parameter independent of the initial volume of the tumor, was selected as the response variable and was compared to tumor growth delay and relative tumor volumes. With only 13 experimental conditions tested, an optimal PDT condition was selected (PS agent dose, 2.80 mg/kg; fluence, 120 J/cm²; fluence rate, 85 mW/cm²). Treatment of glioma-bearing mice with the peptide-conjugated PS agent, followed by PDT under the optimized condition, showed a statistically significant improvement in delaying tumor growth compared with animals that received PDT with the nonconjugated PS agent. Conclusions: Response surface methodology appears to be a useful experimental approach for rapid testing of different treatment conditions and determination of optimal values of PDT factors for any PS agent.

  7. Characteristic Rain Events: A Methodology for Improving the Amenity Value of Stormwater Control Measures

    Directory of Open Access Journals (Sweden)

    Jonas Smit Andersen

    2017-10-01

    Full Text Available Local management of rainwater using stormwater control measures (SCMs) is gaining increased attention as a sustainable alternative and supplement to traditional sewer systems. Besides offering added utility values, many SCMs also offer a great potential for added amenity values. One way of achieving amenity value is to stage the rainwater and thus bring it to the attention of the public. We present here a methodology for creating a selection of rain events that can help bridge between engineering and landscape architecture when dealing with staging of rainwater. The methodology uses quantitative and statistical methods to select Characteristic Rain Events (CREs) for a range of frequent return periods: weekly, bi-weekly, monthly, bi-monthly, and a single rarer event occurring only every 1–10 years. The methodology for selecting CREs is flexible and can be adjusted to any climatic settings; here we show its use for Danish conditions. We illustrate with a case study how CREs can be used in combination with a simple hydrological model to visualize where, how deep and for how long water is visible in a landscape designed to manage rainwater.
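Matching an observed event to a target return period can be sketched with a Weibull plotting position on a ranked partial-duration series. This is a simplified illustration of the selection idea; the paper's statistical procedure is more elaborate:

```python
def characteristic_event(depths_mm, years_of_record, target_return_years):
    """Return the observed event depth whose empirical return period
    (Weibull plotting position, T = (N + 1) / rank years, with N the
    record length in years) is closest to the requested target."""
    ranked = sorted(depths_mm, reverse=True)          # largest event = rank 1
    best_rank = min(
        range(1, len(ranked) + 1),
        key=lambda r: abs((years_of_record + 1) / r - target_return_years),
    )
    return ranked[best_rank - 1]

# 20 hypothetical event depths (mm) observed over a 10-year record
events = list(range(1, 21))
big = characteristic_event(events, 10, 11)    # rarest event in the record
annual = characteristic_event(events, 10, 1)  # roughly a once-a-year event
```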

  8. A methodology for hard/soft information fusion in the condition monitoring of aircraft

    Science.gov (United States)

    Bernardo, Joseph T.

    2013-05-01

    Condition-based maintenance (CBM) refers to the philosophy of performing maintenance when the need arises, based upon indicators of deterioration in the condition of the machinery. Traditionally, CBM involves equipping machinery with electronic sensors that continuously monitor components and collect data for analysis. The addition of the multisensory capability of human cognitive functions (i.e., sensemaking, problem detection, planning, adaptation, coordination, naturalistic decision making) to traditional CBM may create a fuller picture of machinery condition. Cognitive systems engineering techniques provide an opportunity to utilize a dynamic resource—people acting as soft sensors. The literature is extensive on techniques to fuse data from electronic sensors, but little work exists on fusing data from humans with that from electronic sensors (i.e., hard/soft fusion). The purpose of my research is to explore, observe, investigate, analyze, and evaluate the fusion of pilot and maintainer knowledge, experiences, and sensory perceptions with digital maintenance resources. Hard/soft information fusion has the potential to increase problem detection capability, improve flight safety, and increase mission readiness. This proposed project consists of the creation of a methodology based upon the Living Laboratories framework, a research methodology built upon cognitive engineering principles [1]. This study performs a critical assessment of the concept, which will support development of activities to demonstrate hard/soft information fusion in operationally relevant scenarios of aircraft maintenance. It consists of fieldwork and knowledge elicitation to inform a simulation and a prototype.

  9. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    Energy Technology Data Exchange (ETDEWEB)

    Tarifeño-Saldivia, Ariel, E-mail: atarifeno@cchen.cl, E-mail: atarisal@gmail.com; Pavez, Cristian; Soto, Leopoldo [Comisión Chilena de Energía Nuclear, Casilla 188-D, Santiago (Chile); Center for Research and Applications in Plasma Physics and Pulsed Power, P4, Santiago (Chile); Departamento de Ciencias Fisicas, Facultad de Ciencias Exactas, Universidad Andres Bello, Republica 220, Santiago (Chile); Mayer, Roberto E. [Instituto Balseiro and Centro Atómico Bariloche, Comisión Nacional de Energía Atómica and Universidad Nacional de Cuyo, San Carlos de Bariloche R8402AGP (Argentina)

    2014-01-15

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.
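The statistical core of the methodology, converting the accumulated charge of a burst into a number of detected events using a pulse-mode calibration, can be sketched as follows. This is a simplified model with made-up numbers, not the authors' full treatment:

```python
import statistics

def pulse_calibration(single_pulse_charges):
    """Pulse-mode calibration: mean charge per detected neutron and its
    sample spread."""
    return (statistics.fmean(single_pulse_charges),
            statistics.stdev(single_pulse_charges))

def detected_events(burst_charge, q_mean, q_std):
    """Estimate the number of detected events in a burst from the total
    accumulated charge, with a simple propagated uncertainty assuming n
    independent single-pulse charges."""
    n = burst_charge / q_mean
    sigma_n = (n ** 0.5) * (q_std / q_mean)
    return n, sigma_n

# Hypothetical calibration pulses (arbitrary charge units) and burst charge
calib = [1.0, 1.2, 0.8, 1.1, 0.9]
q_mean, q_std = pulse_calibration(calib)
n_events, n_sigma = detected_events(100.0, q_mean, q_std)
```

Scaling detected events up to a total neutron yield would additionally require the detector's absolute efficiency and geometry.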

  10. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    International Nuclear Information System (INIS)

    Tarifeño-Saldivia, Ariel; Pavez, Cristian; Soto, Leopoldo; Mayer, Roberto E.

    2014-01-01

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.

  11. New methodology of measurement the unsteady thermal cooling of objects

    Science.gov (United States)

    Winczek, Jerzy

    2018-04-01

    The problems of measuring unsteady thermal turbulent flow affect many domains, such as heat energy, manufacturing technologies, and many others. The study focuses on an analysis of the current state of the problem, an overview of design solutions and methods for measuring non-stationary thermal phenomena, and the presentation and choice of an adequate cylinder design, together with the development of a method to measure and calculate the basic quantities that characterize the process of heat exchange on the model surface.

  12. OPTIMIZATION OF MICROWAVE AND AIR DRYING CONDITIONS OF QUINCE (CYDONIA OBLONGA, MILLER) USING RESPONSE SURFACE METHODOLOGY

    Directory of Open Access Journals (Sweden)

    Cem Baltacioglu

    2015-03-01

    Effects of slice thickness of quince (Cydonia oblonga Miller), microwave incident power and air drying temperature on the antioxidant activity and total phenolic content of quince were investigated during microwave and air drying. Optimum conditions were found to be: (i) for microwave drying, 285 W and 4.14 mm thickness for maximum antioxidant activity, and 285 W and 6.85 mm thickness for maximum total phenolic content; and (ii) for air drying, 75 °C and 1.2 mm thickness for both maximum antioxidant activity and total phenolic content. Drying conditions were optimized using response surface methodology. Thirteen experiments were carried out considering incident microwave powers from 285 to 795 W, air temperatures from 46 to 74 °C and slice thicknesses from 1.2 to 6.8 mm.
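
    The RSM fitting-and-optimization step can be sketched as follows (synthetic data and coded factors; the coefficients are invented, not those of the quince study):

```python
import numpy as np

# Synthetic illustration of an RSM fit: response y (e.g. antioxidant activity)
# as a full second-order polynomial in two coded factors x1 (power), x2 (thickness).
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 13), rng.uniform(-1, 1, 13)
y_true = 5 + 1.0 * x1 - 0.5 * x2 - 2.0 * x1**2 - 1.0 * x2**2 + 0.3 * x1 * x2
y = y_true + rng.normal(0, 0.02, 13)          # small experimental noise

# Design matrix of the second-order model, fitted by least squares.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Locate the optimum of the fitted surface on a grid over the coded space.
g1, g2 = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
G = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                     g1.ravel()**2, g2.ravel()**2, g1.ravel() * g2.ravel()])
best = np.argmax(G @ coef)
x1_opt, x2_opt = g1.ravel()[best], g2.ravel()[best]
```

    Decoding `x1_opt`, `x2_opt` back to physical units (W, mm) gives the reported optimum conditions.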

  13. Measuring service line competitive position. A systematic methodology for hospitals.

    Science.gov (United States)

    Studnicki, J

    1991-01-01

    To mount a broad effort aimed at improving their competitive position for some service or group of services, hospitals have begun to pursue product line management techniques. A few hospitals have even reorganized completely under the product line framework. The benefits include focusing accountability for operations and results, facilitating coordination between departments and functions, stimulating market segmentation, and promoting rigorous examination of new and existing programs. As part of its strategic planning process, a suburban Baltimore hospital developed a product line management methodology with six basic steps: (1) define the service lines (which they did by grouping all existing diagnosis-related groups into 35 service lines), (2) determine the contribution of each service line to total inpatient volume, (3) determine trends in service line volumes (by comparing data over time), (4) derive a useful comparison group (competing hospitals or groups of hospitals with comparable size, scope of services, payer mix, and financial status), (5) review multiple time frames, and (6) summarize the long- and short-term performance of the hospital's service lines to focus further analysis. This type of systematic and disciplined analysis can become part of a permanent strategic intelligence program. When hospitals have such a program in place, their market research, planning, budgeting, and operations will be tied together in a true management decision support system.
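
    Steps 2 and 3 of the methodology (volume contribution and trends) can be sketched with invented discharge counts:

```python
from collections import defaultdict

# Hypothetical discharge records: (service_line, year, cases).
discharges = [
    ("Cardiology", 2021, 1200), ("Cardiology", 2022, 1350),
    ("Orthopedics", 2021, 800),  ("Orthopedics", 2022, 760),
    ("Obstetrics", 2021, 500),   ("Obstetrics", 2022, 510),
]

# Step 2: contribution of each service line to total inpatient volume (latest year).
latest = 2022
totals = defaultdict(int)
for line, year, cases in discharges:
    if year == latest:
        totals[line] += cases
grand_total = sum(totals.values())
share = {line: cases / grand_total for line, cases in totals.items()}

# Step 3: volume trend per service line between the two years.
by_year = defaultdict(dict)
for line, year, cases in discharges:
    by_year[line][year] = cases
trend = {line: (v[2022] - v[2021]) / v[2021] for line, v in by_year.items()}
```

    The same shares and trends would then be computed for the comparison group of competing hospitals (steps 4 to 6).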

  14. Conditioning Methodologies for DanceSport: Lessons from Gymnastics, Figure Skating, and Concert Dance Research.

    Science.gov (United States)

    Outevsky, David; Martin, Blake Cw

    2015-12-01

    Dancesport, the competitive branch of ballroom dancing, places high physiological and psychological demands on its practitioners, but pedagogical resources in these areas for this dance form are limited. Dancesport competitors could benefit from strategies used in other aesthetic sports. In this review, we identify conditioning methodologies from gymnastics, figure skating, and contemporary, modern, and ballet dance forms that could have relevance and suitability for dancesport training, and propose several strategies for inclusion in the current dancesport curriculum. We reviewed articles derived from Google Scholar, PubMed, ScienceDirect, Taylor & Francis Online, and Web of Science search engines and databases, with publication dates from 1979 to 2013. The keywords included MeSH terms: dancing, gymnastics, physiology, energy metabolism, physical endurance, and range of motion. Out of 47 papers examined, 41 papers met the inclusion criteria (validity of scientific methods, topic relevance, transferability to dancesport, publication date). Quality and validity of the data were assessed by examining the methodologies in each study and comparing studies on similar populations as well as across time using the PRISMA 2009 checklist and flowchart. The relevant research suggests that macro-cycle periodization planning, aerobic and anaerobic conditioning, range of motion and muscular endurance training, and performance psychology methods have potential for adaptation for dancesport training. Dancesport coaches may help their students fulfill their ambitions as competitive athletes and dance artists by adapting the relevant performance enhancement strategies from gymnastics, figure skating, and concert dance forms presented in this paper.

  15. Methodology for selection of attributes and operating conditions for SVM-based fault locators

    Directory of Open Access Journals (Sweden)

    Debbie Johan Arredondo Arteaga

    2017-01-01

    Context: Energy distribution companies must employ strategies to provide timely, high-quality service, and fault-locating techniques represent an agile alternative for restoring electric service in power distribution, given the generally large size of distribution systems and the usual interruptions in service. However, these techniques are not robust enough and present some limitations in both computational cost and the mathematical description of the models they use. Method: This paper performs an analysis based on a Support Vector Machine to evaluate the proper conditions for adjusting and validating a fault locator for distribution systems, so that it is possible to determine the minimum number of operating conditions that achieve good performance with low computational effort. Results: We tested the proposed methodology on a prototypical distribution circuit located in a rural area of Colombia. This circuit has a voltage of 34.5 kV and is subdivided into 20 zones. Additionally, the characteristics of the circuit allowed us to obtain a database of 630,000 records of single-phase faults under different operating conditions. As a result, we determined that the locator showed a performance above 98% with 200 suitably selected operating conditions. Conclusions: It is possible to improve the performance of fault locators based on Support Vector Machines. Specifically, these improvements are achieved by properly selecting optimal operating conditions and attributes, since they directly affect performance in terms of efficiency and computational cost.
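
    A numpy-only sketch of the core idea: a linear SVM trained by sub-gradient descent on the hinge loss, separating two synthetic fault "zones" (the features, zone layout and hyperparameters are invented; the paper's locator handles many more zones and records):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# Two hypothetical fault zones, separated in a 2-D feature space
# (e.g. measured voltage sag and fault current magnitude, in coded units).
X = np.vstack([rng.normal([-1.0, -1.0], 0.4, size=(n, 2)),
               rng.normal([1.0, 1.0], 0.4, size=(n, 2))])
y = np.hstack([-np.ones(n), np.ones(n)])

# Batch sub-gradient descent on the regularized hinge loss.
w, b = np.zeros(2), 0.0
lam, eta = 0.01, 0.1
for _ in range(500):
    margins = y * (X @ w + b)
    mask = margins < 1                        # points violating the margin
    if mask.any():
        w -= eta * (lam * w - (y[mask, None] * X[mask]).mean(axis=0))
        b -= eta * (-y[mask].mean())
    else:
        w -= eta * lam * w                    # only the regularizer remains

accuracy = float(np.mean(np.sign(X @ w + b) == y))
```

    A multi-zone locator would train one such classifier per zone (one-vs-rest), and the paper's question becomes how few operating conditions suffice in the training set.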

  16. Determination of Optimum Condition of Leucine Content in Beef Protein Hydrolysate using Response Surface Methodology

    International Nuclear Information System (INIS)

    Siti Roha Ab Mutalib; Zainal Samicho; Noriham Abdullah

    2016-01-01

    The aim of this study is to determine the optimum condition for leucine content in beef hydrolysate. Beef hydrolysate was prepared by enzymatic hydrolysis using bromelain produced from pineapple peel. Parameters such as bromelain concentration, hydrolysis temperature and hydrolysis time were assessed to obtain the optimum leucine content of beef hydrolysate according to an experimental design recommended by response surface methodology (RSM). Leucine content in beef hydrolysate was determined by the AccQ·Tag amino acid analysis method using high performance liquid chromatography (HPLC). The optimum leucine content was obtained at a bromelain concentration of 1.38 %, a hydrolysis temperature of 42.5 °C and a hydrolysis time of 31.59 hours, with a predicted leucine content of 26.57 %. The optimum condition was verified, and the leucine value obtained was 26.25 %. Since there was no significant difference (p>0.05) between the predicted and verified leucine values, the optimum condition predicted by RSM can be accepted for predicting the optimum leucine content in beef hydrolysate. (author)

  17. Aqueduct: a methodology to measure and communicate global water risks

    Science.gov (United States)

    Gassert, Francis; Reig, Paul

    2013-04-01

    , helping non-expert audiences better understand and evaluate risks facing water users. This presentation will discuss the methodology used to combine the indicator values into aggregated risk scores and lessons learned from working with diverse audiences in academia, development institutions, and the public and private sectors.

  18. Causality analysis in business performance measurement system using system dynamics methodology

    Science.gov (United States)

    Yusof, Zainuridah; Yusoff, Wan Fadzilah Wan; Maarof, Faridah

    2014-07-01

    One of the main components of the Balanced Scorecard (BSC) that differentiates it from any other performance measurement system (PMS) is the Strategy Map with its unidirectional causality feature. Despite its apparent popularity, criticisms of the causality have been rigorously discussed by earlier researchers. In seeking empirical evidence of causality, propositions based on the service profit chain theory were developed and tested using econometric analysis, the Granger causality test, on 45 data points. However, well-established causality models proved insufficient, as only 40% of the causal linkages were supported by the data. Expert knowledge was suggested for use in situations where historical data are insufficient. The Delphi method was selected and conducted to obtain consensus on the existence of causality among 15 selected experts, using 3 rounds of questionnaires. The study revealed that only 20% of the propositions were not supported. The existence of bidirectional causality, which demonstrates significant dynamic environmental complexity through interaction among measures, was obtained from both methods. With that, a computer model and simulation using the System Dynamics (SD) methodology was developed as an experimental platform to identify how policies impact business performance in such environments. Reproduction, sensitivity and extreme-condition tests were conducted on the developed SD model to ensure its capability to mimic reality, and its robustness and validity as a platform for causality analysis. This study applied a theoretical service management model within the BSC domain to a practical situation using the SD methodology, where very limited work has been done.
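
    The Granger causality test used above can be sketched as a manual F-test on synthetic series (numpy only; the lag order and coefficients are invented):

```python
import numpy as np

# Does x help predict y beyond y's own lags? Compare restricted vs full OLS.
rng = np.random.default_rng(1)
n, lag = 300, 2
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(lag, n):                      # y driven by lagged x (true causality)
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + rng.normal(scale=0.3)

def lagged(series, lag, n):
    # Columns are series at t-1, t-2, ..., t-lag, aligned with y[lag:].
    return np.column_stack([series[lag - k: n - k] for k in range(1, lag + 1)])

Y = y[lag:]
restricted = np.column_stack([np.ones(n - lag), lagged(y, lag, n)])
full = np.column_stack([restricted, lagged(x, lag, n)])

def rss(X, Y):
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return resid @ resid

rss_r, rss_f = rss(restricted, Y), rss(full, Y)
q = full.shape[1] - restricted.shape[1]      # number of added x lags
dof = len(Y) - full.shape[1]
F = ((rss_r - rss_f) / q) / (rss_f / dof)    # large F => x Granger-causes y
```

    Comparing F against the F(q, dof) critical value gives the significance of the causal linkage.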

  19. Measurement methodology of natural radioactivity in the thermal establishments

    International Nuclear Information System (INIS)

    Ameon, R.; Robe, M.C.

    2004-11-01

    Thermal baths have been identified as an activity liable to expose workers to ionizing radiation through the natural sources radon and radon-220. The new regulation obliges these facilities to carry out radioactivity measurements. The principal exposure pathways are inhalation of radon and its daughters, exposure to gamma radiation, and ingestion of radioelements in thermal waters. I.R.S.N. proposes two methods of measuring natural radioactivity in application of the regulation on the protection of the public and workers. Some principles for reducing exposure to radon are recalled. (N.C.)

  20. Laser scattering methodology for measuring particulates in the air

    Directory of Open Access Journals (Sweden)

    Carlo Giglioni

    2009-03-01

    A description is given of the laser scattering method for measuring PM10, PM2.5 and PM1 dusts in confined environments (museums, libraries, archives, art galleries, etc.). Such equipment presents many advantages in comparison with that currently in use, not only from an analytic but also from a functional point of view.

  1. Sensible organizations: technology and methodology for automatically measuring organizational behavior.

    Science.gov (United States)

    Olguin Olguin, Daniel; Waber, Benjamin N; Kim, Taemie; Mohan, Akshay; Ara, Koji; Pentland, Alex

    2009-02-01

    We present the design, implementation, and deployment of a wearable computing platform for measuring and analyzing human behavior in organizational settings. We propose the use of wearable electronic badges capable of automatically measuring the amount of face-to-face interaction, conversational time, physical proximity to other people, and physical activity levels in order to capture individual and collective patterns of behavior. Our goal is to be able to understand how patterns of behavior shape individuals and organizations. By using on-body sensors in large groups of people for extended periods of time in naturalistic settings, we have been able to identify, measure, and quantify social interactions, group behavior, and organizational dynamics. We deployed this wearable computing platform in a group of 22 employees working in a real organization over a period of one month. Using these automatic measurements, we were able to predict employees' self-assessments of job satisfaction and their own perceptions of group interaction quality by combining data collected with our platform and e-mail communication data. In particular, the total amount of communication was predictive of both of these assessments, and betweenness in the social network exhibited a high negative correlation with group interaction satisfaction. We also found that physical proximity and e-mail exchange had a negative correlation of r = -0.55 (p < 0.01), which has far-reaching implications for past and future research on social networks.
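
    The kind of correlation analysis reported (proximity vs. e-mail exchange) can be sketched with synthetic data and a permutation p-value:

```python
import numpy as np

# Hypothetical daily face-to-face proximity minutes vs e-mail messages
# for the same dyads, constructed to anti-correlate as in the study.
rng = np.random.default_rng(7)
proximity = rng.gamma(shape=2.0, scale=10.0, size=120)
email = 40 - 0.5 * proximity + rng.normal(scale=5.0, size=120)

r = np.corrcoef(proximity, email)[0, 1]      # Pearson correlation

# Permutation p-value: how often does shuffled data correlate this strongly?
perm = np.array([np.corrcoef(rng.permutation(proximity), email)[0, 1]
                 for _ in range(2000)])
p_value = np.mean(np.abs(perm) >= abs(r))
```

    The permutation test avoids distributional assumptions, which matters for the skewed counts typical of interaction data.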

  2. Methodological Issues in Measures of Imitative Reaction Times

    Science.gov (United States)

    Aicken, Michael D.; Wilson, Andrew D.; Williams, Justin H. G.; Mon-Williams, Mark

    2007-01-01

    Ideomotor (IM) theory suggests that observing someone else perform an action activates an internal motor representation of that behaviour within the observer. Evidence supporting the case for an ideomotor theory of imitation has come from studies that show imitative responses to be faster than the same behavioural measures performed in response to…

  3. Investigation of an Error Theory for Conjoint Measurement Methodology.

    Science.gov (United States)

    1983-05-01

    Nygren, 1982; Srinivasan and Shocker, 1973a, 1973b; Ullrich and Cummins, 1973; Takane, Young, and de Leeuw, 1980; Young, 1972) ... procedures as a diagnostic tool. Specifically, they used the computed STRESS value and a measure of fit they called PRECAP that could be obtained

  4. Flux Measurements in Trees: Methodological Approach and Application to Vineyards

    Directory of Open Access Journals (Sweden)

    Francesca De Lorenzi

    2008-03-01

    In this paper a review of two sap flow methods for measuring transpiration in vineyards is presented. The objective of this work is to examine the potential for detecting transpiration in trees in response to environmental stresses, particularly high concentrations of ozone (O3) in the troposphere. The methods described are the stem heat balance and the thermal dissipation probe; advantages and disadvantages of each method are detailed. Applications of both techniques are shown in two large commercial vineyards in Southern Italy (Apulia and Sicily) under a semi-arid climate. Sap flow techniques measure transpiration at the plant scale, and an upscaling procedure is necessary to calculate transpiration at the whole-stand level. Here a general technique to link the value of transpiration at the plant level to the canopy value is presented, based on experimental relationships between transpiration and biometric characteristics of the trees. In both vineyards transpiration measured by sap flow methods compares well with evapotranspiration measured by micrometeorological techniques at the canopy scale. Moreover, the soil evaporation component has been quantified. In conclusion, comments are given about the suitability of sap flow methods for studying the interactions between trees and ozone.
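
    The upscaling procedure can be sketched as a regression of plant-scale sap flow on a biometric trait, applied to every vine in a plot (all numbers are invented):

```python
import numpy as np

# Relate plant-scale sap flow to a biometric trait (stem cross-sectional area).
area_cm2 = np.array([12.0, 18.0, 25.0, 31.0, 40.0])          # sampled vines
sap_flow_kg_day = np.array([1.3, 1.9, 2.6, 3.2, 4.1])        # measured transpiration

slope, intercept = np.polyfit(area_cm2, sap_flow_kg_day, 1)  # linear relation

# Stand-level transpiration: apply the relation to every vine in the plot,
# then normalise by ground area to get mm/day (1 kg water over 1 m2 = 1 mm).
stand_areas = np.array([10.0, 15.0, 20.0, 22.0, 28.0, 33.0, 38.0, 45.0])
plot_m2 = 8 * 4.0                                            # 8 vines, 4 m2 each
stand_transpiration_mm = (slope * stand_areas + intercept).sum() / plot_m2
```

    The stand-level value is what gets compared with micrometeorological evapotranspiration at the canopy scale.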

  5. The relevance of segments reports – measurement methodology

    Directory of Open Access Journals (Sweden)

    Tomasz Zimnicki

    2017-09-01

    The segment report is one of the areas of financial statements, and it obliges a company to provide information about the economic situation in each of its activity areas. The article evaluates the change of segment reporting standards from IAS 14R to IFRS 8 in the context of the qualitative characteristic of relevance. It presents the construction of a measure that allows the relevance of segment disclosures to be determined. The created measure was used to study periodical reports published by companies listed on the main market of the Warsaw Stock Exchange from three reporting periods: 2008, 2009 and 2013. Based on the research results, it was found that the change of segment reporting standards from IAS 14R to IFRS 8 was legitimate in the context of relevance.

  6. Model development and optimization of operating conditions to maximize PEMFC performance by response surface methodology

    International Nuclear Information System (INIS)

    Kanani, Homayoon; Shams, Mehrzad; Hasheminasab, Mohammadreza; Bozorgnezhad, Ali

    2015-01-01

    Highlights: • The optimization of the operating parameters in a serpentine PEMFC is done using RSM. • The RSM model can predict the cell power over a wide range of operating conditions. • St-An, St-Ca and RH-Ca have an optimum value to obtain the best performance. • The interactions of the operating conditions affect the output power significantly. • The cathode and anode stoichiometry are the most effective parameters on the power. - Abstract: Optimization of operating conditions to obtain maximum power in PEMFCs could play a significant role in reducing the costs of this emerging technology. In the present experimental study, a single serpentine PEMFC is used to investigate the effects of operating conditions on the electrical power production of the cell. Four significant parameters, including cathode stoichiometry, anode stoichiometry, gas inlet temperature, and cathode relative humidity, are studied using Design of Experiments (DOE) to obtain an optimal power. Central composite second-order Response Surface Methodology (RSM) is used to model the relationship between the goal function (power) and the considered input parameters (operating conditions). Using this statistical-mathematical method leads to a second-order equation for the cell power. This model considers interactions and quadratic effects of the different operating conditions and predicts the maximum or minimum power production over the entire working range of the parameters. In this range, high cathode stoichiometry and low anode stoichiometry result in the minimum cell power and, on the contrary, the medium range of fuel and oxidant stoichiometry leads to the maximum power. Results show that there is an optimum value of the anode stoichiometry, cathode stoichiometry and relative humidity to reach the best performance. The predictions of the model are evaluated by experimental tests, and they are in good agreement over the different ranges of the parameters.
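
    Once a second-order RSM model is fitted, the candidate optimum of the operating conditions is its stationary point; a sketch with invented coefficients (not the paper's fitted values):

```python
import numpy as np

# Fitted model in coded units: y = b0 + b.x + x'Bx, with symmetric B.
# The stationary point solves grad y = b + 2Bx = 0.
b0 = 0.55                       # baseline power (illustrative)
b = np.array([0.10, -0.08])     # linear terms: cathode, anode stoichiometry
B = np.array([[-0.20, 0.03],    # quadratic and interaction terms
              [0.03, -0.15]])

x_star = np.linalg.solve(-2.0 * B, b)           # stationary point, coded units
y_star = b0 + b @ x_star + x_star @ B @ x_star  # predicted power there

# Negative-definite B (all eigenvalues < 0) confirms the point is a maximum.
eigvals = np.linalg.eigvalsh(B)
is_maximum = bool(np.all(eigvals < 0))
```

    If B is indefinite, the stationary point is a saddle and the optimum lies on the boundary of the working range instead.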

  7. Spectral reflectance measurement methodologies for TUZ Golu field campaign

    CSIR Research Space (South Africa)

    Boucher, Y

    2011-07-01

    Full Text Available panel. However, it's possible to take this into account in the uncertainty budget. 2.2. Instrumentation and sampling area All of the teams except INPE used a Fieldspec ASD spectroradiometer. In this case, the user has to choose the aperture... of the objective and the ASD configuration (the number of elementary spectra averaged to get one measurement, here typically 10, and the number of dark current acquisitions, here typically 25). The spectroradiometer must also be optimized from time to time...

  8. OPTIMIZATION OF PRETREATMENT CONDITIONS OF CARROTS TO MAXIMIZE JUICE RECOVERY BY RESPONSE SURFACE METHODOLOGY

    Directory of Open Access Journals (Sweden)

    H. K. SHARMA

    2006-12-01

    Carrot juice was expressed in a hydraulic press using a wooden setup. Carrot samples, pretreated at different designed combinations of pH, temperature and time using a Central Composite Rotatable Design (CCRD) under Response Surface Methodology (RSM), were pressed, and the juice so obtained was characterized for various physico-chemical parameters: yield, TSS, water content, reducing sugars, total sugars and colour (absorbance). The study indicated that carrots exposed to the different pretreatment conditions gave a higher yield than the control. The responses were optimized by the numerical method and were found to be 78.23% yield, 0.93 colour (absorbance), 3.41% reducing sugars, 5.53% total sugars, 6.69 °Brix, and 90.50% water content. All the derived mathematical models for the various responses were found to fit significantly and to predict the data.
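
    The CCRD design matrix used for such pretreatment experiments can be generated as follows (three coded factors; the six centre replicates are a common, but here assumed, choice):

```python
import itertools
import numpy as np

# Central composite rotatable design (CCRD) for k = 3 coded factors
# (pH, temperature, time): 2^k factorial points, 2k axial points at +/-alpha
# with alpha = (2^k)**0.25 for rotatability, plus replicated centre points.
k = 3
alpha = (2 ** k) ** 0.25                       # about 1.682 for k = 3

factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
axial = np.vstack([sign * alpha * np.eye(k)[i]
                   for i in range(k) for sign in (-1, 1)])
center = np.zeros((6, k))                      # six centre replicates (assumed)

design = np.vstack([factorial, axial, center])
n_runs = len(design)                           # 8 + 6 + 6 = 20 runs
```

    Each coded row is then mapped linearly onto the physical ranges of pH, temperature and time before running the pressing experiments.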

  9. Methodological issues in measures of imitative reaction times.

    Science.gov (United States)

    Aicken, Michael D; Wilson, Andrew D; Williams, Justin H G; Mon-Williams, Mark

    2007-04-01

    Ideomotor (IM) theory suggests that observing someone else perform an action activates an internal motor representation of that behaviour within the observer. Evidence supporting the case for an ideomotor theory of imitation has come from studies that show imitative responses to be faster than the same behavioural measures performed in response to spatial cues. In an attempt to replicate these findings, we manipulated the salience of the visual cue and found that we could reverse the advantage of the imitative cue over the spatial cue. We suggest that participants utilised a simple visuomotor mechanism to perform all aspects of this task, with performance being driven by the relative visual salience of the stimuli. Imitation is a more complex motor skill that would constitute an inefficient strategy for rapid performance.

  10. [Measuring nursing care times--methodologic and documentation problems].

    Science.gov (United States)

    Bartholomeyczik, S; Hunstein, D

    2001-08-01

    The time needed for nursing care is an important measurement as a basis for financing care. In Germany the Long Term Care Insurance (LTCI) reimburses nursing care depending on the time family caregivers need to complete selected activities. The LTCI recommends certain time ranges for these activities, which are wholly compensatory, as a basis for assessment. The purpose is to enhance assessment justice and comparability. Using the example of a German research project, which had to investigate the duration of these activities and the reasons for differences, questions are raised about some problems of definition and interpretation. There are definition problems, since caring activities, especially in private households, are almost never performed as clearly defined modules. Moreover, different activities are often performed simultaneously. However, the most important question is what exactly time figures can say about the essentials of nursing care.

  11. Methodological NMR imaging developments to measure cerebral perfusion

    International Nuclear Information System (INIS)

    Pannetier, N.

    2010-12-01

    This work focuses on acquisition techniques and physiological models that allow the characterization of cerebral perfusion by MRI. The arterial input function (AIF), on which many models are based, is measured by an optical imaging technique at the carotid artery in rats. The reproducibility and repeatability of the AIF are discussed and a model function is proposed. We then compare two techniques for measuring the vessel size index (VSI) in rats bearing a glioma. The reference technique, using a USPIO contrast agent (CA), is compared with the dynamic approach, which estimates this parameter during the passage of a bolus of Gd. This last technique has the advantage of being usable clinically. The results obtained at 4.7 T by both approaches are similar, and the use of VSI in clinical protocols at high field is strongly encouraged. The mechanisms involved (R1 and R2* relaxivities) were then studied using a multi-gradient-echo approach. A multi-echo spiral sequence was developed and a method that allows refocusing between each echo is presented. This sequence is used to characterize the impact of R1 effects during the passage of two successive injections of Gd. Finally, we developed a tool for simulating the NMR signal on a 2D geometry, taking into account the permeability of the BBB and the diffusion of the CA in the interstitial space. At short TE, the effect of diffusion on the signal is negligible. In contrast, the effects of diffusion and permeability may be separated at long echo times. Finally, we show that during extravasation of the CA, the local magnetic field homogenization due to the decrease of the magnetic susceptibility difference at vascular interfaces is quickly balanced by the perturbations induced by the increase of the magnetic susceptibility difference at the cellular interfaces in the extravascular compartment. (author)

  12. Changing methodology for measuring airborne radioactive discharges from nuclear facilities

    International Nuclear Information System (INIS)

    Glissmeyer, J.A.; Ligotke, M.W.

    1995-05-01

    The US Environmental Protection Agency (USEPA) requires that measurements of airborne radioactive discharges from nuclear facilities be performed following outdated methods contained in the American National Standards Institute (ANSI) N13.1-1969 Guide to Sampling Airborne Radioactive Materials in Nuclear Facilities. Improved methods are being introduced via two paths. First, the ANSI standard is being revised, and second, EPA's equivalency-granting process is being used to implement new technology on a case-by-case or broad basis. The ANSI standard is being revised by a working group under the auspices of the Health Physics Society Standards Committee. The revised standard includes updated methods based on current technology and a performance-based approach to design. The performance-based standard will present new challenges, especially in the area of performance validation. Progress in revising the standard is discussed. The US Department of Energy recently received approval from the USEPA for an alternate approach to complying with air-sampling regulations. The alternate approach is similar to the revised ANSI standard. New design tools include new types of sample extraction probes and a model for estimating line losses for particles and radioiodine. Wind tunnel tests are being performed on various sample extraction probes for use at small stacks. The data show that single-point sampling probes are superior to ANSI N13.1-1969 style multiple-point sample extraction probes.

  13. Measurement of plasma adenosine concentration: methodological and physiological considerations

    International Nuclear Information System (INIS)

    Gewirtz, H.; Brown, P.; Most, A.S.

    1987-01-01

    This study tested the hypothesis that measurements of plasma adenosine concentration made on samples of blood obtained in dipyridamole and EHNA (i.e., stopping solution) may be falsely elevated as a result of ongoing in vitro production and accumulation of adenosine during sample processing. Studies were performed with samples of anticoagulated blood obtained from anesthetized domestic swine. The adenosine concentration of ultrafiltered plasma was determined by HPLC. The following parameters were evaluated: (i) the rate of clearance of [3H]adenosine added to plasma, (ii) the endogenous adenosine concentration of matched blood samples obtained in stopping solution alone, stopping solution plus EDTA, and perchloric acid (PCA), (iii) plasma and erythrocyte endogenous adenosine concentrations in nonhemolyzed samples, and (iv) the plasma adenosine concentration of samples hemolyzed in the presence of stopping solution alone or stopping solution plus EDTA. We observed that (i) greater than or equal to 95% of [3H]adenosine added to plasma is removed from it by the formed elements of the blood in less than 20 s, (ii) the plasma adenosine concentration of samples obtained in stopping solution alone is generally 10-fold greater than that of matched samples obtained in stopping solution plus EDTA, (iii) deliberate mechanical hemolysis of blood samples obtained in stopping solution alone resulted in substantial augmentation of plasma adenosine levels in comparison with matched nonhemolyzed specimens, and the addition of EDTA to the stopping solution prevented this, and (iv) the adenosine content of blood samples obtained in PCA agreed closely with the sum of the plasma and erythrocyte adenosine content of samples obtained in stopping solution plus EDTA.

  14. VAR Methodology Used for Exchange Risk Measurement and Prevention

    Directory of Open Access Journals (Sweden)

    Florentina Balu

    2006-05-01

    In this article we discuss one of the modern risk-measurement techniques, Value-at-Risk (VaR). Currently, central banks in major money centers, under the auspices of the BIS Basle Committee, adopt the VaR system to evaluate the market risk of their supervised banks. Bank regulators ask all commercial banks to report VaRs with their internal models. Value at risk (VaR) is a powerful tool for assessing market risk, but it also imposes a challenge. Its power is its generality. Unlike market risk metrics such as the Greeks, duration and convexity, or beta, which are applicable to only certain asset categories or certain sources of market risk, VaR is general. It is based on the probability distribution of a portfolio's market value. Value at Risk calculates the maximum loss expected (or worst-case scenario) on an investment over a given time period and at a specified degree of confidence. There are three methods by which VaR can be calculated: historical simulation, the variance-covariance method and Monte Carlo simulation. The variance-covariance method is easiest because you need to estimate only two factors: average return and standard deviation. However, it assumes returns are well behaved according to the symmetrical normal curve and that historical patterns will repeat into the future. Historical simulation improves the accuracy of the VaR calculation, but requires more computational data; it also assumes that "past is prologue". Monte Carlo simulation is complex, but has the advantage of allowing users to tailor ideas about future patterns that depart from historical patterns.
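
    The variance-covariance method described above can be sketched in a few lines (the returns below are illustrative; z is the normal quantile at the chosen confidence level):

```python
from statistics import NormalDist, mean, stdev

# Parametric (variance-covariance) VaR: worst loss not expected to be
# exceeded at the given confidence, assuming normally distributed returns.
daily_returns = [0.004, -0.012, 0.007, -0.003, 0.010, -0.008, 0.002,
                 -0.015, 0.006, -0.001, 0.009, -0.005]
mu, sigma = mean(daily_returns), stdev(daily_returns)

portfolio_value = 1_000_000.0
confidence = 0.99
z = NormalDist().inv_cdf(confidence)           # about 2.326 for 99%

# One-day VaR in currency units.
var_1day = portfolio_value * (z * sigma - mu)
```

    Historical simulation would instead take the empirical 1% quantile of the returns; Monte Carlo would draw returns from a user-specified distribution.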

  15. VAR Methodology Used for Exchange Risk Measurement and Prevention

    Directory of Open Access Journals (Sweden)

    Ion Stancu

    2006-03-01

    Full Text Available In this article we discuss one of the modern risk-measurement techniques, Value-at-Risk (VaR). Currently, central banks in major money centers, under the auspices of the BIS Basel Committee, adopt the VaR system to evaluate the market risk of their supervised banks. Bank regulators ask all commercial banks to report VaRs computed with their internal models. Value at Risk (VaR) is a powerful tool for assessing market risk, but it also imposes a challenge. Its power is its generality. Unlike market risk metrics such as the Greeks, duration and convexity, or beta, which are applicable to only certain asset categories or certain sources of market risk, VaR is general: it is based on the probability distribution of a portfolio’s market value. VaR calculates the maximum loss expected (or worst-case scenario) on an investment over a given time period and at a specified degree of confidence. There are three methods by which VaR can be calculated: historical simulation, the variance-covariance method and Monte Carlo simulation. The variance-covariance method is easiest because you need to estimate only two factors: average return and standard deviation. However, it assumes returns are well-behaved according to the symmetrical normal curve and that historical patterns will repeat into the future. Historical simulation improves on the accuracy of the VaR calculation, but requires more computational data; it also assumes that “past is prologue”. Monte Carlo simulation is complex, but has the advantage of allowing users to tailor ideas about future patterns that depart from historical patterns.

  16. Methodologies for Measuring Judicial Performance: The Problem of Bias

    Directory of Open Access Journals (Sweden)

    Jennifer Elek

    2014-12-01

    Full Text Available Concerns about gender and racial bias in the survey-based judicial performance evaluations (JPE) common in the United States have persisted for decades. Consistent with a large body of basic research in the psychological sciences, recent studies confirm that the results from these JPE surveys are systematically biased against women and minority judges. In this paper, we explain the insidious manner in which performance evaluations may be biased, describe some techniques that may help to reduce expressions of bias in judicial performance evaluation surveys, and discuss the potential problem such biases may pose in other common methods of performance evaluation used in the United States and elsewhere. We conclude by highlighting the potential adverse consequences of judicial performance evaluation programs that rely on biased measurements. DOWNLOAD THIS PAPER FROM SSRN: http://ssrn.com/abstract=2533937

  17. Statistical optimization for alkali pretreatment conditions of narrow-leaf cattail by response surface methodology

    Directory of Open Access Journals (Sweden)

    Arrisa Ruangmee

    2013-08-01

    Full Text Available Response surface methodology with central composite design was applied to optimize alkali pretreatment of narrow-leaf cattail (Typha angustifolia). Joint effects of three independent variables, NaOH concentration (1-5%), temperature (60-100 ºC), and reaction time (30-150 min), were investigated to evaluate the increase in and improvement of the cellulosic components contained in the raw material after pretreatment. The combined optimum condition based on the cellulosic content obtained from this study is: a concentration of 5% NaOH, a reaction time of 120 min, and a temperature of 100 ºC. This result was analyzed employing ANOVA with a second-order polynomial equation. The model was found to be significant and was able to predict the response accurately with less than 5% error. Under this combined optimal condition, the desirable cellulosic content of the sample increased from 38.5 to 68.3%, while the unfavorable hemicellulosic content decreased from 37.6 to 7.3%.
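The second-order polynomial fit at the heart of such an RSM analysis can be sketched as an ordinary least-squares fit of a full quadratic model. The coded factors and coefficients below are synthetic stand-ins (not the study's data) for NaOH concentration, temperature and time:

```python
import numpy as np

def quadratic_design(X):
    """Model matrix for a second-order RSM polynomial: intercept,
    linear, two-way interaction, and pure quadratic terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# Synthetic example: 3 coded factors (NaOH %, temperature, time), 20 runs
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 3))
true_beta = np.array([60.0, 5.0, 3.0, 2.0, 1.0, 0.5, -0.5, -4.0, -2.0, -1.0])
y = quadratic_design(X) @ true_beta            # noise-free response (cellulose %)

# Least-squares estimate of the model coefficients
beta_hat, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
```

On real data the fitted surface would then be interrogated (ANOVA significance, stationary point, contour plots) to pick the optimum, as the record describes.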

  18. Verification of dosimetric methodology for auditing radiotherapy quality under non-reference condition in Hubei province

    International Nuclear Information System (INIS)

    Ma Xinxing; Luo Suming; He Zhijian; Zhou Wenshan

    2014-01-01

    Objective: To verify the reliability of TLD-based quality audit for radiotherapy dosimetry of medical electron accelerators in non-reference conditions by monitoring the dose variations from electron beams with different field sizes and a 45° wedge, and the dose variations from photon beams with different field sizes and source-skin distances. Methods: Both TLDs and finger ionization chambers were placed at a depth of 10 cm in water to measure the absorbed dose from photon beams, and at the depth of maximum dose for electron beams under non-reference conditions. TLDs were then mailed to the National Institute for Radiological Protection, China CDC, for further measurement. Results: Among the 70 measuring points for photon beams, 58 points showed results with a relative error of less than ±7.0% (IAEA's acceptable deviation: ±7.0%) between TLD and finger ionization chamber measurements, a qualification rate of 82.8%. After correction by the Ps value, 62 points were qualified and the rate rose to 88.6%. All 24 measuring points for electron beams presented a relative error within ±5.0% (IAEA's acceptable deviation: ±5.0%) between TLD and finger cylindrical ionization chamber measurements. Conclusions: TLD-based quality audit is convenient for determining radiotherapy dosimetric parameters of electron beams in non-reference conditions and can improve the accuracy of the measured parameters in connection with finger chambers. For electron beams of 5 MeV < E_0 < 10 MeV, the absorbed dose parameters measured by finger ionization chambers, combined with TLD audit, can help obtain precise and reliable results. (authors)

  19. Application of validity theory and methodology to patient-reported outcome measures (PROMs): building an argument for validity.

    Science.gov (United States)

    Hawkins, Melanie; Elsworth, Gerald R; Osborne, Richard H

    2018-07-01

    Data from subjective patient-reported outcome measures (PROMs) are now being used in the health sector to make or support decisions about individuals, groups and populations. Contemporary validity theorists define validity not as a statistical property of the test but as the extent to which empirical evidence supports the interpretation of test scores for an intended use. However, validity testing theory and methodology are rarely evident in the PROM validation literature. Application of this theory and methodology would provide structure for comprehensive validation planning to support improved PROM development and sound arguments for the validity of PROM score interpretation and use in each new context. This paper proposes the application of contemporary validity theory and methodology to PROM validity testing. The validity testing principles will be applied to a hypothetical case study with a focus on the interpretation and use of scores from a translated PROM that measures health literacy (the Health Literacy Questionnaire or HLQ). Although robust psychometric properties of a PROM are a pre-condition to its use, a PROM's validity lies in the sound argument that a network of empirical evidence supports the intended interpretation and use of PROM scores for decision making in a particular context. The health sector is yet to apply contemporary theory and methodology to PROM development and validation. The theoretical and methodological processes in this paper are offered as an advancement of the theory and practice of PROM validity testing in the health sector.

  20. A combined linear optimisation methodology for water resources allocation in Alfeios River Basin (Greece) under uncertain and vague system conditions

    Science.gov (United States)

    Bekri, Eleni; Yannopoulos, Panayotis; Disse, Markus

    2013-04-01

    In the present study, a combined linear programming methodology, based on Li et al. (2010) and Bekri et al. (2012), is employed for optimizing water allocation under uncertain system conditions in the Alfeios River Basin, in Greece. The Alfeios River is a water resources system of great natural, ecological, social and economic importance for Western Greece, since it has the longest and highest flow rate watercourse in the Peloponnisos region. Moreover, the river basin was exposed in the last decades to a plethora of environmental stresses (e.g. hydrogeological alterations, intensively irrigated agriculture, surface and groundwater overexploitation and infrastructure developments), resulting in the degradation of its quantitative and qualitative characteristics. As in most Mediterranean countries, water resource management in Alfeios River Basin has been focused up to now on an essentially supply-driven approach. It is still characterized by a lack of effective operational strategies. Authority responsibility relationships are fragmented, and law enforcement and policy implementation are weak. The present regulated water allocation puzzle entails a mixture of hydropower generation, irrigation, drinking water supply and recreational activities. Under these conditions its water resources management is characterised by high uncertainty and by vague and imprecise data. The considered methodology has been developed in order to deal with uncertainties expressed as either probability distributions, or/and fuzzy boundary intervals, derived by associated α-cut levels. In this framework a set of deterministic submodels is studied through linear programming. The ad hoc water resources management and alternative management patterns in an Alfeios subbasin are analyzed and evaluated under various scenarios, using the above mentioned methodology, aiming to promote a sustainable and equitable water management. Li, Y.P., Huang, G.H. and S.L., Nie, (2010), Planning water resources
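The "set of deterministic submodels" idea can be illustrated with a toy two-user allocation LP solved at the lower and upper bounds of an uncertain supply interval (an α-cut). All numbers below are hypothetical, and the brute-force vertex enumeration is only suitable for toy problems; real models of the kind cited use proper LP solvers:

```python
from itertools import combinations

def maximize_2var(c, A, b):
    """Maximize c[0]*x + c[1]*y subject to A[i][0]*x + A[i][1]*y <= b[i]
    and x, y >= 0, by enumerating vertices of the feasible polygon."""
    cons = list(A) + [(-1.0, 0.0), (0.0, -1.0)]   # encode x >= 0, y >= 0
    rhs = list(b) + [0.0, 0.0]
    best = None
    for i, j in combinations(range(len(cons)), 2):
        (a11, a12), (a21, a22) = cons[i], cons[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue                               # parallel boundaries
        x = (rhs[i] * a22 - a12 * rhs[j]) / det    # intersection of the two lines
        y = (a11 * rhs[j] - rhs[i] * a21) / det
        if all(ca * x + cb * y <= r + 1e-9 for (ca, cb), r in zip(cons, rhs)):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, x, y)
    return best                                    # (objective, x, y)

# Benefits per unit of water for irrigation (x) and hydropower (y),
# demand caps of 40 and 50 units, and supply known only as an interval.
benefits = (3.0, 2.0)
for supply in (60.0, 80.0):                        # alpha-cut bounds of supply
    A = [(1.0, 1.0), (1.0, 0.0), (0.0, 1.0)]       # x+y <= S, x <= 40, y <= 50
    b = [supply, 40.0, 50.0]
    print(supply, maximize_2var(benefits, A, b))
```

Solving the submodel at each interval bound yields an interval of optimal total benefit, which is how uncertainty in supply propagates to the allocation decision.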

  1. Gust factor based on research aircraft measurements: A new methodology applied to the Arctic marine boundary layer

    DEFF Research Database (Denmark)

    Suomi, Irene; Lüpkes, Christof; Hartmann, Jörg

    2016-01-01

    There is as yet no standard methodology for measuring wind gusts from a moving platform. To address this, we have developed a method to derive gusts from research aircraft data. First we evaluated four different approaches, including Taylor's hypothesis of frozen turbulence, to derive the gust...... in unstable conditions (R2=0.52). The mean errors for all methods were low, from -0.02 to 0.05, indicating that wind gust factors can indeed be measured from research aircraft. Moreover, we showed that aircraft can provide gust measurements within the whole boundary layer, if horizontal legs are flown...
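For reference, the conventional surface-station definition of the gust factor that aircraft-derived gusts are compared against is simple to compute (the trace below is illustrative; the paper's aircraft-based method itself is more involved):

```python
def gust_factor(speeds, dt=1.0, gust_window=3.0):
    """Gust factor G = max short-window moving-average wind speed
    divided by the mean speed over the whole record. `speeds` are
    sampled every `dt` seconds; `gust_window` is in seconds."""
    n = max(1, int(round(gust_window / dt)))
    gusts = [sum(speeds[i:i + n]) / n for i in range(len(speeds) - n + 1)]
    return max(gusts) / (sum(speeds) / len(speeds))

# Hypothetical 1 Hz anemometer trace (m/s): a 3 s gust of 10 m/s
wind = [5, 5, 5, 5, 10, 10, 10, 5, 5, 5]
print(gust_factor(wind))  # peak 3 s mean of 10 m/s over a 6.5 m/s record mean
```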

  2. Methodological possibilities for using the electron and ion energy balance in thermospheric complex measurements

    International Nuclear Information System (INIS)

    Serafimov, K.B.; Serafimova, M.K.

    1991-01-01

    Combination of ground-based measurements for the determination of basic thermospheric characteristics is proposed. An expression for the energy transport between components of space plasma is also derived and discussed within the framework of the presented methodology, which can be divided into the following major sections: 1) application of ionosonde, absorption measurements, and TEC measurements using Faraday rotation or the differential Doppler effect; 2) ground-based airglow measurements; 3) airglow and plasma satellite measurements. 9 refs

  3. Drosophila Courtship Conditioning As a Measure of Learning and Memory.

    Science.gov (United States)

    Koemans, Tom S; Oppitz, Cornelia; Donders, Rogier A T; van Bokhoven, Hans; Schenck, Annette; Keleman, Krystyna; Kramer, Jamie M

    2017-06-05

    Many insights into the molecular mechanisms underlying learning and memory have been elucidated through the use of simple behavioral assays in model organisms such as the fruit fly, Drosophila melanogaster. Drosophila is useful for understanding the basic neurobiology underlying cognitive deficits resulting from mutations in genes associated with human cognitive disorders, such as intellectual disability (ID) and autism. This work describes a methodology for testing learning and memory using a classic paradigm in Drosophila known as courtship conditioning. Male flies court females using a distinct pattern of easily recognizable behaviors. Premated females are not receptive to mating and will reject the male's copulation attempts. In response to this rejection, male flies reduce their courtship behavior. This learned reduction in courtship behavior is measured over time, serving as an indicator of learning and memory. The basic numerical output of this assay is the courtship index (CI), which is defined as the percentage of time that a male spends courting during a 10 min interval. The learning index (LI) is the relative reduction of CI in flies that have been exposed to a premated female compared to naïve flies with no previous social encounters. For the statistical comparison of LIs between genotypes, a randomization test with bootstrapping is used. To illustrate how the assay can be used to address the role of a gene relating to learning and memory, the pan-neuronal knockdown of Dihydroxyacetone phosphate acyltransferase (Dhap-at) was characterized here. The human ortholog of Dhap-at, glyceronephosphate O-acyltransferase (GNPT), is involved in rhizomelic chondrodysplasia punctata type 2, an autosomal-recessive syndrome characterized by severe ID. Using the courtship conditioning assay, it was determined that Dhap-at is required for long-term memory, but not for short-term memory. 
This result serves as a basis for further investigation of the underlying molecular
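The CI/LI arithmetic and the randomization test described above can be sketched as follows. The CI values are made up for illustration, and this simple permutation test is only a stand-in for the more elaborate randomization-with-bootstrapping scheme the protocol uses:

```python
import random

def learning_index(ci_trained, ci_naive):
    """LI = relative reduction of the mean courtship index (CI) in
    trained flies compared with naive flies."""
    m_trained = sum(ci_trained) / len(ci_trained)
    m_naive = sum(ci_naive) / len(ci_naive)
    return (m_naive - m_trained) / m_naive

def randomization_p(trained, naive, n_perm=5000, seed=1):
    """One-sided permutation test: how often does shuffling group
    labels produce an LI at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = learning_index(trained, naive)
    pooled = trained + naive
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_t, perm_n = pooled[:len(trained)], pooled[len(trained):]
        if learning_index(perm_t, perm_n) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

trained = [10, 15, 12, 8, 11]   # CI (%) after exposure to a premated female
naive = [60, 55, 70, 65, 58]    # CI (%) of naive males
print(learning_index(trained, naive), randomization_p(trained, naive))
```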

  4. 42 CFR 486.318 - Condition: Outcome measures.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 Condition: Outcome measures. 486.318 Section 486... Organizations Organ Procurement Organization Outcome Requirements § 486.318 Condition: Outcome measures. (a..., territories, or possessions, an OPO must meet all 3 of the following outcome measures: (1) The OPO's donation...

  5. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    International Nuclear Information System (INIS)

    Tarifeño-Saldivia, Ariel; Pavez, Cristian; Soto, Leopoldo; Mayer, Roberto E

    2015-01-01

    This work introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. The methodology is based on calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from detection of the burst of neutrons. An improvement of more than one order of magnitude in the accuracy of a paraffin-wax-moderated 3He-filled tube is obtained by using this methodology with respect to previous calibration methods. (paper)
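A minimal sketch of the charge-to-counts idea is below. The calibration numbers are hypothetical, and the paper's actual statistical model is more complete (it must cope with the single-event charge spectrum of a real tube, pile-up, etc.):

```python
def events_from_charge(total_charge, pulse_charges):
    """Estimate the number of detected neutrons in a burst from the
    integrated charge, using the mean single-event charge obtained
    from a pulse-mode calibration (same charge units for both)."""
    m = len(pulse_charges)
    q_mean = sum(pulse_charges) / m
    n_hat = total_charge / q_mean
    # spread of the single-event charge adds to the Poisson counting term
    var_q = sum((q - q_mean) ** 2 for q in pulse_charges) / (m - 1)
    rel_unc = ((1 + var_q / q_mean ** 2) / n_hat) ** 0.5
    return n_hat, rel_unc

# Hypothetical: burst deposits 100 charge units; calibration pulses ~1 unit each
n, u = events_from_charge(total_charge=100.0, pulse_charges=[1.0, 1.2, 0.8])
print(n, u)
```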

  6. Optimization extraction conditions for improving phenolic content and antioxidant activity in Berberis asiatica fruits using response surface methodology (RSM).

    Science.gov (United States)

    Belwal, Tarun; Dhyani, Praveen; Bhatt, Indra D; Rawal, Ranbeer Singh; Pande, Veena

    2016-09-15

    This study was designed, for the first time, to optimize the extraction of phenolic compounds and the antioxidant potential of Berberis asiatica fruits using response surface methodology (RSM). Solvent selection was based on preliminary experiments and a five-factor, three-level central composite design (CCD). Extraction temperature (X1), sample-to-solvent ratio (X3) and solvent concentration (X5) significantly affected the response variables. The quadratic model fitted well for all responses. Under optimal extraction conditions, the dried fruit sample was mixed with 80% methanol at pH 3.0 in a ratio of 1:50 and the mixture was heated at 80 °C for 30 min; the measured parameters were in accordance with the predicted values. High Performance Liquid Chromatography (HPLC) analysis at the optimized condition revealed six phenolic compounds. The results suggest that optimization of the extraction conditions is critical for accurate quantification of phenolics and antioxidants in Berberis asiatica fruits, and may further be utilized in industrial extraction procedures.

  7. Damage detection methodology under variable load conditions based on strain field pattern recognition using FBGs, nonlinear principal component analysis, and clustering techniques

    Science.gov (United States)

    Sierra-Pérez, Julián; Torres-Arredondo, M.-A.; Alvarez-Montoya, Joham

    2018-01-01

    Structural health monitoring consists of using sensors integrated within structures together with algorithms to perform load monitoring, damage detection, damage location, damage size and severity, and prognosis. One possibility is to use strain sensors to infer structural integrity by comparing patterns in the strain field between the pristine and damaged conditions. In previous works, the authors have demonstrated that it is possible to detect small defects based on strain field pattern recognition by using robust machine learning techniques. They have focused on methodologies based on principal component analysis (PCA) and on the development of several unfolding and standardization techniques, which allow dealing with multiple load conditions. However, before a real implementation of this approach in engineering structures, changes in the strain field due to conditions different from damage occurrence need to be isolated. Since load conditions may vary in most engineering structures and promote significant changes in the strain field, it is necessary to implement novel techniques for uncoupling such changes from those produced by damage occurrence. A damage detection methodology based on optimal baseline selection (OBS) by means of clustering techniques is presented. The methodology includes the use of hierarchical nonlinear PCA as a nonlinear modeling technique in conjunction with Q and nonlinear T2 damage indices. The methodology is experimentally validated using strain measurements obtained by 32 fiber Bragg grating sensors bonded to an aluminum beam under dynamic bending loads and simultaneously submitted to variations in its pitch angle. The results demonstrated the capability of the methodology for clustering data according to 13 different load conditions (pitch angles), performing the OBS and detecting six different damages induced in a cumulative way.
The proposed methodology showed a true positive rate of 100% and a false positive rate of 1.28% for a
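The core of the PCA-based detection step, projecting a new strain vector onto a baseline subspace and flagging whatever the model cannot reconstruct, can be sketched as below. This uses plain linear PCA on synthetic data rather than the paper's hierarchical nonlinear PCA, and the sensor counts are illustrative:

```python
import numpy as np

def fit_baseline(X, n_comp):
    """PCA baseline from pristine-condition strain data
    (rows = samples, columns = FBG sensors)."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Xs = (X - mu) / sd
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    return mu, sd, Vt[:n_comp].T          # loading matrix P (sensors x n_comp)

def q_index(x, mu, sd, P):
    """Q (squared prediction error): the part of a new measurement
    that the baseline subspace cannot reconstruct."""
    xs = (x - mu) / sd
    residual = xs - P @ (P.T @ xs)
    return float(residual @ residual)

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 6))               # two underlying strain patterns, 6 sensors
X = rng.normal(size=(200, 2)) @ W         # pristine training data
mu, sd, P = fit_baseline(X, n_comp=2)

healthy = rng.normal(size=2) @ W          # new measurement, same patterns
damaged = healthy + np.array([3.0, 0, 0, 0, 0, 0])  # local strain anomaly at sensor 1
print(q_index(healthy, mu, sd, P), q_index(damaged, mu, sd, P))
```

In the paper's scheme this comparison is made against the optimal baseline for the current load cluster, so that load-induced strain changes are not mistaken for damage.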

  8. Optimization of Electrochemical Treatment Process Conditions for Distillery Effluent Using Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    P. Arulmathi

    2015-01-01

    Full Text Available The distillery industry is recognized as one of the most polluting industries in India, with a large amount of annual effluent production. In the present study, the optimization of electrochemical treatment process variables is reported for treating the color and COD of distillery spent wash using Ti/Pt as an anode in batch mode. Process variables such as pH, current density, electrolysis time, and electrolyte dose were selected as operating variables, and chemical oxygen demand (COD) and color removal efficiency were considered as response variables for optimization using response surface methodology. Indirect electrochemical-oxidation process variables were optimized using a Box-Behnken response surface design (BBD). The results showed that the electrochemical treatment process effectively removed the COD (89.5%) and color (95.1%) of the distillery spent wash under the optimum conditions: pH of 4.12, current density of 25.02 mA/cm2, electrolysis time of 103.27 min, and electrolyte (NaCl) concentration of 1.67 g/L.

  9. Stanley Milgram’s Obedience to Authority “Relationship” Condition: Some Methodological and Theoretical Implications

    Directory of Open Access Journals (Sweden)

    Nestar Russell

    2014-04-01

    Full Text Available In May 1962, social psychologist Stanley Milgram ran what was arguably the most controversial variation of his Obedience to Authority (OTA) experiments: the Relationship Condition (RC). In the RC, participants were required to bring a friend, with one becoming the teacher and the other the learner. The learners were covertly informed that the experiment was actually exploring whether their friend would obey an experimenter’s orders to hurt them. Learners were quickly trained in how to react to the impending “shocks”. Only 15 percent of teachers completed the RC. In an article published in 1965, Milgram discussed most of the variations on his baseline experiment, but only named the RC in passing, promising a more detailed account in his forthcoming book. However, his 1974 book failed to mention the RC, and it remained unpublished until François Rochat and Andre Modigliani discovered it in Milgram’s personal archive at Yale University in 1997. Their overview of the RC’s procedure and results left a number of questions unanswered. For example, what were the etiological origins of the RC? Why did Milgram decide against publishing this experiment? And does the RC have any significant methodological or theoretical implications for the Obedience studies discourse? Based on documents obtained from Milgram’s personal archive, the aim of this article is to shed new light on these questions.

  10. Radionuclide measurements, via different methodologies, as tool for geophysical studies on Mt. Etna

    Energy Technology Data Exchange (ETDEWEB)

    Morelli, D., E-mail: daniela.morelli@ct.infn.it [Dipartimento di Fisica e Astronomia, Universita di Catania, via S. Sofia, 64 I-95123 Catania (Italy); Istituto Nazionale di Fisica Nucleare- Sezione di Catania, via S. Sofia, 64 I-95123 Catania (Italy); Imme, G. [Dipartimento di Fisica e Astronomia, Universita di Catania, via S. Sofia, 64 I-95123 Catania (Italy); Istituto Nazionale di Fisica Nucleare- Sezione di Catania, via S. Sofia, 64 I-95123 Catania (Italy); Altamore, I.; Cammisa, S. [Dipartimento di Fisica e Astronomia, Universita di Catania, via S. Sofia, 64 I-95123 Catania (Italy); Giammanco, S. [Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Catania, Piazza Roma, 2, I-95123 Catania (Italy); La Delfa, S. [Dipartimento di Scienze Geologiche, Universita di Catania, Corso Italia,57 I-95127 Catania (Italy); Mangano, G. [Dipartimento di Fisica e Astronomia, Universita di Catania, via S. Sofia, 64 I-95123 Catania (Italy); Neri, M. [Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Catania, Piazza Roma, 2, I-95123 Catania (Italy); Patane, G. [Dipartimento di Scienze Geologiche, Universita di Catania, Corso Italia,57 I-95127 Catania (Italy)

    2011-10-01

    Natural radioactivity measurements represent an interesting tool for studying geodynamical events and soil geophysical characteristics. In this direction we have carried out, in recent years, several radionuclide monitoring campaigns in both the volcanic and tectonic areas of eastern Sicily. In particular, we report in-soil radon investigations in a tectonic area, including both laboratory and in-site measurements, applying three different methodologies based on both active and passive detection systems. The active detection devices consisted of solid-state silicon detectors equipped in portable systems for short-time measurements and for long-time monitoring. The passive technique consisted of solid-state nuclear track detectors (SSNTD), CR-39 type, and allowed integrated measurements. The performances of the three methodologies were compared according to different kinds of monitoring. In general, the results obtained with the three methodologies agree with each other and reflect the tectonic settings of the investigated area.

  11. Development of a field measurement methodology for studying the thermal indoor environment in hybrid GEOTABS buildings

    DEFF Research Database (Denmark)

    Kazanci, Ongun Berk; Khovalyg, Dolaana; Olesen, Bjarne W.

    2018-01-01

    buildings. The three demonstration buildings were an office building in Luxembourg, an elderly care home in Belgium, and an elementary school in the Czech Republic. All of these buildings are equipped with hybrid GEOTABS systems; however, they vary in size and function, which requires a unique measurement...... methodology for studying them. These buildings already have advanced Building Management Systems (BMS); however, a more detailed measurement plan was needed for the purposes of the project to document the current performance of these systems regarding thermal indoor environment and energy performance......, and to be able to document the improvements after the implementation of the MPC. This study provides the details of the developed field measurement methodology for each of these buildings to study the indoor environmental quality (IEQ) in detail. The developed measurement methodology can be applied to other...

  12. Radionuclide measurements, via different methodologies, as tool for geophysical studies on Mt. Etna

    International Nuclear Information System (INIS)

    Morelli, D.; Imme, G.; Altamore, I.; Cammisa, S.; Giammanco, S.; La Delfa, S.; Mangano, G.; Neri, M.; Patane, G.

    2011-01-01

    Natural radioactivity measurements represent an interesting tool for studying geodynamical events and soil geophysical characteristics. In this direction we have carried out, in recent years, several radionuclide monitoring campaigns in both the volcanic and tectonic areas of eastern Sicily. In particular, we report in-soil radon investigations in a tectonic area, including both laboratory and in-site measurements, applying three different methodologies based on both active and passive detection systems. The active detection devices consisted of solid-state silicon detectors equipped in portable systems for short-time measurements and for long-time monitoring. The passive technique consisted of solid-state nuclear track detectors (SSNTD), CR-39 type, and allowed integrated measurements. The performances of the three methodologies were compared according to different kinds of monitoring. In general, the results obtained with the three methodologies agree with each other and reflect the tectonic settings of the investigated area.

  13. An Updated Methodology for Enhancing Risk Monitors with Integrated Equipment Condition Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hirt, Evelyn H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Coles, Garill A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bonebrake, Christopher A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ivans, William J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wootan, David W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mitchell, Mark R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-07-18

    Small modular reactors (SMRs) generally include reactors with electric output of ~350 MWe or less (this cutoff varies somewhat but is substantially less than full-size plant output of 700 MWe or more). Advanced SMRs (AdvSMRs) refer to a specific class of SMRs based on modularization of advanced reactor concepts. Enhancing the affordability of AdvSMRs will be critical to ensuring wider deployment, as AdvSMRs suffer from the loss of economies of scale inherent in small reactors when compared to large (greater than ~600 MWe output) reactors, and the controllable day-to-day costs of AdvSMRs will be dominated by operation and maintenance (O&M) costs. Technologies that help characterize real-time risk are important for controlling O&M costs. Risk monitors are used in current nuclear power plants to provide a point-in-time estimate of the system risk given the current plant configuration (e.g., equipment availability, operational regime, and environmental conditions). However, current risk monitors are unable to support these capability requirements as they do not take into account plant-specific normal, abnormal, and deteriorating states of active components and systems. This report documents technology developments towards enhancing risk monitors that, if integrated with supervisory plant control systems, can provide these capabilities and meet the goals of controlling O&M costs. The report describes research results on augmenting an initial methodology for enhanced risk monitors that integrate real-time information about equipment condition and probability of failure (POF) into risk monitors. Methods to propagate uncertainty through the enhanced risk monitor are evaluated. Available data to quantify the level of uncertainty and the POF of key components are examined for their relevance, and a status update of this data evaluation is described.
Finally, we describe potential targets for developing new risk metrics that may be useful for studying trade-offs for economic
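The basic arithmetic of folding component condition into a point-in-time risk estimate can be sketched as below. The series-system model and the linear condition-to-POF mapping are hypothetical simplifications for illustration; the report's methodology builds on full probabilistic risk assessment models:

```python
def system_failure_prob(component_pofs):
    """Point-in-time system failure probability for components in
    series (any single failure fails the system), assuming
    independence: 1 - prod(1 - p_i)."""
    p_survive = 1.0
    for pof in component_pofs:
        p_survive *= (1.0 - pof)
    return 1.0 - p_survive

def condition_adjusted_pof(base_pof, degradation):
    """Scale a baseline POF by a condition index in [0, 1]
    (0 = as-new, 1 = fully degraded) -- a hypothetical mapping from
    equipment-condition data to probability of failure."""
    return base_pof + (1.0 - base_pof) * degradation

# Two monitored components, one showing 50% degradation
pofs = [condition_adjusted_pof(0.1, 0.5), condition_adjusted_pof(0.2, 0.0)]
print(system_failure_prob(pofs))
```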

  14. Technical Report on Preliminary Methodology for Enhancing Risk Monitors with Integrated Equipment Condition Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Ramuhalli, Pradeep; Coles, Garill A.; Coble, Jamie B.; Hirt, Evelyn H.

    2013-09-17

    Small modular reactors (SMRs) generally include reactors with electric output of ~350 MWe or less (this cutoff varies somewhat but is substantially less than full-size plant output of 700 MWe or more). Advanced SMRs (AdvSMRs) refer to a specific class of SMRs based on modularization of advanced reactor concepts. AdvSMRs may provide a longer-term alternative to traditional light-water reactors (LWRs) and SMRs based on integral pressurized water reactor concepts currently being considered. Enhancing the affordability of AdvSMRs will be critical to ensuring wider deployment. AdvSMRs suffer from the loss of economies of scale inherent in small reactors when compared to large (greater than ~600 MWe output) reactors. Some of this loss can be recovered through reduced capital costs from smaller size, fewer components, modular fabrication processes, and the opportunity for modular construction. However, the controllable day-to-day costs of AdvSMRs will be dominated by operation and maintenance (O&M) costs. Technologies that help characterize real-time risk are important for controlling O&M costs. Risk monitors are used in current nuclear power plants to provide a point-in-time estimate of the system risk given the current plant configuration (e.g., equipment availability, operational regime, and environmental conditions). However, current risk monitors are unable to support these capability requirements as they do not take into account plant-specific normal, abnormal, and deteriorating states of active components and systems. This report documents technology developments that are a step towards enhancing risk monitors that, if integrated with supervisory plant control systems, can provide these capabilities and meet the goals of controlling O&M costs. The report describes research results from an initial methodology for enhanced risk monitors by integrating real-time information about equipment condition and probability of failure (POF) into risk monitors.

  15. Methodology of clinical measures of healthcare quality delivered to patients with cardiovascular diseases

    Directory of Open Access Journals (Sweden)

    Posnenkova O.M.

    2014-03-01

    Full Text Available The results of implementing the methodology proposed by the American College of Cardiology and the American Heart Association (ACC/AHA) for the development of Russian clinical quality measures for patients with arterial hypertension, coronary heart disease and chronic heart failure are presented. The created quality measures cover the key elements of medical care that directly influence the clinical outcomes of treatment.

  16. Characterization of gloss properties of differently treated polymer coating surfaces by surface clarity measurement methodology.

    Science.gov (United States)

    Gruber, Dieter P; Buder-Stroisznigg, Michael; Wallner, Gernot; Strauß, Bernhard; Jandel, Lothar; Lang, Reinhold W

    2012-07-10

    With one measurement configuration, existing gloss measurement methodologies are generally restricted to specific gloss levels. A newly developed image-analytical gloss parameter called "clarity" makes it possible to describe the perceptual result of a broad range of gloss levels with a single setup. In order to analyze and ultimately monitor the perceived gloss of products, a fast and flexible method, suitable also for automated inspection, is in high demand. The clarity parameter is very fast to calculate and is therefore usable for fast in-line surface inspection. Coated metal specimens were deformed to varying degrees and subsequently polished in order to study how the clarity parameter quantifies varying surface gloss types and levels. To analyze the correlation with human gloss perception, a study was carried out in which experts were asked to assess the gloss properties of a series of surface samples under standardized conditions. The study confirmed that clarity exhibits considerably better correlation with human perception than alternative gloss parameters.

  17. The Methodology of Doppler-Derived Central Blood Flow Measurements in Newborn Infants

    Directory of Open Access Journals (Sweden)

    Koert A. de Waal

    2012-01-01

    Full Text Available Central blood flow (CBF measurements are measurements in and around the heart. It incorporates cardiac output, but also measurements of cardiac input and assessment of intra- and extracardiac shunts. CBF can be measured in the central circulation as right or left ventricular output (RVO or LVO and/or as cardiac input measured at the superior vena cava (SVC flow. Assessment of shunts incorporates evaluation of the ductus arteriosus and the foramen ovale. This paper describes the methodology of CBF measurements in newborn infants. It provides a brief overview of the evolution of Doppler ultrasound blood flow measurements, basic principles of Doppler ultrasound, and an overview of all used methodology in the literature. A general guide for interpretation and normal values with suggested cutoffs of CBFs are provided for clinical use.

  18. Measurement of the porosity of amorphous materials by gamma ray transmission methodology

    International Nuclear Information System (INIS)

    Pottker, Walmir Eno; Appoloni, Carlos Roberto

    2000-01-01

    This work presents measurements of the total porosity of TRe soil, Berea sandstone rock and porous ceramic samples. For the determination of total porosity, the Archimedes method (conventional) and the gamma-ray transmission methodology were employed. The porosity measurement using the gamma methodology has a significant advantage with respect to the conventional method because the determination is fast and non-destructive, and because it supplies results with greater characterization at small scales with respect to the heterogeneity of the porosity. The conventional methodology yields good results only for homogeneous samples. The experimental setup for the gamma-ray transmission technique consisted of a 241Am source (59.53 keV), a NaI(Tl) scintillation detector, collimators, an XYZ micrometric table and standard gamma spectrometry electronics connected to a multichannel analyser. (author)
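The transmission step rests on Beer-Lambert attenuation, I = I0·exp(−μm·ρ·x), which can be inverted for bulk density and combined with an assumed particle density to give total porosity. A minimal sketch with hypothetical numbers (the attenuation coefficient, intensities and particle density below are illustrative, not the paper's data):

```python
import math

def bulk_density(I0, I, mu_mass_cm2_g, thickness_cm):
    """Invert Beer-Lambert attenuation I = I0 * exp(-mu_m * rho * x)."""
    return math.log(I0 / I) / (mu_mass_cm2_g * thickness_cm)

def total_porosity(rho_bulk, rho_particle):
    """Pore fraction from bulk and particle (solid matrix) densities."""
    return 1.0 - rho_bulk / rho_particle

# Hypothetical values: mass attenuation coefficient of the solid matrix
# at 59.53 keV, a 2 cm thick sample, quartz-like particle density.
rho_b = bulk_density(I0=10000.0, I=4000.0, mu_mass_cm2_g=0.30, thickness_cm=2.0)
phi = total_porosity(rho_b, rho_particle=2.65)
```

Scanning the sample with the XYZ table and repeating this inversion point by point is what gives the method its small-scale characterization of porosity heterogeneity.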

  19. Estimation of Aerodynamic Parameters in Conditions of Measurement

    Directory of Open Access Journals (Sweden)

    Htang Om Moung

    2017-01-01

    Full Text Available The paper discusses the problem of aircraft parameter identification in the presence of measurement noise. It is assumed that all the signals involved in the identification process are subject to measurement noise, that is, normally distributed random measurement errors. Simulation results are presented which show the relation between the standard deviations of the noises and the accuracy of identification.
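The effect described — identification accuracy degrading as the noise standard deviation grows — can be illustrated with a toy least-squares fit in which noise is added to both the input and output signals. Everything below is a generic illustration, not the paper's aircraft model:

```python
import random
import statistics

random.seed(1)

def estimate_slope(noise_std, n=200):
    """Least-squares slope of y = 2x when both x and y are observed
    through additive Gaussian measurement noise."""
    xs = [i / n for i in range(n)]
    xm = [x + random.gauss(0, noise_std) for x in xs]
    ym = [2 * x + random.gauss(0, noise_std) for x in xs]
    mx, my = statistics.mean(xm), statistics.mean(ym)
    sxx = sum((x - mx) ** 2 for x in xm)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xm, ym))
    return sxy / sxx

# Identification error for a small and a large noise standard deviation
errors = {s: abs(estimate_slope(s) - 2.0) for s in (0.01, 0.3)}
```

With noise in the regressor, ordinary least squares is also biased toward zero (attenuation), so the error grows with the noise level rather than merely becoming more variable.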

  20. Evaluation of methodological aspects of digestibility measurements in ponies fed different haylage to concentrate ratios

    NARCIS (Netherlands)

    Schaafstra, F J W C; van Doorn, D A; Schonewille, J T; van Riet, M M J; Hendriks, W H

    2017-01-01

    Methodological aspects of digestibility measurements were studied in four Welsh pony geldings consuming haylage-based diets with increasing proportions of a pelleted concentrate according to a 4×4 Latin square design experiment. Ponies were fed four experimental, iso-energetic (net energy (NE)

  1. Evaluation of methodological aspects of digestibility measurements in ponies fed different haylage to concentrate ratios

    NARCIS (Netherlands)

    Schaafstra, F.J.W.C.; Doorn, van D.A.; Schonewille, J.T.; Riet, van M.M.J.; Visser, P.; Blok, M.C.; Hendriks, W.H.

    2017-01-01

    Methodological aspects of digestibility measurements were studied in four Welsh pony geldings consuming haylage-based diets with increasing proportions of a pelleted concentrate according to a 4×4 Latin square design experiment. Ponies were fed four experimental, iso-energetic (net energy (NE)

  2. Measuring hand hygiene compliance rates in different special care settings: a comparative study of methodologies

    Directory of Open Access Journals (Sweden)

    Thyago Pereira Magnus

    2015-04-01

    Conclusions: Hand hygiene compliance was reasonably high in these units, as measured by direct observation. However, a lack of correlation with results obtained by other methodologies brings into question the validity of direct observation results, and suggests that periodic audits using other methods may be needed.

  3. Translating Oral Health-Related Quality of Life Measures: Are There Alternative Methodologies?

    Science.gov (United States)

    Brondani, Mario; He, Sarah

    2013-01-01

    Translating existing sociodental indicators to another language involves a rigorous methodology, which can be costly. Free-of-charge online translator tools are available, but have not been evaluated in the context of research involving quality of life measures. To explore the value of using online translator tools to develop oral health-related…

  4. The Self-Concept. Volume 1, A Review of Methodological Considerations and Measuring Instruments. Revised Edition.

    Science.gov (United States)

    Wylie, Ruth C.

    This volume of the revised edition describes and evaluates measurement methods, research designs, and procedures which have been or might appropriately be used in self-concept research. Working from the perspective that self-concept or phenomenal personality theories can be scientifically investigated, methodological flaws and questionable…

  5. Methodology to estimate parameters of an excitation system based on experimental conditions

    Energy Technology Data Exchange (ETDEWEB)

    Saavedra-Montes, A.J. [Carrera 80 No 65-223, Bloque M8 oficina 113, Escuela de Mecatronica, Universidad Nacional de Colombia, Medellin (Colombia); Calle 13 No 100-00, Escuela de Ingenieria Electrica y Electronica, Universidad del Valle, Cali, Valle (Colombia); Ramirez-Scarpetta, J.M. [Calle 13 No 100-00, Escuela de Ingenieria Electrica y Electronica, Universidad del Valle, Cali, Valle (Colombia); Malik, O.P. [2500 University Drive N.W., Electrical and Computer Engineering Department, University of Calgary, Calgary, Alberta (Canada)

    2011-01-15

    A methodology to estimate the parameters of a potential-source controlled rectifier excitation system model is presented in this paper. The proposed parameter estimation methodology is based on the characteristics of the excitation system. A comparison of two pseudo-random binary signals, two sampling periods for each one, and three estimation algorithms is also presented. Simulation results from an excitation control system model and experimental results from an excitation system in a power laboratory setup are obtained. To apply the proposed methodology, the excitation system parameters are identified at two different levels of the generator saturation curve. The results show that it is possible to estimate the parameters of the standard model of an excitation system by recording two signals while the system operates in closed loop with the generator. The normalized sum of squared errors obtained with experimental data is below 10%, and that obtained with simulation data is below 5%. (author)
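The core idea — exciting the system with a pseudo-random binary signal (PRBS) and estimating model parameters from two recorded signals — can be sketched for a toy first-order discrete model. The model and numbers below are illustrative, not the standard excitation-system model of the paper:

```python
import random

random.seed(0)

# True first-order discrete model: y[k+1] = a*y[k] + b*u[k]
a_true, b_true = 0.8, 0.5

# Pseudo-random binary excitation signal (+/-1 levels)
u = [random.choice([-1.0, 1.0]) for _ in range(500)]
y = [0.0]
for k in range(len(u) - 1):
    y.append(a_true * y[k] + b_true * u[k])

# Ordinary least squares on the regressor [y[k], u[k]]:
# solve the 2x2 normal equations by Cramer's rule.
m = len(u) - 1
s_yy = sum(y[k] * y[k] for k in range(m))
s_yu = sum(y[k] * u[k] for k in range(m))
s_uu = sum(u[k] * u[k] for k in range(m))
t_y = sum(y[k + 1] * y[k] for k in range(m))
t_u = sum(y[k + 1] * u[k] for k in range(m))
det = s_yy * s_uu - s_yu ** 2
a_hat = (t_y * s_uu - t_u * s_yu) / det
b_hat = (s_yy * t_u - s_yu * t_y) / det
```

The PRBS is persistently exciting, so the normal equations are well conditioned; with noise-free data the estimates recover the true parameters essentially exactly.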

  6. Covariance methodology applied to uncertainties in I-126 disintegration rate measurements

    International Nuclear Information System (INIS)

    Fonseca, K.A.; Koskinas, M.F.; Dias, M.S.

    1996-01-01

    The covariance methodology applied to uncertainties in 126I disintegration-rate measurements is described. Two different coincidence systems were used due to the complex decay scheme of this radionuclide. The parameters involved in the determination of the disintegration rate in each experimental system have correlated components. In this case, conventional statistical methods for determining the uncertainties (the law of propagation without covariances) yield incorrect values for the final uncertainty. Therefore, use of the covariance-matrix methodology is necessary. The data from both systems were combined taking into account all possible correlations between the partial uncertainties. (orig.)
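The point can be seen in a two-measurement example: when the components are positively correlated, ignoring the off-diagonal covariance term understates the uncertainty of the combined value. A minimal sketch with hypothetical numbers:

```python
# Mean of two measurements with correlated uncertainties:
# var(mean) = (u1^2 + u2^2 + 2*cov12) / 4, versus the naive
# (u1^2 + u2^2) / 4 that drops the off-diagonal covariance term.
u1, u2, r = 0.010, 0.012, 0.6        # standard uncertainties, correlation
cov12 = r * u1 * u2

var_naive = (u1**2 + u2**2) / 4.0
var_covar = (u1**2 + u2**2 + 2.0 * cov12) / 4.0
```

For r = 0.6 the correct standard uncertainty is about 25% larger than the naive one, which is exactly the kind of error the covariance methodology avoids.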

  7. Methodology of ionizing radiation measurement, from x-ray equipment, for radiation protection

    International Nuclear Information System (INIS)

    Caballero, Katia C.S.; Borges, Jose C.

    1996-01-01

    Most X-ray beams used for diagnosis have short exposure times (milliseconds); exceptions are those used in fluoroscopy. The measuring instruments (area monitors with ionization chambers or Geiger tubes) used in hospitals and clinics generally have characteristic response times that are not adequate for the temporal length of X-ray beams. Our objective was to analyse commercially available instruments and to prepare a measurement methodology for direct and secondary beams, in order to evaluate protection barriers for beams used in diagnostic radiology installations. (author)

  8. A methodology for supporting decisions on the establishment of protective measures after severe nuclear accidents

    International Nuclear Information System (INIS)

    Papazoglou, I.A.; Kollas, J.G.

    1994-06-01

    The objective of this report is to demonstrate the use of a methodology supporting decisions on protective measures following severe nuclear accidents. A multicriteria decision analysis approach is recommended, in which value tradeoffs are postponed until the very last stage of the decision process. Efficient frontiers are used to exclude all technically inferior solutions and present the decision maker with all nondominated solutions. A choice among these solutions implies a value tradeoff among the multiple criteria. An interactive computer package has been developed in which the decision maker can choose a point on the efficient frontier in the consequence space and immediately see the alternative in the decision space that results in the chosen consequences. The methodology is demonstrated through an application to the choice among possible protective measures in contaminated areas of the former USSR after the Chernobyl accident. Two distinct cases are considered: first, a decision is to be made only on the basis of the level of soil contamination with Cs-137 and the total cost of the chosen protective policy; next, the decision is based on the geographic dimension of the contamination and the total cost. Three alternative countermeasure actions are considered for population segments living on soil contaminated at a certain level or in a specific geographic region: (a) relocation of the population; (b) improvement of the living conditions; and (c) no countermeasures at all. This is the final deliverable of the CEC-CIS Joint Study Project 2, Task 5: Decision-Aiding-System for Establishing Intervention Levels, performed under Contracts COSU-CT91-0007 and COSU-CT92-0021 with the Commission of the European Communities through CEPN
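The efficient-frontier step — excluding technically inferior alternatives — can be sketched as a simple nondominated filter on (cost, residual consequence) pairs. The policy values below are hypothetical, not taken from the Chernobyl application:

```python
def nondominated(solutions):
    """Keep solutions not dominated on (cost, residual_contamination):
    a solution is dominated if another is <= on both criteria
    and strictly < on at least one."""
    front = []
    for i, (c1, r1) in enumerate(solutions):
        dominated = any(
            (c2 <= c1 and r2 <= r1) and (c2 < c1 or r2 < r1)
            for j, (c2, r2) in enumerate(solutions) if j != i
        )
        if not dominated:
            front.append((c1, r1))
    return sorted(front)

# Hypothetical (cost, residual contamination) outcomes of candidate policies
policies = [(9.0, 1.0), (4.0, 5.0), (0.0, 9.0), (5.0, 6.0), (7.0, 2.0)]
front = nondominated(policies)
```

Only the surviving frontier points are shown to the decision maker; moving along the frontier is precisely the value tradeoff between cost and residual consequence.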

  9. Theoretical framework and methodological development of common subjective health outcome measures in osteoarthritis: a critical review

    Directory of Open Access Journals (Sweden)

    Johnston Marie

    2007-03-01

    Full Text Available Abstract Subjective measures involving clinician ratings or patient self-assessments have become recognised as an important tool for the assessment of health outcomes. The value of a health outcome measure is usually assessed by a psychometric evaluation of its reliability, validity and responsiveness. However, psychometric testing involves an accumulation of evidence and has recognised limitations. It has been suggested that an evaluation of how well a measure has been developed would be a useful additional criterion in assessing its value. This paper explored the theoretical background and methodological development of subjective health status measures commonly used in osteoarthritis research. Fourteen such measures were examined. Each measure was explored on the basis of (i) its theoretical framework (was there a definition of what was being assessed, and was it part of a theoretical model?) and (ii) its methodological development (what was the scaling strategy, how were the items generated and reduced, what was the response format and what was the scoring method?). Only the AIMS, SF-36 and WHOQOL defined what they were assessing (i.e. the construct of interest), and no measure assessed was part of a theoretical model. None of the clinician-report measures appeared to have implemented a scaling procedure or described the rationale for the items selected or the scoring system. Of the patient self-report measures, the AIMS, MPQ, OXFORD, SF-36, WHOQOL and WOMAC appeared to follow a standard psychometric scaling method. The DRP and EuroQol used alternative scaling methods. The review highlighted the general lack of a theoretical framework for both clinician-report and patient self-report measures, and drew attention to the wide variation in the methodological development of measures commonly used in OA. While, in general, the patient self-report measures had good methodological

  10. Quadratic measurement and conditional state preparation in an optomechanical system

    DEFF Research Database (Denmark)

    A. Brawley, George; Vanner, Michael A.; Bowen, Warwick P.

    2014-01-01

    We experimentally demonstrate, for the first time, quadratic measurement of mechanical motion in an optomechanical system. We use this nonlinear measurement to conditionally prepare classical non-Gaussian states of motion of a micro-mechanical oscillator.

  11. The Influence of Measurement Methodology on the Accuracy of Electrical Waveform Distortion Analysis

    Science.gov (United States)

    Bartman, Jacek; Kwiatkowski, Bogdan

    2018-04-01

    The present paper reviews the documents that specify measurement methods for voltage waveform distortion. It also presents the measurement stages for waveform components that are uncommon in the classic fundamentals of electrotechnics and signal theory, including the process of creating groups and subgroups of harmonics and interharmonics. Moreover, the paper discusses selected distortion factors of periodic waveforms and presents analyses that compare the values of these distortion indices. The measurements were carried out in cycle-by-cycle mode, and the measurement methodology used complies with the IEC 61000-4-7 standard. The studies showed significant discrepancies between the values of the analyzed parameters.
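A basic distortion index of the kind compared in such studies is total harmonic distortion (THD), computed from the DFT amplitudes of one waveform period. The sketch below implements the generic THD definition only, not the IEC 61000-4-7 grouping procedure, which additionally aggregates spectral lines into harmonic and interharmonic groups and subgroups:

```python
import cmath
import math

def thd(samples):
    """Total harmonic distortion of one exact period of a waveform:
    sqrt(sum of harmonic amplitudes^2) / fundamental amplitude,
    using a direct DFT (adequate for a short illustration)."""
    n = len(samples)
    amps = []
    for k in range(1, n // 2):
        X = sum(samples[i] * cmath.exp(-2j * math.pi * k * i / n)
                for i in range(n))
        amps.append(abs(X) * 2 / n)
    fundamental = amps[0]          # k = 1
    harmonics = amps[1:]           # k = 2, 3, ...
    return math.sqrt(sum(a * a for a in harmonics)) / fundamental

# Fundamental plus a 10% third harmonic -> THD of 0.1
n = 64
wave = [math.sin(2 * math.pi * i / n) + 0.1 * math.sin(2 * math.pi * 3 * i / n)
        for i in range(n)]
distortion = thd(wave)
```

Discrepancies between indices of this kind arise precisely from how spectral lines near the harmonic frequencies are, or are not, grouped before summation.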

  12. Characteristic Rain Events: A Methodology for Improving the Amenity Value of Stormwater Control Measures

    DEFF Research Database (Denmark)

    Smit Andersen, Jonas; Lerer, Sara Maria; Backhaus, Antje

    2017-01-01

    Local management of rainwater using stormwater control measures (SCMs) is gaining increased attention as a sustainable alternative and supplement to traditional sewer systems. Besides offering added utility values, many SCMs also offer a great potential for added amenity values. One way of achieving amenity value is to stage the rainwater and thus bring it to the attention of the public. We present here a methodology for creating a selection of rain events that can help bridge between engineering and landscape architecture when dealing with the staging of rainwater. The methodology uses quantitative and statistical methods to select Characteristic Rain Events (CREs) for a range of frequent return periods: weekly, bi-weekly, monthly, bi-monthly, and a single rarer event occurring only every 1–10 years. The methodology for selecting CREs is flexible and can be adjusted to any climatic settings...

  13. Ultrasonic Technique for Density Measurement of Liquids in Extreme Conditions

    Directory of Open Access Journals (Sweden)

    Rymantas Kazys

    2015-08-01

    Full Text Available An ultrasonic technique, invariant to temperature changes, for the density measurement of different liquids under in situ extreme conditions is presented. The influence of the geometry and material parameters of the measurement system (transducer, waveguide, matching layer) on measurement accuracy and reliability is analyzed theoretically along with experimental results. The proposed method is based on measurement of the amplitude of the ultrasonic wave reflected from the interface between the solid and the liquid medium under investigation. In order to enhance sensitivity, the use of a quarter-wavelength acoustic matching layer is proposed, which increases the sensitivity of the measurement system significantly. Density measurements quite often must be performed in extreme conditions at high temperature (up to 220 °C) and high pressure. In this case, metal waveguides between the piezoelectric transducer and the measured liquid are used in order to protect the conventional transducer from the influence of high temperature and to avoid depolarization. The presented ultrasonic density measurement technique is suitable for density measurement in different materials, including liquids and polymer melts in extreme conditions. A new calibration algorithm was proposed. The metrological evaluation of the measurement method was performed. The expanded measurement uncertainty was Uρ = 7.4 × 10⁻³ g/cm³ (1%).
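The reflection-amplitude principle can be sketched as follows: at normal incidence the reflection coefficient at the waveguide/liquid interface is R = (Z_liq − Z_wg)/(Z_liq + Z_wg) with acoustic impedance Z = ρc, so a measured R together with the sound speed in the liquid yields its density. The impedance and R values below are hypothetical, chosen only to illustrate the inversion:

```python
def density_from_reflection(R, z_waveguide, c_liquid):
    """Invert the normal-incidence reflection coefficient
    R = (Z_liq - Z_wg) / (Z_liq + Z_wg) for the liquid density
    rho = Z_liq / c_liquid."""
    z_liquid = z_waveguide * (1.0 + R) / (1.0 - R)
    return z_liquid / c_liquid

# Hypothetical values: steel-like waveguide impedance ~45e6 kg/(m^2*s),
# water-like liquid with c = 1480 m/s; R is negative because the liquid
# has much lower impedance than the metal waveguide.
rho = density_from_reflection(R=-0.9363, z_waveguide=45.0e6, c_liquid=1480.0)
```

Because R is close to −1 for a metal/liquid interface, small amplitude errors map into large density errors, which is why the paper's quarter-wavelength matching layer (raising sensitivity) matters.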

  14. Ultrasonic Technique for Density Measurement of Liquids in Extreme Conditions

    Science.gov (United States)

    Kazys, Rymantas; Sliteris, Reimondas; Rekuviene, Regina; Zukauskas, Egidijus; Mazeika, Liudas

    2015-01-01

    An ultrasonic technique, invariant to temperature changes, for a density measurement of different liquids under in situ extreme conditions is presented. The influence of geometry and material parameters of the measurement system (transducer, waveguide, matching layer) on measurement accuracy and reliability is analyzed theoretically along with experimental results. The proposed method is based on measurement of the amplitude of the ultrasonic wave, reflected from the interface of the solid/liquid medium under investigation. In order to enhance sensitivity, the use of a quarter wavelength acoustic matching layer is proposed. Therefore, the sensitivity of the measurement system increases significantly. Density measurements quite often must be performed in extreme conditions at high temperature (up to 220 °C) and high pressure. In this case, metal waveguides between piezoelectric transducer and the measured liquid are used in order to protect the conventional transducer from the influence of high temperature and to avoid depolarization. The presented ultrasonic density measurement technique is suitable for density measurement in different materials, including liquids and polymer melts in extreme conditions. A new calibration algorithm was proposed. The metrological evaluation of the measurement method was performed. The expanded measurement uncertainty Uρ = 7.4 × 10−3 g/cm3 (1%). PMID:26262619

  15. Ultrasonic Technique for Density Measurement of Liquids in Extreme Conditions.

    Science.gov (United States)

    Kazys, Rymantas; Sliteris, Reimondas; Rekuviene, Regina; Zukauskas, Egidijus; Mazeika, Liudas

    2015-08-07

    An ultrasonic technique, invariant to temperature changes, for a density measurement of different liquids under in situ extreme conditions is presented. The influence of geometry and material parameters of the measurement system (transducer, waveguide, matching layer) on measurement accuracy and reliability is analyzed theoretically along with experimental results. The proposed method is based on measurement of the amplitude of the ultrasonic wave, reflected from the interface of the solid/liquid medium under investigation. In order to enhance sensitivity, the use of a quarter wavelength acoustic matching layer is proposed. Therefore, the sensitivity of the measurement system increases significantly. Density measurements quite often must be performed in extreme conditions at high temperature (up to 220 °C) and high pressure. In this case, metal waveguides between piezoelectric transducer and the measured liquid are used in order to protect the conventional transducer from the influence of high temperature and to avoid depolarization. The presented ultrasonic density measurement technique is suitable for density measurement in different materials, including liquids and polymer melts in extreme conditions. A new calibration algorithm was proposed. The metrological evaluation of the measurement method was performed. The expanded measurement uncertainty was Uρ = 7.4 × 10⁻³ g/cm³ (1%).

  16. A Methodological Demonstration of Set-theoretical Approach to Social Media Maturity Models Using Necessary Condition Analysis

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Despite being widely accepted and applied across research domains, maturity models have been criticized for lacking academic rigor; methodologically rigorous and empirically grounded or tested maturity models are quite rare. Attempting to close this gap, we adopt a set-theoretic approach by applying the Necessary Condition Analysis (NCA) technique to derive maturity stages and stage boundary conditions. The underlying ontology is to view the stages (boundaries) in maturity models as a collection of necessary conditions. Using social media maturity data, we demonstrate the strength of our approach and evaluate some of the arguments presented by previous conceptually focused social media maturity models.
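A simplified sketch of the ceiling idea behind NCA: observations with no other point to their upper-left (lower or equal condition level, strictly higher outcome) form the ceiling envelope, and the empty space above it is evidence of a necessary condition. This is a generic, CE-FDH-style illustration with hypothetical scores, not the authors' dataset or the full NCA procedure:

```python
def ceiling_points(observations):
    """Upper-left envelope of (condition, outcome) points: keep a point
    if no other observation has x' <= x and y' > y (a simplified
    CE-FDH-style ceiling)."""
    return sorted(
        (x, y) for x, y in observations
        if not any(x2 <= x and y2 > y for x2, y2 in observations)
    )

# Hypothetical (maturity condition level, outcome) scores
obs = [(1, 2), (2, 5), (2, 3), (3, 6), (4, 6), (5, 9)]
ceiling = ceiling_points(obs)
```

A rising ceiling of this kind is read as "the condition level is necessary for (bounds) the outcome level", which is how stage boundaries are interpreted as necessary conditions.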

  17. Methodology to measure strains at high temperatures using electrical strain gages with free filaments

    International Nuclear Information System (INIS)

    Atanazio Filho, Nelson N.; Gomes, Paulo T. Vida; Scaldaferri, Denis H.B.; Silva, Luiz L. da; Rabello, Emerson G.; Mansur, Tanius R.

    2013-01-01

    An experimental methodology for measuring strains at high temperatures is shown in this work. In order to make the measurements, electrical strain gages with free filaments were attached to a stainless steel 304 beam with specific cements. The beam has a triangular shape and a constant thickness, so the strain is the same along its length. Unless the beam surface is carefully prepared, the strain gage attachment is not effective. The results shown are for temperatures ranging from 20 deg C to 300 deg C, but the experimental methodology could be used to measure strains at temperatures up to 900 deg C. Analytical calculations based on solid mechanics were used to verify the strain gage electrical installation and the measured strains. First, beam strains were plotted as a function of temperature; then, beam strains under different weights were plotted as a function of temperature. The results allow the conclusion that the experimental methodology is reliable for measuring strains at temperatures up to 300 deg C. (author)
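The gage reading itself follows the standard gauge-factor relation ε = (ΔR/R)/GF. A minimal sketch with typical, assumed foil-gage values (the resistance and gauge factor below are common nominal figures, not the paper's):

```python
def strain_from_resistance(delta_r_ohm, r_ohm=120.0, gauge_factor=2.0):
    """Strain from relative resistance change: epsilon = (dR/R) / GF.
    120 ohm and GF = 2.0 are typical nominal values for metallic gages."""
    return (delta_r_ohm / r_ohm) / gauge_factor

# A 0.12 ohm change on a nominal 120 ohm gage -> 500 microstrain
eps = strain_from_resistance(0.12)
```

At high temperature the apparent (thermal-output) strain must additionally be measured and subtracted, which is one reason the beam-only temperature sweep is plotted first.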

  18. High-frequency measurements of aeolian saltation flux: Field-based methodology and applications

    Science.gov (United States)

    Martin, Raleigh L.; Kok, Jasper F.; Hugenholtz, Chris H.; Barchyn, Thomas E.; Chamecki, Marcelo; Ellis, Jean T.

    2018-02-01

    Aeolian transport of sand and dust is driven by turbulent winds that fluctuate over a broad range of temporal and spatial scales. However, commonly used aeolian transport models do not explicitly account for such fluctuations, likely contributing to substantial discrepancies between models and measurements. Underlying this problem is the absence of accurate sand flux measurements at the short time scales at which wind speed fluctuates. Here, we draw on extensive field measurements of aeolian saltation to develop a methodology for generating high-frequency (up to 25 Hz) time series of total (vertically-integrated) saltation flux, namely by calibrating high-frequency (HF) particle counts to low-frequency (LF) flux measurements. The methodology follows four steps: (1) fit exponential curves to vertical profiles of saltation flux from LF saltation traps, (2) determine empirical calibration factors through comparison of LF exponential fits to HF number counts over concurrent time intervals, (3) apply these calibration factors to subsamples of the saltation count time series to obtain HF height-specific saltation fluxes, and (4) aggregate the calibrated HF height-specific saltation fluxes into estimates of total saltation fluxes. When coupled to high-frequency measurements of wind velocity, this methodology offers new opportunities for understanding how aeolian saltation dynamics respond to variability in driving winds over time scales from tens of milliseconds to days.
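The four calibration steps can be sketched on synthetic data: fit the exponential flux profile to the LF trap data, derive a calibration factor at the sensor height, then scale counts and integrate the profile for the total flux. All numbers below are illustrative; the profile form q(z) = q0·exp(−z/zq) is the one implied by the exponential fits in step (1):

```python
import math

# Step 1: fit q(z) = q0 * exp(-z / zq) to low-frequency trap fluxes
# via a log-linear least-squares fit.
def fit_exponential_profile(heights, fluxes):
    n = len(heights)
    logs = [math.log(q) for q in fluxes]
    mz, ml = sum(heights) / n, sum(logs) / n
    slope = (sum((z - mz) * (l - ml) for z, l in zip(heights, logs))
             / sum((z - mz) ** 2 for z in heights))
    zq = -1.0 / slope                 # e-folding (decay) height
    q0 = math.exp(ml - slope * mz)    # flux scale at z = 0
    return q0, zq

# Synthetic LF trap data generated from q0 = 10, zq = 0.1
heights = [0.05, 0.10, 0.20, 0.30]
fluxes = [10.0 * math.exp(-z / 0.1) for z in heights]
q0, zq = fit_exponential_profile(heights, fluxes)

# Step 2: calibration factor = LF flux per HF count at the sensor height
# (the 50 counts/s rate is hypothetical); steps 3-4: calibrated HF fluxes
# aggregate to the vertically integrated total, here q0 * zq.
lf_flux_at_sensor = q0 * math.exp(-0.10 / zq)
cal_factor = lf_flux_at_sensor / 50.0
total_flux = q0 * zq
```

In the field methodology this calibration is recomputed per interval, so drift in counter sensitivity is absorbed before the HF time series is aggregated.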

  19. Major methodological constraints to the assessment of environmental status based on the condition of benthic communities

    Science.gov (United States)

    Medeiros, João Paulo; Pinto, Vanessa; Sá, Erica; Silva, Gilda; Azeda, Carla; Pereira, Tadeu; Quintella, Bernardo; Raposo de Almeida, Pedro; Lino Costa, José; José Costa, Maria; Chainho, Paula

    2014-05-01

    The Marine Strategy Framework Directive (MSFD) was published in 2008 and requires Member States to take the necessary measures to achieve or maintain good environmental status in aquatic ecosystems by the year 2020. The MSFD indicates 11 qualitative descriptors for environmental status assessment, including seafloor integrity, using the condition of the benthic community as an assessment indicator. Member States will have to define monitoring programs for each of the MSFD descriptors based on those indicators in order to understand which areas are in a Good Environmental Status and what measures need to be implemented to improve the status of areas that fail to achieve that major objective. Coastal and offshore marine waters are not frequently monitored in Portugal, and assessment tools have only been developed very recently with the implementation of the Water Framework Directive (WFD). The lack of historical data and knowledge on the constraints of benthic indicators in coastal areas requires the development of specific studies addressing this issue. The major objective of the current study was to develop and test an experimental design to assess impacts of offshore projects. The experimental design consisted of the seasonal and interannual assessment of benthic invertebrate communities in the area of future implementation of the structures (impact) and two potential control areas 2 km from the impact area. Seasonal benthic samples were collected at nine random locations within the impact and control areas in two consecutive years. Metrics included in the Portuguese benthic assessment tool (P-BAT) were calculated, since this multimetric tool was proposed for the assessment of ecological status in Portuguese coastal areas under the WFD. Results indicated a high taxonomic richness in this coastal area and no significant differences were found between impact and control areas, indicating the feasibility of establishing adequate control areas in marine

  20. Questionnaire on the measurement condition of distribution coefficient

    International Nuclear Information System (INIS)

    Takebe, Shinichi; Kimura, Hideo; Matsuzuru, Hideo

    2001-05-01

    The distribution coefficient is used in various transport models to evaluate the migration behavior of radionuclides in the environment and is a very important parameter in the environmental impact assessment of nuclear facilities. The questionnaire was carried out for the purpose of informing a proposal for a standard method of measuring the distribution coefficient. This report summarizes the results of the questionnaire on sampling methods and storage conditions, pretreatment methods, the analysis items for the physical/chemical characteristics of the sample, and the distribution coefficient measurement methods and conditions used at research institutes within the country. (author)

  1. Luminosity measurement and beam condition monitoring at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Leonard, Jessica Lynn [DESY, Zeuthen (Germany)

    2015-07-01

    The BRIL system of CMS consists of instrumentation to measure the luminosity online and offline, and to monitor the LHC beam conditions inside CMS. An accurate luminosity measurement is essential to the CMS physics program, and measurement of the beam background is necessary to ensure safe operation of CMS. In expectation of higher luminosity and denser proton bunch spacing during LHC Run II, many of the BRIL subsystems are being upgraded and others are being added to complement the existing measurements. The beam condition monitor (BCM) consists of several sets of diamond sensors used to measure online luminosity and beam background with a single-bunch-crossing resolution. The BCM also detects when beam conditions become unfavorable for CMS running and may trigger a beam abort to protect the detector. The beam halo monitor (BHM) uses quartz bars to measure the background of the incoming beams at larger radii. The pixel luminosity telescope (PLT) consists of telescopes of silicon sensors designed to provide a CMS online and offline luminosity measurement. In addition, the forward hadronic calorimeter (HF) will deliver an independent luminosity measurement, making the whole system robust and allowing for cross-checks of the systematics. Data from each of the subsystems will be collected and combined in the BRIL DAQ framework, which will publish it to CMS and LHC. The current status of installation and commissioning results for the BRIL subsystems are given.

  2. Advanced haptic sensor for measuring human skin conditions

    Science.gov (United States)

    Tsuchimi, Daisuke; Okuyama, Takeshi; Tanaka, Mami

    2010-01-01

    This paper is concerned with the development of a tactile sensor using PVDF (Polyvinylidene Fluoride) film as the sensory receptor to evaluate the softness, smoothness, and stickiness of human skin. Tactile sense is, along with eyesight, among the most important senses of the human body, and we can examine skin condition quickly using these senses. However, their subjectivity and ambiguity make it difficult to quantify skin conditions. Therefore, a measurement device that can evaluate skin conditions easily and objectively is demanded by dermatologists, cosmetic industries, and so on. In this paper, an advanced haptic sensor system that can measure multiple aspects of skin condition in various parts of the human body is developed. The applications of the sensor system to evaluating the softness, smoothness, and stickiness of skin are investigated through two experiments.

  3. Methodologies for measuring travelers' risk perception of infectious diseases: A systematic review.

    Science.gov (United States)

    Sridhar, Shruti; Régner, Isabelle; Brouqui, Philippe; Gautret, Philippe

    2016-01-01

    Numerous studies in the past have stressed the importance of travelers' psychology and perception in the implementation of preventive measures. The aim of this systematic review was to identify the methodologies used in studies reporting on travelers' risk perception of infectious diseases. A systematic search for relevant literature was conducted according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. There were 39 studies identified. In 35 of 39 studies, the methodology used was that of a knowledge, attitude and practice (KAP) survey based on questionnaires. One study used a combination of questionnaires and a visual psychometric measuring instrument called the 'pictorial representation of illness and self-measurement' (PRISM). One study used a self-representation model (SRM) method. Two studies measured psychosocial factors. Valuable information was obtained from KAP surveys showing an overall lack of knowledge among travelers about the most frequent travel-associated infections and associated preventive measures. This methodological approach, however, is mainly descriptive, addressing knowledge, attitudes, and practices separately and lacking an examination of the interrelationships between these three components. Another limitation of the KAP method is underestimating psychosocial variables that have proved influential in health-related behaviors, including perceived benefits and costs of preventive measures, perceived social pressure, perceived personal control, unrealistic optimism and risk propensity. Future risk perception studies in travel medicine should consider psychosocial variables with inferential and multivariate statistical analyses. The use of implicit measurements of attitudes could also provide new insights in the field of travelers' risk perception of travel-associated infectious diseases. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Personal dosimetry service of TECNATOM: measurement system and methodology of calibration

    International Nuclear Information System (INIS)

    Marchena, Paloma; Bravo, Borja

    2008-01-01

    The implementation of a new integrated and practical working tool called ALEDIN within the Personal Dosimetry Service (PDS) of TECNATOM has harmonized the methodology for counting acquisition, detector calibration and data analysis using a friendly Windows (registered mark) environment. Knowledge of this methodology, which is the final product of an R and D project, will help users and the Regulatory Body better understand the measurement of internal activity in individuals, allowing more precise error identification and correction and improving the whole internal dosimetry process. The development and implementation of a new calibration system for the whole body counters using NaI(Tl) detectors, together with a new humanoid anthropometric phantom of the BOMAB type with uniform radioactive source distributions, allow better energy and activity calibration for different counting geometries, covering a wide range of gamma spectra from low energies (less than 100 keV) up to about 2000 keV. This new calibration methodology required the development of an improved system for the determination of isotopic activity. The new system has been integrated in a Windows (registered mark) environment, applicable to counting acquisition and data analysis in the whole body counters (WBC), in cross connection with the INDAC software, which allows the interpretation of the measured activity as committed effective dose following all the new ICRP recommendations and dosimetric models for internal dose and bioassay measurements. (author)
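
    The basic relation behind any such activity calibration is the conversion of a net peak count into activity via the detection efficiency and the gamma emission probability. A minimal sketch with purely hypothetical numbers, not TECNATOM calibration data:

```python
def activity_bq(net_counts, live_time_s, efficiency, gamma_yield):
    """A = N / (t * eps * I_gamma): net counts divided by live time,
    full-energy peak efficiency and gamma emission probability."""
    return net_counts / (live_time_s * efficiency * gamma_yield)

# hypothetical example: 5000 net counts in 600 s, eps = 2%, I_gamma = 0.85
a = activity_bq(net_counts=5000, live_time_s=600,
                efficiency=0.02, gamma_yield=0.85)
```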

  5. A methodology to determine boundary conditions from forced convection experiments using liquid crystal thermography

    Science.gov (United States)

    Jakkareddy, Pradeep S.; Balaji, C.

    2017-02-01

    This paper reports the results of an experimental study to estimate the heat flux and convective heat transfer coefficient using liquid crystal thermography and Bayesian inference in a heat generating sphere, enclosed in a cubical Teflon block. The geometry considered for the experiments comprises a heater inserted in a hollow hemispherical aluminium ball, resulting in a volumetric heat generation source that is placed at the center of the Teflon block. Calibrated thermochromic liquid crystal sheets are used to capture the temperature distribution at the front face of the Teflon block. The forward model is the three dimensional conduction equation which is solved within the Teflon block to obtain steady state temperatures, using COMSOL. Match-up experiments are carried out for various velocities by minimizing the residual between TLC and simulated temperatures for every assumed loss coefficient, to obtain a correlation of average Nusselt number against Reynolds number. This is used for prescribing the boundary condition for the solution to the forward model. A surrogate model obtained by artificial neural network built upon the data from COMSOL simulations is used to drive a Markov Chain Monte Carlo based Metropolis Hastings algorithm to generate the samples. Bayesian inference is adopted to solve the inverse problem for determination of heat flux and heat transfer coefficient from the measured temperature field. Point estimates of the posterior, such as the mean, maximum a posteriori and standard deviation of the retrieved heat flux and convective heat transfer coefficient, are reported. Additionally, the effect of the number of samples on the performance of the estimation process is investigated.
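
    The sampling step described above can be illustrated with a toy Metropolis-Hastings chain. The real forward model is a 3-D COMSOL conduction solve accelerated by a neural network surrogate; here a one-parameter linear model T = a*q plus Gaussian noise stands in, and all numbers are invented:

```python
import math
import random

def log_posterior(q, data, a=0.5, sigma=0.2):
    # flat prior on q; Gaussian likelihood around the toy forward model a*q
    return -sum((t - a * q) ** 2 for t in data) / (2.0 * sigma ** 2)

def metropolis_hastings(data, n_samples=5000, step=0.1, q0=1.0, seed=42):
    random.seed(seed)
    q, lp = q0, log_posterior(q0, data)
    samples = []
    for _ in range(n_samples):
        q_new = q + random.gauss(0.0, step)          # random-walk proposal
        lp_new = log_posterior(q_new, data)
        if math.log(random.random()) < lp_new - lp:  # accept/reject step
            q, lp = q_new, lp_new
        samples.append(q)
    return samples

data = [0.98, 1.02, 1.05, 0.95, 1.00]        # synthetic "temperatures", a*q = 1
samples = metropolis_hastings(data)[1000:]   # discard burn-in
q_mean = sum(samples) / len(samples)         # posterior mean estimate of q
```

    With a flat prior and this likelihood, the posterior mean should sit near q = 2, the least-squares solution of the toy model.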

  6. Methodology for the assessment of measuring uncertainties of articulated arm coordinate measuring machines

    International Nuclear Information System (INIS)

    Romdhani, Fekria; Hennebelle, François; Ge, Min; Juillion, Patrick; Fontaine, Jean François; Coquet, Richard

    2014-01-01

    Articulated Arm Coordinate Measuring Machines (AACMMs) have gradually evolved and are increasingly used in mechanical industry. At present, measurement uncertainties relating to the use of these devices are not yet well quantified. The work carried out consists of determining the measurement uncertainties of a mechanical part measured by an AACMM. The studies aiming to develop a model of measurement uncertainty are based on the Monte Carlo method described in Supplement 1 of the Guide to the Expression of Uncertainty in Measurement [1], and also on identifying and characterizing the main sources of uncertainty. A multi-level Monte Carlo approach has been developed which characterizes the possible evolution of the AACMM during the measurement and, at a second level, quantifies the uncertainty of the considered measurand. The first Monte Carlo level is the most complex and is thus divided into three sub-levels, namely characterization of the positioning error of a point, estimation of calibration errors and evaluation of fluctuations of the ‘localization point’. The global method is presented and results of the first sub-level are developed in particular. The main sources of uncertainty, including AACMM deformations, are exposed. (paper)
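
    The flavour of a GUM Supplement 1 evaluation can be sketched with a toy two-joint planar arm: draw joint angles and segment lengths from assumed input distributions, push them through the forward kinematics, and read the coordinate uncertainty off the output sample. All distributions and values are illustrative, not the authors' AACMM model:

```python
import math
import random
import statistics

def tip_x(t1, t2, l1, l2):
    """x coordinate of a planar two-joint arm (toy forward model)."""
    return l1 * math.cos(t1) + l2 * math.cos(t1 + t2)

def monte_carlo_uncertainty(n=20000, seed=1):
    random.seed(seed)
    xs = []
    for _ in range(n):
        t1 = random.gauss(0.3, 0.0005)   # joint angles (rad), std 0.5 mrad
        t2 = random.gauss(0.7, 0.0005)
        l1 = random.gauss(0.5, 1e-5)     # segment lengths (m), std 10 um
        l2 = random.gauss(0.4, 1e-5)
        xs.append(tip_x(t1, t2, l1, l2))
    return statistics.mean(xs), statistics.stdev(xs)

mean_x, u_x = monte_carlo_uncertainty()  # u_x is the standard uncertainty of x
```

    Here the angle uncertainties dominate; a first-order sensitivity check gives a standard uncertainty of roughly 0.3 mm for these inputs.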

  7. A new metric for measuring condition in large predatory sharks.

    Science.gov (United States)

    Irschick, D J; Hammerschlag, N

    2014-09-01

    A simple metric (span condition analysis; SCA) is presented for quantifying the condition of sharks based on four measurements of body girth relative to body length. Data on 104 live sharks from four species that vary in body form, behaviour and habitat use (Carcharhinus leucas, Carcharhinus limbatus, Ginglymostoma cirratum and Galeocerdo cuvier) are given. Condition shows similar levels of variability among individuals within each species. Carcharhinus leucas showed a positive relationship between condition and body size, whereas the other three species showed no relationship. There was little evidence for strong differences in condition between males and females, although more male sharks are needed for some species (e.g. G. cuvier) to verify this finding. SCA is potentially viable for other large marine or terrestrial animals that are captured live and then released. © 2014 The Fisheries Society of the British Isles.
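
    As a rough illustration of a girth-based condition index, one can scale a set of girth measurements by body length. The actual SCA combines four specific girths relative to body length, so the function and numbers below are a simplified, hypothetical stand-in:

```python
def condition_index(girths_cm, precaudal_length_cm):
    """Mean girth relative to precaudal length (simplified stand-in for SCA)."""
    return sum(girths_cm) / (len(girths_cm) * precaudal_length_cm)

# four hypothetical girths (cm) on a shark of 180 cm precaudal length
ci = condition_index([95.0, 110.0, 80.0, 55.0], 180.0)
```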

  8. Atmospheric aerosol in an urban area: Comparison of measurement instruments and methodologies and pulmonary deposition assessment

    International Nuclear Information System (INIS)

    Berico, M.; Luciani, A.; Formignani, M.

    1996-07-01

    In March 1995 a measurement campaign of atmospheric aerosol in the Bologna urban area (Italy) was carried out. A transportable laboratory, set up by the ENEA (Italian National Agency for New Technologies, Energy and the Environment) Environmental Department (Bologna), was utilized with instruments for the measurement of atmospheric aerosol and meteorological parameters. The campaign had a dual purpose: to characterize the aerosol in the urban area and to compare different measurement instruments and methodologies. Mass concentration measurements, evaluated over a 23-hour period with a total filter, a PM10 dichotomous sampler and a low pressure impactor (LPI Berner), provided information about total suspended particles, the respirable fraction and the granulometric parameters of the aerosol, respectively. Eight meteorological parameters, the number concentration of the submicrometric fraction of the aerosol and the mass concentration of the micrometric fraction were measured continuously. In addition, during a daytime period, several number size distributions of the atmospheric aerosol were estimated by means of a diffusion battery system. Results related to the different measurement methodologies and the granulometric characteristics of the aerosol are presented here. Pulmonary deposition of the atmospheric aerosol is finally calculated, using the size distributions provided by the LPI Berner and the ICRP 66 human respiratory tract model.

  9. Radiation measurements during cavities conditioning on APS RF test stand

    International Nuclear Information System (INIS)

    Grudzien, D.M.; Kustom, R.L.; Moe, H.J.; Song, J.J.

    1993-01-01

    In order to determine the shielding structure around the Advanced Photon Source (APS) synchrotron and storage ring RF stations, the X-ray radiation has been measured in the near field and far field regions of the RF cavities during the normal conditioning process. Two cavity types, a prototype 352-MHz single-cell cavity and a 352-MHz five-cell cavity, are used on the APS and are conditioned in the RF test stand. Vacuum measurements are also taken on the prototype 352-MHz single-cell cavity and the 352-MHz five-cell cavity. The data will be compared with data on the five-cell cavities from CERN.

  10. Conditions of viscosity measurement for detecting irradiated peppers

    International Nuclear Information System (INIS)

    Hayashi, Toru; Todoriki, Setsuko; Okadome, Hiroshi; Kohyama, Kaoru

    1995-01-01

    The viscosity of gelatinized suspensions of black and white peppers decreased with increasing dose. The viscosity was influenced by the gelatinization and viscosity measurement conditions. The difference between unirradiated and irradiated pepper was larger at higher pH and temperature of gelatinization. A viscosity parameter normalized to the starch content of the pepper sample and to the viscosity of a 5% suspension of corn starch eliminated the influence of the viscosity measurement conditions, such as viscometer type, shear rate and temperature. (author)
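
    The normalisation described above can be read as dividing the measured viscosity by the sample's starch fraction and by the reference viscosity of a 5% corn starch suspension measured under the same conditions. The exact form of the parameter is not given in the abstract, so this is an assumed formulation with invented values:

```python
def normalized_viscosity(eta_sample, starch_fraction, eta_cornstarch_5pct):
    """Viscosity parameter normalized to starch content and to a 5% corn
    starch reference measured under identical conditions (assumed form)."""
    return eta_sample / (starch_fraction * eta_cornstarch_5pct)

# hypothetical values: sample viscosity 120 mPa*s, 35% starch,
# 5% corn starch reference 400 mPa*s under the same measurement conditions
p = normalized_viscosity(eta_sample=120.0, starch_fraction=0.35,
                         eta_cornstarch_5pct=400.0)
```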

  11. Optimization on Preparation Condition of Propolis Flavonoids Liposome by Response Surface Methodology and Research of Its Immunoenhancement Activity

    Directory of Open Access Journals (Sweden)

    Ju Yuan

    2013-01-01

    The aim of this study was to prepare propolis flavonoids liposome (PFL), to optimize the preparation conditions, and to investigate further whether the liposome could promote the immunoenhancement activity of propolis flavonoids (PF). PFL was prepared with the ethanol injection method, and the preparation conditions of PFL were optimized with response surface methodology (RSM). Moreover, the immunoenhancement activity of PFL and PF in vitro was determined. The results showed that the optimal preparation conditions for PFL by response surface methodology were as follows: ratio of lipid to drug (w/w) 9.6:1, ratio of soybean phospholipid to cholesterol (w/w) 8.5:1, and injection speed 0.8 mL·min−1. Under these conditions, the experimental encapsulation efficiency of PFL was 91.67 ± 0.21%, which was close to the predicted value, so the optimized preparation conditions are very reliable. Moreover, the results indicated that PFL could not only significantly promote lymphocyte proliferation, singly or synergistically with PHA, but also increase the expression levels of IL-2 and IFN-γ mRNA. This indicates that the liposome could significantly improve the immunoenhancement activity of PF. PFL demonstrates significant immunoenhancement activity, which provides a theoretical basis for further experiments in vivo.
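
    The optimisation step of RSM reduces to fitting a low-order polynomial to the measured responses and locating its stationary point. A one-factor sketch with invented data (the study itself optimised three factors simultaneously):

```python
import numpy as np

# Invented observations: factor level (e.g. lipid-to-drug ratio) vs response
# (e.g. encapsulation efficiency, %). Not the study's data.
x = np.array([6.0, 8.0, 10.0, 12.0, 14.0])
y = np.array([82.0, 89.0, 91.5, 90.0, 84.5])

# design matrix for the quadratic model y = b0 + b1*x + b2*x^2
X = np.column_stack([np.ones_like(x), x, x ** 2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# stationary point of the fitted quadratic (a maximum when b2 < 0)
x_opt = -b1 / (2.0 * b2)
```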

  12. A novel methodology for interpreting air quality measurements from urban streets using CFD modelling

    Science.gov (United States)

    Solazzo, Efisio; Vardoulakis, Sotiris; Cai, Xiaoming

    2011-09-01

    In this study, a novel computational fluid dynamics (CFD) based methodology has been developed to interpret long-term averaged measurements of pollutant concentrations collected at roadside locations. The methodology is applied to the analysis of pollutant dispersion in Stratford Road (SR), a busy street canyon in Birmingham (UK), where a one-year sampling campaign was carried out between August 2005 and July 2006. Firstly, a number of dispersion scenarios are defined by combining sets of synoptic wind velocity and direction. Assuming neutral atmospheric stability, CFD simulations are conducted for all the scenarios, by applying the standard k-ɛ turbulence model, with the aim of creating a database of normalised pollutant concentrations at specific locations within the street. Modelled concentrations for all wind scenarios were compared with hourly observed NOx data. In order to compare with long-term averaged measurements, a weighted average of the CFD-calculated concentration fields was derived, with the weighting coefficients being proportional to the frequency of each scenario observed during the examined period (either monthly or annually). In summary, the methodology consists of (i) identifying the main dispersion scenarios for the street based on wind speed and direction data, (ii) creating a database of CFD-calculated concentration fields for the identified dispersion scenarios, and (iii) combining the CFD results based on the frequency of occurrence of each dispersion scenario during the examined period. The methodology has been applied to calculate monthly and annually averaged benzene concentrations at several locations within the street canyon so that a direct comparison with observations could be made. The results of this study indicate that, within the simplifying assumption of non-buoyant flow, CFD modelling can aid understanding of long-term air quality measurements, and help assessing the representativeness of monitoring locations for population
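
    Step (iii) amounts to a frequency-weighted average of the per-scenario concentration fields. A minimal sketch for a single receptor, with invented scenario labels, concentrations and frequencies:

```python
# normalised concentration at one receptor for each wind scenario (invented)
c_scenario = {"N": 0.8, "E": 1.4, "S": 0.5, "W": 1.1}

# fraction of hours each scenario occurred during the averaging period
frequency = {"N": 0.30, "E": 0.20, "S": 0.35, "W": 0.15}

# long-term average = sum of per-scenario fields weighted by occurrence
c_longterm = sum(c_scenario[s] * frequency[s] for s in c_scenario)
```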

  13. Scientifically-methodological aspects of agroecological estimation of farmlands in the conditions of radioactive pollution

    International Nuclear Information System (INIS)

    Tsybul'ko, N.N.; Misyuchik, A.A.

    2009-01-01

    Methodological aspects of adaptive land use under conditions of radioactive contamination are substantiated, based on an agroecological assessment of the lands with respect to the radiation factor and an assessment of the influence of soil-landscape conditions on radionuclide migration. (authors)

  14. Performance evaluation of CT measurements made on step gauges using statistical methodologies

    DEFF Research Database (Denmark)

    Angel, J.; De Chiffre, L.; Kruth, J.P.

    2015-01-01

    In this paper, a study is presented in which statistical methodologies were applied to evaluate the measurement of step gauges on an X-ray computed tomography (CT) system. In particular, the effects of step gauge material density and orientation were investigated. The step gauges consist of uni- and bidirectional lengths. By confirming the repeatability of measurements made on the test system, the number of required scans in the design of experiment (DOE) was reduced. The statistical model was checked using model adequacy principles; model adequacy checking is an important step in validating...

  15. A Methodology to Measure Synergy Among Energy-Efficiency Programs at the Program Participant Level

    Energy Technology Data Exchange (ETDEWEB)

    Tonn, B.E.

    2003-11-14

    This paper presents a methodology designed to measure synergy among energy-efficiency programs at the program participant level (e.g., households, firms). Three different definitions of synergy are provided: strong, moderate, and weak. Data to measure synergy can be collected through simple survey questions. Straightforward mathematical techniques can be used to estimate the three types of synergy and explore relative synergistic impacts of different subsets of programs. Empirical research is needed to test the concepts and methods and to establish quantitative expectations about synergistic relationships among programs. The market for new energy-efficient motors is the context used to illustrate all the concepts and methods in this paper.

  16. Measure a carbon impact methodology in line with a 2 degree scenario

    International Nuclear Information System (INIS)

    Coeslier, Manuel; Finidori, Esther; Smia, Ladislas

    2015-11-01

    Today, high expectations surround the measurement of carbon impact. Voluntary initiatives and - little by little - legislation push institutional investors to consider the impact that financial portfolios have on the climate and energy transition. However, current methods (of carbon footprint measurement) are not adequate to determine an investment portfolio's contribution to these issues. Current approaches, which do not take a life-cycle vision of carbon foot-printing, have the particular flaw of not accounting for emissions related to companies' products and services. The impact of these products and services on the climate is, however, crucial in many sectors - whether positively in the case of renewable energy and energy efficiency solutions, or negatively in the case of fossil fuels. Following this observation, Mirova and Carbone 4 decided to create a partnership dedicated to developing a new methodology capable of providing a carbon measurement that is aligned with the issues of energy transition: Carbon Impact Analytics (CIA). The CIA methodology focuses primarily on three indicators: - A measure of emissions 'induced' by a company's activity from a life-cycle approach, taking into account direct emissions as well as emissions from product suppliers; - A measure of the emissions which are 'avoided' due to efficiency efforts or deployment of 'low-carbon' solutions; - An overall evaluation that takes into account, in addition to carbon measurement, further information on the company's evolution and the type of capital or R and D expenditures. For these evaluations, the methodology employs a bottom-up approach in which each company is examined individually according to an evaluation framework adapted to each sector. Particular scrutiny is devoted to companies with a significant climate impact: energy producers, carbon-intensive sectors (industry, construction, transport), and providers of low-carbon equipment and solutions. 
Evaluations are then aggregated at

  17. Optimisation of Ultrasound-Assisted Extraction Conditions for Phenolic Content and Antioxidant Capacity from Euphorbia tirucalli Using Response Surface Methodology

    Science.gov (United States)

    Vuong, Quan V.; Goldsmith, Chloe D.; Dang, Trung Thanh; Nguyen, Van Tang; Bhuyan, Deep Jyoti; Sadeqzadeh, Elham; Scarlett, Christopher J.; Bowyer, Michael C.

    2014-01-01

    Euphorbia tirucalli (E. tirucalli) is now widely distributed around the world and is well known as a source of traditional medicine in many countries. This study aimed to utilise response surface methodology (RSM) to optimise ultrasonic-assisted extraction (UAE) conditions for total phenolic compounds (TPC) and antioxidant capacity from E. tirucalli leaf. The results showed that ultrasonic temperature, time and power affected TPC and antioxidant capacity; however, the effects varied. Ultrasonic power had the strongest influence on TPC, whereas ultrasonic temperature had the greatest impact on antioxidant capacity. Ultrasonic time had the least impact on both TPC and antioxidant capacity. The optimum UAE conditions were determined to be 50 °C, 90 min and 200 W. Under these conditions, the E. tirucalli leaf extract yielded 2.93 mg GAE/g FW of TPC and exhibited potent antioxidant capacity. These conditions can be utilised for further isolation and purification of phenolic compounds from E. tirucalli leaf. PMID:26785074

  18. Conditioning a segmented stem profile model for two diameter measurements

    Science.gov (United States)

    Raymond L. Czaplewski; Joe P. Mcclure

    1988-01-01

    The stem profile model of Max and Burkhart (1976) is conditioned for dbh and a second upper stem measurement. This model was applied to a loblolly pine data set using diameter outside bark at 5.3m (i.e., height of 17.3 foot Girard form class) as the second upper stem measurement, and then compared to the original, unconditioned model. Variance of residuals was reduced...

  19. Effect of brewing conditions on antioxidant properties of rosehip tea beverage: study by response surface methodology.

    Science.gov (United States)

    İlyasoğlu, Huri; Arpa, Tuba Eda

    2017-10-01

    The aim of this study was to investigate the effects of brewing conditions (infusion time and temperature) on the antioxidant properties of rosehip tea beverage. The ascorbic acid content, total phenolic content (TPC), and ferric reducing antioxidant power (FRAP) of rosehip tea beverage were analysed. A two-factor and three-level central composite design was applied to evaluate the effects of the variables on the responses. The best quadratic models were obtained for all responses. The generated models were validated under the optimal conditions. At the optimal conditions, the rosehip tea beverage had 3.15 mg 100 mL⁻¹ of ascorbic acid, 61.44 mg 100 mL⁻¹ of TPC, and 2591 µmol of FRAP. The best brewing conditions for the rosehip tea beverage were found to be an infusion time of 6-8 min at temperatures of 84-86 °C.

  20. How Much Can Non-industry Standard Measurement Methodologies Benefit Methane Reduction Programs?

    Science.gov (United States)

    Risk, D. A.; O'Connell, L.; Atherton, E.

    2017-12-01

    In recent years, energy sector methane emissions have been recorded in large part by applying modern non-industry-standard techniques. Industry may lack the regulatory flexibility to use such techniques, or in some cases may not understand the possible associated economic advantage. As progressive jurisdictions move from estimation and towards routine measurement, the research community should provide guidance to help regulators and companies measure more effectively, and economically if possible. In this study, we outline a modelling experiment in which we explore the integration of non-industry-standard measurement techniques as part of a generalized compliance measurement program. The study was not intended to be exhaustive, or to recommend particular combinations, but instead to explore the inter-relationships between methodologies, development type, and compliance practice. We first defined the role, applicable scale, detection limits, working distances, and approximate deployment cost of several measurement methodologies. We then considered a variety of development types differing mainly in footprint, density, and emissions "profile". Using a Monte Carlo approach, we evaluated the effect of these various factors on the cost and confidence of the compliance measurement program. We found that when added individually, some of the research techniques were indeed able to deliver an improvement in cost and/or confidence when used alongside industry-standard Optical Gas Imaging. When applied in combination, the ideal fraction of each measurement technique depended on development type, emission profile, and whether confidence or cost was more important. Results suggest that measurement cost and confidence could be improved if energy companies exploited a wider range of measurement techniques, and in a manner tailored to each development. In the short-term, combining clear scientific guidance with economic information could benefit immediate mitigation efforts over
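
    The experiment can be caricatured with a small Monte Carlo: for a candidate mix of techniques, draw detection outcomes per site and report the average cost and the fraction of emitting sites caught by at least one pass. The technique names, costs and detection probabilities below are invented placeholders, not values from the study:

```python
import random

TECHNIQUES = {
    # name: (cost per site visit, probability of detecting an emitting site)
    "OGI": (600.0, 0.70),
    "vehicle": (150.0, 0.55),
    "aerial": (80.0, 0.40),
}

def simulate_program(mix, n_sites=10000, seed=7):
    """mix maps technique name -> number of passes over each site.
    Returns (average cost per site, fraction of sites with a detection)."""
    random.seed(seed)
    detected, cost = 0, 0.0
    for _ in range(n_sites):
        found = False
        for name, passes in mix.items():
            c, p = TECHNIQUES[name]
            for _ in range(passes):
                cost += c                    # every pass is paid for
                if random.random() < p:
                    found = True             # any pass may catch the leak
        detected += found
    return cost / n_sites, detected / n_sites

# one OGI survey plus two aerial passes per site
cost_per_site, confidence = simulate_program({"OGI": 1, "aerial": 2})
```

    Sweeping `mix` over candidate combinations and plotting cost against confidence reproduces, in miniature, the trade-off the study explores.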

  1. Conditional Standard Errors of Measurement for Scale Scores.

    Science.gov (United States)

    Kolen, Michael J.; And Others

    1992-01-01

    A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)
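
    Under the simplest strong true score model (binomial error), the conditional SEM of a raw score x on an n-item test is Lord's sqrt(x(n-x)/(n-1)), and a scale-score CSEM can be approximated by multiplying by the local slope of the raw-to-scale conversion. The paper's discrete method differs in detail; this delta-method sketch with an invented linear conversion table is illustrative only:

```python
import math

def csem_raw(x, n):
    """Lord's binomial-error conditional SEM for raw score x on n items."""
    return math.sqrt(x * (n - x) / (n - 1))

def csem_scale(x, n, scale):
    """Raw CSEM times the local slope of the raw-to-scale conversion
    (a crude delta-method stand-in for the paper's discrete approach)."""
    hi, lo = min(x + 1, n), max(x - 1, 0)
    slope = (scale[hi] - scale[lo]) / (hi - lo)
    return csem_raw(x, n) * slope

scale = {x: 20 + 2 * x for x in range(41)}   # hypothetical conversion table
s = csem_scale(10, 40, scale)                # CSEM on the scale-score metric
```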

  2. Improved optimum condition for recovery and measurement of 210 ...

    African Journals Online (AJOL)

    The aim of this study was to determine the optimum conditions for deposition of 210Po and to evaluate the accuracy and precision of the results for its determination in environmental samples, in order to improve the technique for measurement of polonium-210 (210Po) in environmental samples. The optimization of five factors (volume ...

  3. Standardization of test conditions for gamma camera performance measurement

    International Nuclear Information System (INIS)

    Jordan, K.

    1980-01-01

    The current practice in measuring gamma camera performance is to use point sources or flood sources in air, often in combination with bar phantoms. This method generally yields the best performance parameters for cameras, but it has little in common with the use of a camera in clinical practice. Particularly in the case of low energy emitters like Tc-99m, the influence of scattered radiation on the performance of cameras is very high. It is therefore important to have test conditions for radionuclide imaging devices that approach, as closely as practicable, the measuring conditions of clinical applications. It is thus good news that the International Electrotechnical Commission (IEC) has prepared a draft 'Characteristics and test conditions of radionuclide imaging devices', which has now been submitted to the national committees for formal approval under the Six Months' Rule. Some essential points of this document are discussed in the paper. (orig.) [de]

  4. Presentation of a methodology for measuring social acceptance of three hydrogen storage technologies and preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Noirot, I.; Bigay, C. N.

    2005-07-01

    Technologies (MASIT). This methodology takes into account the following points of view: technical, economic, environmental, social and industrial/technological risks. MASIT is the methodology chosen to assess the hydrogen storage technologies developed during the StorHy project. With respect to the methodology, each point of view is defined by several criteria selected with car manufacturers and experts in each domain. Then, each criterion is quantified with the contribution of all partners involved in the project. While the technical, economic and environmental criteria are quite objective (easy to define and measure), the social dimension is subjective and also has a large variability, as it depends on perception and measurement at the individual human level. The methodological work therefore consists of improving the MASIT methodology from the social point of view. This methodology is applicable to the comparison of any other technologies, and it has been implemented here to compare the storage technologies developed in the StorHy project for each application selected in the study (light vehicles, fleet vehicles, buses). (Author)

  5. Methodology of heat transfer and flow resistance measurement for matrices of rotating regenerative heat exchangers

    Directory of Open Access Journals (Sweden)

    Butrymowicz Dariusz

    2016-09-01

    The theoretical basis for the indirect measurement of the mean heat transfer coefficient of a packed bed, based on the modified single blow technique, is presented and discussed in the paper. The methodology of this measurement approach, dedicated to the matrix of a rotating regenerative gas heater, is discussed in detail. The testing stand, consisting of a dedicated experimental tunnel with auxiliary equipment and a measurement system, is presented. Selected experimental results are presented and discussed for selected types of matrices of regenerative air preheaters over a wide range of gas Reynolds numbers. Agreement between the theoretically predicted and measured temperature profiles is demonstrated. Exemplary dimensionless relationships between the Colburn heat transfer factor, the Darcy flow resistance factor and the Reynolds number are presented for the investigated matrices of the regenerative gas heater.
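
    The two reported dimensionless groups follow directly from the identified heat transfer coefficient and the measured pressure drop. A sketch with illustrative values (not measurements from the paper):

```python
import math

def colburn_j(h, d_h, k, re, pr):
    """Colburn factor j = Nu / (Re * Pr^(1/3)), with Nu = h * d_h / k."""
    nu = h * d_h / k
    return nu / (re * pr ** (1.0 / 3.0))

def darcy_f(dp, length, d_h, rho, velocity):
    """Darcy friction factor from pressure drop over a matrix of length L:
    f = dp * d_h / (L * 0.5 * rho * v^2)."""
    return dp * d_h / (length * 0.5 * rho * velocity ** 2)

# illustrative inputs: h in W/m^2K, hydraulic diameter in m, air properties
j = colburn_j(h=60.0, d_h=0.004, k=0.026, re=1500.0, pr=0.7)
f = darcy_f(dp=250.0, length=0.2, d_h=0.004, rho=1.2, velocity=3.0)
```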

  6. Radioactivity measurement of the liquid effluents of two university hospitals: methodology, problems arising

    International Nuclear Information System (INIS)

    Basse-Cathalinat, B.; Barthe, N.; Chatti, K.; Ducassou, D.

    2005-01-01

    The authors present the methodology used to measure the radioactivity of the effluents at the output of two nuclear medicine departments located in two hospital complexes of the Bordeaux area. These measurements are intended to meet the requirements of circular DGS/DHOS no 2001/323 of the Ministry for Employment and Solidarity. The selected method is highly sensitive since it is based on a set of very low background spectrometry systems. These measurement devices make it possible to take into account all the isotopes coming from a nuclear medicine department. The authors are aware that such measurements cannot be envisaged in every nuclear medicine department; other technical articles will describe simpler methods allowing satisfactory management of radioactive waste. (author)

  7. Measurements of integrated components' parameters versus irradiation dose: gamma radiation (60Co) dosimetry, methodology, tests

    International Nuclear Information System (INIS)

    Fuan, J.

    1991-01-01

    This paper describes the methodology used for the irradiation of integrated components and the measurement of their parameters, with quality assurance of the dosimetry: - Measurement of the integrated dose using the competences of the Laboratoire Central des Industries Electriques (LCIE): - Measurement of the irradiation dose versus source/component distance, using calibrated equipment. - Use of alanine dosimeters placed on the support of the irradiated components. - Assembly and polarization of the components during the irradiations; selection of the irradiator. - Measurement of the irradiated components' parameters, using the competences of the following companies: - GenRad: GR130 test equipment located at DEIN/SIR-CEN SACLAY. - Laboratoire Central des Industries Electriques (LCIE): GR125 test equipment and its associated test programmes. [fr]

  8. A Methodology for Measuring Microplastic Transport in Large or Medium Rivers

    Directory of Open Access Journals (Sweden)

    Marcel Liedermann

    2018-04-01

    Full Text Available Plastic waste as a persistent contaminant of our environment is a matter of increasing concern due to the largely unknown long-term effects on biota. Although freshwater systems are known to be the transport paths of plastic debris to the ocean, most research has been focused on marine environments. In recent years, freshwater studies have advanced rapidly, but they rarely address the spatial distribution of plastic debris in the water column. A methodology for measuring microplastic transport at various depths that is applicable to medium and large rivers is needed. We present a new methodology offering the possibility of measuring microplastic transport at different depths of verticals that are distributed within a profile. The net-based device is robust and can be applied at high flow velocities and discharges. Nets with different mesh sizes (41 µm, 250 µm, and 500 µm) are exposed at three different depths of the water column. The methodology was tested in the Austrian Danube River, showing a high heterogeneity of microplastic concentrations within one cross section. Due to turbulent mixing, the different densities of the polymers, aggregation, and the growth of biofilms, plastic transport cannot be limited to the surface layer of a river, and must be examined within the whole water column, as for suspended sediments. These results imply that multipoint measurements are required for obtaining the spatial distribution of plastic concentration and are therefore a prerequisite for calculating the passing transport. The analysis of filtration efficiency and side-by-side measurements with different mesh sizes showed that 500 µm nets led to optimal results.
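    The closing argument — that the passing transport must be integrated from multipoint measurements, as is done for suspended sediments — can be sketched as a subsection-weighted sum. The subsection concentrations and partial discharges below are hypothetical, not Danube data.

    ```python
    # Sketch: combining multipoint microplastic concentrations into a
    # cross-section load, analogous to suspended-sediment flux integration.

    def cross_section_transport(concentrations, discharges):
        """Total particle transport (particles/s) from per-subsection mean
        concentrations (particles/m^3) and partial discharges (m^3/s)."""
        if len(concentrations) != len(discharges):
            raise ValueError("one concentration per subsection discharge required")
        return sum(c * q for c, q in zip(concentrations, discharges))

    # Verticals x depths collapsed into subsection averages (made-up values):
    conc = [0.8, 0.5, 1.2, 0.4]          # particles per m^3
    q = [210.0, 380.0, 290.0, 120.0]     # m^3/s per subsection
    print(cross_section_transport(conc, q))
    ```
    
    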

  9. Methodology, Measurement and Analysis of Flow Table Update Characteristics in Hardware OpenFlow Switches

    KAUST Repository

    Kuźniar, Maciej; Pereší ni, Peter; Kostić, Dejan; Canini, Marco

    2018-01-01

    Software-Defined Networking (SDN) and OpenFlow are actively being standardized and deployed. These deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful and safe deployments. We propose a systematic methodology for SDN switch performance analysis and devise a series of experiments based on this methodology. The methodology relies on sending a stream of rule updates, while observing both the control plane view as reported by the switch and the data plane state obtained by probing, and determining switch characteristics by comparing these views. We measure, report and explain the performance characteristics of flow table updates in six hardware OpenFlow switches. Our results describing rule update rates can help SDN designers make their controllers efficient. Further, we also highlight differences between the OpenFlow specification and its implementations that, if ignored, pose a serious threat to network security and correctness.

  10. Don't fear 'fear conditioning': Methodological considerations for the design and analysis of studies on human fear acquisition, extinction, and return of fear.

    Science.gov (United States)

    Lonsdorf, Tina B; Menz, Mareike M; Andreatta, Marta; Fullana, Miguel A; Golkar, Armita; Haaker, Jan; Heitland, Ivo; Hermann, Andrea; Kuhn, Manuel; Kruse, Onno; Meir Drexler, Shira; Meulders, Ann; Nees, Frauke; Pittig, Andre; Richter, Jan; Römer, Sonja; Shiban, Youssef; Schmitz, Anja; Straube, Benjamin; Vervliet, Bram; Wendt, Julia; Baas, Johanna M P; Merz, Christian J

    2017-06-01

    The so-called 'replicability crisis' has sparked methodological discussions in many areas of science in general, and in psychology in particular. This has led to recent endeavours to promote the transparency, rigour, and ultimately, replicability of research. Originating from this zeitgeist, the challenge to discuss critical issues on terminology, design, methods, and analysis considerations in fear conditioning research is taken up by this work, which involved representatives from fourteen of the major human fear conditioning laboratories in Europe. This compendium is intended to provide a basis for the development of a common procedural and terminology framework for the field of human fear conditioning. Whenever possible, we give general recommendations. When this is not feasible, we provide evidence-based guidance for methodological decisions on study design, outcome measures, and analyses. Importantly, this work is also intended to raise awareness and initiate discussions on crucial questions with respect to data collection, processing, statistical analyses, the impact of subtle procedural changes, and data reporting specifically tailored to the research on fear conditioning. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Enzymatic scavenging of oxygen dissolved in water: Application of response surface methodology in optimization of conditions

    Directory of Open Access Journals (Sweden)

    Karimi Afzal

    2012-01-01

    Full Text Available In this work, removal of dissolved oxygen in water through reduction by glucose, catalyzed by the glucose oxidase – catalase enzyme system, was studied. The central composite design (CCD) technique was applied to determine optimum conditions for dissolved oxygen scavenging. Linear and square terms, and interactions between the effective parameters, were obtained to develop a second-order polynomial equation. The adequacy of the obtained model was evaluated by residual plots, the p-value, the coefficient of determination, and Fisher's variance ratio test. Optimum conditions for the activity of the two enzymes in water deoxygenation were obtained as follows: pH = 5.6, T = 40°C, initial substrate concentration [S] = 65.5 mmol/L and glucose oxidase activity [E] = 252 U/L, at an excess amount of catalase. Under the optimal conditions, the model predicted 98.2% deoxygenation within 30 seconds. The deoxygenation achieved experimentally under the predicted conditions was 95.20%, close to the model prediction.
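    The core of the response-surface step is fitting a second-order polynomial to designed experiments, then locating its optimum. A minimal sketch, with a made-up two-factor design and a noiseless toy response (the study's actual CCD used pH, temperature, substrate concentration and enzyme activity):

    ```python
    # Fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    # to coded factor levels by least squares.
    import numpy as np

    def fit_quadratic(x1, x2, y):
        X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    # Toy 3-level factorial design in coded units:
    x1 = np.array([-1, -1, 1, 1, 0, 0, 0, -1, 1], dtype=float)
    x2 = np.array([-1, 1, -1, 1, 0, -1, 1, 0, 0], dtype=float)
    y = 90 - 5 * x1**2 - 3 * x2**2 + 2 * x1   # invented response surface
    beta = fit_quadratic(x1, x2, y)
    print(np.round(beta, 3))
    ```

    With real data the fitted coefficients carry noise, and adequacy is judged exactly as the abstract describes: residual plots, p-values, R² and the F-test.
    
    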

  12. Methodology, Measurement and Analysis of Flow Table Update Characteristics in Hardware OpenFlow Switches

    KAUST Repository

    Kuźniar, Maciej

    2018-02-15

    Software-Defined Networking (SDN) and OpenFlow are actively being standardized and deployed. These deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful and safe deployments. We propose a systematic methodology for SDN switch performance analysis and devise a series of experiments based on this methodology. The methodology relies on sending a stream of rule updates, while observing both the control plane view as reported by the switch and the data plane state obtained by probing, and determining switch characteristics by comparing these views. We measure, report and explain the performance characteristics of flow table updates in six hardware OpenFlow switches. Our results describing rule update rates can help SDN designers make their controllers efficient. Further, we also highlight differences between the OpenFlow specification and its implementations that, if ignored, pose a serious threat to network security and correctness.

  13. A general centroid determination methodology, with application to multilayer dielectric structures and thermally stimulated current measurements

    International Nuclear Information System (INIS)

    Miller, S.L.; Fleetwood, D.M.; McWhorter, P.J.; Reber, R.A. Jr.; Murray, J.R.

    1993-01-01

    A general methodology is developed to experimentally characterize the spatial distribution of occupied traps in dielectric films on a semiconductor. The effects of parasitics such as leakage, charge transport through more than one interface, and interface trap charge are quantitatively addressed. Charge transport with contributions from multiple charge species is rigorously treated. The methodology is independent of the charge transport mechanism(s), and is directly applicable to multilayer dielectric structures. The centroid capacitance, rather than the centroid itself, is introduced as the fundamental quantity that permits the generic analysis of multilayer structures. In particular, the form of many equations describing stacked dielectric structures becomes independent of the number of layers comprising the stack if they are expressed in terms of the centroid capacitance and/or the flatband voltage. The experimental methodology is illustrated with an application using thermally stimulated current (TSC) measurements. The centroid of changes (via thermal emission) in the amount of trapped charge was determined for two different samples of a triple-layer dielectric structure. A direct consequence of the TSC analyses is the rigorous proof that changes in interface trap charge can contribute, though typically not significantly, to thermally stimulated current.

  14. A new transmission methodology for quality assurance in radiotherapy based on radiochromic film measurements

    Science.gov (United States)

    do Amaral, Leonardo L.; Pavoni, Juliana F.; Sampaio, Francisco; Netto, Thomaz Ghilardi

    2015-01-01

    Despite individual quality assurance (QA) being recommended for complex techniques in radiotherapy (RT) treatment, the possibility of errors in dose delivery during therapeutic application has been verified. Therefore, it is fundamentally important to conduct in vivo QA during treatment. This work presents an in vivo transmission quality control methodology, using radiochromic film (RCF) coupled to the linear accelerator (linac) accessory holder. This QA methodology compares the dose distribution measured by the film in the linac accessory holder with the dose distribution expected by the treatment planning software. The calculated dose distribution is obtained in the coronal and central plane of a phantom with the same dimensions as the acrylic support used for positioning the film, but at a source‐to‐detector distance (SDD) of 100 cm, as a result of transferring the IMRT plan in question with all the fields positioned with the gantry vertical, that is, perpendicular to the phantom. To validate this procedure, first a Monte Carlo simulation using the PENELOPE code was done to evaluate the differences between the dose distributions measured by the film at SDDs of 56.8 cm and 100 cm. After that, several simple dose distribution tests were evaluated using the proposed methodology, and finally a study using IMRT treatments was done. In the Monte Carlo simulation, the mean percentage of points approved in the gamma function comparing the dose distributions acquired at the two SDDs was 99.92%±0.14%. In the simple dose distribution tests, the mean percentage of points approved in the gamma function was 99.85%±0.26% and the mean percentage difference in the normalization point doses was −1.41%. The transmission methodology was approved in 24 of 25 IMRT test irradiations. Based on these results, it can be concluded that the proposed methodology using RCFs can be applied for in vivo QA in RT treatments. PACS numbers: 87.55.Qr, 87.55.km, 87.55.N‐
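    The pass rates above come from a gamma comparison. A minimal 1-D sketch of that metric is shown below, assuming the common 3%/3 mm criteria; real film QA compares 2-D dose maps with dedicated software, and the profiles here are invented.

    ```python
    # 1-D gamma index: each reference point searches the evaluated profile for
    # the best combined dose-difference / distance-to-agreement score.
    import numpy as np

    def gamma_1d(ref, evalu, x, dose_tol=0.03, dist_tol=3.0):
        """Per-point gamma for dose profiles sampled at positions x (mm).
        dose_tol is fractional (3% global), dist_tol in mm (3 mm)."""
        ref = np.asarray(ref, float)
        evalu = np.asarray(evalu, float)
        norm = ref.max()                       # global normalization dose
        gam = np.empty(ref.size)
        for i in range(ref.size):
            dd = (evalu - ref[i]) / (dose_tol * norm)
            dx = (x - x[i]) / dist_tol
            gam[i] = np.sqrt(dd**2 + dx**2).min()
        return gam

    x = np.arange(0.0, 10.0, 1.0)              # mm grid
    ref = np.full(10, 100.0)                   # flat reference profile
    evalu = ref * 1.01                         # uniform 1% dose difference
    g = gamma_1d(ref, evalu, x)
    print(round(100.0 * float(np.mean(g <= 1.0)), 1))   # percent passing
    ```

    Points with gamma ≤ 1 count as "approved"; the abstract's 99.92% figures are the mean of such pass rates.
    
    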

  15. Development of plant condition measurement - The Jimah Model

    Science.gov (United States)

    Evans, Roy F.; Syuhaimi, Mohd; Mazli, Mohammad; Kamarudin, Nurliyana; Maniza Othman, Faiz

    2012-05-01

    The Jimah Model is an information management model. The model has been designed to facilitate analysis of machine condition by integrating diagnostic data with quantitative and qualitative information. The model treats data as a single strand of information - metaphorically a 'genome' of data. The 'Genome' is structured to be representative of plant function and identifies the condition of selected components (or genes) in each machine. To date in industry, computer-aided work processes used with traditional industrial practices have been unable to consistently deliver a standard of information suitable for holistic evaluation of machine condition and change. Significantly, the reengineered site strategies necessary for implementation of this "data genome concept" have resulted in enhanced knowledge and management of plant condition. In large plants with high initial equipment costs and subsequent high maintenance costs, accurate measurement of major component condition becomes central to whole-of-life management and replacement decisions. A case study following implementation of the model at a major power station site in Malaysia (Jimah) shows that modeling of plant condition and wear (in real time) can be made a practical reality.

  16. Development of plant condition measurement - The Jimah Model

    International Nuclear Information System (INIS)

    Evans, Roy F; Syuhaimi, Mohd; Mazli, Mohammad; Kamarudin, Nurliyana; Othman, Faiz Maniza

    2012-01-01

    The Jimah Model is an information management model. The model has been designed to facilitate analysis of machine condition by integrating diagnostic data with quantitative and qualitative information. The model treats data as a single strand of information - metaphorically a 'genome' of data. The 'Genome' is structured to be representative of plant function and identifies the condition of selected components (or genes) in each machine. To date in industry, computer-aided work processes used with traditional industrial practices have been unable to consistently deliver a standard of information suitable for holistic evaluation of machine condition and change. Significantly, the reengineered site strategies necessary for implementation of this 'data genome concept' have resulted in enhanced knowledge and management of plant condition. In large plants with high initial equipment costs and subsequent high maintenance costs, accurate measurement of major component condition becomes central to whole-of-life management and replacement decisions. A case study following implementation of the model at a major power station site in Malaysia (Jimah) shows that modeling of plant condition and wear (in real time) can be made a practical reality.

  17. A methodology for performing virtual measurements in a nuclear reactor system

    International Nuclear Information System (INIS)

    Ikonomopoulos, A.; Uhrig, R.E.; Tsoukalas, L.H.

    1992-01-01

    A novel methodology is presented for monitoring nonphysically measurable variables in an experimental nuclear reactor. It is based on the employment of artificial neural networks to generate fuzzy values. Neural networks map spatiotemporal information (in the form of time series) to algebraically defined membership functions. The entire process can be thought of as a virtual measurement. Through such virtual measurements the values of nondirectly monitored parameters with operational significance, e.g., transient-type, valve-position, or performance, can be determined. Generating membership functions is a crucial step in the development and practical utilization of fuzzy reasoning, a computational approach that offers the advantage of describing the state of the system in a condensed, linguistic form, convenient for monitoring, diagnostics, and control algorithms.
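    The "virtual measurement" idea — a time series reduced to membership values for linguistic states — can be sketched as below. The feature, the membership shapes and the state labels are all invented here; in the paper this mapping is learned by a neural network rather than hand-coded.

    ```python
    # Toy virtual measurement: extract a feature from a recorded transient and
    # express the state as fuzzy membership values (hypothetical states/shapes).

    def triangular(x, a, b, c):
        """Triangular membership function peaking at b, zero outside (a, c)."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def virtual_measurement(signal):
        # crude feature: mean slope of the recorded transient
        slope = (signal[-1] - signal[0]) / (len(signal) - 1)
        return {
            "slow_transient": triangular(slope, -0.5, 0.0, 0.5),
            "fast_transient": triangular(slope, 0.25, 1.0, 2.0),
        }

    reading = [0.0, 0.1, 0.25, 0.30, 0.42]   # hypothetical sensor trace
    print(virtual_measurement(reading))
    ```
    
    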

  18. Covariance methodology applied to 35S disintegration rate measurements by the CIEMAT/NIST method

    International Nuclear Information System (INIS)

    Koskinas, M.F.; Nascimento, T.S.; Yamazaki, I.M.; Dias, M.S.

    2014-01-01

    The Nuclear Metrology Laboratory (LMN) at IPEN is carrying out measurements in an LSC (liquid scintillation counting) system, applying the CIEMAT/NIST method. In this context, 35S is an important radionuclide for medical applications and is difficult to standardize by other primary methods due to its low beta-ray energy. CIEMAT/NIST is a standard technique used by most metrology laboratories to improve accuracy and speed up beta emitter standardization. The focus of the present work was to apply the covariance methodology for determining the overall uncertainty in the 35S disintegration rate. All partial uncertainties involved in the measurements were considered, taking into account all possible correlations between each pair of them. - Highlights: ► 35S disintegration rate measured in a liquid scintillation system using the CIEMAT/NIST method. ► Covariance methodology applied to the overall uncertainty in the 35S disintegration rate. ► Monte Carlo simulation was applied to determine 35S activity in the 4πβ(PC)-γ coincidence system
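    The covariance methodology referred to above combines partial uncertainties and their correlations through the sensitivity vector: u_y² = Jᵀ C J. A generic sketch, with illustrative numbers rather than the ³⁵S data:

    ```python
    # Combined standard uncertainty from a covariance matrix of the inputs.
    import numpy as np

    def combined_uncertainty(jacobian, covariance):
        """u_y = sqrt(J^T C J) for sensitivity vector J and covariance C."""
        j = np.asarray(jacobian, float)
        c = np.asarray(covariance, float)
        return float(np.sqrt(j @ c @ j))

    # Two correlated inputs: u1 = 0.2, u2 = 0.1, correlation coefficient 0.5,
    # so the off-diagonal covariance is 0.5 * 0.2 * 0.1 = 0.01.
    C = np.array([[0.04, 0.01],
                  [0.01, 0.01]])
    J = [1.0, 2.0]        # sensitivity coefficients dy/dx1, dy/dx2
    print(round(combined_uncertainty(J, C), 4))
    ```

    Ignoring the off-diagonal terms would understate (or overstate, for negative correlations) the overall uncertainty — which is the point of applying the full covariance treatment.
    
    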

  19. Dielectric Barrier Discharge (DBD) Plasma Actuators Thrust-Measurement Methodology Incorporating New Anti-Thrust Hypothesis

    Science.gov (United States)

    Ashpis, David E.; Laun, Matthew C.

    2014-01-01

    We discuss thrust measurements of Dielectric Barrier Discharge (DBD) plasma actuator devices used for aerodynamic active flow control. After a review of our experience with conventional thrust measurement and the significant non-repeatability of its results, we devised a suspended actuator test setup, and now present a methodology of thrust measurement with decreased uncertainty. The methodology consists of frequency scans at constant voltages: the frequency is increased step-wise from several Hz to a maximum of several kHz, then decreased back down to the starting frequency. This sequence is performed first at the highest voltage of interest, then repeated at lower voltages. The data in the descending frequency direction are more consistent and are selected for reporting. Sample results show a strong dependence of thrust on humidity, which also affects the consistency and fluctuations of the measurements. We also observed negative values of thrust, or "anti-thrust", at low frequencies between 4 Hz and 64 Hz. The anti-thrust is proportional to the mean-squared voltage and is frequency independent. Departures from the parabolic anti-thrust curve are correlated with the appearance of visible plasma discharges. We propose the anti-thrust hypothesis: the measured thrust is a sum of plasma thrust and anti-thrust, where the anti-thrust is assumed to exist at all frequencies and voltages. The anti-thrust depends on actuator geometry and materials and on the test installation. This enables the separation of the plasma thrust from the measured total thrust, and thus more meaningful comparisons between actuators at different installations and laboratories. The dependence on test installation was validated by surrounding the actuator with a large-diameter, grounded, metal sleeve.
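    The correction implied by the anti-thrust hypothesis can be sketched as follows: model the anti-thrust as k·V² (frequency independent), estimate k from low-frequency points where the plasma thrust is negligible, then subtract it from the measured total. The data below are synthetic, not the paper's measurements.

    ```python
    # Separate plasma thrust from measured thrust via a parabolic anti-thrust fit.
    import numpy as np

    def plasma_thrust(voltage, measured, k):
        """Measured total thrust minus the anti-thrust k * V^2."""
        v = np.asarray(voltage, float)
        return np.asarray(measured, float) - k * v**2

    # Low-frequency points assumed to be pure anti-thrust (synthetic, mN vs kV):
    v_low = np.array([2.0, 4.0, 6.0])
    t_low = np.array([-0.4, -1.6, -3.6])
    k = float(np.polyfit(v_low**2, t_low, 1)[0])   # slope of thrust vs V^2

    print(round(k, 3), plasma_thrust([4.0], [5.0], k))
    ```
    
    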

  20. A smartphone-driven methodology for estimating physical activities and energy expenditure in free living conditions.

    Science.gov (United States)

    Guidoux, Romain; Duclos, Martine; Fleury, Gérard; Lacomme, Philippe; Lamaudière, Nicolas; Manenq, Pierre-Henri; Paris, Ludivine; Ren, Libo; Rousset, Sylvie

    2014-12-01

    This paper introduces a function dedicated to the estimation of total energy expenditure (TEE) of daily activities based on data from accelerometers integrated into smartphones. The use of mass-market sensors such as accelerometers offers a promising solution for the general public due to the growing smartphone market over the last decade. The TEE estimation function quality was evaluated using data from intensive numerical experiments based, first, on 12 volunteers equipped with a smartphone and two research sensors (Armband and Actiheart) in controlled conditions (CC) and, then, on 30 other volunteers in free-living conditions (FLC). The TEE given by these two sensors in both conditions and estimated from the metabolic equivalent tasks (MET) in CC served as references during the creation and evaluation of the function. The TEE mean gap in absolute value between the function and the three references was 7.0%, 16.4% and 2.7% in CC, and 17.0% and 23.7% according to Armband and Actiheart, respectively, in FLC. This is the first step in the definition of a new feedback mechanism that promotes self-management and daily-efficiency evaluation of physical activity as part of an information system dedicated to the prevention of chronic diseases. Copyright © 2014 Elsevier Inc. All rights reserved.
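    As background to the MET-based reference used in the controlled conditions, the standard bookkeeping estimates the energy cost of an activity as MET × body mass (kg) × duration (h), in kcal. The activity list and values below are illustrative, not from the study.

    ```python
    # Toy daily energy expenditure from MET values (illustrative numbers only).

    def tee_kcal(activities, weight_kg):
        """Total energy expenditure in kcal; activities is an iterable of
        (met, hours) pairs."""
        return sum(met * weight_kg * hours for met, hours in activities)

    day = [(0.9, 8.0),    # sleeping
           (1.5, 10.0),   # sedentary time
           (3.5, 1.0),    # walking
           (1.8, 5.0)]    # light tasks
    print(tee_kcal(day, 70.0))
    ```

    The paper's contribution is estimating an equivalent quantity from smartphone accelerometer data alone, without an annotated activity log.
    
    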

  1. OPTIMIZATION OF SESAME SEEDS OIL EXTRACTION OPERATING CONDITIONS USING THE RESPONSE SURFACE DESIGN METHODOLOGY

    Directory of Open Access Journals (Sweden)

    HAITHAM OSMAN

    2016-12-01

    Full Text Available This paper applies Response Surface Design (RSD) to model experimental data obtained from the extraction of sesame seeds oil using n-hexane, chloroform and acetone as solvents under different operating conditions. The results revealed that n-hexane outperformed the extraction obtained using chloroform and acetone. The developed model predicted that n-hexane, at a rotational speed of 547 rpm, a solvent-seed contact time of 19.46 hours and a solvent:seeds ratio of 4.93, yields the optimum oil extraction of 37.03%, outperforming the chloroform and acetone models, which gave predictions of 4.75 and 4.21, respectively. The maximum predicted yield for chloroform is 6.73%, under operating conditions of 602 rpm and 24 hours contact time, with a solvent:seeds ratio of 1.74. On the other hand, the maximum prediction for acetone is only 4.37%, with operating conditions of 467 rpm and 6.00 hours contact time, with a solvent:seeds ratio of 1. It has been found that the maximum oil extraction yields obtained from chloroform (6.73%) and acetone (4.37%) are much lower than the 37.03% predicted for n-hexane.

  2. Comparison of methodologies in determining bone marrow fat percentage under different environmental conditions.

    Science.gov (United States)

    Murden, David; Hunnam, Jaimie; De Groef, Bert; Rawlin, Grant; McCowan, Christina

    2017-01-01

    The use of bone marrow fat percentage has been recommended in assessing body condition at the time of death in wild and domestic ruminants, but few studies have looked at the effects of time and exposure on animal bone marrow. We investigated the utility of bone marrow fat extraction as a tool for establishing antemortem body condition in postmortem specimens from sheep and cattle, particularly after exposure to high heat, and compared different techniques of fat extraction for this purpose. Femora were collected from healthy and "skinny" sheep and cattle. The bones were either frozen or subjected to 40°C heat; heated bones were either wrapped in plastic to minimize desiccation or were left unwrapped. Marrow fat percentage was determined at different time intervals by oven-drying, or by solvent extraction using hexane in manual equipment or a Soxhlet apparatus. Extraction was performed, where possible, on both wet and dried tissue. Multiple samples were tested from each bone. Bone marrow fat analysis using a manual, hexane-based extraction technique was found to be a moderately sensitive method of assessing antemortem body condition of cattle up to 6 d after death. Multiple replicates should be analyzed where possible. Samples from "skinny" sheep showed a different response to heat from those of "healthy" sheep; "skinny" samples were so reduced in quantity by day 6 (the first sampling day) that no individual testing could be performed. Further work is required to understand the response of sheep marrow.

  3. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  4. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra; Mallick, Bani K.; Staudenmayer, John; Pati, Debdeep; Carroll, Raymond J.

    2014-01-01

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  5. Optimizing the conditions for hydrothermal liquefaction of barley straw for bio-crude oil production using response surface methodology

    DEFF Research Database (Denmark)

    Zhu, Zhe; Rosendahl, Lasse Aistrup; Toor, Saqib Sohail

    2018-01-01

    The present paper examines the conversion of barley straw to bio-crude oil (BO) via hydrothermal liquefaction. Response surface methodology based on central composite design was utilized to optimize the conditions of four independent variables including reaction temperature (factor X1, 260-340 °C) ... phenols and their derivatives, acids, aromatic hydrocarbons, ketones, N-containing compounds and alcohols, which makes it a promising material for applications either as a bio-fuel or as a phenol substitute in bio-phenolic resins.

  6. Can ensemble condition in a hall be improved and measured?

    DEFF Research Database (Denmark)

    Gade, Anders Christian

    1988-01-01

    In collaboration with the Danish Broadcasting Corporation an extensive series of experiments has been carried out in The Danish Radio Concert Hall with the practical purpose of trying to improve the ensemble conditions on the platform for the resident symphony orchestra. First, a series ... of the ceiling reflectors; and (c) changing the position of the orchestra on the platform. These variables were then tested in full scale experiments in the hall including subjective evaluation by the orchestra in order to verify their effects under practical conditions. New objective parameters, which showed very high correlations with the subjective data, also made it possible to compare the improvements with conditions as recently measured in famous European Halls. Besides providing the needed results, the experiments also shed some light on how musicians change their criteria for judging acoustic...

  7. Methods for measuring of fuel can deformation under radiation conditions

    International Nuclear Information System (INIS)

    Zelenchuk, A.V.; Fetisov, B.V.; Lakin, Yu.G.; Tonkov, V.Yu.

    1978-01-01

    The possibility of measuring fuel can deformation under radiation conditions by means of the acoustic method and tensoresistors is considered. The construction and operation of the in-pile facility for measuring creep of a fuel can specimen loaded by internal pressure are described. Data on the effect of neutron radiation on the creep rate of zirconium fuel cans are presented. The results obtained with tensoresistors are in good agreement with those obtained by the acoustic method, which makes it possible to recommend both methods for investigating the irradiation creep of fuel element cans

  8. A methodology for on-line calculation of temperature and thermal stress under non-linear boundary conditions

    International Nuclear Information System (INIS)

    Botto, D.; Zucca, S.; Gola, M.M.

    2003-01-01

    In the literature, many works deal with the on-line calculation of temperature and thermal stress for machine components and structures, in order to evaluate fatigue damage accumulation and estimate residual life. One of the most widespread methodologies is the Green's function technique (GFT), by which machine parameters such as fluid temperatures, pressures and flow rates are converted into metal temperature transients and thermal stresses. However, since the GFT is based upon the linear superposition principle, it cannot be directly used in the case of varying heat transfer coefficients. In the present work, a different methodology is proposed, based upon CMS for temperature transient calculation and upon the GFT for the related thermal stress evaluation. This new approach allows variable heat transfer coefficients to be accounted for. The methodology is applied to two different case studies taken from the literature: a thick pipe and a nozzle connected to a spherical head, both subjected to multiple convective boundary conditions
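    As background, the linear-superposition step at the heart of the GFT can be sketched as a discrete Duhamel convolution: the metal temperature is the unit-step response convolved with the increments of the fluid temperature history. The step-response samples and history below are invented; the paper's contribution is precisely the extension beyond this fixed-coefficient linear setting.

    ```python
    # Duhamel superposition: y[n] = sum_k G[n-k] * dT[k], valid only for
    # linear boundary conditions (constant heat transfer coefficient).
    import numpy as np

    def gft_response(step_response, fluid_temp):
        """Temperature transient from unit-step response G and fluid history."""
        d_t = np.diff(np.concatenate(([0.0], fluid_temp)))  # input increments
        full = np.convolve(step_response, d_t)
        return full[: len(fluid_temp)]

    G = np.array([0.2, 0.5, 0.8, 0.95, 1.0])          # unit-step response samples
    fluid = np.array([10.0, 10.0, 10.0, 10.0, 10.0])  # 10 K step at t = 0
    print(gft_response(G, fluid))
    ```
    
    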

  9. Optimization of fermentation conditions for 1,3-propanediol production by marine Klebsiella pneumoniae HSL4 using response surface methodology

    Science.gov (United States)

    Li, Lili; Zhou, Sheng; Ji, Huasong; Gao, Ren; Qin, Qiwei

    2014-09-01

    The industrially important organic compound 1,3-propanediol (1,3-PDO) is mainly used as a building block for the production of various polymers. In the present study, a response surface methodology protocol was followed to determine and optimize fermentation conditions for the maximum production of 1,3-PDO using marine-derived Klebsiella pneumoniae HSL4. Four nutritional supplements together with three independent culture conditions were optimized as follows: 29.3 g/L glycerol, 8.0 g/L K2HPO4, 7.6 g/L (NH4)2SO4, 3.0 g/L KH2PO4, pH 7.1, cultivation at 35°C for 12 h. Under the optimal conditions, a maximum 1,3-PDO concentration of 14.5 g/L, a productivity of 1.21 g/(L·h) and a glycerol conversion of 0.49 g/g were obtained. In comparison with the control conditions, fermentation under the optimized conditions achieved an increase of 38.8% in 1,3-PDO concentration, 39.0% in productivity and 25.7% in glycerol conversion in flask culture. This enhancement trend was further confirmed when the fermentation was conducted in a 5-L fermentor. The optimized fermentation conditions could be an important basis for developing low-cost, large-scale methods for industrial production of 1,3-PDO in the future.
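    The reported productivity and conversion follow directly from the abstract's numbers; a quick arithmetic check (conversion here is computed against the full 29.3 g/L of glycerol supplied, which reproduces the reported 0.49 g/g):

    ```python
    # Fermentation metrics from the values quoted in the abstract.
    pdo = 14.5         # g/L 1,3-PDO after fermentation
    time_h = 12.0      # h of cultivation
    glycerol = 29.3    # g/L glycerol supplied

    productivity = pdo / time_h    # g/(L*h)
    conversion = pdo / glycerol    # g product per g glycerol
    print(round(productivity, 2), round(conversion, 2))
    ```
    
    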

  10. Assessing Long-Term Wind Conditions by Combining Different Measure-Correlate-Predict Algorithms: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, J.; Chowdhury, S.; Messac, A.; Hodge, B. M.

    2013-08-01

    This paper significantly advances the hybrid measure-correlate-predict (MCP) methodology, enabling it to account for variations of both wind speed and direction. The advanced hybrid MCP method uses the recorded data of multiple reference stations to estimate the long-term wind condition at a target wind plant site. The results show that the accuracy of the hybrid MCP method is highly sensitive to the combination of the individual MCP algorithms and reference stations. It was also found that the best combination of MCP algorithms varies based on the length of the correlation period.
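    The core "correlate, then predict" step can be sketched as a minimal single-reference linear MCP (illustrative only: the paper's hybrid method combines multiple MCP algorithms and reference stations and also handles wind direction; all data below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic winds: a long-term reference record and a short concurrent target record.
ref_long = rng.weibull(2.0, 8760) * 8.0                          # one year, hourly, m/s
target_short = 0.9 * ref_long[:720] + rng.normal(0, 0.5, 720)    # one concurrent month

# "Correlate": linear fit on the concurrent period.
slope, intercept = np.polyfit(ref_long[:720], target_short, 1)

# "Predict": apply the fit to the full reference record to estimate
# the long-term wind speed series at the target site.
target_long_est = slope * ref_long + intercept
print(round(float(target_long_est.mean()), 2))
```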

  11. Standardization of test conditions for gamma camera performance measurement

    International Nuclear Information System (INIS)

    Jordan, K.

    1982-02-01

    The usual way of measuring gamma camera performance is to use point sources or flood sources in air, often in combination with bar phantoms. This method has nothing in common with the use of a camera in clinical practice. Particularly in the case of low energy emitters such as Tc-99m, the influence of scattered radiation on camera performance is very high. The IEC document 'Characteristics and test conditions of radionuclide imaging devices' is discussed.

  12. Electrochemical noise measurements under pressurized water reactor conditions

    International Nuclear Information System (INIS)

    Van Nieuwenhove, R.

    2000-01-01

    Electrochemical potential noise measurements on sensitized stainless steel pressure tubes under pressurized water reactor (PWR) conditions were performed for the first time. Very short potential spikes, believed to be associated with crack initiation events, were detected when the sample was stressed above the yield strength; these spikes increased in magnitude until the sample broke. Sudden increases in plastic deformation, as induced by an increased tube pressure, resulted in slower, high-amplitude potential transients, often accompanied by a reduction in noise level.

  13. Measurement and verification of low income energy efficiency programs in Brazil: Methodological challenges

    Energy Technology Data Exchange (ETDEWEB)

    Martino Jannuzzi, Gilberto De; Rodrigues da Silva, Ana Lucia; Melo, Conrado Augustus de; Paccola, Jose Angelo; Dourado Maia Gomes, Rodolfo (State Univ. of Campinas, International Energy Initiative (Brazil))

    2009-07-01

    Electric utilities in Brazil are investing about 80 million dollars annually in low-income energy efficiency programs, about half of their total compulsory investments in end-use efficiency programs under current regulation. Since 2007 the regulator has enforced the need to provide evaluation plans for the programs delivered. This paper presents the measurement and verification (M&V) methodology that has been developed to accommodate the characteristics of lighting and refrigerator programs that have been introduced in Brazilian urban and peri-urban slums. A combination of household surveys, end-use measurements and metering at the transformer and grid levels was performed before and after program implementation. The methodology has to accommodate the dynamics, housing, electrical wiring and connections of the population, as well as their ability to pay for electricity and to participate in the program. Results obtained in slums in Rio de Janeiro are presented. Impacts of the programs were evaluated in energy terms for households and utilities. Feedback from the evaluations performed also informed the design of new programs for low-income households.

  14. National and Regional Surveys of Radon Concentration in Dwellings. Review of Methodology and Measurement Techniques

    International Nuclear Information System (INIS)

    2013-01-01

    Reliable, comparable and 'fit for purpose' results are essential requirements for any decision based on analytical measurements. For the analyst, the availability of tested and validated sampling and analytical procedures is an extremely important tool for carrying out such measurements. For maximum utility, such procedures should be comprehensive, clearly formulated and readily available to both the analyst and the customer for reference. In the specific case of radon surveys, it is very important to design a survey in such a way as to obtain results that can reasonably be considered representative of a population. Since 2004, the Environment Programme of the IAEA has included activities aimed at the development of a set of procedures for the measurement of radionuclides in terrestrial environmental samples. The development of radon measurement procedures for national and regional surveys started with the collection and review of more than 160 relevant scientific papers. On the basis of this review, this publication summarizes the methodology and the measurement techniques suitable for a population representative national or regional survey on radon concentration in the indoor air of dwellings. The main elements of the survey design are described and discussed, such as the sampling scheme, the protocols, the questionnaire and the data analysis, with particular attention to the potential biases that can affect the representativeness of the results. Moreover, the main measurement techniques suitable for national surveys on indoor radon are reviewed, with particular attention to the elements that can affect the precision and accuracy of the results.

  15. National and Regional Surveys of Radon Concentration in Dwellings. Review of Methodology and Measurement Techniques

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-12-15

    Reliable, comparable and 'fit for purpose' results are essential requirements for any decision based on analytical measurements. For the analyst, the availability of tested and validated sampling and analytical procedures is an extremely important tool for carrying out such measurements. For maximum utility, such procedures should be comprehensive, clearly formulated and readily available to both the analyst and the customer for reference. In the specific case of radon surveys, it is very important to design a survey in such a way as to obtain results that can reasonably be considered representative of a population. Since 2004, the Environment Programme of the IAEA has included activities aimed at the development of a set of procedures for the measurement of radionuclides in terrestrial environmental samples. The development of radon measurement procedures for national and regional surveys started with the collection and review of more than 160 relevant scientific papers. On the basis of this review, this publication summarizes the methodology and the measurement techniques suitable for a population representative national or regional survey on radon concentration in the indoor air of dwellings. The main elements of the survey design are described and discussed, such as the sampling scheme, the protocols, the questionnaire and the data analysis, with particular attention to the potential biases that can affect the representativeness of the results. Moreover, the main measurement techniques suitable for national surveys on indoor radon are reviewed, with particular attention to the elements that can affect the precision and accuracy of the results.

  16. Probiotics production and alternative encapsulation methodologies to improve their viabilities under adverse environmental conditions.

    Science.gov (United States)

    Coghetto, Chaline Caren; Brinques, Graziela Brusch; Ayub, Marco Antônio Záchia

    2016-12-01

    Probiotic products are dietary supplements containing live microorganisms producing beneficial health effects on the host by improving intestinal balance and nutrient absorption. Among probiotic microorganisms, those classified as lactic acid bacteria are of major importance to the food and feed industries. Probiotic cells can be produced using alternative carbon and nitrogen sources, such as agroindustrial residues, at the same time contributing to reduce process costs. On the other hand, the survival of probiotic cells in formulated food products, as well as in the host gut, is an essential nutritional aspect concerning health benefits. Therefore, several cell microencapsulation techniques have been investigated as a way to improve cell viability and survival under adverse environmental conditions, such as the gastrointestinal milieu of hosts. In this review, different aspects of probiotic cells and technologies of their related products are discussed, including formulation of culture media, and aspects of cell microencapsulation techniques required to improve their survival in the host.

  17. An ultrasonic methodology for muscle cross section measurement in support of space flight

    Science.gov (United States)

    Hatfield, Thomas R.; Klaus, David M.; Simske, Steven J.

    2004-09-01

    The number one priority for any manned space mission is the health and safety of its crew. The study of the short and long term physiological effects on humans is paramount to ensuring crew health and mission success. One of the challenges associated in studying the physiological effects of space flight on humans, such as loss of bone and muscle mass, has been that of readily attaining the data needed to characterize the changes. The small sampling size of astronauts, together with the fact that most physiological data collection tends to be rather tedious, continues to hinder elucidation of the underlying mechanisms responsible for the observed changes that occur in space. Better characterization of the muscle loss experienced by astronauts requires that new technologies be implemented. To this end, we have begun to validate a 360° ultrasonic scanning methodology for muscle measurements and have performed empirical sampling of a limb surrogate for comparison. Ultrasonic wave propagation was simulated using 144 stations of rotated arm and calf MRI images. These simulations were intended to provide a preliminary check of the scanning methodology and data analysis before its implementation with hardware. Pulse-echo waveforms were processed for each rotation station to characterize fat, muscle, bone, and limb boundary interfaces. The percentage error between MRI reference values and calculated muscle areas, as determined from reflection points for calf and arm cross sections, was -2.179% and +2.129%, respectively. These successful simulations suggest that ultrasound pulse scanning can be used to effectively determine limb cross-sectional areas. Cross-sectional images of a limb surrogate were then used to simulate signal measurements at several rotation angles, with ultrasonic pulse-echo sampling performed experimentally at the same stations on the actual limb surrogate to corroborate the results. The objective of the surrogate sampling was to compare the signal
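    The signed percentage-error comparison against the MRI reference values can be sketched as follows (the example area numbers are hypothetical, not taken from the study):

```python
def percent_error(estimated_area, reference_area):
    """Signed percentage error of an estimated limb cross-sectional area
    against an MRI reference value."""
    return (estimated_area - reference_area) / reference_area * 100.0

# Hypothetical areas in cm^2, for illustration only:
print(round(percent_error(48.95, 50.0), 3))  # a ~2% underestimate, like the calf result
```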

  18. Thermophysical Properties Measurement of High-Temperature Liquids Under Microgravity Conditions in Controlled Atmospheric Conditions

    Science.gov (United States)

    Watanabe, Masahito; Ozawa, Shumpei; Mizuno, Akotoshi; Hibiya, Taketoshi; Kawauchi, Hiroya; Murai, Kentaro; Takahashi, Suguru

    2012-01-01

    Microgravity conditions offer advantages for measuring the surface tension and viscosity of metallic liquids by the oscillating drop method with an electromagnetic levitation (EML) device. We are therefore preparing thermophysical property measurements using the Materials-Science Laboratory ElectroMagnetic-Levitator (MSL-EML) facilities on the International Space Station (ISS). Recently, it has been recognized that the dependence of surface tension on oxygen partial pressure (Po2) must be considered for industrial application of surface tension values. The effect of Po2 on surface tension would also apparently change the viscosity obtained from the damping oscillation model; therefore, surface tension and viscosity must be measured simultaneously under the same atmospheric conditions. Moreover, the effect of the electromagnetic force (EMF) on the surface oscillations must be clarified in order to obtain the ideal surface oscillation, because the EMF acts as an external force on the oscillating liquid droplets, and an excessive EMF makes the apparent viscosity values large. Using the parabolic flight levitation experimental facilities (PFLEX), our group systematically investigated the effects of Po2 and external EMF on the surface oscillation of levitated liquid droplets, aiming at precise measurements of surface tension and viscosity of high temperature liquids for future ISS experiments. We observed the surface oscillations of levitated liquid alloys using PFLEX during flight experiments on board a Gulfstream II (G-II) airplane operated by DAS, under controlled Po2 and suitable EMF conditions. In these experiments, we obtained density, viscosity and surface tension values of liquid Cu. From these results, we discuss the agreement with previously reported data and the differences in surface oscillations under changing EMF conditions.

  19. Determination of Radiological, Material and Organizational Measures for Reuse of Conditionally Released Materials from Decommissioning

    International Nuclear Information System (INIS)

    Ondra, F.; Vasko, M.; Necas, V.

    2012-01-01

    An important part of nuclear installation decommissioning is the conditional release of materials. The mass of conditionally released materials can significantly influence radioactive waste management and the capacity of the radioactive waste repository. The influence on the total decommissioning cost is also not negligible. Several scenarios for the reuse of conditionally released materials were developed within the CONRELMAT project. Each scenario contains a preparation phase, a construction phase and an operation phase. For each of these phases, radiological, material, organizational and other constraints on the reuse of conditionally released materials need to be determined so that exposure limits for staff and the public are not exceeded. The constraints are determined on the basis of external and internal exposure calculations in models created for selected tasks in the particular scenario phases. The paper presents a methodology developed for determining the subset of these constraints that concerns the external exposure of staff or the public. Values of staff external exposure are also presented to demonstrate that staff and public exposures do not exceed the limits. The methodology comprises a proposal of the following constraints: limiting radionuclide concentrations in conditionally released materials for specific scenarios and nuclide vectors; the specific deployment of conditionally released materials and, where necessary, of shielding materials, staff and the public during the scenario phases; and organizational measures concerning the time spent by staff or the public in the vicinity of conditionally released materials for the individual scenarios and nuclide vectors. The paper further describes the VISIPLAN 3D ALARA calculation planning software tool used to calculate the external exposure of staff and the public for the individual scenarios. Several parallel papers proposed for HND2012 present selected details of the project. (author)

  20. Classification of heart valve condition using acoustic measurements

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Prosthetic heart valves and the many great strides in valve design have been responsible for extending the life spans of many people with serious heart conditions. Even though prosthetic valves are extremely reliable, they are eventually susceptible to the long-term fatigue and structural failure effects expected of mechanical devices operating over long periods of time. The purpose of our work is to classify the condition of in vivo Bjork-Shiley Convexo-Concave (BSCC) heart valves by processing acoustic measurements of heart valve sounds. The structural failure of interest for BSCC valves is called single leg separation (SLS). SLS can occur if the outlet strut cracks and separates from the main structure of the valve. We measure acoustic opening and closing sounds (waveforms) using high sensitivity contact microphones on the patient's thorax. For our analysis, we focus our processing and classification efforts on the opening sounds because they yield direct information about outlet strut condition with minimal distortion caused by energy radiated from the valve disc.

  1. Methodology for application of field rainfall simulator to revise c-factor database for conditions of the Czech Republic

    Science.gov (United States)

    Neumann, Martin; Dostál, Tomáš; Krása, Josef; Kavka, Petr; Davidová, Tereza; Brant, Václav; Kroulík, Milan; Mistr, Martin; Novotný, Ivan

    2016-04-01

    The presentation will introduce a methodology for determining the crop and cover management factor (C-factor) of the universal soil loss equation (USLE) using a field rainfall simulator. The aim of the project is to determine the C-factor value for the different phenophases of the main crops of the Central European region, while also taking into account different agrotechnical methods. Using a field rainfall simulator makes it possible to perform measurements in specific phenophases, which is otherwise difficult due to the variability and unpredictability of natural rainfall. Because of the number of measurements needed, two identical simulators will be used, operated by two independent teams with a coordinated methodology. The methodology will mainly specify the length of simulation, the rainfall intensity, and the sampling technique. The presentation includes a more detailed account of the methods selected. Given the wide range of crops and soils, it is not possible to execute measurements for all possible combinations. We therefore decided to perform the measurements for preselected combinations of soils, crops and agrotechnologies that are the most common in the Czech Republic. During the experiments, the volume of surface runoff and the amount of sediment will be measured in their temporal distribution, together with several other important parameters. The key values of the 3D matrix of combinations of crop, agrotechnique and soil will be determined experimentally; the remaining values will be determined by interpolation or by model analogy. Several methods are used for C-factor calculation from measured experimental data, some of which are not suitable for the type of data gathered here. The presentation will discuss the benefits and drawbacks of these methods, as well as the final design of the method used.

  2. Instrumentation for localized measurements in two-phase flow conditions

    International Nuclear Information System (INIS)

    Neff, G.G.; Averill, R.H.; Shurts, S.W.

    1979-01-01

    Three types of instrumentation that have been developed by EG&G Idaho, Inc., and its predecessor, Aerojet Nuclear Company, at the Idaho National Engineering Laboratory to investigate two-phase flow phenomena in a nuclear reactor at the Loss-of-Fluid Test (LOFT) facility are discussed: (a) a combination drag disc-turbine transducer (DTT), (b) a multibeam nuclear hardened gamma densitometer system, and (c) a conductivity sensitive liquid level transducer (LLT). The DTT obtains data on the complex problem of two-phase flow conditions in the LOFT primary coolant system during a loss-of-coolant experiment (LOCE). The discussion of the DTT describes how a turbine, measuring coolant velocity, and a drag disc, measuring coolant momentum flux, can provide valuable mass flow data. The nuclear hardened gamma densitometer is used to obtain density and flow regime information for two-phase flow in the LOFT primary coolant system during a LOCE. The LLT is used to measure water and steam conditions within the LOFT reactor core during a LOCE. The LLT design and the type of data obtained are described.

  3. The application of conditioning paradigms in the measurement of pain.

    Science.gov (United States)

    Li, Jun-Xu

    2013-09-15

    Pain is a private experience that involves both sensory and emotional components. In animal studies, pain can only be inferred from responses, and therefore the measurement of reflexive responses has dominated the pain literature for nearly a century. It has been argued that although reflexive responses are important to unveil the sensory nature of pain in organisms, pain affect is equally important but has been largely ignored in pain studies, primarily due to the lack of validated animal models. One strategy for beginning to understand pain affect is to use conditioning principles to indirectly reveal the affective condition of pain. This review critically analyzes several procedures that are thought to measure affective learning of pain. The procedures are discussed with regard to current knowledge, their applications, and their advantages and disadvantages in pain research. It is proposed that these procedures should be combined with traditional reflex-based pain measurements in future studies of pain, which could greatly benefit both the understanding of the neural underpinnings of pain and the preclinical assessment of novel analgesics. © 2013 Elsevier B.V. All rights reserved.

  4. Measuring intracellular redox conditions using GFP-based sensors

    DEFF Research Database (Denmark)

    Björnberg, Olof; Ostergaard, Henrik; Winther, Jakob R

    2006-01-01

    Recent years have seen the development of methods for analyzing the redox conditions in specific compartments in living cells. These methods are based on genetically encoded sensors comprising variants of Green Fluorescent Protein in which vicinal cysteine residues have been introduced at solvent-exposed positions. Several mutant forms have been identified in which formation of a disulfide bond between these cysteine residues results in changes of their fluorescence properties. The redox sensors have been characterized biochemically and found to behave differently, both spectroscopically and in terms of redox properties. As genetically encoded sensors they can be expressed in living cells and used for analysis of intracellular redox conditions; however, which parameters are measured depends on how the sensors interact with various cellular redox components. Results of both biochemical and cell...

  5. Prediction of work metabolism from heart rate measurements in forest work: some practical methodological issues.

    Science.gov (United States)

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Auger, Isabelle; Leone, Mario

    2015-01-01

    Individual heart rate (HR) to workload relationships were determined using 93 submaximal step-tests administered to 26 healthy participants attending physical activities in a university training centre (laboratory study) and 41 experienced forest workers (field study). Predicted maximum aerobic capacity (MAC) was compared to MAC measured in a maximal treadmill test (laboratory study) to test the effect of two age-predicted maximum HR equations (220-age and 207-0.7 × age) and two clothing insulation levels (0.4 and 0.91 clo) during the step-test. Work metabolism (WM) estimated from forest work HR was compared against concurrent work V̇O2 measurements while taking into account the HR thermal component. Results show that MAC and WM can be accurately predicted from work HR measurements and the simple regression models developed in this study (1% group mean prediction bias and up to 25% expected prediction bias for a single individual). Clothing insulation had no impact on predicted MAC, nor did the choice of age-predicted maximum HR equation. Practitioner summary: This study sheds light on four practical methodological issues faced by practitioners regarding the use of HR methodology to assess WM in actual work environments. More specifically, it examines the effect of wearing work clothes and of using two different maximum HR prediction equations on the ability of a submaximal step-test to assess MAC, as well as the accuracy of using an individual's step-test HR-to-workload relationship to predict WM from HR data collected during actual work in the presence of thermal stress.
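    The individual calibration idea, fitting a linear HR-to-workload relation from a step-test and then applying it to field HR data, can be sketched as follows (all numbers are hypothetical, not taken from the study):

```python
import numpy as np

# Hypothetical step-test data for one worker: heart rate at each stage
# and the oxygen uptake measured at that stage.
step_hr = np.array([95, 110, 125, 140])       # beats/min
step_vo2 = np.array([0.9, 1.3, 1.7, 2.1])     # VO2, L/min

# Individual linear HR-to-workload calibration.
slope, intercept = np.polyfit(step_hr, step_vo2, 1)

def work_metabolism(field_hr):
    """Estimate VO2 (L/min) from a field heart-rate measurement."""
    return slope * field_hr + intercept

print(round(float(work_metabolism(120)), 2))
```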

  6. Improving inferior vena cava filter retrieval rates with the define, measure, analyze, improve, control methodology.

    Science.gov (United States)

    Sutphin, Patrick D; Reis, Stephen P; McKune, Angie; Ravanzo, Maria; Kalva, Sanjeeva P; Pillai, Anil K

    2015-04-01

    To design a sustainable process to improve optional inferior vena cava (IVC) filter retrieval rates based on the Define, Measure, Analyze, Improve, Control (DMAIC) methodology of the Six Sigma process improvement paradigm. The DMAIC methodology was employed to design and implement a quality improvement project to increase IVC filter retrieval rates at a tertiary academic hospital. Retrievable IVC filters were placed in 139 patients over a 2-year period. The baseline IVC filter retrieval rate (n = 51) was reviewed through a retrospective analysis, and two strategies were devised to improve the filter retrieval rate: (a) mailing of letters to the clinicians and patients of those who had filters placed within 8 months of project implementation (n = 43) and (b) prospective automated scheduling of a clinic visit at 4 weeks after filter placement for all new patients (n = 45). The effectiveness of these strategies was assessed by measuring the filter retrieval rates and the estimated increase in revenue to interventional radiology. IVC filter retrieval rates increased from a baseline of 8% to 40% with the mailing of letters and to 52% with the automated scheduling of a clinic visit 4 weeks after IVC filter placement. The estimated revenue per 100 IVC filters placed increased from $2,249 to $10,518 with the mailing of letters and to $17,022 with the automated scheduling of a clinic visit. Using the DMAIC methodology, a simple and sustainable quality improvement intervention was devised that markedly improved IVC filter retrieval rates in eligible patients. Copyright © 2015 SIR. Published by Elsevier Inc. All rights reserved.
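    A quick arithmetic check of the quoted improvements (the rates and revenue figures are from the abstract; the multipliers are computed here):

```python
# Retrieval rates and estimated revenue per 100 filters placed, per the abstract.
baseline_rate, letters_rate, clinic_rate = 0.08, 0.40, 0.52
rev_baseline, rev_letters, rev_clinic = 2249, 10518, 17022   # dollars

rate_uplift = round(clinic_rate / baseline_rate, 2)   # retrieval-rate multiplier
rev_uplift = round(rev_clinic / rev_baseline, 2)      # revenue multiplier
print(rate_uplift, rev_uplift)
```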

  7. METHODOLOGICAL APPROACHES TO FORMATION OF CONDITIONS OF TRANSITION TO STEADY DEVELOPMENT OF THE CREDIT ORGANIZATIONS OF REGION

    Directory of Open Access Journals (Sweden)

    O.I. Pechonik

    2006-03-01

    Full Text Available The formation of conditions for the transition of credit organizations to sustainable development requires a scientific toolkit of a methodological character: a set of scientific techniques, methods and principles of research, to whose definition this article is devoted. The research showed that the logic and scheme of the scientific analysis of the provision of banking services to the economic system of a region, and of the formation of conditions for the sustainable development of the regional banking system, should: be based on statistical methods using the system of national accounts, supplemented by a SWOT analysis of the banking system; form the conditions for the transition to sustainable development comprehensively and from all sides; and manage the process of transition to sustainable development of the banking system with active state participation, within the framework of creating a socially oriented, plan-market economy. Under this approach, the formation of conditions for the transition of the regional banking system to sustainable development becomes, in our opinion, possible.

  8. Methodology for sample preparation and size measurement of commercial ZnO nanoparticles

    Directory of Open Access Journals (Sweden)

    Pei-Jia Lu

    2018-04-01

    Full Text Available This study discusses strategies of sample preparation to acquire images of sufficient quality for size characterization by scanning electron microscope (SEM), using two commercial ZnO nanoparticles of different surface properties as a demonstration. The central idea is that micrometer-sized aggregates of ZnO in powdered form first need to be broken down to nanosized particles through an appropriate process, generating a nanoparticle dispersion, before being deposited on a flat surface for SEM observation. Analytical tools such as contact angle, dynamic light scattering and zeta potential measurements have been utilized to optimize the procedure for sample preparation and to check the quality of the results. Meanwhile, zeta potential measurements on flat surfaces also provide critical information, saving considerable time and effort in the selection of a suitable substrate on which particles of different properties can be attracted and kept without further aggregation. This simple, low-cost methodology can be generally applied to size characterization of commercial ZnO nanoparticles with limited information from vendors. Keywords: Zinc oxide, Nanoparticles, Methodology

  9. Measuring domestic water use: a systematic review of methodologies that measure unmetered water use in low-income settings.

    Science.gov (United States)

    Tamason, Charlotte C; Bessias, Sophia; Villada, Adriana; Tulsiani, Suhella M; Ensink, Jeroen H J; Gurley, Emily S; Mackie Jensen, Peter Kjaer

    2016-11-01

    To present a systematic review of methods for measuring domestic water use in settings where water meters cannot be used. We systematically searched EMBASE, PubMed, Water Intelligence Online, Water Engineering and Development Center, IEEExplore, Scielo, and Science Direct databases for articles that reported methodologies for measuring water use at the household level where water metering infrastructure was absent or incomplete. A narrative review explored similarities and differences between the included studies and provide recommendations for future research in water use. A total of 21 studies were included in the review. Methods ranged from single-day to 14-consecutive-day visits, and water use recall ranged from 12 h to 7 days. Data were collected using questionnaires, observations or both. Many studies only collected information on water that was carried into the household, and some failed to mention whether water was used outside the home. Water use in the selected studies was found to range from two to 113 l per capita per day. No standardised methods for measuring unmetered water use were found, which brings into question the validity and comparability of studies that have measured unmetered water use. In future studies, it will be essential to define all components that make up water use and determine how they will be measured. A pre-study that involves observations and direct measurements during water collection periods (these will have to be determined through questioning) should be used to determine optimal methods for obtaining water use information in a survey. Day-to-day and seasonal variation should be included. A study that investigates water use recall is warranted to further develop standardised methods to measure water use; in the meantime, water use recall should be limited to 24 h or fewer. © 2016 The Authors. Tropical Medicine & International Health Published by John Wiley & Sons Ltd.
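    The basic bookkeeping behind a per-capita estimate from a household recall survey of carried water can be sketched as follows (container volumes and trip counts below are hypothetical, not from the review):

```python
def liters_per_capita_day(container_volumes_l, trips, household_size, recall_days=1):
    """Total water carried into the household over the recall period,
    divided by the number of people and days."""
    total = sum(v * n for v, n in zip(container_volumes_l, trips))
    return total / (household_size * recall_days)

# Hypothetical 24-hour recall: four 20 L jerrycans and three 10 L buckets
# carried in for a household of five.
print(liters_per_capita_day([20, 10], trips=[4, 3], household_size=5))
```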

  10. Enzymatic Phorbol Esters Degradation using the Germinated Jatropha Curcas Seed Lipase as Biocatalyst: Optimization Process Conditions by Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Avita Kusuma Wardhani

    2016-10-01

    Full Text Available Utilization of Jatropha curcas seed cake is limited by the presence of phorbol esters (PE), the main toxic compounds, which are heat stable. The objective of this research was to optimize the reaction conditions of the enzymatic PE degradation of the defatted Jatropha curcas seed cake (DJSC) using the acetone-dried lipase from germinated Jatropha curcas seeds as a biocatalyst. Response Surface Methodology (RSM) using a three-factor, three-level Box-Behnken design was used to evaluate the effects of the reaction time, the ratio of buffer volume to DJSC, and the ratio of enzyme to DJSC on PE degradation. The results showed that the optimum conditions of PE degradation were 29.33 h, 51.11:6 (mL/g), and 30.10:5 (U/g cake) for the reaction time, the ratio of buffer volume to DJSC, and the ratio of enzyme to DJSC, respectively. The predicted degradation of PE was 98.96% and was not significantly different from the validated data of PE degradation. PE content was 0.035 mg/g, which is lower than the PE content of non-toxic Jatropha seeds. The results indicated that enzymatic degradation of PE might be a promising method for degradation of PE. Copyright © 2016 BCREC GROUP. All rights reserved. Received: 22nd December 2015; Revised: 1st April 2016; Accepted: 14th April 2016. How to Cite: Wardhani, A.K., Hidayat, C., Hastuti, P. (2016). Enzymatic Phorbol Esters Degradation using the Germinated Jatropha Curcas Seed Lipase as Biocatalyst: Optimization Process Conditions by Response Surface Methodology. Bulletin of Chemical Reaction Engineering & Catalysis, 11(3): 346-353. DOI: 10.9767/bcrec.11.3.574.346-353
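
The three-factor, three-level Box-Behnken optimization described above can be sketched numerically. This is a minimal illustration, not the paper's analysis: the design is the standard 15-run coded Box-Behnken layout, and the coefficients in `true_beta` are invented so the least-squares fit can be checked against them.

```python
import numpy as np

def quadratic_features(X):
    """Expand [x1, x2, x3] into the full second-order RSM model terms."""
    x1, x2, x3 = X.T
    return np.column_stack([
        np.ones(len(X)),          # intercept b0
        x1, x2, x3,               # linear terms
        x1**2, x2**2, x3**2,      # squared terms
        x1*x2, x1*x3, x2*x3,      # interaction terms
    ])

def fit_rsm(X, y):
    """Least-squares fit of the quadratic response surface."""
    beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
    return beta

# Coded levels (-1, 0, +1) of a three-factor Box-Behnken design: three
# blocks of edge-midpoint runs plus replicated center points (15 runs).
bbd = np.array([[a, b, 0] for a in (-1, 1) for b in (-1, 1)] +
               [[a, 0, b] for a in (-1, 1) for b in (-1, 1)] +
               [[0, a, b] for a in (-1, 1) for b in (-1, 1)] +
               [[0, 0, 0]] * 3, dtype=float)

# Invented coefficients for a PE-degradation-like response (%).
true_beta = np.array([90.0, 3.0, 1.0, 2.0, -4.0, -2.0, -1.0, 0.5, 0.2, 0.1])
y = quadratic_features(bbd) @ true_beta
beta_hat = fit_rsm(bbd, y)
```

The fitted surface can then be maximized over the coded cube to locate optimum conditions, as RSM does.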

  11. Thermal decomposition of hydroxylamine: Isoperibolic calorimetric measurements at different conditions

    International Nuclear Information System (INIS)

    Adamopoulou, Theodora; Papadaki, Maria I.; Kounalakis, Manolis; Vazquez-Carreto, Victor; Pineda-Solano, Alba; Wang, Qingsheng; Mannan, M.Sam

    2013-01-01

    Highlights: • Hydroxylamine thermal decomposition enthalpy was measured using larger quantities. • The rate at which heat is evolved depends on hydroxylamine concentration. • Decomposition heat is strongly affected by the conditions and the selected baseline. • The need for enthalpy measurements using a larger reactant mass is pinpointed. • Hydroxylamine decomposition in the presence of argon is much faster than in air. -- Abstract: Thermal decomposition of hydroxylamine, NH2OH, was responsible for two serious accidents. However, its reactive behavior and the synergy of factors affecting its decomposition are not well understood. In this work, the global enthalpy of hydroxylamine decomposition has been measured in the temperature range of 130–150 °C employing isoperibolic calorimetry. Measurements were performed in a metal reactor, employing 30–80 ml solutions containing 1.4–20 g of pure hydroxylamine (2.8–40 g of the supplied reagent). The measurements showed that increased concentration or temperature results in higher global enthalpies of reaction per unit mass of reactant. At 150 °C, specific enthalpies as high as 8 kJ per gram of hydroxylamine were measured, although in general they were in the range of 3–5 kJ g−1. The accurate measurement of the generated heat proved to be a cumbersome task because (a) it is difficult to identify the end of decomposition, which, after a fast initial stage, proceeds very slowly, especially at lower temperatures, and (b) the environment of gases affects the reaction rate.

  12. Thermal decomposition of hydroxylamine: Isoperibolic calorimetric measurements at different conditions

    Energy Technology Data Exchange (ETDEWEB)

    Adamopoulou, Theodora [Department of Environmental and Natural Resources Management, University of Western Greece (formerly of University of Ioannina), Seferi 2, Agrinio GR30100 (Greece); Papadaki, Maria I., E-mail: mpapadak@cc.uoi.gr [Department of Environmental and Natural Resources Management, University of Western Greece (formerly of University of Ioannina), Seferi 2, Agrinio GR30100 (Greece); Kounalakis, Manolis [Department of Environmental and Natural Resources Management, University of Western Greece (formerly of University of Ioannina), Seferi 2, Agrinio GR30100 (Greece); Vazquez-Carreto, Victor; Pineda-Solano, Alba [Mary Kay O’Connor Process Safety Center, Artie McFerrin Department of Chemical Engineering, Texas A and M University, College Station, TX 77843 (United States); Wang, Qingsheng [Department of Fire Protection and Safety and Department of Chemical Engineering, Oklahoma State University, 494 Cordell South, Stillwater, OK 74078 (United States); Mannan, M.Sam [Mary Kay O’Connor Process Safety Center, Artie McFerrin Department of Chemical Engineering, Texas A and M University, College Station, TX 77843 (United States)

    2013-06-15

    Highlights: • Hydroxylamine thermal decomposition enthalpy was measured using larger quantities. • The rate at which heat is evolved depends on hydroxylamine concentration. • Decomposition heat is strongly affected by the conditions and the selected baseline. • The need for enthalpy measurements using a larger reactant mass is pinpointed. • Hydroxylamine decomposition in the presence of argon is much faster than in air. -- Abstract: Thermal decomposition of hydroxylamine, NH2OH, was responsible for two serious accidents. However, its reactive behavior and the synergy of factors affecting its decomposition are not well understood. In this work, the global enthalpy of hydroxylamine decomposition has been measured in the temperature range of 130–150 °C employing isoperibolic calorimetry. Measurements were performed in a metal reactor, employing 30–80 ml solutions containing 1.4–20 g of pure hydroxylamine (2.8–40 g of the supplied reagent). The measurements showed that increased concentration or temperature results in higher global enthalpies of reaction per unit mass of reactant. At 150 °C, specific enthalpies as high as 8 kJ per gram of hydroxylamine were measured, although in general they were in the range of 3–5 kJ g−1. The accurate measurement of the generated heat proved to be a cumbersome task because (a) it is difficult to identify the end of decomposition, which, after a fast initial stage, proceeds very slowly, especially at lower temperatures, and (b) the environment of gases affects the reaction rate.

  13. A methodological frame for assessing benzene induced leukemia risk mitigation due to policy measures

    International Nuclear Information System (INIS)

    Karakitsios, Spyros P.; Sarigiannis, Dimosthenis A.; Gotti, Alberto; Kassomenos, Pavlos A.; Pilidis, Georgios A.

    2013-01-01

    The study relies on the development of a methodology for assessing the determinants that comprise the overall leukemia risk due to benzene exposure and how these are affected by outdoor and indoor air quality regulation. An integrated modeling environment was constructed comprising traffic emissions, dispersion models, human exposure models and a coupled internal dose/biology-based dose–response risk assessment model, in order to assess the benzene-imposed leukemia risk, as well as the impact of traffic fleet renewal and smoking bans on these levels. Regarding traffic fleet renewal, several “what if” scenarios were tested. The detailed full-chain methodology was applied in a South-Eastern European urban setting in Greece, and a limited version of the methodology in Helsinki. The non-smoking population runs an average risk equal to 4.1 · 10−5, compared to 23.4 · 10−5 for smokers. The estimated lifetime risk for the examined occupational groups was higher than the one estimated for the general public by 10–20%. Active smoking constitutes a dominant parameter for benzene-attributable leukemia risk, much stronger than any related activity, occupational or not. From the assessment of mitigation policies it was found that the associated leukemia risk in the optimum traffic fleet scenario could be reduced by up to 85% for non-smokers and up to 8% for smokers. By contrast, smoking bans provided smaller gains (7% for non-smokers, 1% for smokers), while for Helsinki, smoking policies were found to be more efficient than traffic fleet renewal. The methodology proposed above provides a general framework for assessing aggregated exposure and the consequent leukemia risk from benzene (incorporating mechanistic data), capturing exposure and internal dosimetry dynamics, and translating changes in exposure determinants to actual changes in population risk, providing a valuable tool for risk management evaluation and consequently for policy support.

  14. A methodological frame for assessing benzene induced leukemia risk mitigation due to policy measures

    Energy Technology Data Exchange (ETDEWEB)

    Karakitsios, Spyros P. [Aristotle University of Thessaloniki, Department of Chemical Engineering, 54124 Thessaloniki (Greece); Sarigiannis, Dimosthenis A., E-mail: denis@eng.auth.gr [Aristotle University of Thessaloniki, Department of Chemical Engineering, 54124 Thessaloniki (Greece); Centre for Research and Technology Hellas (CE.R.T.H.), 57001, Thessaloniki (Greece); Gotti, Alberto [Centre for Research and Technology Hellas (CE.R.T.H.), 57001, Thessaloniki (Greece); Kassomenos, Pavlos A. [University of Ioannina, Department of Physics, Laboratory of Meteorology, GR-45110 Ioannina (Greece); Pilidis, Georgios A. [University of Ioannina, Department of Biological Appl. and Technologies, GR-45110 Ioannina (Greece)

    2013-01-15

    The study relies on the development of a methodology for assessing the determinants that comprise the overall leukemia risk due to benzene exposure and how these are affected by outdoor and indoor air quality regulation. An integrated modeling environment was constructed comprising traffic emissions, dispersion models, human exposure models and a coupled internal dose/biology-based dose–response risk assessment model, in order to assess the benzene-imposed leukemia risk, as well as the impact of traffic fleet renewal and smoking bans on these levels. Regarding traffic fleet renewal, several “what if” scenarios were tested. The detailed full-chain methodology was applied in a South-Eastern European urban setting in Greece, and a limited version of the methodology in Helsinki. The non-smoking population runs an average risk equal to 4.1 · 10−5, compared to 23.4 · 10−5 for smokers. The estimated lifetime risk for the examined occupational groups was higher than the one estimated for the general public by 10–20%. Active smoking constitutes a dominant parameter for benzene-attributable leukemia risk, much stronger than any related activity, occupational or not. From the assessment of mitigation policies it was found that the associated leukemia risk in the optimum traffic fleet scenario could be reduced by up to 85% for non-smokers and up to 8% for smokers. By contrast, smoking bans provided smaller gains (7% for non-smokers, 1% for smokers), while for Helsinki, smoking policies were found to be more efficient than traffic fleet renewal. The methodology proposed above provides a general framework for assessing aggregated exposure and the consequent leukemia risk from benzene (incorporating mechanistic data), capturing exposure and internal dosimetry dynamics, and translating changes in exposure determinants to actual changes in population risk, providing a valuable tool for risk management evaluation and consequently for policy support.

  15. Vibration condition measure instrument of motor using MEMS accelerometer

    Science.gov (United States)

    Chen, Jun

    2018-04-01

    In this work, a novel vibration-condition measurement instrument for motors using a digital micro accelerometer is proposed. In order to reduce the random noise found in the data, a sensor model is established and a Kalman filter (KMF) is developed. From the filtered data, the maximum vibration displacement is calculated by an integration algorithm with the DC bias removed. A high-performance micro controller unit (MCU) is used in the implementation of the controller. The data are transmitted from the sensor to the controller through the IIC digital interface port. The hardware circuits of the sensor and micro controller are designed and tested. With the computational formula of maximum displacement and the FFT, high-precision results for displacement and frequency are obtained. Finally, the paper presents various experimental results to prove that this instrument is suitable for application in electrical motor vibration measurement.
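
The processing chain described above (Kalman filtering of accelerometer samples, DC-bias removal, double integration to displacement) can be sketched as follows. This is an illustrative reconstruction, not the instrument's firmware: the filter parameters `q` and `r`, the 50 Hz test vibration, and the noise level are all assumptions.

```python
import numpy as np

def kalman_smooth(z, q=1e-2, r=1e-2):
    """Scalar random-walk Kalman filter over accelerometer samples z."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        p += q                     # predict step (process noise q)
        k = p / (p + r)            # Kalman gain (measurement noise r)
        x += k * (zi - x)          # update with innovation
        p *= (1.0 - k)
        out[i] = x
    return out

def displacement(accel, fs):
    """Double-integrate acceleration to displacement, removing DC bias."""
    a = accel - accel.mean()       # remove accelerometer DC offset
    dt = 1.0 / fs
    v = np.cumsum(a) * dt          # velocity
    v -= v.mean()                  # suppress integration drift
    return np.cumsum(v) * dt       # displacement

fs = 1000.0                        # assumed sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
f0 = 50.0                          # assumed vibration frequency (Hz)
true_disp = 1e-3 * np.sin(2 * np.pi * f0 * t)   # 1 mm amplitude
accel = -(2 * np.pi * f0) ** 2 * true_disp      # a = -w^2 x for a sinusoid
noisy = accel + np.random.default_rng(0).normal(0.0, 0.01, t.size)
est = displacement(kalman_smooth(noisy), fs)
```

The maximum of `est` approximates the 1 mm vibration amplitude; a real implementation would also band-pass filter before integrating to bound drift.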

  16. Non-pharmacological sleep interventions for youth with chronic health conditions: a critical review of the methodological quality of the evidence.

    Science.gov (United States)

    Brown, Cary A; Kuo, Melissa; Phillips, Leah; Berry, Robyn; Tan, Maria

    2013-07-01

    Restorative sleep is clearly linked with well-being in youth with chronic health conditions. This review addresses the methodological quality of non-pharmacological sleep intervention (NPSI) research for youth with chronic health conditions. The Guidelines for Critical Review (GCR) and the Effective Public Health Practice Project Quality Assessment Tool (EPHPP) were used in the review. The search yielded 31 behavioural and 10 non-behavioural NPSI studies for review. Most studies had fewer than 10 participants. Autism spectrum disorders, attention deficit/hyperactivity disorders, Down syndrome, intellectual disabilities, and visual impairments were the conditions that most studies focused upon. The global EPHPP scores indicated that most reviewed studies were of weak quality; only 7 studies were rated as moderate, and none were strong. Studies rated as weak quality frequently had recruitment issues, non-blinded participants/parents and/or researchers, and outcome measures without sound psychometric properties. Little conclusive evidence exists for NPSIs in this population. However, NPSIs are widely used, and these preliminary studies demonstrate promising outcomes. There have not been any published reports of negative outcomes that would preclude application of the different NPSIs on a case-by-case basis guided by clinical judgement. These findings support the need for more rigorous, applied research. • Methodological quality of sleep research: disordered sleep (DS) in youth with chronic health conditions is pervasive and is important to rehabilitation therapists because DS contributes to significant functional problems across psychological, physical and emotional domains. • Rehabilitation therapists and other healthcare providers receive little education about disordered sleep and are largely unaware of the range of assessment and non-pharmacological intervention strategies that exist. An evidence-based website of pediatric sleep resources can be found at http

  17. A methodology for the measure of secondary homes tourist flows at municipal level

    Directory of Open Access Journals (Sweden)

    Andrea Guizzardi

    2007-10-01

    Full Text Available The present public statistical system does not provide information concerning second-home tourist flows at the sub-regional level. This lack limits local administrations' capabilities to take decisions on environmental, territorial and productive development, and hampers regional governments in the fair allocation of public financing. In this work, this information gap is addressed by proposing an indirect estimation methodology. Municipal electric power consumption is proposed as an indicator of stays in secondary homes. The indicator is connected to tourism flows by considering both measurement errors and factors modifying the local power demand. The application to the Emilia-Romagna regional case allows verification of the results' coherence with official statistics, as well as assessment of the municipalities' tourist vocation.
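
The core of such an indirect estimation can be sketched very simply: treat observed electricity consumption as a baseline plus a per-overnight-stay component, and invert that relation. All numbers and the helper below are invented for illustration; the paper's actual model additionally corrects for measurement errors and other drivers of local power demand.

```python
def estimate_nights(kwh_observed, kwh_baseline, kwh_per_night):
    """Invert consumption = baseline + per_night * nights (floored at 0)."""
    return max(0.0, (kwh_observed - kwh_baseline) / kwh_per_night)

# Hypothetical municipality: 20,000 kWh observed in second homes against an
# 8,000 kWh no-occupancy baseline, at an assumed 4 kWh per occupied night.
nights = estimate_nights(20_000.0, 8_000.0, 4.0)
```

With these assumed figures the municipality would be credited with 3,000 tourist nights in secondary homes for the period.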

  18. Providing hierarchical approach for measuring supply chain performance using AHP and DEMATEL methodologies

    Directory of Open Access Journals (Sweden)

    Ali Najmi

    2010-06-01

    Full Text Available Measuring the performance of a supply chain is normally a function of various parameters. Such a problem often involves a multiple-criteria decision-making (MCDM) problem in which different criteria need to be defined and calculated properly. During the past two decades, the analytic hierarchy process (AHP) and DEMATEL have been among the most popular MCDM approaches for prioritizing various attributes. This paper uses a new methodology, a combination of AHP and DEMATEL, to rank various parameters affecting the performance of the supply chain. DEMATEL is used for understanding the relationships between comparison metrics, and AHP is used for the integration to provide a value for the overall performance.
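
The AHP step of such a methodology can be sketched as follows: priority weights for performance criteria are derived as the principal eigenvector of a pairwise-comparison matrix, with Saaty's consistency ratio as a sanity check. The criteria and matrix entries below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def ahp_weights(A, iters=100):
    """Principal-eigenvector priorities via power iteration."""
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()               # keep weights normalized to 1
    return w

def consistency_ratio(A, w):
    """Saaty's CR = CI / RI; CR < 0.1 is conventionally acceptable."""
    n = A.shape[0]
    lam = (A @ w / w).mean()       # estimate of lambda_max
    ci = (lam - n) / (n - 1)       # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random indices
    return ci / ri

# Pairwise comparisons among cost, delivery reliability, flexibility
# (Saaty's 1-9 scale; entries below the diagonal are reciprocals).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w = ahp_weights(A)
cr = consistency_ratio(A, w)
```

In the combined approach, DEMATEL's influence analysis would inform how these comparison metrics relate before the AHP aggregation produces the overall performance value.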

  19. Code coverage measurement methodology for MMI software of safety-class I and C system

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Hyung; Jung, Beom Young; Choi, Seok Joo [Suresofttech, Seoul (Korea, Republic of)

    2016-10-15

    MMI (Man-Machine Interface) software of the safety-class instrumentation and control systems used in nuclear power plants carries out important functions, such as displaying safety-related information, transmitting commands to other systems, and changing setpoints. As it has been recognized that the reliability of MMI software plays an important role in the safe operation of nuclear power plants, regulatory standards have been strengthened accordingly. This strengthening has also affected how software testing is performed; current regulations require the measurement of code coverage against legal standards. In this paper, we examine the problems of the conventional method used for measuring code coverage, such as the limitations and low efficiency of existing test-coverage measurement on the MMI software used in nuclear power instrumentation and control systems, and we propose a new coverage measurement method as a solution. Applying the proposed Top-Down approach mitigates the problems of the existing test-coverage measurement methods and makes it possible to achieve the desired coverage objectives. Of course, it is still necessary to collect more cases, and the methodology should be systematized based on them. If its efficiency and reliability are later ensured through application in many cases, the method may be used to ensure code coverage not only for nuclear power instrumentation and control software, but also for software in the many areas where a GUI is utilized.

  20. Exhaled nitric oxide measurements in the first 2 years of life: methodological issues, clinical and epidemiological applications

    Directory of Open Access Journals (Sweden)

    de Benedictis Fernando M

    2009-07-01

    Full Text Available Abstract Fractional exhaled nitric oxide (FeNO is a useful tool to diagnose and monitor eosinophilic bronchial inflammation in asthmatic children and adults. In children younger than 2 years of age FeNO has been successfully measured both with the tidal breathing and with the single breath techniques. However, there are a number of methodological issues that need to be addressed in order to increase the reproducibility of the FeNO measurements within and between infants. Indeed, a standardized method to measure FeNO in the first 2 years of life would be extremely useful in order to meaningfully interpret FeNO values in this age group. Several factors related to the measurement conditions have been found to influence FeNO, such as expiratory flow, ambient NO and nasal contamination. Furthermore, the exposure to pre- and postnatal risk factors for respiratory morbidity has been shown to influence FeNO values. Therefore, these factors should always be assessed and their association with FeNO values in the specific study population should be evaluated and, eventually, controlled for. There is evidence consistently suggesting that FeNO is increased in infants with family history of atopy/atopic diseases and in infants with recurrent wheezing. These findings could support the hypothesis that eosinophilic bronchial inflammation is present at an early stage in those infants at increased risk of developing persistent respiratory symptoms and asthma. Furthermore, it has been shown that FeNO measurements could represent a useful tool to assess bronchial inflammation in other airways diseases, such as primary ciliary dyskinesia, bronchopulmonary dysplasia and cystic fibrosis. 
Further studies are needed in order to improve the reproducibility of the measurements, and large prospective studies are warranted in order to evaluate whether FeNO values measured in the first years of life can predict the future development of asthma or other respiratory diseases.

  1. Study on fermentation conditions of palm juice vinegar by response surface methodology and development of a kinetic model

    Directory of Open Access Journals (Sweden)

    S. Ghosh

    2012-09-01

    Full Text Available Natural vinegar is one of the fermented products with some potential from a nutraceutical standpoint. The present study is an optimization of the fermentation conditions for vinegar production from palm juice (Borassus flabellifer) wine, this biochemical process being aided by Acetobacter aceti (NCIM 2251). The physical parameters of the fermentation, namely temperature, pH, and time, were investigated by Response Surface Methodology (RSM) with a 2³ factorial central composite design (CCD). The optimum pH, temperature and time were 5.5, 30 °C and 72 h for the highest yield of acetic acid (68.12 g/L). The quadratic model equation had an R² value of 0.992. RSM played an important role in elucidating the basic mechanisms in a complex situation, thus providing better process control by maximizing acetic acid production with respect to the physical parameters. At the optimized conditions of temperature, pH and time, and with the help of mathematical kinetic equations, the Monod specific growth rate (µmax = 0.021 h−1), the maximum Logistic specific growth rate (µ′max = 0.027 h−1) and various other kinetic parameters were calculated, which helped in validation of the experimental data. Therefore, the established kinetic models may be applied for the production of natural vinegar by fermentation of low-cost palm juice.
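
The Logistic growth model used in such kinetic validation has a closed-form solution that is easy to evaluate. The sketch below takes the Logistic specific growth rate from the abstract, while the inoculum and maximum biomass values are assumptions chosen only to illustrate the curve shape.

```python
import math

def logistic_biomass(t, x0, x_max, mu_max):
    """Closed-form Logistic growth: X' = mu_max * X * (1 - X / X_max)."""
    return x_max / (1.0 + (x_max / x0 - 1.0) * math.exp(-mu_max * t))

mu_max = 0.027          # h^-1, Logistic specific growth rate from the study
x0, x_max = 0.1, 5.0    # g/L, assumed inoculum and maximum biomass

# Biomass at the 72 h fermentation endpoint reported in the abstract.
x_72h = logistic_biomass(72.0, x0, x_max, mu_max)
```

Fitting this expression (and the analogous Monod relation) to measured biomass over time is what allows the experimental data to be validated against the kinetic parameters.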

  2. Optimization of the Conditions for Extraction of Serine Protease from Kesinai Plant (Streblus asper Leaves Using Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Md. Zaidul Islam Sarker

    2011-11-01

    Full Text Available Response surface methodology (RSM) using a central composite design (CCD) was employed to optimize the conditions for extraction of serine protease from kesinai (Streblus asper) leaves. The effect of the independent variables, namely temperature (42.5–47.5 °C, X1), mixing time (2–6 min, X2), buffer content (0–80 mL, X3) and buffer pH (4.5–10.5, X4), on the specific activity, storage stability, temperature stability and oxidizing-agent stability of serine protease from kesinai leaves was investigated. The study demonstrated that use of the optimum temperature, mixing time, buffer content and buffer pH conditions protected serine protease during extraction, as demonstrated by low activity loss. It was found that the interaction effect of mixing time and buffer content improved the serine protease stability, and that the buffer pH had the most significant effect on the specific activity of the enzyme. The most desirable conditions of 42.5 °C temperature, 4 min mixing time, and 40 mL buffer at pH 7.5 were established for serine protease extraction from kesinai leaves.

  3. Abnormal condition and events analysis for instrumentation and control systems. Volume 1: Methodology for nuclear power plant digital upgrades. Final report

    International Nuclear Information System (INIS)

    McKemy, S.; Marcelli, M.; Boehmer, N.; Crandall, D.

    1996-01-01

    The ACES project was initiated to identify a cost-effective methodology for addressing abnormal conditions and events (ACES) in digital upgrades to nuclear power plant systems, as introduced by IEEE Standard 7-4.3.2-1993. Several methodologies and techniques currently in use in the defense, aerospace, and other communities for the assurance of digital safety systems were surveyed, and although several were shown to possess desirable qualities, non sufficiently met the needs of the nuclear power industry. This report describes a tailorable methodology for performing ACES analysis that is based on the more desirable aspects of the reviewed methodologies and techniques. The methodology is applicable to both safety- and non-safety-grade systems, addresses hardware, software, and system-level concerns, and can be applied in either a lifecycle or post-design timeframe. Employing this methodology for safety systems should facilitate the digital upgrade licensing process

  4. Modeling of the effect of freezer conditions on the principal constituent parameters of ice cream by using response surface methodology.

    Science.gov (United States)

    Inoue, K; Ochi, H; Taketsuka, M; Saito, H; Sakurai, K; Ichihashi, N; Iwatsuki, K; Kokubo, S

    2008-05-01

    A systematic analysis was carried out by using response surface methodology to create a quantitative model of the synergistic effects of conditions in a continuous freezer [mix flow rate (L/h), overrun (%), cylinder pressure (kPa), drawing temperature (°C), and dasher speed (rpm)] on the principal constituent parameters of ice cream [rate of fat destabilization (%), mean air cell diameter (µm), and mean ice crystal diameter (µm)]. A central composite face-centered design was used for this study. Thirty-one combinations of the 5 above-mentioned freezer conditions were designed (including replicates at the center point), and ice cream samples were manufactured and examined in a continuous freezer under the selected conditions. The responses were the 3 variables given above. A quadratic model was constructed, with the freezer conditions as the independent variables and the ice cream characteristics as the dependent variables. The coefficients of determination (R²) were greater than 0.9 for all 3 responses, but Q², the index used here for the capability of the model for predicting future observed values of the responses, was negative for both the mean ice crystal diameter and the mean air cell diameter. Therefore, pruned models were constructed by removing terms that had contributed little to the prediction in the original model and by refitting the regression model. It was demonstrated that these pruned models provided good fits to the data in terms of R², Q², and ANOVA. The effects of freezer conditions were expressed quantitatively in terms of the 3 responses. The drawing temperature (°C) was found to have a greater effect on ice cream characteristics than any of the other factors.
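
The distinction between R² (goodness of fit) and Q² (predictive ability) that drives the model pruning above can be made concrete with leave-one-out PRESS residuals. The data and model below are synthetic illustrations, not the ice-cream measurements.

```python
import numpy as np

def r2_q2(X, y):
    """Return (R2, Q2) for an ordinary least-squares linear model.

    Q2 = 1 - PRESS / SS_tot, where PRESS uses the closed-form
    leave-one-out residuals e_i / (1 - h_ii) from the hat matrix.
    """
    H = X @ np.linalg.pinv(X)                 # hat (projection) matrix
    resid = y - H @ y
    press = np.sum((resid / (1.0 - np.diag(H))) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - np.sum(resid ** 2) / ss_tot
    q2 = 1.0 - press / ss_tot
    return r2, q2

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=(30, 1))
X = np.column_stack([np.ones(30), x, x ** 2])          # quadratic model
y = 2.0 + 3.0 * x[:, 0] - x[:, 0] ** 2 + rng.normal(0.0, 0.1, 30)
r2, q2 = r2_q2(X, y)
```

A model that fits well (high R²) but contains uninformative terms tends to show a much lower, possibly negative, Q²; pruning those terms raises Q², which is exactly the behaviour the study reports.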

  5. Force Measurements on a VAWT Blade in Parked Conditions

    Directory of Open Access Journals (Sweden)

    Anders Goude

    2017-11-01

    Full Text Available The forces on a turbine at extreme wind conditions when the turbine is parked constitute one of the most important design cases for the survivability of a turbine. In this work, the forces on a blade and its support arms have been measured on a 12 kW straight-bladed vertical axis wind turbine at an open site. Two cases are tested: one during electrical braking of the turbine, which allows it to rotate slowly, and one with the turbine mechanically fixed with the leading edge of the blade facing the main wind direction. The variation of the forces with wind direction is investigated, and significant variations are observed. The measurements show that for the fixed case, when subjected to the same wind speed, the forces are lower when the blade faces the wind direction. The results also show that, due to the lower forces at this particular wind direction, the average forces for the fixed blade are notably lower. Hence, it is possible to reduce the forces on a turbine blade simply by taking the dominating wind direction into account when the turbine is parked. The measurements also show that a positive torque is generated by the blade for most wind directions, which causes the turbine to rotate in the electrically-braked case. These rotations will cause increased fatigue loads on the turbine blade.

  6. A methodology for supporting decisions on the establishment of protective measures after severe nuclear accidents. Final report

    International Nuclear Information System (INIS)

    Papazoglou, I.A.; Kollas, J.G.

    1994-06-01

    Full text: The objective of this report is to demonstrate the use of a methodology supporting decisions on protective measures following severe nuclear accidents. A multicriteria decision analysis approach is recommended, in which value tradeoffs are postponed until the very last stage of the decision process. Efficient frontiers are used to exclude all technically inferior solutions and present the decision maker with all non-dominated solutions. A choice among these solutions implies a value trade-off among the multiple criteria. An interactive computer package has been developed in which the decision maker can choose a point on the efficient frontier in the consequence space and immediately see the alternative in the decision space resulting in the chosen consequences. The methodology is demonstrated through an application to the choice among possible protective measures in contaminated areas of the former USSR after the Chernobyl accident. Two distinct cases are considered: first, a decision is to be made only on the basis of the level of soil contamination with Cs-137 and the total cost of the chosen protective policy; next, the decision is based on the geographic dimension of the contamination and the total cost. Three alternative countermeasure actions are considered for population segments living on soil contaminated at a certain level or in a specific geographic region: (a) relocation of the population; (b) improvement of the living conditions; and (c) no countermeasures at all. This is the final deliverable of the CEC-CIS Joint Study Project 2, Task 5: Decision-Aiding-System for Establishing Intervention Levels, performed under Contracts COSU-CT91-0007 and COSU-CT92-0021 with the Commission of the European Communities through CEPN. (author)
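
The efficient-frontier step described above amounts to discarding every alternative that another alternative dominates on all criteria. A minimal sketch for two criteria to be minimized (residual risk and total cost) follows; the candidate countermeasures and their numbers are invented for illustration.

```python
def efficient_frontier(options):
    """Return non-dominated (risk, cost) pairs, both criteria minimized.

    b dominates a if b is no worse on both criteria and strictly
    better on at least one.
    """
    def dominated(a, b):
        return (b[0] <= a[0] and b[1] <= a[1]) and (b[0] < a[0] or b[1] < a[1])
    return [a for a in options
            if not any(dominated(a, b) for b in options if b is not a)]

# (label, residual collective dose, cost in M$) for candidate measures.
options = [
    ("no action",          10.0,  0.0),
    ("improve conditions",  6.0,  2.0),
    ("partial relocation",  3.0,  8.0),
    ("full relocation",     1.0, 20.0),
    ("mixed policy",        7.0,  9.0),   # dominated by "improve conditions"
]
frontier = efficient_frontier([(r, c) for _, r, c in options])
```

The decision maker then only trades off among the surviving points, which is where the postponed value judgement enters.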

  7. Feasibility, strategy, methodology, and analysis of probe measurements in plasma under high gas pressure

    Science.gov (United States)

    Demidov, V. I.; Koepke, M. E.; Kurlyandskaya, I. P.; Malkov, M. A.

    2018-02-01

    This paper reviews existing theories for interpreting probe measurements of electron distribution functions (EDFs) at high gas pressure, when collisions of electrons with atoms and/or molecules near the probe are pervasive. An explanation of whether or not the measurements are realizable and reliable, an enumeration of the most common sources of measurement error, and an outline of proper probe-experiment design elements that inherently limit or avoid error are presented. Additionally, we describe the recently expanded range of plasma conditions compatible with EDF measurement, including applications of large wall-probe plasma diagnostics. This summary of the authors' experiences, gained over decades of practicing and developing probe diagnostics, is intended to inform, guide, suggest, and detail the advantages and disadvantages of probe application in plasma research.
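
For orientation, the classical collisionless starting point that the high-pressure theories reviewed above generalize is the Druyvesteyn relation: the EDF is proportional to the second derivative of the probe current with respect to the retarding voltage. The sketch below applies a numerical second derivative to a synthetic Maxwellian retarding-current characteristic; the temperature and voltage grid are assumptions.

```python
import numpy as np

def druyvesteyn_edf(V, I):
    """Second derivative d2I/dV2, proportional to the EDF up to constants."""
    return np.gradient(np.gradient(I, V), V)

V = np.linspace(0.0, 10.0, 201)      # retarding potential (V)
Te = 2.0                             # assumed electron temperature (eV)
I = np.exp(-V / Te)                  # idealized Maxwellian electron current
edf = druyvesteyn_edf(V, I)
```

For an exponential characteristic the second derivative is again exponential (I/Te²), so a straight line on a semilog plot of `edf` confirms the Maxwellian; at high pressure, collisional corrections to this inversion are exactly what the reviewed theories supply.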

  8. Quantification of Greenhouse Gas Emission Rates from strong Point Sources by Airborne IPDA-Lidar Measurements: Methodology and Experimental Results

    Science.gov (United States)

    Ehret, G.; Amediek, A.; Wirth, M.; Fix, A.; Kiemle, C.; Quatrevalet, M.

    2016-12-01

    We report on a new method and on the first demonstration to quantify emission rates from strong greenhouse gas (GHG) point sources using airborne Integrated Path Differential Absorption (IPDA) Lidar measurements. In order to build trust in the emission rates self-reported by countries, verification against independent monitoring systems is a prerequisite to check the reported budget. A significant fraction of the total anthropogenic emission of CO2 and CH4 originates from localized strong point sources such as large energy production sites or landfills. Neither is monitored with sufficient accuracy by the current observation system. There is a debate whether airborne remote sensing could fill this gap by inferring those emission rates from budgeting or from Gaussian plume inversion approaches, whereby measurements of the GHG column abundance beneath the aircraft can be used to constrain inverse models. In contrast to passive sensors, the use of an active instrument like CHARM-F for such emission verification measurements is new. CHARM-F is a new airborne IPDA Lidar devised for the German research aircraft HALO for the simultaneous measurement of the column-integrated dry-air mixing ratios of CO2 and CH4, commonly denoted as XCO2 and XCH4, respectively. It has successfully been tested in a series of flights over Central Europe to assess its performance under various reflectivity conditions and in strongly varying topography like the Alps. The analysis of a methane plume measured in the crosswind direction of a coal mine ventilation shaft revealed an instantaneous emission rate of 9.9 ± 1.7 kt CH4 yr-1. We discuss the methodology of our point-source estimation approach and give an outlook on the CoMet field experiment scheduled in 2017 for the measurement of anthropogenic and natural GHG emissions by a combination of active and passive remote sensing instruments on research aircraft.
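    The budgeting idea behind such a point-source estimate can be sketched as a cross-sectional flux integral: the excess column density measured across the plume is integrated along the flight track and multiplied by the crosswind speed. The function and all numbers below are illustrative assumptions, not the CHARM-F retrieval:

```python
def emission_rate(column_enhancement, dx, wind_speed):
    """Cross-sectional flux method (illustrative sketch).

    column_enhancement : excess column density at each sounding, kg/m^2
    dx                 : along-track spacing of the soundings, m
    wind_speed         : wind component perpendicular to the transect, m/s
    Returns the inferred source strength in kg/s.
    """
    line_integral = sum(c * dx for c in column_enhancement)  # kg/m
    return line_integral * wind_speed                        # kg/s

# Hypothetical plume crossing: five soundings 100 m apart, 4 m/s crosswind
rate = emission_rate([0.0, 2e-6, 5e-6, 2e-6, 0.0], 100.0, 4.0)  # 3.6e-3 kg/s
```

    A real retrieval would additionally convert the measured mixing-ratio enhancement (XCH4) to a mass column using the dry-air column, and propagate wind and background uncertainties into the error bar.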

  9. Comparison of ventilation measurement techniques in real conditions

    International Nuclear Information System (INIS)

    Jilek, K.; Tomasek, L.

    2001-01-01

    Ventilation and radon entry rate are the only two quantities that influence indoor radon behaviour. In order to investigate the effects of ventilation and radon entry rate on indoor radon behaviour separately, the Institute was equipped with a continuous carbon monoxide (CO) monitor; carbon monoxide serves as a tracer gas for determining the air exchange rate. Using a continuous radon monitor and the continuous CO monitor at the same time makes it possible to measure the radon entry rate and the air exchange rate separately. The lecture summarizes the results of a comparison of the following three basic methods, performed in real living conditions, for determining the air exchange rate: the constant-decay method; the constant-tracer method; and steady-rate tracer injection. 222 Rn and CO were used as tracer gases. (authors)
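    The constant-decay and steady-injection methods mentioned above reduce to short formulas; a minimal sketch, with hypothetical CO concentrations and volumes:

```python
import math

def ach_decay(c0, ct, hours):
    """Constant-decay method: C(t) = C0 * exp(-lam * t), so the air
    exchange rate is lam = ln(C0 / C(t)) / t, in air changes per hour."""
    return math.log(c0 / ct) / hours

def ach_injection(q, volume, c_ss):
    """Steady-injection method: at steady state the injection rate q
    balances removal, q = lam * V * C_ss, hence lam = q / (V * C_ss)."""
    return q / (volume * c_ss)

# Hypothetical CO decay from 10 ppm to 5 ppm over 2 h
ach = ach_decay(10.0, 5.0, 2.0)   # ≈ 0.35 air changes per hour
```

    Both estimators assume a well-mixed single zone and a constant exchange rate over the measurement interval, which is why field comparisons such as the one above are needed.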

  10. Comparison of fungal spores concentrations measured with wideband integrated bioaerosol sensor and Hirst methodology

    Science.gov (United States)

    Fernández-Rodríguez, S.; Tormo-Molina, R.; Lemonis, N.; Clot, B.; O'Connor, D. J.; Sodeau, John R.

    2018-02-01

    The aim of this work was to provide both a comparison of traditional and novel methodologies for airborne spore detection (i.e. the Hirst Burkard trap and WIBS-4) and the first quantitative study of airborne fungal concentrations in Payerne (Western Switzerland), as well as their relation to meteorological parameters. From the traditional method (Hirst trap and microscope analysis), sixty-three propagule types (spores, sporangia and hyphae) were identified and the average spore concentration measured over the full period amounted to 4145 ± 263.0 spores/m3. Maximum values were reached on July 19th and on August 6th. Twenty-six spore types reached average levels above 10 spores/m3. Airborne fungal propagules in Payerne showed a clear seasonal pattern, increasing from low values in early spring to maxima in summer. Daily average concentrations above 5000 spores/m3 were almost constant in summer from mid-June onwards. Weather parameters played a relevant role in determining the observed spore concentrations. Coniferous forest, dominant in the surroundings, may be a relevant source of airborne fungal propagules, as their distribution and the predominant wind directions are consistent with this origin. The comparison between the two methodologies used in this campaign showed remarkably consistent patterns throughout the campaign. A correlation coefficient of 0.9 (CI 0.76-0.96) was seen between the two (Hirst trap and WIBS-4) over the time period at daily resolution. This apparent co-linearity was seen to fall away once increased resolution was employed. However, at higher resolutions, upon removal of Cladosporium species from the total fungal concentrations (Hirst trap), an increased correlation coefficient was again noted between the two instruments (R = 0.81 with confidence intervals of 0.74 and 0.86).
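    A correlation coefficient quoted with a confidence interval of this kind is commonly obtained via the Fisher z-transform; a sketch with synthetic daily totals (not the campaign's data):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def fisher_ci(r, n, z=1.96):
    """Approximate 95% confidence interval for r via the Fisher
    z-transform: atanh(r) is roughly normal with SE 1/sqrt(n - 3)."""
    zr = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(zr - z * se), math.tanh(zr + z * se)

# Synthetic daily spore totals from two co-located instruments
a = [100, 220, 150, 300, 280, 90, 130, 240, 310, 200]
b = [110, 200, 160, 290, 300, 85, 140, 230, 320, 190]
r = pearson_r(a, b)
lo, hi = fisher_ci(r, len(a))
```

    The asymmetric interval (0.76-0.96 around 0.9) in the abstract is characteristic of this transform, since the sampling distribution of r is skewed near ±1.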

  11. A novel methodology for online measurement of thoron using Lucas scintillation cell

    International Nuclear Information System (INIS)

    Eappen, K.P.; Sapra, B.K.; Mayya, Y.S.

    2007-01-01

    The use of the Lucas scintillation cell (LSC) technique for thoron estimation requires a modified methodology as opposed to radon estimation. While in the latter the α counting is performed after a delay period varying from a few hours to a few days, in the case of thoron estimation the α counting has to be carried out immediately after sampling owing to the short half-life of thoron (55 s). This is best achieved with an on-line LSC sampling and counting system. However, the half-life of the thoron decay product 212 Pb being 10.6 h, background accumulates in the LSC during online measurements, and hence subsequent use of the LSC is erroneous unless the normal background level is restored in the cell. This problem can be circumvented by correcting for the average background counts accumulated during the counting period, which may be estimated theoretically. In this study, a methodology has been developed to estimate the true counts due to thoron. A linear regression between the counts obtained experimentally and the fractional decay in regular intervals of time is used to obtain the actual thoron concentration. The novelty of this approach is that the background of the cell is automatically estimated as the intercept of the regression graph. The results obtained by this technique compare well with the two-filter method and with the thoron concentration produced from a standard thoron source. However, the LSC as such cannot be used for environmental samples, because its minimum detection level is comparable with the thoron concentrations prevailing in the normal atmosphere.
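    The regression step can be sketched as follows. The decay fractions and counts are hypothetical, chosen only to show how the cell background appears as the intercept of the fitted line:

```python
def linear_fit(x, y):
    """Ordinary least squares for y = intercept + slope * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx, slope

# Model: counts_i = background + N_thoron * f_i, where f_i is the
# fraction of sampled thoron atoms decaying in counting interval i.
f = [0.60, 0.25, 0.10, 0.04]            # hypothetical decay fractions
counts = [632, 280, 130, 70]            # include a ~constant 212Pb background
background, thoron_counts = linear_fit(f, counts)
# intercept ≈ 30 counts of background, slope ≈ 1000 true thoron counts
```

    With the true counts recovered from the slope, the thoron concentration follows from the cell volume and detection efficiency, which are omitted here.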

  12. Measurement of Two-Phase Flow Characteristics Under Microgravity Conditions

    Science.gov (United States)

    Keshock, E. G.; Lin, C. S.; Edwards, L. G.; Knapp, J.; Harrison, M. E.; Xhang, X.

    1999-01-01

    This paper describes the technical approach and initial results of a test program for studying two-phase annular flow under the simulated microgravity conditions of KC-135 aircraft flights. A helical coil flow channel orientation was utilized in order to circumvent the restrictions normally associated with drop tower or aircraft flight tests of two-phase flow, namely spatial restrictions preventing channel lengths of sufficient size to accurately measure pressure drops. Additionally, the helical coil geometry is of interest in itself, considering that operating in a microgravity environment vastly simplifies the two-phase flows occurring in coiled flow channels under 1-g conditions for virtually any orientation. Pressure drop measurements were made across four stainless steel coil test sections, having a range of inside tube diameters (0.95 to 1.9 cm), coil diameters (25 - 50 cm), and length-to-diameter ratios (380 - 720). High-speed video photographic flow observations were made in the transparent straight sections immediately preceding and following the coil test sections. A transparent coil of tygon tubing of 1.9 cm inside diameter was also used to obtain flow visualization information within the coil itself. Initial test data have been obtained from one set of KC-135 flight tests, along with benchmark ground tests. Preliminary results appear to indicate that accurate pressure drop data are obtainable using a helical coil geometry, and that these may be related to straight-channel flow behavior. Also, video photographic results appear to indicate that the observed slug-annular flow regime transitions agree quite reasonably with the Dukler microgravity map.

  13. Does Methodological Guidance Produce Consistency? A Review of Methodological Consistency in Breast Cancer Utility Value Measurement in NICE Single Technology Appraisals.

    Science.gov (United States)

    Rose, Micah; Rice, Stephen; Craig, Dawn

    2017-07-05

    Since 2004, National Institute for Health and Care Excellence (NICE) methodological guidance for technology appraisals has emphasised a strong preference for using the validated EuroQol 5-Dimensions (EQ-5D) quality-of-life instrument, measuring patient health status from patients or carers, and using the general public's preference-based valuation of different health states when assessing health benefits in economic evaluations. The aim of this study was to review all NICE single technology appraisals (STAs) for breast cancer treatments to explore consistency in the use of utility scores in light of NICE methodological guidance. A review of all published breast cancer STAs was undertaken using all publicly available STA documents for each included assessment. Utility scores were assessed for consistency with NICE-preferred methods and original data sources. Furthermore, academic assessment group work undertaken during the STA process was examined to evaluate the emphasis of NICE-preferred quality-of-life measurement methods. Twelve breast cancer STAs were identified, and many STAs used evidence that did not follow NICE's preferred utility score measurement methods. Recent STA submissions show companies using EQ-5D and mapping. Academic assessment groups rarely emphasized NICE-preferred methods, and queries about preferred methods were rare. While there appears to be a trend in recent STA submissions towards following NICE methodological guidance, historically STA guidance in breast cancer has generally not used NICE's preferred methods. Future STAs in breast cancer and reviews of older guidance should ensure that utility measurement methods are consistent with the NICE reference case to help produce consistent, equitable decision making.

  14. An evaluation of analysis methodologies for predicting cleavage arrest of a deep crack in an RPV subjected to PTS loading conditions

    International Nuclear Information System (INIS)

    Keeney-Walker, J.; Bass, B.R.

    1992-01-01

    Several calculational procedures are compared for predicting cleavage arrest of a deep crack in the wall of a prototypical reactor pressure vessel (RPV) subjected to pressurized-thermal-shock (PTS) loading conditions. Three procedures examined in this study utilized the following models: (1) a static finite-element model (full bending); (2) a radially constrained static model; and (3) a thermoelastic dynamic finite-element model. A PTS transient loading condition was selected that produced a deep arrest of an axially oriented, initially shallow crack according to calculational results obtained from the static (full-bending) model. Results from the two static models were compared with those generated from the detailed thermoelastic dynamic finite-element analysis. The dynamic analyses modeled cleavage-crack propagation using a node-release technique and an application-mode methodology based on dynamic fracture toughness curves generated from measured data. Comparisons presented here indicate that the degree to which dynamic solutions can be approximated by static models is highly dependent on several factors, including the material dynamic fracture curves and the propensity for cleavage reinitiation of the arrested crack under PTS loading conditions. Additional work is required to develop and validate a satisfactory dynamic fracture toughness model applicable to post-cleavage arrest conditions in an RPV.

  15. Measuring Effectiveness in Digital Game-Based Learning: A Methodological Review.

    Directory of Open Access Journals (Sweden)

    Anissa All

    2014-06-01

    Full Text Available In recent years, a growing number of studies are being conducted into the effectiveness of digital game-based learning (DGBL. Despite this growing interest, there is a lack of sound empirical evidence on the effectiveness of DGBL due to different outcome measures for assessing effectiveness, varying methods of data collection and inconclusive or difficult to interpret results. This has resulted in a need for an overarching methodology for assessing the effectiveness of DGBL. The present study took a first step in this direction by mapping current methods used for assessing the effectiveness of DGBL. Results showed that currently, comparison of results across studies and thus looking at effectiveness of DGBL on a more general level is problematic due to diversity in and suboptimal study designs. Variety in study design relates to three issues, namely different activities that are implemented in the control groups, different measures for assessing the effectiveness of DGBL and the use of different statistical techniques for analyzing learning outcomes. Suboptimal study designs are the result of variables confounding study results. Possible confounds that were brought forward in this review are elements that are added to the game as part of the educational intervention (e.g., required reading, debriefing session), instructor influences, and practice effects when using the same test pre- and post-intervention. Lastly, incomplete information on the study design impedes replication of studies and thus falsification of study results.

  16. Enabling Mobile Communications for the Needy: Affordability Methodology, and Approaches to Requalify Universal Service Measures

    Directory of Open Access Journals (Sweden)

    Louis-Francois PAU

    2009-01-01

    Full Text Available This paper links communications and media usage to social and household economics boundaries. It highlights that in present day society, communications and media are a necessity, but not always affordable, and that they furthermore open the door to addictive behaviors which raise additional financial and social risks. A simple and efficient methodology compatible with state-of-the-art social and communications business statistics is developed, which produces the residual communications and media affordability budget and ultimately the value-at-risk in terms of usage and tariffs. Sensitivity analysis provides precious information on bottom-up communications and media adoption on the basis of affordability. This approach differs from the regulated but often ineffective Universal service obligation, which instead of catering for individual needs mostly addresses macro-measures helping geographical access coverage (e.g. in rural areas). It is proposed to requalify the Universal service obligations on operators into concrete measures, allowing, with unchanged funding, the needy to adopt mobile services based on their affordability constraints by bridging the gap to a standard tariff. Case data are surveyed from various countries. ICT policy recommendations are made to support widespread and socially responsible communications access.

  17. Optimization of conditions for probiotic curd formulation by Enterococcus faecium MTCC 5695 with probiotic properties using response surface methodology.

    Science.gov (United States)

    Ramakrishnan, Vrinda; Goveas, Louella Concepta; Prakash, Maya; Halami, Prakash M; Narayan, Bhaskar

    2014-11-01

    Enterococcus faecium MTCC 5695, possessing potential probiotic properties as well as enterocin-producing ability, was used as the starter culture. The effects of time (12-24 h) and inoculum level (3-7 % v/v) on cell growth, bacteriocin production, antioxidant property, titrable acidity and pH of curd were studied by response surface methodology (RSM). The optimized conditions were 26.48 h and 2.17 % v/v inoculum, and the second-order model was validated. Co-cultivation studies revealed that the formulated product had the ability to prevent the growth of foodborne pathogens that affect the keeping quality of the product during storage. The results indicated that application of E. faecium MTCC 5695 under the optimized conditions led to the formation of a highly consistent, well-set curd with bioactive and bioprotective properties. The formulated curd with potential probiotic attributes can be used as a therapeutic agent for the treatment of foodborne diseases like Traveler's diarrhea and gastroenteritis, thereby helping to improve bowel health.
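    Response surface methodology, used in this and the following records, fits a low-order polynomial to the measured responses and locates its stationary point. A one-factor sketch with synthetic growth data (the study itself used two factors and different measurements):

```python
def fit_quadratic(x, y):
    """OLS fit of y = b0 + b1*x + b2*x**2 via the 3x3 normal equations."""
    n = len(x)
    s = lambda p: sum(v ** p for v in x)
    A = [[n,    s(1), s(2)],
         [s(1), s(2), s(3)],
         [s(2), s(3), s(4)]]
    b = [sum(yv * xv ** p for xv, yv in zip(x, y)) for p in range(3)]
    # Forward elimination (no pivoting; fine for this small, well-posed system)
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [ajk - f * aik for ajk, aik in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (b[i] - sum(A[i][k] * coef[k] for k in range(i + 1, 3))) / A[i][i]
    return coef

# Synthetic response: cell growth vs fermentation time, peaking near 26 h
t = [12, 16, 20, 24, 28, 32]
growth = [5.2, 7.1, 8.4, 9.0, 9.1, 8.5]
b0, b1, b2 = fit_quadratic(t, growth)
t_opt = -b1 / (2 * b2)   # stationary point of the fitted curve, ≈ 26 h
```

    With two factors the fit adds interaction and second-order terms for each variable, and the optimum is found by solving the gradient equations or numerically; the single-factor case above shows the principle.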

  18. Statistical optimization of ultraviolet irradiation conditions for vitamin D₂ synthesis in oyster mushrooms (Pleurotus ostreatus) using response surface methodology.

    Directory of Open Access Journals (Sweden)

    Wei-Jie Wu

    Full Text Available Response surface methodology (RSM) was used to determine the optimum vitamin D2 synthesis conditions in oyster mushrooms (Pleurotus ostreatus). Ultraviolet B (UV-B) was selected as the most efficient irradiation source in the preliminary experiment, in addition to the levels of three independent variables: ambient temperature (25-45°C), exposure time (40-120 min), and irradiation intensity (0.6-1.2 W/m2). The statistical analysis indicated that, for the range studied, irradiation intensity was the most critical factor affecting vitamin D2 synthesis in oyster mushrooms. Under optimal conditions (ambient temperature of 28.16°C, UV-B intensity of 1.14 W/m2, and exposure time of 94.28 min), the experimental vitamin D2 content of 239.67 µg/g (dry weight) was in very good agreement with the predicted value of 245.49 µg/g, which verified the practicability of this strategy. Compared to fresh mushrooms, lyophilized mushroom powder can synthesize a remarkably higher level of vitamin D2 (498.10 µg/g) within a much shorter UV-B exposure time (10 min), and thus should receive attention from the food processing industry.

  19. Characterization of Melanogenesis Inhibitory Constituents of Morus alba Leaves and Optimization of Extraction Conditions Using Response Surface Methodology.

    Science.gov (United States)

    Jeong, Ji Yeon; Liu, Qing; Kim, Seon Beom; Jo, Yang Hee; Mo, Eun Jin; Yang, Hyo Hee; Song, Dae Hye; Hwang, Bang Yeon; Lee, Mi Kyeong

    2015-05-14

    Melanin is a natural pigment that plays an important role in the protection of skin; however, hyperpigmentation caused by excessive levels of melanin is associated with several problems. Therefore, melanogenesis-inhibitory natural products have been developed by the cosmetic industry as skin medications. The leaves of Morus alba (Moraceae) have been reported to inhibit melanogenesis; therefore, characterization of the melanogenesis inhibitory constituents of M. alba leaves was attempted in this study. Twenty compounds, including eight benzofurans, 10 flavonoids, one stilbenoid and one chalcone, were isolated from M. alba leaves, and these phenolic constituents were shown to significantly inhibit tyrosinase activity and melanin content in B16F10 melanoma cells. To maximize the melanogenesis inhibitory activity and active phenolic contents, optimized M. alba leaf extraction conditions were predicted using response surface methodology as a methanol concentration of 85.2%, an extraction temperature of 53.2 °C and an extraction time of 2 h. The tyrosinase inhibition and total phenolic content under optimal conditions were found to be 74.8% inhibition and 24.8 μg GAE/mg extract, which matched well with the predicted values of 75.0% inhibition and 23.8 μg GAE/mg extract. These results should provide useful information about melanogenesis inhibitory constituents and optimized extracts from M. alba leaves as cosmetic therapeutics to reduce skin hyperpigmentation.

  20. Characterization of Melanogenesis Inhibitory Constituents of Morus alba Leaves and Optimization of Extraction Conditions Using Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Ji Yeon Jeong

    2015-05-01

    Full Text Available Melanin is a natural pigment that plays an important role in the protection of skin; however, hyperpigmentation caused by excessive levels of melanin is associated with several problems. Therefore, melanogenesis-inhibitory natural products have been developed by the cosmetic industry as skin medications. The leaves of Morus alba (Moraceae) have been reported to inhibit melanogenesis; therefore, characterization of the melanogenesis inhibitory constituents of M. alba leaves was attempted in this study. Twenty compounds, including eight benzofurans, 10 flavonoids, one stilbenoid and one chalcone, were isolated from M. alba leaves, and these phenolic constituents were shown to significantly inhibit tyrosinase activity and melanin content in B16F10 melanoma cells. To maximize the melanogenesis inhibitory activity and active phenolic contents, optimized M. alba leaf extraction conditions were predicted using response surface methodology as a methanol concentration of 85.2%, an extraction temperature of 53.2 °C and an extraction time of 2 h. The tyrosinase inhibition and total phenolic content under optimal conditions were found to be 74.8% inhibition and 24.8 μg GAE/mg extract, which matched well with the predicted values of 75.0% inhibition and 23.8 μg GAE/mg extract. These results should provide useful information about melanogenesis inhibitory constituents and optimized extracts from M. alba leaves as cosmetic therapeutics to reduce skin hyperpigmentation.

  1. Experimental assessment for instantaneous temperature and heat flux measurements under Diesel motored engine conditions

    International Nuclear Information System (INIS)

    Torregrosa, A.J.; Bermúdez, V.; Olmeda, P.; Fygueroa, O.

    2012-01-01

    Highlights: ► We measured in-cylinder wall heat fluxes. ► We examined the effects of different engine parameters. ► Increasing air mass flow increases heat fluxes. ► The effect of engine speed can be masked by the effect of volumetric efficiency. ► Differences among the different walls have been found. - Abstract: The main goal of this work is to validate an innovative experimental facility and to establish a methodology to evaluate the influence of some engine parameters on local engine heat transfer behaviour under motored steady-state conditions. Instantaneous temperature measurements have been performed in order to estimate heat fluxes on a modified Diesel single-cylinder combustion chamber. This study was divided into two main parts. The first was the design and setting up of an experimental bench to reproduce Diesel conditions and to perform local instantaneous temperature measurements along the walls of the combustion chamber by means of fast-response thermocouples. The second was the development of a procedure for temperature signal treatment and local heat flux calculation based on one-dimensional Fourier analysis. A thermodynamic diagnosis model has been employed to characterise the modified engine with the newly designed chamber. Coherent findings have been obtained from the measured data, helping to understand the local behaviour of heat transfer in an internal combustion engine, and the influence of engine parameters on local instantaneous temperature and heat flux has been analysed.
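    One common form of the one-dimensional Fourier method treats the wall as a semi-infinite solid with a periodic surface temperature: the temperature trace is decomposed into harmonics, and each harmonic contributes a flux term scaled by sqrt(n·omega/(2·alpha)). The sketch below follows that textbook form; property values and the test signal are illustrative, and the steady component q_mean is left to the caller:

```python
import math

def surface_heat_flux(temps, omega, k, alpha, q_mean=0.0, n_harmonics=10):
    """Unsteady wall heat flux from a periodic surface-temperature trace
    (semi-infinite solid, classic Fourier-series reconstruction).

    temps  : surface temperatures over one cycle, evenly sampled (K)
    omega  : angular frequency of the cycle (rad/s)
    k      : thermal conductivity (W/m K); alpha: diffusivity (m^2/s)
    q_mean : steady flux component, if known from a mean-temperature gradient
    """
    N = len(temps)
    # Fourier coefficients of the surface-temperature signal
    coeffs = []
    for n in range(1, n_harmonics + 1):
        A = 2.0 / N * sum(T * math.cos(2 * math.pi * n * i / N)
                          for i, T in enumerate(temps))
        B = 2.0 / N * sum(T * math.sin(2 * math.pi * n * i / N)
                          for i, T in enumerate(temps))
        coeffs.append((A, B))
    # q(t) = q_mean + k * sum_n sqrt(n*omega/(2*alpha)) *
    #        [(A_n + B_n) cos(n wt) + (B_n - A_n) sin(n wt)]
    flux = []
    for i in range(N):
        wt = 2 * math.pi * i / N
        q = q_mean
        for n, (A, B) in enumerate(coeffs, start=1):
            c = k * math.sqrt(n * omega / (2 * alpha))
            q += c * ((A + B) * math.cos(n * wt) + (B - A) * math.sin(n * wt))
        flux.append(q)
    return flux

# Illustrative check: a pure 5 K cosine swing at a 50 Hz cycle (≈ 3000 rpm)
temps = [300 + 5 * math.cos(2 * math.pi * i / 360) for i in range(360)]
flux = surface_heat_flux(temps, omega=314.16, k=50.0, alpha=1e-5)
```

    For a single cosine harmonic the peak flux at the start of the cycle is k·sqrt(omega/(2·alpha)) times the temperature amplitude, which provides a convenient sanity check on any implementation.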

  2. The influence of bowel condition on lumbar spine BMD measurement

    International Nuclear Information System (INIS)

    Yoon, Joon; Lee, Hoo Min; Lee, Jung Min; Kwon, Soon Mu; Cho, Hyung Wook; Kim, Yun Min; Kang, Yeong Han; Kim, Boo Soon; Kim, Jung Soo

    2014-01-01

    Bone density measurement is used in the diagnosis of osteoporosis and is an important indicator for treatment as well as prevention. However, errors in the precision of BMD can be caused by the status of the patient, the bone densitometer, and the radiological technologist. Therefore, the authors evaluated how BMD changes according to the condition of the patient in the lumbar region, where bone density can be substantially affected by factors such as water, food, and bowel gas. We determined the change in bone mineral density as a function of the height of a water tank and the presence or absence of gas using an Aluminum Spine Phantom, and also assessed the influence on bone mineral density of adding water and food in volunteers. Bone mineral density measured with the Aluminum Spine Phantom showed a statistically significant difference with increasing height of the water tank (p=0.026). There was no significant difference in BMD according to the presence of bowel gas (p=0.587). In a study of six volunteers, there was no significant difference according to the presence or absence of food (p=0.812), nor according to the presence of water (p=0.618). Provided the bone contours can be recognized when measuring lumbar BMD, whether the test follows endoscopic examination of the large intestine or patient fasting is not a factor with a great effect on bone mineral density

  3. The influence of bowel condition on lumbar spine BMD measurement

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Joon; Lee, Hoo Min; Lee, Jung Min; Kwon, Soon Mu; Cho, Hyung Wook [Dept. of Radiologic Technology, Dongnam Health College, Suwon (Korea, Republic of); Kim, Yun Min; Kang, Yeong Han; Kim, Boo Soon; Kim, Jung Soo [Dept. of Diagnostic Radiology, Samsung Medical Center, Seoul (Korea, Republic of)

    2014-12-15

    Bone density measurement is used in the diagnosis of osteoporosis and is an important indicator for treatment as well as prevention. However, errors in the precision of BMD can be caused by the status of the patient, the bone densitometer, and the radiological technologist. Therefore, the authors evaluated how BMD changes according to the condition of the patient in the lumbar region, where bone density can be substantially affected by factors such as water, food, and bowel gas. We determined the change in bone mineral density as a function of the height of a water tank and the presence or absence of gas using an Aluminum Spine Phantom, and also assessed the influence on bone mineral density of adding water and food in volunteers. Bone mineral density measured with the Aluminum Spine Phantom showed a statistically significant difference with increasing height of the water tank (p=0.026). There was no significant difference in BMD according to the presence of bowel gas (p=0.587). In a study of six volunteers, there was no significant difference according to the presence or absence of food (p=0.812), nor according to the presence of water (p=0.618). Provided the bone contours can be recognized when measuring lumbar BMD, whether the test follows endoscopic examination of the large intestine or patient fasting is not a factor with a great effect on bone mineral density.

  4. Development of Assessment Methodology of Chemical Behavior of Volatile Iodide under Severe Accident Conditions Using EPICUR Experiments

    International Nuclear Information System (INIS)

    Oh, Jae Yong; Yun, Jong Il; Kim, Do Sam; Han Chul

    2011-01-01

    Iodine is one of the most important fission products produced in nuclear power plants. Under severe accident conditions, iodine exists in the containment as a variety of species, such as aqueous iodide, gaseous iodide, and iodide aerosol. Following release of iodine from the reactor, mostly in the form of CsI aerosol, volatile iodine can be generated from the containment sump and released to the environment. In particular, volatile organic iodide can be produced by interaction between nonvolatile iodine and organic substances present in the containment. Volatile iodide could significantly affect residents in the area surrounding the nuclear power plant; the thyroid is especially vulnerable to radioiodine due to its high accumulation. Therefore, it is necessary for the Korea Institute of Nuclear Safety (KINS) to develop an evaluation model which can simulate iodine behavior in the containment following a severe accident. KINS also needs to complement its methodology for radiological consequence analysis, based on MELCOR-MACCS2 calculation, by coupling a simple iodine model which can conveniently deal with organic iodides. In the long term, such a model can contribute to developing an accident source term, which is one of the urgent domestic needs. Our strategy for developing the model is as follows: 1. Review the existing methodologies; 2. Develop a simple stand-alone model; 3. Validate the model using ISTP-EPICUR (Experimental Program on Iodine Chemistry under Radiation) and OECD-BIP (Behavior of Iodine Project) experimental data. In this paper we present the context of development and validation of our model, named RAIM (Radio-active iodine chemistry model).

  5. Development of Assessment Methodology of Chemical Behavior of Volatile Iodide under Severe Accident Conditions Using EPICUR Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae Yong; Yun, Jong Il [KAIST, Daejeon (Korea, Republic of); Kim, Do Sam; Han Chul [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2011-05-15

    Iodine is one of the most important fission products produced in nuclear power plants. Under severe accident conditions, iodine exists in the containment as a variety of species, such as aqueous iodide, gaseous iodide, and iodide aerosol. Following release of iodine from the reactor, mostly in the form of CsI aerosol, volatile iodine can be generated from the containment sump and released to the environment. In particular, volatile organic iodide can be produced by interaction between nonvolatile iodine and organic substances present in the containment. Volatile iodide could significantly affect residents in the area surrounding the nuclear power plant; the thyroid is especially vulnerable to radioiodine due to its high accumulation. Therefore, it is necessary for the Korea Institute of Nuclear Safety (KINS) to develop an evaluation model which can simulate iodine behavior in the containment following a severe accident. KINS also needs to complement its methodology for radiological consequence analysis, based on MELCOR-MACCS2 calculation, by coupling a simple iodine model which can conveniently deal with organic iodides. In the long term, such a model can contribute to developing an accident source term, which is one of the urgent domestic needs. Our strategy for developing the model is as follows: 1. Review the existing methodologies; 2. Develop a simple stand-alone model; 3. Validate the model using ISTP-EPICUR (Experimental Program on Iodine Chemistry under Radiation) and OECD-BIP (Behavior of Iodine Project) experimental data. In this paper we present the context of development and validation of our model, named RAIM (Radio-active iodine chemistry model).

  6. Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist

    NARCIS (Netherlands)

    Terwee, C.B.; Mokkink, L.B.; Knol, D.L.; Ostelo, R.W.J.G.; Bouter, L.M.; de Vet, H.C.W.

    2012-01-01

    Background: The COSMIN checklist is a standardized tool for assessing the methodological quality of studies on measurement properties. It contains 9 boxes, each dealing with one measurement property, with 5-18 items per box about design aspects and statistical methods. Our aim was to develop a scoring system for the COSMIN checklist to calculate quality scores per measurement property when using the checklist in systematic reviews of measurement properties.

  7. Measurement and Modeling of Cucumber Evapotranspiration Under Greenhouse Condition

    Directory of Open Access Journals (Sweden)

    R. Moazenzadeh

    2017-01-01

    Full Text Available Introduction: In the last two decades, greenhouse cultivation of various plants has expanded among Iranian farmers, with cucumber, tomato and pepper making up approximately 45 percent of national greenhouse cultivation. Because huge amounts of agricultural water in Iran are extracted from groundwater resources, a large number of Iranian plains are in critical condition, and irrigation is the major consumer of water (95 percent), irrigation must be performed in a scientific manner. One approach is to determine the consumptive use of major crops, known as evapotranspiration (ETc). Materials and Methods: This research was carried out in a north-south oriented greenhouse belonging to the Plant Protection Research Institute, located in northern Tehran, Iran, to estimate greenhouse cucumber evapotranspiration. Trickle irrigation was used, and meteorological data such as temperature, humidity and solar radiation were measured daily. Physical and chemical measurements were conducted: electrical conductivity (EC) and pH values of 3.42 dS m-1 and 7.19, respectively, were recorded, and the soil texture and bulk density were determined to be sandy loam and 1.4 g cm-3, respectively. To measure actual evapotranspiration, cucumber seeds were also cultured in six identical microlysimeters, each microlysimeter was irrigated to field capacity (FC) moisture, and any drained water was measured. Finally, the best multiple linear regression and artificial neural network models were established from the meteorological characteristics measured in the greenhouse that are thought to affect ET. The averaged data from three microlysimeters were used for calibration and those from the other three for validation.
Results and Discussion: In the former case, when we used one multiple linear regression with measurable meteorological variables inside the greenhouse to predict cucumber ET for the entire
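The regression approach this record describes can be sketched in a few lines. The observations below are hypothetical stand-ins for the microlysimeter and meteorological data, and the chosen predictors (temperature, humidity, radiation) are only an assumed subset of what the study measured:

```python
import numpy as np

# Hypothetical daily greenhouse observations (NOT the study's data):
# predictors = air temperature (°C), relative humidity (%), solar radiation (MJ m-2 d-1)
X = np.array([
    [24.0, 60.0, 14.0],
    [27.0, 55.0, 18.0],
    [30.0, 50.0, 21.0],
    [26.0, 65.0, 16.0],
    [29.0, 58.0, 20.0],
])
y = np.array([2.1, 3.0, 3.8, 2.6, 3.5])  # microlysimeter ET (mm d-1), illustrative

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_et(temp_c, rh_pct, rad_mj):
    """ET (mm d-1) from the fitted multiple linear regression."""
    return coef[0] + coef[1] * temp_c + coef[2] * rh_pct + coef[3] * rad_mj

print(round(float(predict_et(28.0, 57.0, 19.0)), 2))
```

A neural-network variant of the same calibration/validation split would replace the least-squares step with a small regressor trained on the calibration lysimeters.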

  8. Current psychometric and methodological issues in the measurement of overgeneral autobiographical memory.

    Science.gov (United States)

    Griffith, James W; Sumner, Jennifer A; Raes, Filip; Barnhofer, Thorsten; Debeer, Elise; Hermans, Dirk

    2012-12-01

    Autobiographical memory is a multifaceted construct that is related to psychopathology and other difficulties in functioning. Across many studies, a variety of methods have been used to study autobiographical memory. The relationship between overgeneral autobiographical memory (OGM) and psychopathology has been of particular interest, and many studies of this cognitive phenomenon rely on the Autobiographical Memory Test (AMT) to assess it. In this paper, we examine several methodological approaches to studying autobiographical memory, and focus primarily on methodological and psychometric considerations in OGM research. We pay particular attention to what is known about the reliability, validity, and methodological variations of the AMT. The AMT has adequate psychometric properties, but there is great variability in methodology across studies that use it. Methodological recommendations and suggestions for future studies are presented.

  9. Innovative Methodologies for thermal Energy Release Measurement: case of La Solfatara volcano (Italy)

    Science.gov (United States)

    Marfe`, Barbara; Avino, Rosario; Belviso, Pasquale; Caliro, Stefano; Carandente, Antonio; Marotta, Enrica; Peluso, Rosario

    2015-04-01

    This work is devoted to improving knowledge of the parameters that control the heat flux anomalies associated with the diffuse degassing processes of volcanic and hydrothermal areas. The methodologies currently used to measure heat flux (i.e. CO2 flux or temperature gradient) are either inefficient or ineffective, and are unable to detect short- to medium-term (days to months) variation trends in the heat flux. A new method, based on the use of thermal imaging cameras, has been applied to estimate the heat flux and its time variations. This approach allows faster heat flux measurement than already accredited methods, improving the definition of the activity state of a volcano and allowing better assessment of the related hazard and risk mitigation. The idea is to extrapolate the heat flux from the ground surface temperature, which, in a purely conductive regime, is directly correlated to the shallow temperature gradient. We use thermal imaging cameras, at short distances (meters to hundreds of meters), to quickly map areas with thermal anomalies and measure their temperature. Preliminary studies have been carried out throughout the whole of the La Solfatara crater in order to investigate a possible correlation between the surface temperature and the shallow thermal gradient. We used a FLIR SC640 thermal camera and K-type thermocouples to make the two measurements at the same time. Results suggest a good correlation between the shallow temperature gradient ΔTs and the background-corrected surface temperature Ts, and although the campaigns took place over a period of a few years, this correlation appears to be stable over time. This is an encouraging result for further development of a measurement method based only on the use of a short-range thermal imaging camera.
Surveys with thermal cameras may be manually done using a tripod to take thermal images of small contiguous areas and then joining
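As a minimal illustration of the conductive-regime idea underlying the method (not the authors' actual processing chain), Fourier's law links the shallow temperature gradient to an upward heat flux; the thermal conductivity and temperatures below are assumed values:

```python
# Fourier's law in one dimension, purely conductive regime.
# k is an assumed ground thermal conductivity (W m-1 K-1).

def conductive_heat_flux(t_depth_c, t_surface_c, depth_m, k_w_mk=1.0):
    """Upward conductive heat flux (W m-2): q = k * (T_depth - T_surface) / depth."""
    return k_w_mk * (t_depth_c - t_surface_c) / depth_m

# E.g. 70 °C at 0.3 m depth under a 40 °C surface (illustrative values):
print(conductive_heat_flux(70.0, 40.0, 0.3))  # 100.0 W m-2
```

The method described above amounts to replacing the thermocouple-derived gradient with one inferred from camera-measured surface temperature via the observed ΔTs-Ts correlation.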

  10. Measuring resource inequalities. The concepts and methodology for an area-based Gini coefficient

    International Nuclear Information System (INIS)

    Druckman, A.; Jackson, T.

    2008-01-01

    Although inequalities in income and expenditure are relatively well researched, comparatively little attention has been paid, to date, to inequalities in resource use. This is clearly a shortcoming when it comes to developing informed policies for sustainable consumption and social justice. This paper describes an indicator of inequality in resource use called the AR-Gini. The AR-Gini is an area-based measure of resource inequality that estimates inequalities between neighbourhoods with regard to the consumption of specific consumer goods. It is also capable of estimating inequalities in the emissions resulting from resource use, such as carbon dioxide emissions from energy use, and solid waste arisings from material resource use. The indicator is designed to be used as a basis for broadening the discussion concerning 'food deserts' to inequalities in other types of resource use. By estimating the AR-Gini for a wide range of goods and services we aim to enhance our understanding of resource inequalities and their drivers, identify which resources have highest inequalities, and to explore trends in inequalities. The paper describes the concepts underlying the construction of the AR-Gini and its methodology. Its use is illustrated by pilot applications (specifically, men's and boys' clothing, carpets, refrigerators/freezers and clothes washer/driers). The results illustrate that different levels of inequality are associated with different commodities. The paper concludes with a brief discussion of some possible policy implications of the AR-Gini. (author)
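A minimal sketch of the inequality computation the AR-Gini builds on: an ordinary Gini coefficient over per-neighbourhood consumption totals. The values are invented, and the actual AR-Gini methodology involves further area-based steps described in the paper:

```python
# Gini coefficient via the mean-absolute-difference formulation.

def gini(values):
    """Gini coefficient in [0, 1] over non-negative consumption totals."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # G = sum_i (2i - n - 1) * x_i / (n * sum(x)), with 1-based i over sorted values
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, 1)) / (n * total)

# Hypothetical per-neighbourhood consumption of some commodity:
print(gini([10, 10, 10, 10]))           # 0.0 — perfect equality
print(round(gini([0, 0, 0, 100]), 3))   # 0.75 — highly unequal
```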

  11. Thermal-Diffusivity Measurements of Mexican Citrus Essential Oils Using Photoacoustic Methodology in the Transmission Configuration

    Science.gov (United States)

    Muñoz, G. A. López; González, R. F. López; López, J. A. Balderas; Martínez-Pérez, L.

    2011-05-01

    Photoacoustic methodology in the transmission configuration (PMTC) was used to study the thermophysical properties of Mexican citrus essential oils and their relation with composition, demonstrating the viability of using photothermal techniques for quality control and for authentication of oils and detection of their adulteration. Linear relations for the amplitude (on a semi-log scale) and phase, as functions of the sample's thickness, were obtained through a theoretical model fit to the experimental data for thermal-diffusivity measurements in Mexican orange, pink grapefruit, mandarin, and lime type A centrifuged essential oils, and in Mexican distilled lime essential oil. Gas chromatography for distilled lime essential oil and centrifuged lime essential oil type A is reported to complement the study. Experimental results showed close thermal-diffusivity values among the Mexican citrus essential oils obtained by centrifugation, but a significant difference between this physical property for distilled lime oil and the corresponding value obtained by centrifugation, attributable to the different chemical compositions resulting from the two extraction processes.
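A sketch of how a thermal diffusivity follows from the linear semi-log amplitude-thickness relation mentioned above, under the standard assumption that the photoacoustic amplitude of a thermally thick sample decays as exp(-L·sqrt(πf/α)); the frequency and diffusivity values are illustrative, not the paper's:

```python
import math

f = 2.0              # modulation frequency, Hz (assumed)
alpha_true = 9.0e-8  # m2 s-1, a typical order of magnitude for oils (assumed)

# Synthetic ln(amplitude) readings at several sample thicknesses L (m):
thicknesses = [50e-6, 100e-6, 150e-6, 200e-6]
m_true = -math.sqrt(math.pi * f / alpha_true)
log_amps = [m_true * L for L in thicknesses]

# Least-squares slope of ln(A) versus L
n = len(thicknesses)
mean_l = sum(thicknesses) / n
mean_a = sum(log_amps) / n
slope = sum((L - mean_l) * (a - mean_a) for L, a in zip(thicknesses, log_amps)) \
    / sum((L - mean_l) ** 2 for L in thicknesses)

# Invert the slope for the thermal diffusivity: alpha = pi * f / slope**2
alpha_est = math.pi * f / slope ** 2
print(f"{alpha_est:.2e}")  # recovers ~9e-8 m2 s-1 from the synthetic data
```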

  12. Measurements of phoretic velocities of aerosol particles in microgravity conditions

    Science.gov (United States)

    Prodi, F.; Santachiara, G.; Travaini, S.; Vedernikov, A.; Dubois, F.; Minetti, C.; Legros, J. C.

    2006-11-01

    Measurements of thermo- and diffusio-phoretic velocities of aerosol particles (carnauba wax, paraffin and sodium chloride) were performed in microgravity conditions (Drop Tower facility in Bremen, and Parabolic Flights in Bordeaux). In the case of thermophoresis, a temperature gradient was obtained by heating the upper plate of the cell, while the lower one was maintained at environmental temperature. For diffusiophoresis, the water vapour gradient was obtained with sintered plates imbued with an aqueous solution of MgCl2 and with distilled water, at the top and at the bottom of the cell, respectively. Aerosol particles were observed through a digital holographic velocimeter, a device allowing the determination of the 3-D coordinates of particles in the observed volume. Particle trajectories, and consequently particle velocities, were reconstructed through analysis of the sequence of particle positions. The experimental values of reduced thermophoretic velocities lie between the theoretical values of Yamamoto and Ishihara [Yamamoto, K., Ishihara, Y., 1988. Thermophoresis of a spherical particle in a rarefied gas of a transition regime. Phys. Fluids 31, 3618-3624] and Talbot et al. [Talbot, L., Cheng, R.K., Schefer, R.W., Willis, D.R., 1980. Thermophoresis of particles in a heated boundary layer. J. Fluid Mech. 101, 737-758], and do not show a clear dependence on the thermal conductivity of the aerosol. The existence of negative thermophoresis is not confirmed in our experiments. Concerning the diffusiophoretic experiments, the results show a small increase of the reduced diffusiophoretic velocity with the Knudsen number.

  13. Upgraded Fast Beam Conditions Monitor for CMS online luminosity measurement

    CERN Document Server

    Leonard, Jessica Lynn; Hempel, Maria; Henschel, Hans; Karacheban, Olena; Lange, Wolfgang; Lohmann, Wolfgang; Novgorodova, Olga; Penno, Marek; Walsh, Roberval; Dabrowski, Anne; Guthoff, Moritz; Loos, R; Ryjov, Vladimir; Burtowy, Piotr; Lokhovitskiy, Arkady; Odell, Nathaniel; Przyborowski, Dominik; Stickland, David P; Zagozdzinska, Agnieszka

    2014-01-01

    The CMS beam condition monitoring subsystem BCM1F during LHC Run I consisted of 8 individual diamond sensors situated around the beam pipe within the tracker detector volume, for the purpose of fast monitoring of beam background and collision products. Effort is ongoing to develop the use of BCM1F as an online bunch-by-bunch luminosity monitor. BCM1F runs whenever there is beam in the LHC, and its data acquisition is independent of that of the CMS detector, hence it delivers luminosity even when CMS is not taking data. To prepare for the expected increase in the LHC luminosity and the change from 50 ns to 25 ns bunch separation, several changes to the system are required, including a higher number of sensors and upgraded electronics. In particular, a new real-time digitizer with large memory was developed and is being integrated into a multi-subsystem framework for luminosity measurement. Current results from Run II preparation will be discussed, including results from the January 201...

  14. Numerical simulation and analysis of fuzzy PID and PSD control methodologies as dynamic energy efficiency measures

    International Nuclear Information System (INIS)

    Ardehali, M.M.; Saboori, M.; Teshnelab, M.

    2004-01-01

    Energy efficiency enhancement is achieved by utilizing control algorithms that reduce overshoots and undershoots, as well as unnecessary fluctuations, in the amount of energy input to energy-consuming systems during transient operation periods. It is hypothesized that application of control methodologies with characteristics that change with time and according to the system dynamics, identified as dynamic energy efficiency measures (DEEM), achieves the desired enhancement. The objective of this study is to simulate and analyze the effects of fuzzy-logic-based tuning of proportional integral derivative (F-PID) and proportional sum derivative (F-PSD) controllers for a heating and cooling energy system while accounting for the dynamics of the major system components. The procedure to achieve the objective includes utilization of fuzzy logic rules to determine the PID and PSD controllers' gain coefficients, so that the control laws for regulating the heat exchanger's heating or cooling energy inputs are determined in each time step of the operation period. The performances of the F-PID and F-PSD controllers are measured by means of two cost functions based on quadratic forms of the energy input and the deviation from a set-point temperature. It is found that application of the F-PID control algorithm, as a DEEM, results in lower costs for energy input and deviation from a set-point temperature by 24% and 17% as compared to a PID controller, and by 13% and 8% as compared to a PSD controller, respectively. It is also shown that the F-PSD performance is better than that of the F-PID controller.
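A minimal discrete-time sketch of the idea behind F-PID: a PID law whose gains are re-chosen each time step. Here a crude error-magnitude rule stands in for the fuzzy rule base, and the first-order "plant" is invented for illustration, not the study's heat-exchanger model:

```python
# One discrete PID update with externally supplied (time-varying) gains.
def pid_step(error, integral, prev_error, dt, kp, ki, kd):
    """Returns (control output, updated integral term)."""
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, integral

# Track a 20 °C set point from 10 °C with a simple lossy plant (assumed dynamics).
setpoint, temp, dt = 20.0, 10.0, 1.0
integral, prev_err = 0.0, setpoint - temp
for _ in range(50):
    err = setpoint - temp
    # Stand-in for fuzzy tuning: soften the proportional gain as the error shrinks
    kp = 0.5 if abs(err) > 2.0 else 0.2
    u, integral = pid_step(err, integral, prev_err, dt, kp, ki=0.05, kd=0.1)
    prev_err = err
    temp += dt * (u - 0.1 * (temp - 10.0))  # plant: input minus heat loss
print(round(temp, 1))  # settles near the 20 °C set point
```

In the study's scheme, a fuzzy rule base would supply all three gains each step from the error and its rate of change; the scheduling rule above is only a placeholder for that logic.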

  15. Optimization of synthesis conditions of PbS thin films grown by chemical bath deposition using response surface methodology

    Energy Technology Data Exchange (ETDEWEB)

    Yücel, Ersin, E-mail: dr.ersinyucel@gmail.com [Department of Physics, Faculty of Arts and Sciences, Mustafa Kemal University, 31034 Hatay (Turkey); Yücel, Yasin; Beleli, Buse [Department of Chemistry, Faculty of Arts and Sciences, Mustafa Kemal University, 31034 Hatay (Turkey)

    2015-09-05

    Highlights: • For the first time, RSM and CCD used for optimization of PbS thin film. • Tri-sodium citrate, deposition time and temperature were independent variables. • PbS thin film band gap value was 2.20 eV under the optimum conditions. • Quality of the film was improved after chemometric optimization. - Abstract: In this study, PbS thin films were synthesized by chemical bath deposition (CBD) under different deposition parameters. Response surface methodology (RSM) was used to optimize synthesis parameters, including the amount of tri-sodium citrate (0.2–0.8 mL), deposition time (14–34 h) and deposition temperature (26.6–43.4 °C), for deposition of the films. A 5-level, 3-factor central composite design (CCD) was employed to evaluate the effects of the deposition parameters on the response (the optical band gap of the films). The significance of both the main effects and the interactions was investigated by analysis of variance (ANOVA). The film structures were characterized by X-ray diffraction (XRD). Morphological properties of the films were studied with a scanning electron microscope (SEM). The optical properties of the films were investigated using a UV–visible spectrophotometer. The optimum amount of tri-sodium citrate, deposition time and deposition temperature were found to be 0.7 mL, 18.07 h and 30 °C, respectively. Under these conditions, the experimental band gap of PbS was 2.20 eV, in good agreement with the value (1.98 eV) predicted by the model.
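The response-surface step can be sketched for a single factor: fit a second-order model and locate its stationary point analytically. The temperatures below span the paper's range, but the band-gap responses are invented, and the real design optimized three factors jointly in a CCD:

```python
import numpy as np

temps = np.array([26.6, 30.0, 35.0, 40.0, 43.4])     # deposition temperature (°C)
band_gap = np.array([2.05, 2.20, 2.12, 1.95, 1.80])  # response (eV), invented values

# Second-order model y = b2*t**2 + b1*t + b0 (polyfit returns highest power first)
b2, b1, b0 = np.polyfit(temps, band_gap, 2)

# Stationary point of the fitted curve; a maximum when b2 < 0
t_opt = -b1 / (2.0 * b2)
print(round(float(t_opt), 1))  # optimum temperature, °C, for these made-up data
```

With three coded factors, the same idea generalizes: fit the full quadratic with interaction terms by least squares and solve the gradient-zero system for the stationary point.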

  16. Optimization of synthesis conditions of PbS thin films grown by chemical bath deposition using response surface methodology

    International Nuclear Information System (INIS)

    Yücel, Ersin; Yücel, Yasin; Beleli, Buse

    2015-01-01

    Highlights: • For the first time, RSM and CCD used for optimization of PbS thin film. • Tri-sodium citrate, deposition time and temperature were independent variables. • PbS thin film band gap value was 2.20 eV under the optimum conditions. • Quality of the film was improved after chemometric optimization. - Abstract: In this study, PbS thin films were synthesized by chemical bath deposition (CBD) under different deposition parameters. Response surface methodology (RSM) was used to optimize synthesis parameters, including the amount of tri-sodium citrate (0.2–0.8 mL), deposition time (14–34 h) and deposition temperature (26.6–43.4 °C), for deposition of the films. A 5-level, 3-factor central composite design (CCD) was employed to evaluate the effects of the deposition parameters on the response (the optical band gap of the films). The significance of both the main effects and the interactions was investigated by analysis of variance (ANOVA). The film structures were characterized by X-ray diffraction (XRD). Morphological properties of the films were studied with a scanning electron microscope (SEM). The optical properties of the films were investigated using a UV–visible spectrophotometer. The optimum amount of tri-sodium citrate, deposition time and deposition temperature were found to be 0.7 mL, 18.07 h and 30 °C, respectively. Under these conditions, the experimental band gap of PbS was 2.20 eV, in good agreement with the value (1.98 eV) predicted by the model.

  17. Automated landmark extraction for orthodontic measurement of faces using the 3-camera photogrammetry methodology.

    Science.gov (United States)

    Deli, Roberto; Di Gioia, Eliana; Galantucci, Luigi Maria; Percoco, Gianluca

    2010-01-01

    To set up a three-dimensional photogrammetric scanning system for precise landmark measurements, without any physical contact, using a low-cost and noninvasive digital photogrammetric solution, to support several needs in clinical orthodontics and/or surgical diagnosis. Thirty coded targets were applied directly onto the soft tissue landmarks of the subject's face, and then 3 simultaneous photos were acquired photogrammetrically under room lighting conditions. For comparison, a dummy head was digitized both with the photogrammetric technique and with the laser scanner Minolta Vivid 910i (Konica Minolta, Tokyo, Japan). The precision of the landmark measurements ranged between 0.017 and 0.029 mm. The system automatically measures the spatial positions of face landmarks, from which distances and angles can be obtained. The facial measurements were compared with those made using laser scanning and a manual caliper. The adopted method gives higher precision than the others (0.022-mm mean value on points and 0.038-mm mean value on linear distances on a dummy head), is simple, and can be used easily as a standard routine. The study demonstrated the validity of photogrammetry for accurate digitization of human face landmarks. This research points out the potential of this low-cost photogrammetric approach for medical digitization.

  18. Optimization of the production conditions of the lipase produced by Bacillus cereus from rice flour through Plackett-Burman Design (PBD) and response surface methodology (RSM).

    Science.gov (United States)

    Vasiee, Alireza; Behbahani, Behrooz Alizadeh; Yazdi, Farideh Tabatabaei; Moradi, Samira

    2016-12-01

    In this study, the screening of lipase-positive bacteria from rice flour was carried out by the Rhodamine B agar plate method. Bacillus cereus was identified by the 16S rDNA method. Screening of the appropriate variables and optimization of lipase production were performed using a Plackett-Burman design (PBD) and response surface methodology (RSM). Among the isolated bacteria, an aerobic Bacillus cereus strain was recognized as the best lipase-producing bacterium (177.3 ± 20 U/mL). Given the results, the optimal enzyme production conditions were achieved with a coriander seed extract (CSE)/yeast extract ratio of 16.9 w/w, and olive oil (OO) and MgCl2 concentrations of 2.37 g/L and 24.23 mM, respectively. Under these conditions, the measured lipase activity (LA) was 343 U/mL, close to the value predicted by the model (324 U/mL), a 1.83-fold increase in LA compared with the non-optimized lipase. The kinetic parameters Vmax and Km for the lipase were measured as 0.367 μM/(min·mL) and 5.3 mM, respectively. In summary, a lipase-producing Bacillus cereus was isolated and RSM was used for the optimization of enzyme production. The CSE/yeast extract ratio of 16.9 w/w, OO concentration of 2.37 g/L and MgCl2 concentration of 24.23 mM were found to be the optimal conditions for the enzyme production process. LA under the optimal production conditions was 1.83 times higher than under non-optimal conditions. Ultimately, it can be concluded that the B. cereus isolated from rice flour is a proper source of lipase.
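Using the kinetic constants reported in this record (Vmax = 0.367 μM/(min·mL), Km = 5.3 mM), the Michaelis-Menten rate law can be evaluated directly; at S = Km the rate is Vmax/2 by definition:

```python
# Michaelis-Menten rate law with the reported lipase constants.

def mm_rate(s_mM, vmax=0.367, km_mM=5.3):
    """Reaction rate v = Vmax * S / (Km + S), in the units of Vmax."""
    return vmax * s_mM / (km_mM + s_mM)

print(round(mm_rate(5.3), 4))   # 0.1835 — exactly Vmax / 2 at S = Km
print(round(mm_rate(53.0), 4))  # approaches Vmax at high substrate concentration
```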

  19. Measuring the impact of methodological research: a framework and methods to identify evidence of impact.

    Science.gov (United States)

    Brueton, Valerie C; Vale, Claire L; Choodari-Oskooei, Babak; Jinks, Rachel; Tierney, Jayne F

    2014-11-27

    Providing evidence of impact highlights the benefits of medical research to society. Such evidence is increasingly requested by research funders and commonly relies on citation analysis. However, other indicators may be more informative. Although frameworks to demonstrate the impact of clinical research have been reported, no complementary framework exists for methodological research. Therefore, we assessed the impact of methodological research projects conducted or completed between 2009 and 2012 at the UK Medical Research Council Clinical Trials Unit Hub for Trials Methodology Research, with a view to developing an appropriate framework. Various approaches to the collection of data on research impact were employed. Citation rates were obtained using Web of Science (http://www.webofknowledge.com/) and analyzed descriptively. Semistructured interviews were conducted to obtain information on the rates of different types of research output that indicated impact for each project. Results were then pooled across all projects. Finally, email queries pertaining to methodology projects were collected retrospectively and their content analyzed. Simple citation analysis established the citation rates per year since publication for 74 methodological publications; however, further detailed analysis revealed more about the potential influence of these citations. Interviews that spanned 20 individual research projects demonstrated a variety of types of impact not otherwise collated, for example: applications and further developments of the research; release of software and provision of guidance materials to facilitate uptake; and formation of new collaborations and broad dissemination. Finally, 194 email queries relating to 6 methodological projects were received from 170 individuals across 23 countries. They provided further evidence that the methodologies were impacting on research and research practice, both nationally and internationally. We have used the information

  20. Net ecosystem carbon dioxide exchange in tropical rainforests - sensitivity to environmental drivers and flux measurement methodology

    Science.gov (United States)

    Fu, Z.; Stoy, P. C.

    2017-12-01

    Tropical rainforests play a central role in the Earth system services of carbon metabolism, climate regulation, biodiversity maintenance, and more. They are under threat from direct anthropogenic effects, including deforestation, and indirect anthropogenic effects, including climate change. A synthesis of the factors that determine the net ecosystem exchange of carbon dioxide (NEE) across multiple time scales in different tropical rainforests has not been undertaken to date. Here, we study NEE and its components, gross primary productivity (GPP) and ecosystem respiration (RE), across thirteen tropical rainforest research sites with 63 total site-years of eddy covariance data. Results reveal that the five ecosystems with the greatest carbon uptake (GPP magnitudes greater than 3000 g C m-2 y-1) sequester less carbon - or even lose it - on an annual basis at the ecosystem scale. This counterintuitive result arises because high GPP is compensated by similar magnitudes of RE. Sites that provided subcanopy CO2 storage observations had higher average magnitudes of GPP and RE and consequently lower NEE, highlighting the importance of measurement methodology for understanding carbon dynamics in tropical rainforests. Vapor pressure deficit (VPD) constrained GPP at all sites, but to differing degrees. Many environmental variables are significantly related to NEE at time scales greater than one year, and NEE at a rainforest in Malaysia is significantly related to soil moisture variability at seasonal time scales. Climate projections from 13 general circulation models (CMIP5) under representative concentration pathway (RCP) 8.5 suggest that many current tropical rainforest sites on the cooler end of the current temperature range are likely to reach a climate space similar to that of present-day warmer sites by the year 2050, while warmer sites will reach a climate space not currently experienced.
Results demonstrate the need to quantify if mature tropical trees acclimate to heat and

  1. Methodological Considerations and Comparisons of Measurement Results for Extracellular Proteolytic Enzyme Activities in Seawater

    Directory of Open Access Journals (Sweden)

    Yumiko Obayashi

    2017-10-01

    Full Text Available Microbial extracellular hydrolytic enzymes that degrade organic matter in aquatic ecosystems play key roles in the biogeochemical carbon cycle. To provide linkages between hydrolytic enzyme activities and genomic or metabolomic studies in aquatic environments, reliable measurements are required for many samples at one time. Extracellular proteases are one of the most important classes of enzymes in aquatic microbial ecosystems, and protease activities in seawater are commonly measured using fluorogenic model substrates. Here, we examined several concerns regarding measurements of extracellular protease activities (aminopeptidase, trypsin-type, and chymotrypsin-type activities) in seawater. With a fluorometric microplate reader, low-protein-binding 96-well microplates produced reliable enzymatic activity readings, while regular polystyrene microplates produced readings that showed significant underestimation, especially for trypsin-type proteases. From the results of kinetic experiments, this underestimation was thought to be attributable to the adsorption of both enzymes and substrates onto the microplate. We also examined the solvent type and concentration in the working solutions of oligopeptide-analog fluorogenic substrates, using dimethyl sulfoxide (DMSO) and 2-methoxyethanol (MTXE). The results showed that both 2% DMSO and 2% MTXE (final concentrations of solvent in the mixture of seawater sample and substrate working solution) provide similarly reliable data for most of the tested substrates, except for some substrates that did not dissolve completely under these assay conditions. Sample containers are also important for maintaining the level of enzyme activity in natural seawater samples. In small polypropylene containers (e.g., standard 50-mL centrifuge tubes), protease activities in seawater samples decreased rapidly, causing underestimation of natural activities, especially for trypsin-type and chymotrypsin-type proteases. In

  2. Measurement of leukocyte rheology in vascular disease: clinical rationale and methodology. International Society of Clinical Hemorheology.

    Science.gov (United States)

    Wautier, J L; Schmid-Schönbein, G W; Nash, G B

    1999-01-01

    The measurement of leukocyte rheology in vascular disease is a recent development with a wide range of new opportunities. The International Society of Clinical Hemorheology has asked an expert panel to propose guidelines for the investigation of leukocyte rheology in clinical situations. This article first discusses the mechanical, adhesive and related functional properties of leukocytes (especially neutrophils) which influence their circulation, and establishes the rationale for clinically related measurements of the parameters which describe them. It is concluded that quantitation of leukocyte adhesion molecules and of their endothelial receptors may assist understanding of leukocyte behaviour in vascular disease, along with measurements of the flow resistance of leukocytes, free radical production, degranulation and gene expression. For instance, vascular cell adhesion molecule (VCAM-1) is abnormally present on endothelial cells in atherosclerosis, diabetes mellitus and inflammatory conditions. Soluble forms of intercellular adhesion molecule (ICAM-1) or VCAM can be found elevated in the blood of patients with rheumatoid arthritis or infectious diseases. In the second part of the article, possible technical approaches are presented and possible avenues for leukocyte rheological investigations are discussed.

  3. Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist.

    Science.gov (United States)

    Terwee, Caroline B; Mokkink, Lidwine B; Knol, Dirk L; Ostelo, Raymond W J G; Bouter, Lex M; de Vet, Henrica C W

    2012-05-01

    The COSMIN checklist is a standardized tool for assessing the methodological quality of studies on measurement properties. It contains 9 boxes, each dealing with one measurement property, with 5-18 items per box about design aspects and statistical methods. Our aim was to develop a scoring system for the COSMIN checklist to calculate quality scores per measurement property when using the checklist in systematic reviews of measurement properties. The scoring system was developed based on discussions among experts and testing of the scoring system on 46 articles from a systematic review. Four response options were defined for each COSMIN item (excellent, good, fair, and poor). A quality score per measurement property is obtained by taking the lowest rating of any item in a box ("worst score counts"). Specific criteria for excellent, good, fair, and poor quality for each COSMIN item are described. In defining the criteria, the "worst score counts" algorithm was taken into consideration. This means that only fatal flaws were defined as poor quality. The scores of the 46 articles show how the scoring system can be used to provide an overview of the methodological quality of studies included in a systematic review of measurement properties. Based on experience in testing this scoring system on 46 articles, the COSMIN checklist with the proposed scoring system seems to be a useful tool for assessing the methodological quality of studies included in systematic reviews of measurement properties.
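The "worst score counts" algorithm described in this record is simple to state in code: the quality score for a box is the lowest rating of any of its items. A minimal sketch (the item ratings are invented):

```python
# Ratings ordered poor < fair < good < excellent, matching the four
# COSMIN response options; box score = worst item rating.
RANK = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}

def box_score(item_ratings):
    """Quality score for one COSMIN box: the lowest rating of any item."""
    return min(item_ratings, key=RANK.__getitem__)

print(box_score(["excellent", "good", "fair", "good"]))  # fair
print(box_score(["good", "poor", "excellent"]))          # poor
```

This is why only fatal flaws were defined as "poor": a single poor item drags the whole box down to poor.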

  4. Measuring sporadic gastrointestinal illness associated with drinking water - an overview of methodologies.

    Science.gov (United States)

    Bylund, John; Toljander, Jonas; Lysén, Maria; Rasti, Niloofar; Engqvist, Jannes; Simonsson, Magnus

    2017-06-01

    There is an increasing awareness that drinking water contributes to sporadic gastrointestinal illness (GI) in high income countries of the northern hemisphere. A literature search was conducted in order to review: (1) methods used for investigating the effects of public drinking water on GI; (2) evidence of possible dose-response relationship between sporadic GI and drinking water consumption; and (3) association between sporadic GI and factors affecting drinking water quality. Seventy-four articles were selected, key findings and information gaps were identified. In-home intervention studies have only been conducted in areas using surface water sources and intervention studies in communities supplied by ground water are therefore needed. Community-wide intervention studies may constitute a cost-effective alternative to in-home intervention studies. Proxy data that correlate with GI in the community can be used for detecting changes in the incidence of GI. Proxy data can, however, not be used for measuring the prevalence of illness. Local conditions affecting water safety may vary greatly, making direct comparisons between studies difficult unless sufficient knowledge about these conditions is acquired. Drinking water in high-income countries contributes to endemic levels of GI and there are public health benefits for further improvements of drinking water safety.

  5. Symbiotic dinitrogen fixation measurement in vetch-barley mixed swards using 15N methodology

    International Nuclear Information System (INIS)

    Kurdali, F.; Sharabi, N.E.

    1995-01-01

    A field experiment on vetch and barley grown in monoculture and in mixed culture (3:1) under rain-fed conditions was conducted in the 1991-1992 and 1992-1993 growing seasons. Three harvests were taken from one treatment throughout the growing season, while the other plots were harvested once at the physiological maturity stage. Our results showed the importance of the mixed cropping system of vetch and barley grown under rain-fed conditions in terms of dry matter production, total nitrogen content and land use efficiency expressed as the land equivalent ratio (LER). This advantage is clearest in the plants harvested once at the end of the season. It is therefore worthwhile to grow legumes and cereals under rain-fed conditions, leave them until a late stage of growth and have them grazed directly by animals. On the other hand, no more than two harvests should be taken in a season, because additional harvests may weaken plant growth and lead to poor production, given that adequate rainfall after the second harvest (April) cannot be relied upon. Nitrogen fixation efficiency in vetch, measured by the 15N isotope dilution method, varied with the number of harvests and the cropping procedure adopted. Comparing the %Ndfa of vetch between monoculture and mixed culture showed that the values in most cases were higher in mixed culture. Competition between vetch and barley for soil N uptake in the mixed stand led barley to meet its N requirements from the soil, while the poor competitiveness of vetch for soil N uptake drove it to fix more nitrogen. On the other hand, residual N after harvest was higher in the mixed treatment than in the others. A positive and high final nitrogen balance was observed with the inclusion of vetch in the mixture. Under the current experimental conditions we excluded the possibility of N transfer from vetch to barley, owing to the insignificant differences in the value of 15N atom excess for

  6. Symbiotic dinitrogen fixation measurement in vetch-barley mixed swards using {sup 15} N methodology

    Energy Technology Data Exchange (ETDEWEB)

    Kurdali, F; Sharabi, N E [Atomic Energy Commission, Damascus (Syrian Arab Republic). Dept. of Radiation Agriculture

    1995-01-01

    A field experiment on vetch and barley grown in monoculture and in mixed culture (3:1) under rain-fed conditions was conducted in the 1991-1992 and 1992-1993 growing seasons. Three harvests were taken from one treatment throughout the growing season. Our results showed the importance of the mixed cropping system of vetch and barley grown under rain-fed conditions in terms of dry matter production, total nitrogen content and land use efficiency expressed as the land equivalent ratio (LER). This advantage is clearest in the plants harvested once at the end of the season. It is therefore worthwhile to grow legumes and cereals under rain-fed conditions, leave them until a late stage of growth and have them grazed directly by animals. On the other hand, no more than two harvests should be taken in a season, because additional harvests may weaken plant growth and lead to poor production, given that adequate rainfall after the second harvest (April) cannot be relied upon. Nitrogen fixation efficiency in vetch, measured by the {sup 15}N isotope dilution method, varied with the number of harvests and the cropping procedure adopted. Comparing the %Ndfa of vetch between monoculture and mixed culture showed that the values in most cases were higher in mixed culture. Competition between vetch and barley for soil N uptake in the mixed stand led barley to meet its N requirements from the soil, while the poor competitiveness of vetch for soil N uptake drove it to fix more nitrogen. On the other hand, residual N after harvest was higher in the mixed treatment than in the others. A positive and high final nitrogen balance was observed with the inclusion of vetch in the mixture. Under the current experimental conditions we excluded the possibility of N transfer from vetch to barley, owing to the insignificant differences in the value of {sup 15}N atom excess for barley between the two types of farming. 35 refs., 2 figs., 15 tabs.
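The %Ndfa figure reported in isotope dilution studies like this one is conventionally computed from the 15N enrichment (atom % excess above natural abundance) of the fixing legume relative to a non-fixing reference crop grown on the same labelled soil. A minimal sketch of that standard relation follows; the function name and numeric values are illustrative, not taken from the study.

```python
def pct_ndfa(excess_fixing, excess_reference):
    """Percent of plant N derived from the atmosphere by 15N isotope
    dilution: %Ndfa = (1 - E_fix / E_ref) * 100, where E is the atom %
    15N excess of the fixing crop and of a non-fixing reference crop."""
    if excess_reference <= 0:
        raise ValueError("reference crop must show 15N enrichment")
    return (1.0 - excess_fixing / excess_reference) * 100.0

# Illustrative values: vetch at 0.10 atom % excess against a barley
# reference at 0.40 atom % excess means 75% of vetch N was fixed.
print(round(pct_ndfa(0.10, 0.40), 1))  # 75.0
```

The dilution logic: the more atmospheric N2 (at natural abundance) the legume fixes, the more its tissue 15N enrichment is diluted below that of the soil-dependent reference.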

  7. Validity and reliability of using photography for measuring knee range of motion: a methodological study

    Directory of Open Access Journals (Sweden)

    Adie Sam

    2011-04-01

    Background: The clinimetric properties of knee goniometry are essential to appreciate in light of its extensive use in the orthopaedic and rehabilitative communities. Intra-observer reliability is thought to be satisfactory, but the validity and inter-rater reliability of knee goniometry often demonstrate unacceptable levels of variation. This study tests the validity and reliability of measuring knee range of motion using goniometry and photographic records. Methods: Design: methodological study assessing the validity and reliability of one method (the 'Marker Method', which uses a skin marker over the greater trochanter) and another method (the 'Line of Femur Method', which requires estimation of the line of the femur). Setting: radiology and orthopaedic departments of two teaching hospitals. Participants: 31 volunteers (13 arthritic and 18 healthy subjects). Knee range of motion was measured radiographically and photographically using a goniometer. Three assessors were assessed for reliability and validity. Main outcomes: agreement between methods and within raters was assessed using concordance correlation coefficients (CCCs); agreement between raters was assessed using intra-class correlation coefficients (ICCs); 95% limits of agreement for the mean difference were computed for all paired comparisons. Results: Validity (referenced to radiographs): each method for all 3 raters yielded very high CCCs for flexion (0.975 to 0.988), and moderate to substantial CCCs for extension angles (0.478 to 0.678). The mean differences and 95% limits of agreement were narrower for flexion than they were for extension. Intra-rater reliability: for flexion and extension, very high CCCs were attained for all 3 raters for both methods, with slightly greater CCCs seen for flexion (CCCs varied from 0.981 to 0.998). Inter-rater reliability: for both methods, very high ICCs (min to max: 0.891 to 0.995) were obtained for flexion and extension. Slightly higher coefficients were obtained
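The agreement statistic reported above, the concordance correlation coefficient, is presumably Lin's CCC, which penalizes both poor correlation and systematic offset between two measurement series. A minimal sketch of that statistic; the function name and angle values are illustrative, not data from the study.

```python
def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two series:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (1/n) moments. Equals 1 only for perfect agreement;
    a constant offset between methods lowers it even at correlation 1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical flexion angles (degrees) from two measurement methods.
radiograph = [130.0, 135.5, 128.0, 142.0]
photograph = [131.0, 136.0, 127.5, 141.0]
print(round(concordance_ccc(radiograph, photograph), 3))
```

Comparing a series against itself returns exactly 1.0, which is why CCC (rather than plain Pearson correlation) is preferred when validating one method against a reference.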

  8. Evaluation of methodological aspects of digestibility measurements in ponies fed different haylage to concentrate ratios.

    Science.gov (United States)

    Schaafstra, F J W C; van Doorn, D A; Schonewille, J T; van Riet, M M J; Visser, P; Blok, M C; Hendriks, W H

    2017-11-01

    Methodological aspects of digestibility measurements were studied in four Welsh pony geldings consuming haylage-based diets with increasing proportions of a pelleted concentrate according to a 4×4 Latin square design experiment. Ponies were fed four experimental, iso-energetic (net energy (NE) basis) diets (i.e. 22 MJ NE/day) with increasing proportions of a pelleted concentrate (C) in relation to haylage (H). The absolute amounts of diet dry matter fed per day were 4.48 kg of H (100H), 3.36 and 0.73 kg of H and C (75H25C), 2.24 and 1.45 kg of H and C (50H50C) and 1.12 and 2.17 kg of H and C (25H75C). Diets were supplemented with minerals, vitamins and TiO2 (3.7 g Ti/day). Voluntarily voided faeces were quantitatively collected daily during 10 consecutive days and analysed for moisture, ash, ADL, acid-insoluble ash (AIA) and Ti. A minimum faeces collection period of 6 consecutive days, along with a 14-day period to adapt the animals to the diets and accustom them to the collection procedure, is recommended to obtain accurate estimates of dry matter digestibility and organic matter digestibility (OMD) in equids fed haylage-based diets supplemented with concentrate. In addition, the recovery of AIA, ADL and Ti was determined and evaluated. Mean faecal recovery over 10 consecutive days across diets for AIA, ADL and Ti was 124.9% (SEM 2.9), 108.7% (SEM 2.0) and 97.5% (SEM 0.9), respectively. Cumulative faecal recovery of AIA differed significantly between treatments, indicating that AIA is inadequate for estimating OMD in equines. In addition, evaluation of the CV of mean cumulative faecal recoveries obtained by AIA, ADL and Ti showed greater variation in faecal excretion of AIA (9.1) and ADL (7.4) than of Ti (3.7). The accuracy of prediction of OMD was higher with the use of Ti than ADL. The use of Ti is preferred as a marker in digestibility trials in equines fed haylage-based diets supplemented with increasing amounts of pelleted concentrate.
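The two quantities evaluated above, faecal marker recovery and marker-based digestibility, follow standard relations for indigestible markers. A minimal sketch under that assumption; function names and the numeric example are illustrative, not the study's data.

```python
def faecal_recovery_pct(marker_intake_g, marker_excreted_g):
    """Cumulative faecal marker recovery (%) over a collection period.
    An ideal inert marker recovers at 100%."""
    return marker_excreted_g / marker_intake_g * 100.0

def marker_omd(om_diet, om_faeces, marker_diet, marker_faeces):
    """Apparent organic matter digestibility from an indigestible marker:
    OMD = 1 - (marker_diet / marker_faeces) * (om_faeces / om_diet),
    all values expressed as concentrations in dry matter."""
    return 1.0 - (marker_diet / marker_faeces) * (om_faeces / om_diet)

# Illustrative: 3.7 g Ti fed per day for 10 days (37 g), 36 g recovered.
print(round(faecal_recovery_pct(37.0, 36.0), 1))  # 97.3
```

A recovery far above 100% (as observed here for AIA) signals contamination or analytical interference, which is exactly why such a marker biases the digestibility estimate.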

  9. Measurement of Vehicle Air Conditioning Pull-Down Period

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, John F [ORNL; Huff, Shean P [ORNL; Moore, Larry G [ORNL; West, Brian H [ORNL

    2016-08-01

    Air conditioner usage was characterized for high heat-load summer conditions during short driving trips using a 2009 Ford Explorer and a 2009 Toyota Corolla. Vehicles were parked in the sun with windows closed to allow the cabin to become hot. Experiments were conducted by entering the instrumented vehicles in this heated condition and driving on-road with the windows up, the air conditioning set to maximum cooling and maximum fan speed, and the air flow set to recirculate cabin air rather than pull in outside humid air. The main purpose was to determine the length of time the air conditioner system would remain at or very near maximum cooling power under these severe-duty conditions. Because of the variable and somewhat uncontrolled nature of the experiments, they serve only to show that for short vehicle trips, air conditioning can remain near or at full cooling capacity for 10 minutes or significantly longer, and the cabin may be uncomfortably warm during much of this time.

  10. Measuring Effectiveness of Persuasive Games Using an Informative Control Condition

    Directory of Open Access Journals (Sweden)

    Mara Soekarjo

    2015-06-01

    Research about the effectiveness of persuasive games is still emerging. This article presents a literature review of studies that empirically evaluate the effectiveness of persuasive games. The review concluded that limited empirical evidence is currently available to prove their effectiveness in attitude change. It further revealed that almost no study employed an informative control condition, making it difficult to conclude that the game was more effective than a control condition. Next, an empirical study with a pretest-posttest design tested whether change in attitude differed between people playing the persuasive game "EnerCities" and a control condition in which participants read a document with highly similar information. No significant differences in increase of attitude or knowledge were found between participants who played the game and participants in the informative control condition. Based on the results of the literature review and the empirical study presented, it hence cannot be concluded that playing a game leads to a greater change in attitude or knowledge acquisition than experiencing conventional media would. Future work should employ designs with proper control conditions and focus on which game features lead to significant effects.

  11. Research on sorption behavior of radionuclides under shallow land environment. Mechanism and standard methodologies for measurement of distribution coefficients of radionuclides

    International Nuclear Information System (INIS)

    Sakamoto, Yoshiaki; Tanaka, Tadao; Takebe, Shinichi; Nagao, Seiya; Ogawa, Hiromichi; Komiya, Tomokazu; Hagiwara, Shigeru

    2001-01-01

    This study comprises two categories of research. The first is research on the sorption mechanisms of long half-life radionuclides (Technetium-99, TRU elements and U-series radionuclides) on soils and rocks, including the development of a database of radionuclide distribution coefficients. The database of distribution coefficients, with information about measurement conditions such as shaking method, soil characteristics and solution composition, has already been opened to the public (JAERI-DATABASE 20001003). The second is an investigation into a standard methodology for measuring the distribution coefficient of radionuclides on soils, rocks and engineering materials in Japan. (author)
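Distribution coefficients of the kind collected in such a database are conventionally obtained from batch (shaking) tests via Kd = ((C0 - Ce) / Ce) * (V / m). A minimal sketch of that conventional calculation; the function name and numbers are illustrative, not entries from the JAERI database.

```python
def distribution_coefficient(c_initial, c_equilibrium, volume_ml, solid_mass_g):
    """Batch-method distribution coefficient Kd [mL/g]:
    Kd = ((C0 - Ce) / Ce) * (V / m), where C0 and Ce are radionuclide
    concentrations in solution before and after shaking with the solid,
    V is the solution volume and m the mass of soil or rock."""
    if c_equilibrium <= 0:
        raise ValueError("equilibrium concentration must be positive")
    sorbed_ratio = (c_initial - c_equilibrium) / c_equilibrium
    return sorbed_ratio * (volume_ml / solid_mass_g)

# Illustrative batch test: half the activity sorbs from 100 mL of
# solution onto 10 g of soil, giving Kd = 10 mL/g.
print(distribution_coefficient(1.0, 0.5, 100.0, 10.0))  # 10.0
```

Because Kd depends strongly on the conditions recorded alongside it (shaking method, soil characteristics, solution composition), a coefficient is only meaningful together with that metadata, which motivates the standardization effort described above.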

  12. The salt tolerance of Quinoa measured under field conditions

    DEFF Research Database (Denmark)

    Razzaghi, Fatemeh; Ahmadi, Seyed Hamid; Jensen, Christian Richardt

    conditions. In this study the threshold electrical conductivity of soil saturation extract (ECe) and maximum ECe corresponding to no economic yield of quinoa (cv. Titicaca) were determined. The experimental factors were five levels of saline solution (0, 10, 20, 30 and 40 dS m-1) imposed during flowering...

  13. Thermal decomposition of hydroxylamine: isoperibolic calorimetric measurements at different conditions.

    Science.gov (United States)

    Adamopoulou, Theodora; Papadaki, Maria I; Kounalakis, Manolis; Vazquez-Carreto, Victor; Pineda-Solano, Alba; Wang, Qingsheng; Mannan, M Sam

    2013-06-15

    Thermal decomposition of hydroxylamine, NH2OH, was responsible for two serious accidents. However, its reactive behavior and the synergy of factors affecting its decomposition are not well understood. In this work, the global enthalpy of hydroxylamine decomposition was measured in the temperature range of 130-150 °C employing isoperibolic calorimetry. Measurements were performed in a metal reactor, employing 30-80 ml solutions containing 1.4-20 g of pure hydroxylamine (2.8-40 g of the supplied reagent). The measurements showed that increased concentration or temperature results in higher global enthalpies of reaction per unit mass of reactant. At 150 °C, specific enthalpies as high as 8 kJ per gram of hydroxylamine were measured, although in general they were in the range of 3-5 kJ g(-1). Accurate measurement of the generated heat proved to be a cumbersome task because (a) it is difficult to identify the end of decomposition, which after a fast initial stage proceeds very slowly, especially at lower temperatures, and (b) the gas environment affects the reaction rate. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Improving process methodology for measuring plutonium burden in human urine using fission track analysis

    International Nuclear Information System (INIS)

    Krahenbuhl, M.P.; Slaughter, D.M.

    1998-01-01

    The aim of this paper is to clearly define the chemical and nuclear principles governing Fission Track Analysis (FTA) used to determine environmental levels of 239Pu in urine. The paper also addresses deficiencies in FTA methodology and introduces improvements to make FTA a more reliable research tool. Our refined methodology, described herein, includes a chemically-induced precipitation phase followed by anion exchange chromatography, and employs a chemical tracer, 236Pu. We have been able to establish an inverse correlation between Pu recovery and sample volume, and our data confirm that increases in sample volume do not result in higher accuracy or lower detection limits. We conclude that in subsequent studies samples should be limited to approximately two liters. The Pu detection limit for a sample of this volume is 2.8 μBq/l. (author)

  15. Optimization of the Extraction Conditions for Buddleja officinalis Maxim. Using Response Surface Methodology and Exploration of the Optimum Harvest Time

    Directory of Open Access Journals (Sweden)

    Guoyong Xie

    2017-11-01

    The Box-Behnken design was used to evaluate the effects of the methanol concentration (60–100%), liquid-to-solid ratio (20:1 to 40:1 mL/g) and extraction time (20–40 min) on the yield of 11 constituents from Buddleja officinalis Maxim using ultrasound-assisted extraction. The Derringer's desirability function approach showed that the modified optimum extraction conditions were: 76% methanol concentration, 33 min extraction time and a 34:1 mL/g solvent-to-solid ratio. Under these conditions, the experimentally measured yields of the compounds were in good agreement with the predicted values. An accurate and sensitive method was also established using high-performance liquid chromatography with diode-array detection for the simultaneous determination of the 11 compounds in Buddleja officinalis. The newly developed method was used to determine the amounts of bioactive components in Buddleja officinalis during four different growth stages. According to these results, we recommend that the full blossom stage is the best time for harvesting this plant to obtain the highest yield of crude materials.

  16. Optimization of the Extraction Conditions for Buddleja officinalis Maxim. Using Response Surface Methodology and Exploration of the Optimum Harvest Time.

    Science.gov (United States)

    Xie, Guoyong; Li, Ran; Han, Yu; Zhu, Yan; Wu, Gang; Qin, Minjian

    2017-11-01

    The Box-Behnken design was used to evaluate the effects of the methanol concentration (60-100%), liquid to solid ratio (20:1 to 40:1 mL/g) and extraction time (20-40 min) on the yield of 11 constituents from Buddleja officinalis Maxim using ultrasound-assisted extraction. The Derringer's desirability function approach showed that the modified optimum extraction conditions were: 76% methanol concentration, 33 min extraction time and a 34:1 mL/g solvent to solid ratio. Under these conditions, the experimentally measured yields of the compounds were in good agreement with the predicted values. An accurate and sensitive method was also established using high-performance liquid chromatography with diode-array detection for the simultaneous determination of the 11 compounds in Buddleja officinalis . The newly developed method was used to determine the amounts of bioactive components in Buddleja officinalis during four different growth stages. According to these results, we recommend that the full blossom stage is the best time for harvesting this plant to obtain the highest yield of crude materials.
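Derringer's desirability approach, used above to pick a single optimum across 11 responses, maps each predicted response onto a [0, 1] desirability and combines them by geometric mean. A minimal sketch under that standard formulation; the linear "larger-is-better" ramp, function names and values are illustrative assumptions, not the authors' exact setup.

```python
def desirability_larger_is_better(y, y_min, y_max, weight=1.0):
    """Derringer-type individual desirability for a response to maximize:
    0 below y_min, 1 above y_max, a power-law ramp in between."""
    if y <= y_min:
        return 0.0
    if y >= y_max:
        return 1.0
    return ((y - y_min) / (y_max - y_min)) ** weight

def overall_desirability(ds):
    """Overall desirability D: geometric mean of the individual d_i.
    Any single d_i of zero forces D to zero."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Three hypothetical constituent yields already scaled to [0, 1] ranges.
d_values = [desirability_larger_is_better(y, 0.0, 1.0) for y in (0.9, 0.8, 0.7)]
print(round(overall_desirability(d_values), 3))  # 0.796
```

The geometric mean is the key design choice: unlike an arithmetic mean, it prevents one excellent yield from compensating for a constituent whose yield is unacceptable.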

  17. Measuring ICT Use and Contributing Conditions in Primary Schools

    Science.gov (United States)

    Vanderlinde, Ruben; Aesaert, Koen; van Braak, Johan

    2015-01-01

    Information and communication technology (ICT) use became of major importance for primary schools across the world as ICT has the potential to foster teaching and learning processes. ICT use is therefore a central measurement concept (dependent variable) in many ICT integration studies. This data paper presents two datasets (2008 and 2011) that…

  18. Modeling real conditions of 'Ukrytie' object in 3D measurement

    International Nuclear Information System (INIS)

    Podbereznyj, S.S.

    2001-01-01

    The article describes a technology for building, on the basis of the AutoCAD design package and the 3D Studio and 3DS MAX computer graphics and animation packages, a 3D model of the geometrical parameters of the current condition of the building structures, technological equipment, fuel-containing materials, concrete and water of the ruined Unit 4 ('Ukryttia' object) of the Chernobyl NPP. The model built using this technology will later serve as a basis for automating design and for computer modeling of processes at the 'Ukryttia' object

  19. Effect of measurement conditions on three-dimensional roughness values, and development of measurement standard

    International Nuclear Information System (INIS)

    Fabre, A; Brenier, B; Raynaud, S

    2011-01-01

    Friction and corrosion behaviour and fatigue lifetime of mechanical components are influenced by their boundary and subsurface properties. Surface integrity is studied on mechanical components in order to improve their service behaviour. Roughness is one of the main geometrical properties to be qualified and quantified. Components can be produced by a complex process: forming, machining and treatment can be combined to realize parts with complex shapes. Three-dimensional roughness is therefore needed to characterize such parts with complex shapes and textured surfaces. With contact or non-contact measurements (contact stylus, confocal microprobe, interferometer), three-dimensional roughness is quantified by calculating the pertinent parameters defined by the international standard PR EN ISO 25178-2:2008. An analysis will identify the influence of measurement conditions on three-dimensional parameters. The purpose of this study is to analyse the variation of roughness results using contact stylus or optical apparatus. The second aim of this work is to develop a measurement standard well adapted to qualifying contact and non-contact apparatus.

  20. A study of calculation methodology and experimental measurements of the kinetic parameters for source driven subcritical systems

    International Nuclear Information System (INIS)

    Lee, Seung Min

    2009-01-01

    This work presents a theoretical study of reactor kinetics focusing on the methodology of calculation and the experimental measurement of the so-called kinetic parameters. A comparison between the methodology based on Dulla's formalism and the classical method is made. The objective is to exhibit the dependence of the parameters on subcriticality level and perturbation. Two different slab-type systems were considered: a thermal one and a fast one, both with homogeneous media. A one-group diffusion model was used for the fast system and a two-group diffusion model for the thermal system, considering in both cases a single precursor family. The solutions were obtained using the expansion method. Descriptions of the main experimental methods for measuring the kinetic parameters are also presented, in order to raise the question of the compatibility of these methods in the subcritical region. (author)

  1. Sources of particulate matter components in the Athabasca oil sands region: investigation through a comparison of trace element measurement methodologies

    Science.gov (United States)

    Phillips-Smith, Catherine; Jeong, Cheol-Heon; Healy, Robert M.; Dabek-Zlotorzynska, Ewa; Celo, Valbona; Brook, Jeffrey R.; Evans, Greg

    2017-08-01

    The province of Alberta, Canada, is home to three oil sands regions which, combined, contain the third largest deposit of oil in the world. Of these, the Athabasca oil sands region is the largest. As part of Environment and Climate Change Canada's program in support of the Joint Canada-Alberta Implementation Plan for Oil Sands Monitoring program, concentrations of trace elements in PM2.5 (particulate matter smaller than 2.5 µm in diameter) were measured through two campaigns that involved different methodologies: a long-term filter campaign and a short-term intensive campaign. In the long-term campaign, 24 h filter samples were collected once every 6 days over a 2-year period (December 2010-November 2012) at three air monitoring stations in the regional municipality of Wood Buffalo. For the intensive campaign (August 2013), hourly measurements were made with an online instrument at one air monitoring station; daily filter samples were also collected. The hourly and 24 h filter data were analyzed individually using positive matrix factorization. Seven emission sources of PM2.5 trace elements were thereby identified: two types of upgrader emissions, soil, haul road dust, biomass burning, and two sources of mixed origin. The upgrader emissions, soil, and haul road dust sources were identified through both methodologies, and both methodologies identified a mixed source, but these exhibited more differences than similarities. The second upgrader emissions and biomass burning sources were only resolved by the hourly and filter methodologies, respectively. The similarity of the receptor modeling results from the two methodologies provided reassurance as to the identity of the sources. Overall, much of the PM2.5-related trace elements were found to be anthropogenic, or at least to be aerosolized through anthropogenic activities. These emissions may in part explain the previously reported higher levels of trace elements in snow, water, and biota samples collected

  2. Biological Nitrogen Fixation Efficiency in Brazilian Common Bean Genotypes as Measured by {sup 15}N Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Franzini, V. I.; Mendes, F. L. [Brazilian Agricultural Research Corporation, EMBRAPA-Amazonia Oriental, Belem, PA (Brazil); Muraoka, T.; Trevisam, A. R. [Center for Nuclear Energy in Agriculture, University of Sao Paulo, Piracicaba, SP (Brazil); Adu-Gyamfi, J. J. [Soil and Water Management and Crop Nutrition Laboratory, International Atomic Energy Agency, Seibersdorf (Austria)

    2013-11-15

    Common bean (Phaseolus vulgaris L.) represents the main source of protein for the Brazilian and other Latin-American populations. Unlike soybean, which is very efficient in fixing atmospheric N{sub 2} symbiotically, common bean does not dispense with the need for N fertilizer application, as the biologically fixed N (BNF) seems incapable of supplying the total N required by the crop. An experiment under controlled conditions was conducted in Piracicaba, Brazil, to assess N{sub 2} fixation of 25 genotypes of common bean (Phaseolus vulgaris L.). BNF was measured by {sup 15}N isotope dilution using a non-N{sub 2}-fixing bean genotype as a reference crop. The common bean genotypes were grown in soil with low (2.2 mg N kg{sup -1} soil) or high N content (200 mg N kg{sup -1} soil), the latter achieved through N fertilizer application as urea-{sup 15}N (31.20 and 1.4 atom % {sup 15}N, respectively). The bean seeds were inoculated with the Rhizobium tropici CIAT 899 strain and the plants were harvested at the grain maturity stage. The contribution of BNF was on average 75% of total plant N content, and there were differences in N-fixing capacity among the bean genotypes. The most efficient genotypes were Horizonte, Roxo 90, Grafite, Apore and Vereda, when grown in high N soil. None of the genotypes grown in low N soil was efficient in producing grains compared to those grown in high N soil, and therefore BNF was not able to supply the total N demand of the bean crop. (author)

  3. Atrazine distribution measured in soil and leachate following infiltration conditions.

    Science.gov (United States)

    Neurath, Susan K; Sadeghi, Ali M; Shirmohammadi, Adel; Isensee, Allan R; Torrents, Alba

    2004-01-01

    Atrazine transport through packed 10 cm soil columns representative of the 0-10 cm soil horizon was observed by measuring the atrazine recovery in the total leachate volume, and upper and lower soil layers following infiltration of 7.5 cm water using a mechanical vacuum extractor (MVE). Measured recoveries were analyzed to understand the influence of infiltration rate and delay time on atrazine transport and distribution in the column. Four time periods (0.28, 0.8, 1.8, and 5.5 h) representing very high to moderate infiltration rates (26.8, 9.4, 4.2, and 1.4 cm/h) were used. Replicate soil columns were tested immediately and following a 2-d delay after atrazine application. Results indicate atrazine recovery in leachate was independent of infiltration rate, but significantly lower for infiltration following a 2-d delay. Atrazine distribution in the 0-1 and 9-10 cm soil layers was affected by both infiltration rate and delay. These results are in contrast with previous field and laboratory studies that suggest that atrazine recovery in the leachate increases with increasing infiltration rate. It appears that the difference in atrazine recovery measured using the MVE and other leaching experiments using intact soil cores from this field site and the rain simulation equipment probably illustrates the effect of infiltrating water interacting with the atrazine present on the soil surface. This work suggests that atrazine mobilization from the soil surface is also dependent on interactions of the infiltrating water with the soil surface, in addition to the rate of infiltration through the surface soil.

  4. Continuous measurements of soil radon under regular field conditions

    International Nuclear Information System (INIS)

    Font, LL

    1999-01-01

    Continuous soil radon measurements were performed within the framework of a European Community radon network using the Clipperton II detector. It has been found that in some periods, soil radon levels obtained with one Clipperton II probe are very different from those obtained with another probe placed at the same depth but a short distance apart. It has also been found that the response of the probes to a sudden change of radon concentration is controlled by the diffusion process along the bottom tube of the probe. This study therefore shows that the experimental data can be attributed to the natural behaviour of soil radon

  5. A New Perspective on Binaural Integration Using Response Time Methodology: Super Capacity Revealed in Conditions of Binaural Masking Release

    Directory of Open Access Journals (Sweden)

    Jennifer eLentz

    2014-08-01

    This study applied reaction-time based methods to assess the workload capacity of binaural integration by comparing reaction time distributions for monaural and binaural tone-in-noise detection tasks. In the diotic contexts, an identical tone + noise stimulus was presented to each ear. In the dichotic contexts, an identical noise was presented to each ear, but the tone was presented to one of the ears 180° out of phase with respect to the other ear. Accuracy-based measurements have demonstrated a much lower signal detection threshold for the dichotic versus the diotic conditions, but accuracy-based techniques do not allow for assessment of system dynamics or resource allocation across time. Further, reaction times allow comparisons between these conditions at the same signal-to-noise ratio. Here, we apply a reaction-time based capacity coefficient, which provides an index of workload efficiency and quantifies the resource allocations for single-ear versus two-ear presentations. We demonstrate that the release from masking generated by the addition of an identical stimulus to one ear is limited-to-unlimited capacity (efficiency typically less than 1), consistent with less gain than would be expected by probability summation. However, the dichotic presentation leads to a significant increase in workload capacity (increased efficiency), most specifically at lower signal-to-noise ratios. These experimental results provide further evidence that configural processing plays a critical role in binaural masking release, and that these mechanisms may operate more strongly when the signal stimulus is difficult to detect, albeit still with nearly 100% accuracy.

  6. A new perspective on binaural integration using response time methodology: super capacity revealed in conditions of binaural masking release.

    Science.gov (United States)

    Lentz, Jennifer J; He, Yuan; Townsend, James T

    2014-01-01

    This study applied reaction-time based methods to assess the workload capacity of binaural integration by comparing reaction time (RT) distributions for monaural and binaural tone-in-noise detection tasks. In the diotic contexts, an identical tone + noise stimulus was presented to each ear. In the dichotic contexts, an identical noise was presented to each ear, but the tone was presented to one of the ears 180° out of phase with respect to the other ear. Accuracy-based measurements have demonstrated a much lower signal detection threshold for the dichotic vs. the diotic conditions, but accuracy-based techniques do not allow for assessment of system dynamics or resource allocation across time. Further, RTs allow comparisons between these conditions at the same signal-to-noise ratio. Here, we apply a reaction-time based capacity coefficient, which provides an index of workload efficiency and quantifies the resource allocations for single ear vs. two ear presentations. We demonstrate that the release from masking generated by the addition of an identical stimulus to one ear is limited-to-unlimited capacity (efficiency typically less than 1), consistent with less gain than would be expected by probability summation. However, the dichotic presentation leads to a significant increase in workload capacity (increased efficiency)-most specifically at lower signal-to-noise ratios. These experimental results provide further evidence that configural processing plays a critical role in binaural masking release, and that these mechanisms may operate more strongly when the signal stimulus is difficult to detect, albeit still with nearly 100% accuracy.
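    The capacity coefficient mentioned in these two records can be sketched directly from empirical reaction-time samples. The following is a minimal illustration assuming the standard OR-design form C(t) = H_both(t) / (H_left(t) + H_right(t)), with each cumulative hazard estimated as -log of the empirical survivor function; the function names and the smoothing-free estimator are illustrative simplifications, not the authors' implementation.

    ```python
    import numpy as np

    def cumulative_hazard(rts, t):
        """Empirical cumulative hazard H(t) = -log S(t) from a sample of RTs."""
        rts = np.asarray(rts, dtype=float)
        s = (rts > t).mean()          # empirical survivor function S(t)
        s = max(s, 1e-12)             # guard against log(0)
        return -np.log(s)

    def capacity_or(rt_both, rt_left, rt_right, t):
        """Capacity coefficient for an OR (first-terminating) design:
        C(t) = H_both(t) / (H_left(t) + H_right(t)).
        C(t) > 1 indicates super capacity, C(t) = 1 unlimited capacity,
        and C(t) < 1 limited capacity."""
        denom = cumulative_hazard(rt_left, t) + cumulative_hazard(rt_right, t)
        return cumulative_hazard(rt_both, t) / denom
    ```

    With identical RT distributions in all three conditions, C(t) comes out at 0.5; binaural RTs much faster than either monaural condition push C(t) above 1, the super-capacity signature reported for the dichotic condition.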

  7. Setting the light conditions for measuring root transparency for age-at-death estimation methods.

    Science.gov (United States)

    Adserias-Garriga, Joe; Nogué-Navarro, Laia; Zapico, Sara C; Ubelaker, Douglas H

    2018-03-01

    Age-at-death estimation is one of the main goals in forensic identification, being an essential parameter of the biological profile and narrowing the possibilities in cases involving missing persons and unidentified bodies. The study of dental tissues has long been considered a proper tool for age estimation, with several age estimation methods based on them. Dental age estimation methods can be divided into three categories: tooth formation and development, post-formation changes, and histological changes. While tooth formation and growth changes are relevant for fetuses and infants, once dental and skeletal growth is complete, post-formation or biochemical changes can be applied. Lamendin et al. (J Forensic Sci 37:1373-1379, 1992) developed an adult age estimation method based on root transparency and periodontal recession. The regression formula demonstrated its accuracy for individuals aged 40 to 70 years. Later, Prince and Ubelaker (J Forensic Sci 47(1):107-116, 2002) evaluated the effects of ancestry and sex and incorporated root height into the equation, developing four new regression formulas for males and females of African and European ancestry. Even though root transparency is a key element of the method, the conditions for measuring it have not been established. The aim of the present study is to determine the light conditions, measured in lumens, that offer the greatest accuracy when applying the Lamendin et al. method as modified by Prince and Ubelaker. The results should also be taken into account when applying other age estimation methodologies that use root transparency to estimate age-at-death.
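    For context, the Lamendin et al. regression referred to above is commonly quoted in the literature in the form sketched below. The coefficients are taken from that commonly cited version and should be verified against the original paper before any forensic use; the function name is illustrative.

    ```python
    def lamendin_age(periodontosis_mm, transparency_mm, root_height_mm):
        """Commonly cited form of the Lamendin et al. (1992) adult age
        estimate: A = 0.18*P + 0.42*T + 25.53, where P (periodontosis) and
        T (root transparency) are heights expressed as percentages of root
        height. Coefficients quoted from the literature, not verified here."""
        p = periodontosis_mm * 100.0 / root_height_mm
        t = transparency_mm * 100.0 / root_height_mm
        return 0.18 * p + 0.42 * t + 25.53
    ```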

  8. Extreme Sea Conditions in Shallow Water: Estimation based on in-situ measurements

    Science.gov (United States)

    Le Crom, Izan; Saulnier, Jean-Baptiste

    2013-04-01

    The design of marine renewable energy devices and components is based, among other factors, on the assessment of extreme environmental conditions (winds, currents, waves, and water level), which must be combined in order to evaluate the maximal loads on a floating or fixed structure and on its anchoring system over a given return period. Measuring devices are generally deployed at sea for relatively short durations (a few months to a few years), particularly for recording free-surface elevation, so extrapolation methods based on hindcast data (and therefore on wave simulation models) have to be used. How to combine the action of the different loads (winds and waves, for instance) in a realistic way, and which correlation of return periods should be used, are highly topical issues. However, the assessment of the extreme condition itself remains a crucial, sensitive, and not fully solved task. In shallow water above all, the extreme wave height, Hmax, is the most significant contribution to the dimensioning of marine renewable energy devices. As a case study, existing deep-water methodologies have been applied to SEMREV, the French marine energy test site. The interest of this study goes beyond its application to the wave energy converters and floating wind turbines deployed at SEMREV: it could also be extended to the nearby planned Banc de Guérande offshore wind farm and, more generally, to pipes and communication cables, for which the same problem recurs. The paper first presents the existing on-site wave and wind measurements and the prediction chain developed via wave models, then the extrapolation methods applied to hindcast data, and finally attempts to formulate recommendations for improving this assessment in shallow water.
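    One common way to extrapolate an extreme wave height to a long return period from short records is an annual-maxima Gumbel fit. The sketch below uses method-of-moments estimates and is an illustrative stand-in for the (unspecified) extrapolation methods applied in the paper, not the authors' procedure.

    ```python
    import numpy as np

    def gumbel_return_level(annual_maxima, return_period_years):
        """Return level from a Gumbel fit to annual maxima (method of
        moments): x_T = mu - beta * ln(-ln(1 - 1/T))."""
        x = np.asarray(annual_maxima, dtype=float)
        beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi   # scale from sample std
        mu = x.mean() - 0.5772156649 * beta           # Euler-Mascheroni constant
        t = float(return_period_years)
        return mu - beta * np.log(-np.log(1.0 - 1.0 / t))
    ```

    The estimated return level grows with the return period, so a 100-year design wave exceeds the 10-year one for any sample with nonzero spread.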

  9. Methodology of the Auditing Measures to Civil Airport Security and Protection

    Directory of Open Access Journals (Sweden)

    Ján Kolesár

    2016-10-01

    Full Text Available Airports, similarly to other companies, are certified in compliance with the International Organization for Standardization (ISO) standards for products and services (the ISO 9000 series on quality management), which coordinate the technical side of standardization and normalization at an international scale. In order to meet these norms and the ISO certification requirements, airports are liable to undergo strict quality audits, as a rule conducted by an independent auditing organization. The audits focus primarily on airport operation economics and security. The article is an analysis of the methodology of the airport security audit processes and activities. Within the framework of planning, the sequence of steps is described in line with the principles and procedures of the Security Management System (SMS) and standards established by the International Organization for Standardization (ISO). The methodology for conducting an airport security audit is developed in compliance with the national programme and international legislation standards (Annex 17) applicable to the protection of civil aviation against acts of unlawful interference.

  10. Measurement and analysis of active synchrotron mirrors under operating conditions

    Science.gov (United States)

    Sutter, John P.; Alcock, Simon G.; Sawhney, Kawal

    2013-05-01

    At the Diamond Light Source, in situ slope error measurements using the pencil-beam method have enabled X-ray mirror surfaces to be examined in their beamline environment. A surface corrugation common to several bimorph mirrors and the removal of that corrugation by repolishing were both confirmed using this method. In the same way, mirrors curved in a controlled way with bending actuators and sag compensators could also be optimized. Fits to the elastic bending of ideal beams using the Euler-Bernoulli model have been performed on the slope errors of a mechanically bent mirror in order to distinguish bender curvatures from gravitational distortion and to calculate the compensating force that most reduces the latter effect. A successful improvement of the sag compensation mechanism of a vertically focusing mirror was also achieved, aided by a previously tested method for optimizing the settings of a mirror's actuators using pencil-beam scans.

  11. Measurement and analysis of active synchrotron mirrors under operating conditions

    International Nuclear Information System (INIS)

    Sutter, John P.; Alcock, Simon G.; Sawhney, Kawal

    2013-01-01

    At the Diamond Light Source, in situ slope error measurements using the pencil-beam method have enabled X-ray mirror surfaces to be examined in their beamline environment. A surface corrugation common to several bimorph mirrors and the removal of that corrugation by repolishing were both confirmed using this method. In the same way, mirrors curved in a controlled way with bending actuators and sag compensators could also be optimized. Fits to the elastic bending of ideal beams using the Euler–Bernoulli model have been performed on the slope errors of a mechanically bent mirror in order to distinguish bender curvatures from gravitational distortion and to calculate the compensating force that most reduces the latter effect. A successful improvement of the sag compensation mechanism of a vertically focusing mirror was also achieved, aided by a previously tested method for optimizing the settings of a mirror's actuators using pencil-beam scans

  12. Statistical modeling of optical attenuation measurements in continental fog conditions

    Science.gov (United States)

    Khan, Muhammad Saeed; Amin, Muhammad; Awan, Muhammad Saleem; Minhas, Abid Ali; Saleem, Jawad; Khan, Rahimdad

    2017-03-01

    Free-space optics is an innovative technology that uses the atmosphere as a propagation medium to provide higher data rates. These links are heavily affected by the atmospheric channel, mainly because fog and clouds scatter and even block the modulated beam of light from reaching the receiver end, hence imposing severe attenuation. A comprehensive statistical study of fog effects and a deep physical understanding of the fog phenomena are very important for suggesting improvements (reliability and efficiency) in such communication systems. In this regard, six months of real-time measured fog attenuation data are considered and statistically investigated. A detailed statistical analysis related to each fog event for that period is presented; the best probability density functions are selected on the basis of the Akaike information criterion, while the estimates of unknown parameters are computed by the maximum likelihood estimation technique. The results show that most fog attenuation events follow a normal mixture distribution and some follow the Weibull distribution.
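    The distribution-selection step this record describes, maximum likelihood fits ranked by the Akaike information criterion, can be sketched with `scipy.stats`; the candidate set and function name here are illustrative, not the paper's.

    ```python
    import numpy as np
    from scipy import stats

    def best_fit_by_aic(data, candidates=(stats.norm, stats.weibull_min, stats.lognorm)):
        """Fit each candidate distribution by maximum likelihood and rank
        the fits with the Akaike information criterion, AIC = 2k - 2*log L."""
        data = np.asarray(data, dtype=float)
        results = []
        for dist in candidates:
            params = dist.fit(data)                      # MLE parameter estimates
            loglik = np.sum(dist.logpdf(data, *params))  # log-likelihood at the MLE
            aic = 2 * len(params) - 2 * loglik
            results.append((aic, dist.name, params))
        return min(results)                              # smallest AIC wins
    ```

    Note the AIC's 2k term penalizes the three-parameter candidates (Weibull, lognormal) relative to the two-parameter normal, so the winner balances fit quality against model complexity.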

  13. Ethical and methodological issues in qualitative studies involving people with severe and persistent mental illness such as schizophrenia and other psychotic conditions: a critical review.

    Science.gov (United States)

    Carlsson, Ing-Marie; Blomqvist, Marjut; Jormfeldt, Henrika

    2017-01-01

    Undertaking research studies in the field of mental health is essential in mental health nursing. Qualitative research methodologies enable human experiences to become visible and recognize the importance of lived experiences. This paper argues that involving people with schizophrenia in research is critical to promote their health and well-being. The quality of qualitative research needs scrutinizing according to methodological issues such as trustworthiness and ethical standards that are a fundamental part of qualitative research and nursing curricula. The aim of this study was to critically review recent qualitative studies involving people with severe and persistent mental illness such as schizophrenia and other psychotic conditions, regarding descriptions of ethical and methodological issues in data collection and analysis. A search for relevant papers was conducted in three electronic databases, in December 2016. Fifteen qualitative interview studies were included and reviewed regarding methodological issues related to ethics, and data collection and analysis. The results revealed insufficient descriptions of methodology regarding ethical considerations and issues related to recruitment and sampling in qualitative interview studies with individuals with severe mental illness, putting trustworthiness at risk despite detailed descriptions of data analysis. Knowledge from the perspective of individuals with their own experience of mental illness is essential. Issues regarding sampling and trustworthiness in qualitative studies involving people with severe mental illness are vital to counteract the stigmatization of mental illness.

  14. Upgraded Fast Beam Conditions Monitor for CMS online luminosity measurement

    CERN Document Server

    Leonard, Jessica Lynn

    2014-01-01

    The CMS beam and radiation monitoring subsystem BCM1F during LHC Run I consisted of 8 individual diamond sensors situated around the beam pipe within the tracker detector volume, for the purpose of fast monitoring of beam background and collision products. Effort is ongoing to develop the use of BCM1F as an online bunch-by-bunch luminosity monitor. BCM1F will be running whenever there is beam in LHC, and its data acquisition is independent from the data acquisition of the CMS detector, hence it delivers luminosity even when CMS is not taking data. To prepare for the expected increase in the LHC luminosity and the change from 50 ns to 25 ns bunch separation, several changes to the system are required, including a higher number of sensors and upgraded electronics. In particular, a new real-time digitizer with large memory was developed and is being integrated into a multi-subsystem framework for luminosity measurement. Current results from Run II preparation will be shown, including results from the January 201...

  15. Evaluation of constraint methodologies applied to a shallow-flaw cruciform bend specimen tested under biaxial loading conditions

    International Nuclear Information System (INIS)

    Bass, B.R.; McAfee, W.J.; Williams, P.T.; Pennell, W.E.

    1998-01-01

    A technology to determine shallow-flaw fracture toughness of reactor pressure vessel (RPV) steels is being developed for application to the safety assessment of RPVs containing postulated shallow surface flaws. Matrices of cruciform beam tests were developed to investigate and quantify the effects of temperature, biaxial loading, and specimen size on fracture initiation toughness of two-dimensional (constant depth), shallow surface flaws. The cruciform beam specimens were developed at Oak Ridge National Laboratory (ORNL) to introduce a prototypic, far-field, out-of-plane biaxial stress component in the test section that approximates the nonlinear stresses resulting from pressurized-thermal-shock or pressure-temperature loading of an RPV. Tests were conducted under biaxial load ratios ranging from uniaxial to equibiaxial. These tests demonstrated that biaxial loading can have a pronounced effect on shallow-flaw fracture toughness in the lower transition temperature region for RPV materials. The cruciform fracture toughness data were used to evaluate fracture methodologies for predicting the observed effects of biaxial loading on shallow-flaw fracture toughness. Initial emphasis was placed on assessment of stress-based methodologies, namely the J-Q formulation, the Dodds-Anderson toughness scaling model, and the Weibull approach. Applications of these methodologies based on the hydrostatic stress fracture criterion indicated an effect of loading biaxiality on fracture toughness; the conventional maximum principal stress criterion indicated no effect.

  16. Skin condition measurement by using multispectral imaging system (Conference Presentation)

    Science.gov (United States)

    Jung, Geunho; Kim, Sungchul; Kim, Jae Gwan

    2017-02-01

    There are a number of commercially available low-level light therapy (LLLT) devices on the market, and face whitening or wrinkle reduction is one of the targets of LLLT. Facial improvement can be judged simply by visual observation, but this provides no quantitative data and cannot detect subtle changes. Clinical diagnostic instruments such as the mexameter provide quantitative data, but their cost is too high for home users. Therefore, we designed a low-cost multi-spectral imaging device by adding additional LEDs (470 nm, 640 nm, white LED, 905 nm) to a commercial USB microscope that has two LEDs (395 nm, 940 nm) as light sources. Among the various LLLT skin treatments, we focused on obtaining melanin and wrinkle information. For melanin index measurements, multi-spectral images of nevi were acquired, and melanin index values from a color image (the conventional method) and from multi-spectral images were compared. The results showed that multi-spectral analysis of the melanin index can visualize nevi of different depths and concentrations. The cross section of a skin wrinkle resembles a wedge, which becomes a source of high-frequency components when the skin image is Fourier transformed into a spatial frequency domain map. The entropy of the spatial frequency map then represents the frequency distribution, which is related to the amount and thickness of wrinkles. Entropy values from multi-spectral images can potentially separate the proportion of thin, shallow wrinkles from thick, deep ones. From these results, we found that this low-cost multi-spectral imaging system could benefit home users of LLLT by quantifying treatment efficacy.
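    The wrinkle metric this record describes, entropy of the spatial frequency map, can be sketched as follows. Normalizing the Fourier magnitude spectrum into a probability distribution is an assumption about the exact formulation; the authors' preprocessing may differ.

    ```python
    import numpy as np

    def spectral_entropy(image):
        """Shannon entropy of the normalized 2-D Fourier magnitude spectrum.
        Sharp, wedge-like wrinkles add high-frequency energy, spreading the
        spectrum and raising the entropy."""
        spectrum = np.abs(np.fft.fft2(image))
        p = spectrum / spectrum.sum()        # normalize to a distribution
        p = p[p > 0]                         # drop zero bins before the log
        return float(-(p * np.log2(p)).sum())
    ```

    A featureless patch concentrates all spectral energy at DC (entropy near zero), while textured skin spreads energy across frequencies and yields a larger value.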

  17. Methodological challenges surrounding direct-to-consumer advertising research--the measurement conundrum.

    Science.gov (United States)

    Hansen, Richard A; Droege, Marcus

    2005-06-01

    Numerous studies have focused on the impact of direct-to-consumer (DTC) prescription drug advertising on consumer behavior and health outcomes. These studies have used various approaches to assess exposure to prescription drug advertising and to measure the subsequent effects of such advertisements. The objectives of this article are to (1) discuss measurement challenges involved in DTC advertising research, (2) summarize measurement approaches commonly identified in the literature, and (3) discuss contamination, time to action, and endogeneity as specific problems in measurement design and application. We conducted a review of the professional literature to identify illustrative approaches to advertising measurement. Specifically, our review of the literature focused on measurement of DTC advertising exposure and effect. We used the hierarchy-of-effects model to guide our discussion of processing and communication effects. Other effects were characterized as target audience action, sales, market share, and profit. Overall, existing studies have used a variety of approaches to measure advertising exposure and effect, yet the ability of measures to produce a valid and reliable understanding of the effects of DTC advertising can be improved. Our review provides a framework for conceptualizing DTC measurement, and can be used to identify gaps in the literature not sufficiently addressed by existing measures. Researchers should continue to explore correlations between exposure and effect of DTC advertising, but are obliged to improve and validate measurement in this area.

  18. Methodological considerations for researchers and practitioners using pedometers to measure physical (ambulatory) activity.

    Science.gov (United States)

    Tudor-Locke, C E; Myers, A M

    2001-03-01

    Researchers and practitioners require guidelines for using electronic pedometers to objectively quantify physical activity (specifically ambulatory activity) for research and surveillance as well as clinical and program applications. Methodological considerations include choice of metric and length of monitoring frame as well as different data recording and collection procedures. A systematic review of 32 empirical studies suggests we can expect 12,000-16,000 steps/day for 8-10-year-old children (lower for girls than boys); 7,000-13,000 steps/day for relatively healthy, younger adults (lower for women than men); 6,000-8,500 steps/day for healthy older adults; and 3,500-5,500 steps/day for individuals living with disabilities and chronic illnesses. These preliminary recommendations should be modified and refined, as evidence and experience using pedometers accumulates.

  19. A Novel Methodology for Measurements of an LED's Heat Dissipation Factor

    Science.gov (United States)

    Jou, R.-Y.; Haung, J.-H.

    2015-12-01

    Heat generation is an inevitable byproduct with high-power light-emitting diode (LED) lighting. The increase in junction temperature that accompanies the heat generation sharply degrades the optical output of the LED and has a significant negative influence on the reliability and durability of the LED. For these reasons, the heat dissipation factor, Kh, is an important factor in modeling and thermal design of LED installations. In this study, a methodology is proposed and experiments are conducted to determine LED heat dissipation factors. Experiments are conducted for two different brands of LED. The average heat dissipation factor of the Edixeon LED is 0.69, and is 0.60 for the OSRAM LED. By using the developed test method and comparing the results to the calculated luminous fluxes using theoretical equations, the interdependence of optical, electrical, and thermal powers can be predicted with a reasonable accuracy. The difference between the theoretical and experimental values is less than 9 %.
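    The record above does not spell out how Kh is defined; a plausible reading is the fraction of electrical input dissipated as heat, Kh = (P_elec - P_opt) / P_elec, which would reproduce a value like 0.69 when, say, 3.1 W of a 10 W input leaves as light. The sketch below encodes that assumed definition only.

    ```python
    def heat_dissipation_factor(electrical_power_w, optical_power_w):
        """Fraction of input electrical power dissipated as heat:
        Kh = (P_elec - P_opt) / P_elec.
        Assumed definition; the paper's exact formulation may differ."""
        if electrical_power_w <= 0:
            raise ValueError("electrical power must be positive")
        return (electrical_power_w - optical_power_w) / electrical_power_w
    ```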

  20. A Case Study of Six Sigma Define-Measure-Analyze-Improve-Control (DMAIC) Methodology in Garment Sector

    Directory of Open Access Journals (Sweden)

    Abdur Rahman

    2017-12-01

    Full Text Available This paper demonstrates the empirical application of Six Sigma and the Define-Measure-Analyze-Improve-Control (DMAIC) methodology to reduce product defects within a garment manufacturing organization in Bangladesh, which followed the DMAIC methodology to investigate defects and their root causes and to provide a solution that eliminates them. The analysis showed that broken stitches and open seams influenced the number of defective products. Design of experiments (DOE) and analysis of variance (ANOVA) techniques were combined to statistically determine the correlation of broken stitches and open seams with defects, as well as to define the optimum values needed to eliminate the defects. Thus, a reduction of about 35% in garment defects was achieved, which helped the organization studied to reduce its defects and improve its sigma level from 1.7 to 3.4.
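    The reported improvement from a 1.7 to a 3.4 sigma level maps to defect rates through the standard defects-per-million-opportunities (DPMO) conversion with the conventional 1.5-sigma shift. A stdlib-only sketch of that conversion (not taken from the paper):

    ```python
    from statistics import NormalDist

    _SND = NormalDist()  # standard normal

    def sigma_level(defects, opportunities):
        """Short-term sigma level from observed defects, using the
        conventional 1.5-sigma long-term shift."""
        dpmo = 1e6 * defects / opportunities
        return _SND.inv_cdf(1 - dpmo / 1e6) + 1.5

    def dpmo_for_sigma(sigma):
        """Inverse: defects per million opportunities implied by a sigma level."""
        return 1e6 * (1 - _SND.cdf(sigma - 1.5))
    ```

    For example, the textbook "Six Sigma" quality level corresponds to about 3.4 DPMO under this convention.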

  1. Assertion based verification methodology for HDL designs of primary sodium pump speed and eddy current flow measurement systems of PFBR

    International Nuclear Information System (INIS)

    Misra, M.K.; Menon, Saritha P.; Thirugnana Murthy, D.

    2013-01-01

    With the growing complexity and size of digital designs, functional verification has become a huge challenge. The validation and testing process accounts for a significant percentage of the overall development effort and cost for electronic systems. Many studies have shown that up to 70% of the design development time and resources are spent on functional verification. Functional errors manifest themselves very early in the design flow, and unless they are detected upfront, they can result in severe consequences - both financially and from a safety viewpoint. This paper covers the various types of verification methodologies and focuses on Assertion Based Verification Methodology for HDL designs, taking as case studies, the Primary Sodium Pump Speed and Eddy Current Flow Measurement Systems of PFBR. (author)

  2. A combination of body condition measurements is more informative than conventional condition indices: temporal variation in body condition and corticosterone in brown tree snakes (Boiga irregularis).

    Science.gov (United States)

    Waye, Heather L; Mason, Robert T

    2008-02-01

    The body condition index is a common method for quantifying the energy reserves of individual animals. Because good body condition is necessary for reproduction in many species, body condition indices can indicate the potential reproductive output of a population. Body condition is related to glucocorticoid production, in that low body condition is correlated to high concentrations of corticosterone in reptiles. We compared the body condition index and plasma corticosterone levels of brown tree snakes on Guam in 2003 to those collected in 1992/1993 to determine whether that population still showed the chronic stress and poor condition apparent in the earlier study. We also examined the relationship between fat mass, body condition and plasma corticosterone concentrations as indicators of physiological condition of individuals in the population. Body condition was significantly higher in 2003 than in the earlier sample for mature male and female snakes, but not for juveniles. The significantly lower levels of corticosterone in all three groups in 2003 suggests that although juveniles did not have significantly improved energy stores they, along with the mature males and females, were no longer under chronic levels of stress. Although the wet season of 2002 was unusually rainy, low baseline levels of corticosterone measured in 2000 indicate that the improved body condition of snakes in 2003 is likely the result of long-term changes in prey populations rather than annual variation in response to environmental conditions.
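    Body condition indices of the kind used here are often computed as residuals from a mass-length regression on log scales; the sketch below shows that common formulation (the paper's exact index may differ).

    ```python
    import numpy as np

    def condition_index(mass_g, length_mm):
        """Residual body condition index: residuals from an ordinary
        least-squares fit of log(mass) on log(length). Positive values
        indicate heavier-than-expected animals for their length."""
        x = np.log(np.asarray(length_mm, dtype=float))
        y = np.log(np.asarray(mass_g, dtype=float))
        slope, intercept = np.polyfit(x, y, 1)   # OLS fit on log scales
        return y - (slope * x + intercept)       # residual per individual
    ```

    Because the index is a regression residual, it is centered on zero by construction, which makes it comparable across sampling years in the way the abstract describes.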

  3. Measurements of mixtures with carbon dioxide under supercritical conditions using commercial high pressure equipment

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Luciana L.P.R. de; Rutledge, Luis Augusto Medeiros; Moreno, Eesteban L.; Hovell, Ian; Rajagopal, Krishnaswamy [Universidade Federal do Rio de Janeiro (LATCA-EQ-UFRJ), RJ (Brazil). Escola de Quimica. Lab. de Termodinamica e Cinetica Aplicada

    2012-07-01

    There is a growing interest in studying physical properties of binary and multicomponent fluid mixtures with supercritical carbon dioxide (CO2) over an extended range of temperature and pressure. The estimation of properties such as density, viscosity, saturation pressure, compressibility, solubility and surface tension of mixtures is important in design, operation and control as well as optimization of chemical processes especially in extractions, separations, catalytic and enzymatic reactions. The phase behaviour of binary and multicomponent mixtures with supercritical CO2 is also important in the production and refining of petroleum where mixtures of paraffin, naphthene and aromatics with supercritical fluids are often encountered. Petroleum fluids can present a complex phase behaviour in the presence of CO2, where two-phase (VLE and LLE) and three phase regions (VLLE) might occur within ranges of supercritical conditions of temperature and pressure. The objective of this study is to develop an experimental methodology for measuring the phase behaviour of mixtures containing CO2 in supercritical regions, using commercial high-pressure equipment. (author)

  4. A Methodology to Integrate Magnetic Resonance and Acoustic Measurements for Reservoir Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Parra, Jorge O.; Hackert, Chris L.; Collier, Hughbert A.; Bennett, Michael

    2002-01-29

    The objective of this project was to develop an advanced imaging method, including pore scale imaging, to integrate NMR techniques and acoustic measurements to improve predictability of the pay zone in hydrocarbon reservoirs. This is accomplished by extracting the fluid property parameters using NMR laboratory measurements and the elastic parameters of the rock matrix from acoustic measurements to create poroelastic models of different parts of the reservoir. Laboratory measurement techniques and core imaging are being linked with a balanced petrographical analysis of the core and theoretical model.

  5. Development of a methodology for conducting an integrated HRA/PRA --. Task 1, An assessment of human reliability influences during LP&S conditions PWRs

    Energy Technology Data Exchange (ETDEWEB)

    Luckas, W.J.; Barriere, M.T.; Brown, W.S. [Brookhaven National Lab., Upton, NY (United States); Wreathall, J. [Wreathall (John) and Co., Dublin, OH (United States); Cooper, S.E. [Science Applications International Corp., McLean, VA (United States)

    1993-06-01

    During Low Power and Shutdown (LP&S) conditions in a nuclear power plant (i.e., when the reactor is subcritical or at less than 10-15% power), human interactions with the plant's systems will be more frequent and more direct. Control is typically not mediated by automation, and there are fewer protective systems available. Therefore, an assessment of LP&S related risk should include a greater emphasis on human reliability than such an assessment made for power operation conditions. In order to properly account for the increase in human interaction and thus be able to perform a probabilistic risk assessment (PRA) applicable to operations during LP&S, it is important that a comprehensive human reliability assessment (HRA) methodology be developed and integrated into the LP&S PRA. The tasks comprising the comprehensive HRA methodology development are as follows: (1) identification of the human reliability related influences and associated human actions during LP&S, (2) identification of potentially important LP&S related human actions and appropriate HRA framework and quantification methods, and (3) incorporation and coordination of methodology development with other integrated PRA/HRA efforts. This paper describes the first task, i.e., the assessment of human reliability influences and any associated human actions during LP&S conditions for a pressurized water reactor (PWR).

  6. Measuring subjective meaning structures by the laddering method: Theoretical considerations and methodological problems

    DEFF Research Database (Denmark)

    Grunert, Klaus G.; Grunert, Suzanne C.

    1995-01-01

    Starting from a general model of measuring cognitive structures for predicting consumer behaviour, we discuss laddering as a possible method to obtain estimates of consumption-relevant cognitive structures which will have predictive validity. Four criteria for valid measurement are derived and ap...

  7. A hybrid measure-correlate-predict method for long-term wind condition assessment

    International Nuclear Information System (INIS)

    Zhang, Jie; Chowdhury, Souma; Messac, Achille; Hodge, Bri-Mathias

    2014-01-01

    Highlights: • A hybrid measure-correlate-predict (MCP) methodology with greater accuracy is developed. • Three sets of performance metrics are proposed to evaluate the hybrid MCP method. • Both wind speed and direction are considered in the hybrid MCP method. • The best combination of MCP algorithms is determined. • The developed hybrid MCP method is uniquely helpful for long-term wind resource assessment. - Abstract: This paper develops a hybrid measure-correlate-predict (MCP) strategy to assess long-term wind resource variations at a farm site. The hybrid MCP method uses recorded data from multiple reference stations to estimate long-term wind conditions at a target wind plant site with greater accuracy than is possible with data from a single reference station. The weight of each reference station in the hybrid strategy is determined by the (i) distance and (ii) elevation differences between the target farm site and each reference station. In this case, the wind data is divided into sectors according to the wind direction, and the MCP strategy is implemented for each wind direction sector separately. The applicability of the proposed hybrid strategy is investigated using five MCP methods: (i) the linear regression; (ii) the variance ratio; (iii) the Weibull scale; (iv) the artificial neural networks; and (v) the support vector regression. To implement the hybrid MCP methodology, we use hourly averaged wind data recorded at five stations in the state of Minnesota between 07-01-1996 and 06-30-2004. Three sets of performance metrics are used to evaluate the hybrid MCP method. The first set of metrics analyze the statistical performance, including the mean wind speed, wind speed variance, root mean square error, and mean absolute error. The second set of metrics evaluate the distribution of long-term wind speed; to this end, the Weibull distribution and the Multivariate and Multimodal Wind Distribution models are adopted. The third set of metrics analyze
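    The hybrid MCP scheme this record describes can be sketched as follows: fit one of the paper's MCP algorithms (linear regression is used here) from each reference station to the target over the concurrent period, predict the long-term target wind speed from each station, and blend the predictions with station weights derived from distance and elevation difference. The inverse-distance weighting form below is an assumption for illustration; the paper's exact weighting scheme and direction-sector handling are not reproduced.

    ```python
    import numpy as np

    def hybrid_mcp_predict(target_concurrent, refs_concurrent, refs_longterm,
                           distances_km, elevation_diffs_m):
        """Blend per-reference-station linear-regression MCP predictions of
        long-term target wind speed, weighting stations inversely by
        distance and elevation difference (assumed weighting form)."""
        w = 1.0 / (np.asarray(distances_km, dtype=float)
                   * (1.0 + np.abs(np.asarray(elevation_diffs_m, dtype=float))))
        w = w / w.sum()
        preds = []
        for ref_c, ref_l in zip(refs_concurrent, refs_longterm):
            # Fit target = slope*ref + intercept over the concurrent period
            slope, intercept = np.polyfit(ref_c, target_concurrent, 1)
            preds.append(slope * np.asarray(ref_l, dtype=float) + intercept)
        return np.average(np.stack(preds), axis=0, weights=w)
    ```

    In the paper this step would be repeated per wind-direction sector and with each of the five MCP algorithms in turn.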

  8. Translating patient reported outcome measures: methodological issues explored using cognitive interviewing with three rheumatoid arthritis measures in six European languages

    NARCIS (Netherlands)

    Hewlett, Sarah E.; Nicklin, Joanna; Bode, Christina; Carmona, Loretto; Dures, Emma; Engelbrecht, Matthias; Hagel, Sofia; Kirwan, John R.; Molto, Anna; Redondo, Marta; Gossec, Laure

    2016-01-01

    Objective. Cross-cultural translation of patient-reported outcome measures (PROMs) is a lengthy process, often performed professionally. Cognitive interviewing assesses patient comprehension of PROMs. The objective was to evaluate the usefulness of cognitive interviewing to assess translations and

  9. Use of Balanced Scorecard Methodology for Performance Measurement of the Health Extension Program in Ethiopia.

    Science.gov (United States)

    Teklehaimanot, Hailay D; Teklehaimanot, Awash; Tedella, Aregawi A; Abdella, Mustofa

    2016-05-04

    In 2004, Ethiopia introduced a community-based Health Extension Program to deliver basic and essential health services. We developed a comprehensive performance scoring methodology to assess the performance of the program. A balanced scorecard with six domains and 32 indicators was developed. Data collected from 1,014 service providers, 433 health facilities, and 10,068 community members sampled from 298 villages were used to generate weighted national, regional, and agroecological zone scores for each indicator. The national median indicator scores ranged from 37% to 98% with poor performance in commodity availability, workforce motivation, referral linkage, infection prevention, and quality of care. Indicator scores showed significant difference by region (P < 0.001). Regional performance varied across indicators suggesting that each region had specific areas of strength and deficiency, with Tigray and the Southern Nations, Nationalities and Peoples Region being the best performers while the mainly pastoral regions of Gambela, Afar, and Benishangul-Gumuz were the worst. The findings of this study suggest the need for strategies aimed at improving specific elements of the program and its performance in specific regions to achieve quality and equitable health services. © The American Society of Tropical Medicine and Hygiene.
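
    As a minimal illustration of the weighted scoring idea, the sketch below aggregates hypothetical facility-level indicator scores into a regional score using assumed population weights (all numbers and the weighting rule are illustrative, not taken from the study):

```python
def weighted_score(scores_pct, weights):
    """Weighted average of indicator scores (percent), e.g. facility scores
    weighted by the population each facility serves."""
    total_w = sum(weights)
    return sum(s * w for s, w in zip(scores_pct, weights)) / total_w

# Hypothetical facility scores (%) for one indicator, e.g. commodity availability.
regional = weighted_score([37.0, 60.0, 98.0], weights=[1200, 800, 500])
print(round(regional, 1))
```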

  10. A Chinese View on the Cultural Conditionality of Logic and Epistemology: Zhang Dongsun’s Intercultural Methodology

    Directory of Open Access Journals (Sweden)

    Jana Rošker

    2010-12-01

    Full Text Available Recognizing the fact that comprehension, analysis and transmission of reality are based on diversely structured socio-political contexts as well as on different categorical and essential postulates, offers a prospect of enrichment. Thus, this article presents an analysis and interpretation of one of the first Chinese theoreticians, working in the field of intercultural methodology. Although Zhang Dongsun (1886–1973) can be considered as one of the leading Chinese philosophers of the 20th century, his criticism of Sinicized Marxist ideologies marked him as a political dissident and he was consequently consigned to oblivion for several decades; only recently has his work been rediscovered by a number of younger Chinese theorists, who have shown a growing interest in his ideas. Although he is still relatively unknown in the West, Zhang definitely deserves to be recognized for his contributions to Chinese and comparative philosophy. The present article focuses on his extraordinary ability to introduce Western thought in a way which was compatible with the specific methodology of traditional Chinese thought. According to such presumptions, culture is viewed as an entity composed of a number of specific discourses and relations. The article shows how the interweaving and interdependence of these discourses form different cultural backgrounds, which manifest themselves in the specific, culturally determined structures of language and logic. It also explains the role of traditional elements in his cultural epistemology.

  11. Measurement of heat stress conditions at cow level and comparison to climate conditions at stationary locations inside a dairy barn.

    Science.gov (United States)

    Schüller, Laura K; Heuwieser, Wolfgang

    2016-08-01

    The objectives of this study were to examine heat stress conditions at cow level and to investigate the relationship to the climate conditions at 5 different stationary locations inside a dairy barn. In addition, we compared the climate conditions at cow level between primiparous and multiparous cows for a period of 1 week after regrouping. The temperature-humidity index (THI) differed significantly between all stationary loggers. The lowest THI was measured at the window logger in the experimental stall and the highest THI was measured at the central logger in the experimental stall. The THI at the mobile cow loggers was 2·33 THI points higher than at the stationary loggers. Furthermore, the mean daily THI was higher at the mobile cow loggers than at the stationary loggers on all experimental days. The THI in the experimental pen was 0·44 THI points lower when the experimental cow group was located inside the milking parlour. The THI measured at the mobile cow loggers was 1·63 THI points higher when the experimental cow group was located inside the milking parlour. However, there was no significant difference in any climate variable between primiparous and multiparous cows. These results indicate that there is a wide range of climate conditions inside a dairy barn, and that areas far from a fresh air supply in particular carry an increased risk of heat stress conditions. Furthermore, heat stress conditions are even more pronounced at cow level: cows not only influence their climatic environment, but also generate microclimates at different locations inside the barn. Therefore, climate conditions should be measured at cow level to evaluate the heat stress that dairy cows are actually exposed to.
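
    The THI values quoted above combine air temperature and relative humidity. One common formulation (NRC, 1971) is sketched below; the study may use a different variant, so treat the formula choice as an assumption:

```python
def thi(temp_c, rel_humidity_pct):
    """Temperature-humidity index, NRC (1971) formulation:
    THI = (1.8*T + 32) - (0.55 - 0.0055*RH) * (1.8*T - 26), T in deg C, RH in %."""
    return (1.8 * temp_c + 32) - (0.55 - 0.0055 * rel_humidity_pct) * (1.8 * temp_c - 26)

# Example: 25 deg C at 60 % relative humidity.
print(round(thi(25.0, 60.0), 1))
```

    At 100 % relative humidity the humidity correction vanishes and the THI reduces to the Fahrenheit temperature, which is a quick sanity check on the formula.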

  12. Methodological challenges in measurements of functional ability in gerontological research. A review

    DEFF Research Database (Denmark)

    Avlund, Kirsten

    1997-01-01

    This article addresses two important challenges in the measurement of functional ability in gerontological research: the first challenge is to connect measurements to a theoretical frame of reference which enhances our understanding and interpretation of the collected data; the second relates...... procedure, validity, discriminatory power, and responsiveness. In measures of functional ability it is recommended: 1) always to consider the theoretical frame of reference as part of the validation process (e.g., the theory of "The Disablement Process"; 2) always to assess whether the included activities...

  13. From theory to 'measurement' in complex interventions: methodological lessons from the development of an e-health normalisation instrument.

    Science.gov (United States)

    Finch, Tracy L; Mair, Frances S; O'Donnell, Catherine; Murray, Elizabeth; May, Carl R

    2012-05-17

    Although empirical and theoretical understanding of processes of implementation in health care is advancing, translation of theory into structured measures that capture the complex interplay between interventions, individuals and context remains limited. This paper aimed to (1) describe the process and outcome of a project to develop a theory-based instrument for measuring implementation processes relating to e-health interventions; and (2) identify key issues and methodological challenges for advancing work in this field. A 30-item instrument (Technology Adoption Readiness Scale (TARS)) for measuring normalisation processes in the context of e-health service interventions was developed on the basis of Normalization Process Theory (NPT). NPT focuses on how new practices become routinely embedded within social contexts. The instrument was pre-tested in two health care settings in which e-health (electronic facilitation of healthcare decision-making and practice) was used by health care professionals. The developed instrument was pre-tested in two professional samples (N=46; N=231). Ratings of items representing normalisation 'processes' were significantly related to staff members' perceptions of whether or not e-health had become 'routine'. Key methodological challenges are discussed in relation to: translating multi-component theoretical constructs into simple questions; developing and choosing appropriate outcome measures; conducting multiple-stakeholder assessments; instrument and question framing; and more general issues for instrument development in practice contexts. To develop theory-derived measures of implementation process for progressing research in this field, four key recommendations are made relating to (1) greater attention to underlying theoretical assumptions and extent of translation work required; (2) the need for appropriate but flexible approaches to outcomes measurement; (3) representation of multiple perspectives and collaborative nature of

  14. From theory to 'measurement' in complex interventions: Methodological lessons from the development of an e-health normalisation instrument

    Directory of Open Access Journals (Sweden)

    Finch Tracy L

    2012-05-01

    Full Text Available Abstract Background Although empirical and theoretical understanding of processes of implementation in health care is advancing, translation of theory into structured measures that capture the complex interplay between interventions, individuals and context remains limited. This paper aimed to (1) describe the process and outcome of a project to develop a theory-based instrument for measuring implementation processes relating to e-health interventions; and (2) identify key issues and methodological challenges for advancing work in this field. Methods A 30-item instrument (Technology Adoption Readiness Scale (TARS)) for measuring normalisation processes in the context of e-health service interventions was developed on the basis of Normalization Process Theory (NPT). NPT focuses on how new practices become routinely embedded within social contexts. The instrument was pre-tested in two health care settings in which e-health (electronic facilitation of healthcare decision-making and practice) was used by health care professionals. Results The developed instrument was pre-tested in two professional samples (N = 46; N = 231). Ratings of items representing normalisation ‘processes’ were significantly related to staff members’ perceptions of whether or not e-health had become ‘routine’. Key methodological challenges are discussed in relation to: translating multi-component theoretical constructs into simple questions; developing and choosing appropriate outcome measures; conducting multiple-stakeholder assessments; instrument and question framing; and more general issues for instrument development in practice contexts. Conclusions To develop theory-derived measures of implementation process for progressing research in this field, four key recommendations are made relating to (1) greater attention to underlying theoretical assumptions and extent of translation work required; (2) the need for appropriate but flexible approaches to outcomes

  15. The professional methodological teaching performance of the professor of Physical education. Set of parameters for its measurement

    Directory of Open Access Journals (Sweden)

    Orlando Pedro Suárez Pérez

    2017-07-01

    Full Text Available This work arose from the need to address the difficulties observed among the Physical Education teachers of the municipality of San Juan y Martínez during the teaching-learning process of basketball, difficulties that undermine the quality of the classes, sports results and the preparation of students for life. The objective is to propose parameters for measuring the professional methodological teaching performance of these teachers. The personalized approach of the research made it possible to diagnose the 26 teachers taken as a sample, identifying the traits that distinguish their efficiency and determining their potentialities and deficiencies. Theoretical, empirical and statistical methods were used during the research process, which corroborated the real existence of the problem and allowed its impact to be evaluated, revealing a positive transformation in pedagogical practice. The results provide a concrete and viable basis for improving the evaluation of the teaching-methodological component of the Physical Education teacher, and constitute useful guidance material for methodologists and managers concerned with cognitive, procedural and attitudinal performance, in order to build new knowledge on preceding knowledge and lead a formative process with a contemporary vision, offering methodological resources to control the quality of Physical Education lessons.

  16. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine; Freestate, David; Riley, Cameron; Hobbs, William

    2016-11-01

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.
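
    The transposition step that measured POA data bypasses can be illustrated with the simple isotropic-sky (Liu-Jordan) model below; SAM and similar tools typically default to more detailed models (e.g. Perez), so this sketch only shows where transposition error enters, not the exact model used:

```python
import math

def poa_isotropic(dni, dhi, ghi, aoi_deg, tilt_deg, albedo=0.2):
    """Plane-of-array irradiance (W/m2) from irradiance components via the
    isotropic-sky (Liu-Jordan) transposition model: beam projected through the
    angle of incidence, plus isotropic sky-diffuse and ground-reflected terms."""
    beam = dni * max(math.cos(math.radians(aoi_deg)), 0.0)
    sky_diffuse = dhi * (1 + math.cos(math.radians(tilt_deg))) / 2
    ground = ghi * albedo * (1 - math.cos(math.radians(tilt_deg))) / 2
    return beam + sky_diffuse + ground

# Example: clear-sky-like components on a 20-degree tilted array.
print(round(poa_isotropic(dni=800, dhi=100, ghi=700, aoi_deg=30, tilt_deg=20), 1))
```

    Using a measured POA value directly replaces the output of this function, removing the model error of the transposition step.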

  17. Accuracy requirements on operational measurements in nuclear power plants with regard to balance methodology

    International Nuclear Information System (INIS)

    Holecek, C.

    1986-01-01

    Accurate in-service measurement is necessary for power balancing of nuclear power plants, i.e., the determination of fuel consumption, electric power generation, heat delivery and the degree of fuel power utilization. The only possible method of determining the total energy input from the fuel is a balance of the primary coolant circuit, because the power generated from nuclear fuel cannot be measured directly for balancing purposes. Relations are presented for the calculation of the basic indices of the power balance. For power balancing and analyses, the precision of the measuring instruments at the input and output of the balancing circuits is of primary importance, followed by the precision of measuring instruments inside the balancing circuits and of meters of auxiliary parameters. (Z.M.). 7 refs., 1 tab

  18. Contact Thermocouple Methodology and Evaluation for Temperature Measurement in the Laboratory

    Science.gov (United States)

    Brewer, Ethan J.; Pawlik, Ralph J.; Krause, David L.

    2013-01-01

    Laboratory testing of advanced aerospace components very often requires highly accurate temperature measurement and control devices, as well as methods to precisely analyze and predict the performance of such components. Analysis of test articles depends on accurate measurements of temperature across the specimen. Where possible, this task is accomplished using many thermocouples welded directly to the test specimen, which can produce results with great precision. However, it is known that thermocouple spot welds can initiate deleterious cracks in some materials, prohibiting the use of welded thermocouples. Such is the case for the nickel-based superalloy MarM-247, which is used in the high temperature, high pressure heater heads for the Advanced Stirling Converter component of the Advanced Stirling Radioisotope Generator space power system. To overcome this limitation, a method was developed that uses small diameter contact thermocouples to measure the temperature of heater head test articles with the same level of accuracy as welded thermocouples. This paper includes a brief introduction and a background describing the circumstances that compelled the development of the contact thermocouple measurement method. Next, the paper describes studies performed on contact thermocouple readings to determine the accuracy of results. It continues on to describe in detail the developed measurement method and the evaluation of results produced. A further study that evaluates the performance of different measurement output devices is also described. Finally, a brief conclusion and summary of results is provided.

  19. Comparison of efficiency of distance measurement methodologies in mango (Mangifera indica) progenies based on physicochemical descriptors.

    Science.gov (United States)

    Alves, E O S; Cerqueira-Silva, C B M; Souza, A M; Santos, C A F; Lima Neto, F P; Corrêa, R X

    2012-03-14

    We investigated seven distance measures in a set of observations of physicochemical variables of mango (Mangifera indica) submitted to multivariate analyses (distance, projection and grouping). To estimate the distance measurements, five mango progenies (total of 25 genotypes) were analyzed, using six fruit physicochemical descriptors (fruit weight, equatorial diameter, longitudinal diameter, total soluble solids in °Brix, total titratable acidity, and pH). The distance measurements were compared by the Spearman correlation test, projection in two-dimensional space and grouping efficiency. The Spearman correlation coefficients between the seven distance measurements were, except for Mahalanobis' generalized distance (0.41 ≤ rs ≤ 0.63), high and significant (rs ≥ 0.91; P < 0.001). Regardless of the origin of the distance matrix, the unweighted pair group method with arithmetic mean (UPGMA) proved to be the most adequate grouping method. The various distance measurements and grouping methods gave different values for distortion (-116.5 ≤ D ≤ 74.5), cophenetic correlation (0.26 ≤ rc ≤ 0.76) and stress (-1.9 ≤ S ≤ 58.9). The choice of distance measurement and analysis method therefore influences the outcome of the multivariate analysis.
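
    The comparison of distance measures and UPGMA grouping can be sketched with standard scientific-Python tools on synthetic stand-in data (the real physicochemical measurements are not reproduced here):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, cophenet

rng = np.random.default_rng(1)

# Synthetic stand-in for the mango data: 25 genotypes x 6 fruit descriptors
# (fruit weight, two diameters, soluble solids, titratable acidity, pH).
X = rng.normal(size=(25, 6))

# Two of the distance measures compared in the study.
d_euclid = pdist(X, metric="euclidean")
d_mahal = pdist(X, metric="mahalanobis")

# UPGMA = 'average' linkage; the cophenetic correlation measures how
# faithfully the resulting dendrogram preserves the original distances.
for name, d in [("euclidean", d_euclid), ("mahalanobis", d_mahal)]:
    tree = linkage(d, method="average")
    r_c, _ = cophenet(tree, d)
    print(f"{name}: cophenetic r = {r_c:.3f}")
```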

  20. Methodology and measurement of radiation interception by quantum sensor of the oil palm plantation

    Directory of Open Access Journals (Sweden)

    Johari Endan

    2005-09-01

    Full Text Available Interception of light by a canopy is a fundamental requirement for crop growth and is important for biomass production and plant growth modeling. Solar radiation is an important parameter for photosynthesis and evapotranspiration. These two phenomena are dependent not only on the intensity of radiation but also on the distribution of intercepted radiation within the canopy. In this study, two operational methods for estimating the amount of photosynthetically active radiation (PAR intercepted by a canopy of the oil palm are presented. LICOR radiation sensors, model LI-190SA and model LI-191SA were used for photosynthetically active radiation (PAR measurement above and below the canopy. We developed two methods, namely "Triangular" method and "Circular" method for PAR measurement. Results show that both methods were suitable for oil palm PAR measurement. The triangular method is recommended for PAR measurements with respect to the whole plantation and the circular method is recommended for specific purposes, such as growth analysis or growth modeling of the oil palm. However, practical considerations such as equipment availability, purpose of the measurement, age of the palm, and the number of measuring points to be sampled should be taken into account in the selection of a suitable method for a particular study. The results indicate that the interception of radiation was affected by spatial variation, and the radiation transmission decreased towards the frond tips.
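
    The basic calculation behind both sampling layouts is the fraction of above-canopy PAR that does not reach the below-canopy sensor; a minimal sketch with hypothetical readings:

```python
def fractional_interception(par_above, par_below_points):
    """Fraction of incident PAR intercepted by the canopy, from one above-canopy
    reading and several below-canopy sampling points (e.g. the triangular or
    circular layouts described in the study)."""
    mean_below = sum(par_below_points) / len(par_below_points)
    return 1.0 - mean_below / par_above

# Hypothetical readings (umol m-2 s-1) from a line quantum sensor below the fronds.
f = fractional_interception(1800.0, [420.0, 510.0, 380.0, 450.0])
print(round(f, 3))
```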

  1. Sources of particulate matter components in the Athabasca oil sands region: investigation through a comparison of trace element measurement methodologies

    Directory of Open Access Journals (Sweden)

    C. Phillips-Smith

    2017-08-01

    Full Text Available The province of Alberta, Canada, is home to three oil sands regions which, combined, contain the third largest deposit of oil in the world. Of these, the Athabasca oil sands region is the largest. As part of Environment and Climate Change Canada's program in support of the Joint Canada-Alberta Implementation Plan for Oil Sands Monitoring program, concentrations of trace elements in PM2.5 (particulate matter smaller than 2.5 µm in diameter) were measured through two campaigns that involved different methodologies: a long-term filter campaign and a short-term intensive campaign. In the long-term campaign, 24 h filter samples were collected once every 6 days over a 2-year period (December 2010–November 2012) at three air monitoring stations in the regional municipality of Wood Buffalo. For the intensive campaign (August 2013), hourly measurements were made with an online instrument at one air monitoring station; daily filter samples were also collected. The hourly and 24 h filter data were analyzed individually using positive matrix factorization. Seven emission sources of PM2.5 trace elements were thereby identified: two types of upgrader emissions, soil, haul road dust, biomass burning, and two sources of mixed origin. The upgrader emissions, soil, and haul road dust sources were identified through both methodologies, and both methodologies identified a mixed source, but these exhibited more differences than similarities. The second upgrader emissions and biomass burning sources were only resolved by the hourly and filter methodologies, respectively. The similarity of the receptor modeling results from the two methodologies provided reassurance as to the identity of the sources. Overall, much of the PM2.5-related trace elements were found to be anthropogenic, or at least to be aerosolized through anthropogenic activities. These emissions may in part explain the previously reported higher levels of trace elements in snow.
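
    Positive matrix factorization resolves the measured concentration matrix into nonnegative source profiles and source contributions. The sketch below shows the underlying nonnegative factorization with simple multiplicative updates (Lee-Seung) on synthetic data; PMF proper additionally weights each matrix entry by its measurement uncertainty, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic nonnegative data: 200 samples x 10 elements generated from 3 sources.
G_true = rng.random((200, 3))   # source contributions per sample
F_true = rng.random((3, 10))    # source profiles per element
X = G_true @ F_true + 0.01 * rng.random((200, 10))

# Unweighted nonnegative factorization X ~ G F by Lee-Seung multiplicative
# updates; nonnegativity is preserved because factors start positive and are
# only multiplied by nonnegative ratios.
k = 3
G = rng.random((200, k)) + 0.1
F = rng.random((k, 10)) + 0.1
for _ in range(500):
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
    G *= (X @ F.T) / (G @ F @ F.T + 1e-12)

rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.4f}")
```

    In receptor modeling, the rows of F would then be inspected for marker elements to assign each factor a physical source (e.g. soil, haul road dust).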

  2. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements

    KAUST Repository

    Zambri, Brian

    2015-11-05

    Our aim is to propose a numerical strategy for retrieving accurately and efficiently the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology. © 2015 IEEE.

  3. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements

    KAUST Repository

    Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2015-01-01

    Our aim is to propose a numerical strategy for retrieving accurately and efficiently the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology. © 2015 IEEE.

  4. Metabolic tumour volumes measured at staging in lymphoma: methodological evaluation on phantom experiments and patients

    International Nuclear Information System (INIS)

    Meignan, Michel; Sasanelli, Myriam; Itti, Emmanuel; Casasnovas, Rene Olivier; Luminari, Stefano; Fioroni, Federica; Coriani, Chiara; Masset, Helene; Gobbi, Paolo G.; Merli, Francesco; Versari, Annibale

    2014-01-01

    The presence of a bulky tumour at staging on CT is an independent prognostic factor in malignant lymphomas. However, its prognostic value is limited in diffuse disease. Total metabolic tumour volume (TMTV) determined on 18F-FDG PET/CT could give a better evaluation of the total tumour burden and may help patient stratification. Different methods of TMTV measurement established in phantoms simulating lymphoma tumours were investigated and validated in 40 patients with Hodgkin lymphoma and diffuse large B-cell lymphoma. Data were processed by two nuclear medicine physicians in Reggio Emilia and Creteil. Nineteen phantoms filled with 18F-saline were scanned; these comprised spherical or irregular volumes from 0.5 to 650 cm3 with tumour-to-background ratios from 1.65 to 40. Volumes were measured with different SUVmax thresholds. In patients, TMTV was measured on PET at staging by two methods: volumes of individual lesions were measured using a fixed 41 % SUVmax threshold (TMTV41) and a variable visually adjusted SUVmax threshold (TMTVvar). In phantoms, the 41 % threshold gave the best concordance between measured and actual volumes. Interobserver agreement was almost perfect. In patients, the agreement between the reviewers for TMTV41 measurement was substantial (ρc = 0.986, CI 0.97-0.99) and the difference between the means was not significant (212 ± 218 cm3 for Creteil vs. 206 ± 219 cm3 for Reggio Emilia, P = 0.65). By contrast, the agreement was poor for TMTVvar. There was a significant direct correlation between TMTV41 and normalized LDH (r = 0.652, CI 0.42-0.8, P < 0.001), but high TMTV41 could also be found in patients with stage 1/2 or nonbulky tumour. Measurement of baseline TMTV in lymphoma using a fixed 41 % SUVmax threshold is reproducible and correlates with the other parameters for tumour mass evaluation. It should be evaluated in prospective studies. (orig.)
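
    The fixed-threshold volume measurement can be sketched as follows for a single lesion on a toy SUV map; in practice lesions are segmented first and TMTV41 sums the per-lesion volumes, each thresholded at 41 % of its own SUVmax (array sizes and SUV values here are illustrative):

```python
import numpy as np

def tmtv_fixed_threshold(suv, voxel_volume_cm3, fraction=0.41):
    """Metabolic volume of one lesion: count voxels above fraction*SUVmax and
    multiply by the voxel volume. Simplified single-lesion sketch."""
    threshold = fraction * suv.max()
    return float(np.count_nonzero(suv > threshold)) * voxel_volume_cm3

# Toy 3D SUV map with a hot lesion embedded in low background.
suv = np.full((20, 20, 20), 0.5)
suv[8:12, 8:12, 8:12] = 10.0          # 64 hot voxels, SUVmax = 10
volume = tmtv_fixed_threshold(suv, voxel_volume_cm3=0.1)
print(volume)
```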

  5. A methodology for interpretation of overcoring stress measurements in anisotropic rock

    International Nuclear Information System (INIS)

    Hakala, M.; Sjoeberg, J.

    2006-11-01

    The in situ state of stress is an important parameter for the design of a repository for final disposal of spent nuclear fuel. This report presents work conducted to improve the quality of overcoring stress measurements, focused on the interpretation of overcoring rock stress measurements when accounting for possible anisotropic behavior of the rock. The work comprised: (i) development/upgrading of a computer code for calculating stresses from overcoring strains for anisotropic materials and for a general overcoring probe configuration (up to six strain rosettes with six gauges each), (ii) development of a computer code for determining elastic constants for transversely isotropic rocks from biaxial testing, and (iii) analysis of case studies of selected overcoring measurements in both isotropic and anisotropic rocks from the Posiva and SKB sites in Finland and Sweden, respectively. The work was principally limited to transversely isotropic materials, although the stress calculation code is applicable also to orthotropic materials. The developed computer codes have been geared to work primarily with the Borre and CSIRO HI three-dimensional overcoring measurement probes. Application of the codes to selected case studies, showed that the developed tools were practical and useful for interpreting overcoring stress measurements conducted in anisotropic rock. A quantitative assessment of the effects of anisotropy may thus be obtained, which provides increased reliability in the stress data. Potential gaps in existing data and/or understanding can also be identified. (orig.)
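
    The core calculation, recovering stresses from measured strains through the material's stiffness matrix, is sketched below for the simpler isotropic case; the codes described in the report generalize this to transversely isotropic and orthotropic stiffness matrices (the strain values and elastic constants here are illustrative):

```python
import numpy as np

def stress_from_strain_isotropic(strain_voigt, E, nu):
    """Stresses from strains via Hooke's law for an isotropic rock.
    Voigt order: xx, yy, zz, yz, xz, xy; shear components are engineering strains."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame's first parameter
    mu = E / (2 * (1 + nu))                    # shear modulus
    C = np.array([
        [lam + 2 * mu, lam, lam, 0, 0, 0],
        [lam, lam + 2 * mu, lam, 0, 0, 0],
        [lam, lam, lam + 2 * mu, 0, 0, 0],
        [0, 0, 0, mu, 0, 0],
        [0, 0, 0, 0, mu, 0],
        [0, 0, 0, 0, 0, mu],
    ])
    return C @ strain_voigt

# Hypothetical overcoring strains (microstrain -> strain) for granite-like rock.
eps = np.array([200e-6, 150e-6, 100e-6, 0.0, 20e-6, 30e-6])
sigma = stress_from_strain_isotropic(eps, E=60e9, nu=0.25)
print(np.round(sigma / 1e6, 2))   # MPa
```

    For an anisotropic rock, the stiffness matrix C gains direction-dependent entries, which is exactly why ignoring anisotropy biases the interpreted stresses.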

  6. Novel Methods for Optically Measuring Whitecaps Under Natural Wave Breaking Conditions in the Southern Ocean

    Science.gov (United States)

    Randolph, K. L.; Dierssen, H. M.; Cifuentes-Lorenzen, A.; Balch, W. M.; Monahan, E. C.; Zappa, C. J.; Drapeau, D.; Bowler, B.

    2016-02-01

    Breaking waves on the ocean surface mark areas of significant importance to air-sea flux estimates of gas, aerosols, and heat. Traditional methods of measuring whitecap coverage using digital photography can miss features that are small in size or do not show high enough contrast to the background. The geometry of the images collected captures the near-surface, bright manifestations of the whitecap feature and misses a portion of the bubble plume that is responsible for the production of sea salt aerosols and the transfer of lower solubility gases. Here, a novel method for accurately measuring both the fractional coverage of whitecaps and the intensity and decay rate of whitecap events using above-water radiometry is presented. The methodology was developed using data collected during the austral summer in the Atlantic sector of the Southern Ocean under a large range of wind (speeds of 1 to 15 m s-1) and wave (significant wave heights of 2 to 8 m) conditions as part of the Southern Ocean Gas Exchange experiment. Whitecap metrics were retrieved by employing a magnitude threshold based on the interquartile range of the radiance or reflectance signal for a single channel (411 nm) after a baseline removal, determined using a moving minimum/maximum filter. Breaking intensity and decay rate metrics were produced from the integration of, and the exponential fit to, radiance or reflectance over the lifetime of the whitecap. When compared to fractional whitecap coverage measurements obtained from high resolution digital images, radiometric estimates were consistently higher because they capture more of the decaying bubble plume area that is difficult to detect with photography. Radiometrically-retrieved whitecap measurements are presented in the context of concurrently measured meteorological (e.g., wind speed) and oceanographic (e.g., wave) data.
The optimal fit of the radiometrically estimated whitecap coverage to the instantaneous wind speed, determined using ordinary least
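
    The thresholding scheme described above (baseline removal with a moving-minimum filter, then a magnitude threshold derived from the interquartile range) can be sketched on a synthetic radiance time series; the window length, event amplitudes and the 1.5-IQR multiplier are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic radiance time series: slowly varying background plus a few
# exponentially decaying whitecap events (all numbers are illustrative).
t = np.arange(3000)
background = 5 + 0.3 * np.sin(t / 500)
signal = background + 0.05 * rng.standard_normal(t.size)
for start in (500, 1500, 2400):
    tail = np.arange(t.size - start)
    signal[start:] += 2.0 * np.exp(-tail / 30.0)

# Baseline removal with a moving-minimum filter (a rolling max of the rolling
# min would restore edges; the plain minimum suffices for this sketch).
win = 101
pad = np.pad(signal, win // 2, mode="edge")
baseline = np.array([pad[i:i + win].min() for i in range(t.size)])
resid = signal - baseline

# Magnitude threshold from the interquartile range of the residual signal.
q1, q3 = np.percentile(resid, [25, 75])
threshold = q3 + 1.5 * (q3 - q1)
whitecap = resid > threshold

print(f"whitecap fraction: {whitecap.mean():.4f}")
```

    The flagged samples give the fractional coverage; fitting an exponential to `resid` over each flagged run would give the breaking-intensity and decay-rate metrics.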

  7. Optimum extrusion-cooking conditions for improving physical properties of fish-cereal based snacks by response surface methodology.

    Science.gov (United States)

    Singh, R K Ratankumar; Majumdar, Ranendra K; Venkateshwarlu, G

    2014-09-01

    To establish the effect of barrel temperature, screw speed, total moisture and fish flour content on the expansion ratio and bulk density of fish-based extrudates, response surface methodology was adopted in this study. The experiments were optimized using a five-level, four-factor central composite design. Analysis of variance was carried out to study the main and interaction effects of the factors, and regression analysis was carried out to explain the variability. A second-order model in the coded variables was fitted for each response. The response surface plots were developed as a function of two independent variables while keeping the other two independent variables at their optimal values. Based on the ANOVA, the fitted model was confirmed for both dependent variables. Organoleptically, the highest score was obtained with the combination of temperature 110 °C, screw speed 480 rpm, moisture 18 % and fish flour 20 %.
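
    Fitting the second-order response surface model amounts to ordinary least squares on the linear, quadratic and interaction terms of the coded variables; below is a two-factor sketch on synthetic data (the "true" coefficients are invented so the fit can be checked):

```python
import numpy as np

rng = np.random.default_rng(3)

# Coded levels of two of the four factors (e.g. barrel temperature and screw
# speed); a full central composite design would use all four factors at five levels.
x1 = rng.uniform(-2, 2, 40)
x2 = rng.uniform(-2, 2, 40)

# Synthetic response (e.g. expansion ratio) generated from a known quadratic surface.
y = 3.0 + 0.4 * x1 - 0.3 * x2 - 0.25 * x1**2 - 0.15 * x2**2 + 0.2 * x1 * x2
y += 0.05 * rng.standard_normal(y.size)

# Second-order model y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2,
# fitted by ordinary least squares.
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b1, b2, b11, b22, b12 = coef
print(np.round(coef, 2))
```

    Setting the gradient of the fitted quadratic to zero then locates the stationary point that optimization studies like this one report.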

  8. Measurements of air kerma index in computed tomography: a comparison among methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, T. C.; Mourao, A. P.; Da Silva, T. A., E-mail: alonso@cdtn.br [Universidade Federal de Minas Gerais, Programa de Ciencia y Tecnicas Nucleares, Av. Pres. Antonio Carlos 6627, Pampulha, 31270-901 Belo Horizonte, Minas Gerais (Brazil)

    2016-10-15

    Computed tomography (CT) has become the most important and widely used technique for diagnostic purposes. As CT exams impart high doses to patients in comparison with other radiologic techniques, reliable dosimetry is required. Dosimetry in CT is done in terms of the air kerma index in air or in a phantom, measured by a pencil ionization chamber during a single X-ray tube rotation. In this work, a comparison among CT dosimetric quantities measured by an UNFORS pencil ionization chamber, MTS-N RADOS thermoluminescent dosimeters and GAFCHROMIC XR-CT radiochromic film was carried out. The three dosimetric systems were properly calibrated in X-ray reference radiations in a calibration laboratory. CT dosimetric quantities were measured in a GE Medical Systems Bright Speed CT scanner using a PMMA trunk phantom, and a comparison among the three dosimetric techniques was made. (Author)

  9. A new methodology for measuring time correlations and excited states of atoms and nuclei

    International Nuclear Information System (INIS)

    Cavalcante, M.A.

    1989-01-01

    A system for measuring the time correlation of physical events in the range of 10^-7 to 10^5 s is proposed, and its results are presented. The system is based on a sequential time scale controlled by a precision quartz oscillator; the zero time of observation is set by means of a JK flip-flop, operated by a negative pulse transition in coincidence with the pulse from a detector that marks the time zero of the event (precedent pulse). This electronic system (named a digital chronoanalyzer) was used in the measurement of excited states of nuclei as well as in the determination of time fluctuations in physical phenomena, such as the time lag in a halogen Geiger counter and in the measurement of the 60 keV excited state of Np-237. (author)

  10. Methodology of measurement of thermal neutron time decay constant in Canberra 35+ MCA system

    Energy Technology Data Exchange (ETDEWEB)

    Drozdowicz, K.; Gabanska, B.; Igielski, A.; Krynicka, E.; Woznicka, U. [Institute of Nuclear Physics, Cracow (Poland)

    1993-12-31

    A method for the measurement of the thermal neutron time decay constant in small bounded media is presented. A 14 MeV pulsed neutron generator is the neutron source. The system for recording the die-away curve of thermal neutrons consists of a He-3 detector and a multichannel time analyzer based on the Canberra 35+ analyzer with the MCS 7880 multiscaler module (microsecond range). Optimum parameters for the measuring system are considered. Experimental verification of the dead time of the instrumentation system is made and a count-loss correction is incorporated into the data treatment. Attention is paid to evaluating, with high accuracy, the fundamental-mode decay constant of the registered decay curve. A new procedure for the determination of the decay constant by multiple recording of the die-away curve is presented and results of test measurements are shown. (author). 11 refs, 12 figs, 4 tabs.
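The count-loss correction and fundamental-mode fit can be illustrated with a numpy-only sketch. The dead time, decay constant, amplitudes, and the non-paralyzable dead-time model are all assumptions made for the example, not values or models from the paper:

```python
import numpy as np

def deadtime_correct(rate, tau):
    """Non-paralyzable dead-time correction: true = measured / (1 - measured*tau)."""
    return rate / (1.0 - rate * tau)

# Synthetic die-away curve: fundamental mode plus constant background
t = np.linspace(0.0, 2.0e-3, 200)                 # 2 ms observation window
true_rate = 2.0e4 * np.exp(-4.0e3 * t) + 100.0    # counts/s, lambda = 4000 1/s
tau = 2.0e-6                                       # assumed system dead time, s
measured = true_rate / (1.0 + true_rate * tau)     # counting-loss distortion

corrected = deadtime_correct(measured, tau)
background = corrected[-20:].mean()                # estimated from the late tail
signal = corrected - background
use = signal > 10.0 * background                   # keep signal-dominated channels
# Fundamental-mode decay constant from a log-linear fit
lam_fit = -np.polyfit(t[use], np.log(signal[use]), 1)[0]
```

The correction is the exact algebraic inverse of the assumed distortion, so the fit recovers the decay constant up to the small bias introduced by the tail-based background estimate.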

  11. Measurements of air kerma index in computed tomography: a comparison among methodologies

    International Nuclear Information System (INIS)

    Alonso, T. C.; Mourao, A. P.; Da Silva, T. A.

    2016-10-01

    Computed tomography (CT) has become the most important and widely used technique for diagnostic purposes. As CT exams impart high doses to patients in comparison with other radiologic techniques, reliable dosimetry is required. Dosimetry in CT is done in terms of the air kerma index in air or in a phantom, measured by a pencil ionization chamber during a single X-ray tube rotation. In this work, a comparison among CT dosimetric quantities measured by an UNFORS pencil ionization chamber, MTS-N RADOS thermoluminescent dosimeters and GAFCHROMIC XR-CT radiochromic film was carried out. The three dosimetric systems were properly calibrated in X-ray reference radiations in a calibration laboratory. CT dosimetric quantities were measured in a GE Medical Systems Bright Speed CT scanner using a PMMA trunk phantom, and a comparison among the three dosimetric techniques was made. (Author)

  12. Methodology of measurement of thermal neutron time decay constant in Canberra 35+ MCA system

    International Nuclear Information System (INIS)

    Drozdowicz, K.; Gabanska, B.; Igielski, A.; Krynicka, E.; Woznicka, U.

    1993-01-01

    A method for the measurement of the thermal neutron time decay constant in small bounded media is presented. A 14 MeV pulsed neutron generator is the neutron source. The system for recording the die-away curve of thermal neutrons consists of a He-3 detector and a multichannel time analyzer based on the Canberra 35+ analyzer with the MCS 7880 multiscaler module (microsecond range). Optimum parameters for the measuring system are considered. Experimental verification of the dead time of the instrumentation system is made and a count-loss correction is incorporated into the data treatment. Attention is paid to evaluating, with high accuracy, the fundamental-mode decay constant of the registered decay curve. A new procedure for the determination of the decay constant by multiple recording of the die-away curve is presented and results of test measurements are shown. (author). 11 refs, 12 figs, 4 tabs

  13. Methodology of measurement of thermal neutron time decay constant in Canberra 35+ MCA system

    Energy Technology Data Exchange (ETDEWEB)

    Drozdowicz, K; Gabanska, B; Igielski, A; Krynicka, E; Woznicka, U [Institute of Nuclear Physics, Cracow (Poland)

    1994-12-31

    A method for the measurement of the thermal neutron time decay constant in small bounded media is presented. A 14 MeV pulsed neutron generator is the neutron source. The system for recording the die-away curve of thermal neutrons consists of a He-3 detector and a multichannel time analyzer based on the Canberra 35+ analyzer with the MCS 7880 multiscaler module (microsecond range). Optimum parameters for the measuring system are considered. Experimental verification of the dead time of the instrumentation system is made and a count-loss correction is incorporated into the data treatment. Attention is paid to evaluating, with high accuracy, the fundamental-mode decay constant of the registered decay curve. A new procedure for the determination of the decay constant by multiple recording of the die-away curve is presented and results of test measurements are shown. (author). 11 refs, 12 figs, 4 tabs.

  14. Methodologies for the measurement of bone density and their precision and accuracy

    International Nuclear Information System (INIS)

    Goodwin, P.N.

    1987-01-01

    Radiographic methods of determining bone density have been available for many years, but recently most of the efforts in this field have focused on the development of instruments which would accurately and automatically measure bone density by absorption, or by the use of x-ray computed tomography (CT). Single energy absorptiometers using I-125 have been available for some years, and have been used primarily for measurements on the radius, although recently equipment for measuring the os calcis has become available. Accuracy of single energy measurements is about 3% to 5%; precision, which has been poor because of the difficulty of exact repositioning, has recently been improved by automatic methods so that it now approaches 1% or better. Dual energy sources offer the advantages of greater accuracy and the ability to measure the spine and other large bones. A number of dual energy scanners are now on the market, mostly using gadolinium-153 as a source. Dual energy scanning is capable of an accuracy of a few percent, but the precision when scanning patients can vary widely, due to the difficulty of comparing exactly the same areas; 2 to 4% would appear to be typical. Quantitative computed tomography (QCT) can be used to directly measure the trabecular bone within the vertebral body. The accuracy of single-energy QCT is affected by the amount of marrow fat present, which can lead to underestimations of 10% or more. An increase in marrow fat would cause an apparent decrease in bone mineral. However, the precision can be quite good, 1% or 2% on phantoms, and nearly as good on patients when four vertebrae are averaged. Dual energy scanning can correct for the presence of fat, but is less precise, and not available on all CT units. 52 references

  15. Methodological aspects to be considered in evaluating the economics of service measures

    International Nuclear Information System (INIS)

    Bald, M.

    1987-01-01

    For the purposes of the report, 'service measures' is used as a term denoting all those steps which exceed the framework of normal in-service maintenance and repair and serve to improve economics over the normal case. Positive impacts are to be achieved on such parameters as availability, efficiency, and service life. One of the aspects investigated is the effect, if any, of such measures on the residual service life of plants already in operation for a long period of time. Residual service life in this case means the remaining span of effective technical and economic operation which, in these model calculations, also includes part of the period of depreciation. (orig.) [de]

  16. High frequency measurement of P- and S-wave velocities on crystalline rock massif surface - methodology of measurement

    Science.gov (United States)

    Vilhelm, Jan; Slavík, Lubomír

    2014-05-01

    For the purpose of non-destructive monitoring of rock properties in underground excavations it is possible to perform repeated high-accuracy P- and S-wave velocity measurements. This contribution deals with preliminary results gained during the preparation of a micro-seismic long-term monitoring system. The field velocity measurements were made by the pulse-transmission technique directly on the rock outcrop (granite) in the Bedrichov gallery (northern Bohemia). The gallery at the experimental site was excavated using a TBM (Tunnel Boring Machine) and is used for drinking water supply, which is conveyed in a pipe. The requirement for a stable measuring system with automatic operation led to the use of piezoceramic transducers both as seismic sources and as receivers. The length of the measuring base on the gallery wall was from 0.5 to 3 meters. Different transducer coupling possibilities were tested, namely with regard to the repeatability of the velocity determination. The arrangement of the measuring system on the surface of the rock massif makes the S-transducers more sensitive for P-wave measurement than the P-transducers; similarly, P-transducers were found more suitable for S-wave velocity determination than S-transducers. The frequency-dependent attenuation of the fresh rock massif limits the frequency content of the registered seismic signals. It was found that at source-receiver distances of 0.5 m and above, frequency components above 40 kHz are significantly attenuated. Therefore 100 kHz transducers are most suitable for the excitation of the seismic wave. The limited frequency range should also be taken into account for the shape of the electric impulse used to excite the piezoceramic transducer. A spike pulse generates a broad-band seismic signal, short in the time domain; however, its energy after low-pass filtration in the rock is significantly lower than the energy of the seismic signal generated by a square-wave pulse. Acknowledgments: This work was partially

  17. Three-dimensional sensing methodology combining stereo vision and phase-measuring profilometry based on dynamic programming

    Science.gov (United States)

    Lee, Hyunki; Kim, Min Young; Moon, Jeon Il

    2017-12-01

    Phase measuring profilometry and moiré methodology have been widely applied to the three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation called the correspondence, or 2π-ambiguity, problem. Although a sensing method combining well-known stereo vision and the phase measuring profilometry (PMP) technique has been developed to overcome this problem, it still requires definite improvement in sensing speed and measurement accuracy. We propose a dynamic programming-based stereo PMP method to acquire more reliable depth information in a relatively short time. The proposed method efficiently fuses information from two stereo sensors in terms of phase and intensity simultaneously, based on a newly defined cost function for dynamic programming. In addition, the important parameters are analyzed from the viewpoint of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of important hardware and software parameters on the measurement performance and to verify the method's efficiency, accuracy, and sensing speed, a series of experimental tests was performed with various objects and sensor configurations.
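A toy version of the dynamic-programming idea, a per-scanline disparity search whose matching cost fuses phase and intensity differences with a smoothness penalty minimised by a Viterbi-style pass, might look like this. The weights, the cost form, and the synthetic fringe data are illustrative, not the paper's actual cost function:

```python
import numpy as np

def dp_scanline_disparity(phase_l, phase_r, int_l, int_r, dmax,
                          w=0.7, smooth=0.1):
    """Disparity per pixel (left pixel i matches right pixel i - d)."""
    n = len(phase_l)
    C = np.full((n, dmax + 1), 1e9)           # data cost per (pixel, disparity)
    for d in range(dmax + 1):
        i = np.arange(d, n)
        C[i, d] = (w * np.abs(phase_l[i] - phase_r[i - d])
                   + (1.0 - w) * np.abs(int_l[i] - int_r[i - d]))
    levels = np.arange(dmax + 1)
    penalty = smooth * np.abs(levels[:, None] - levels[None, :])  # |d - d'|
    D = np.empty_like(C)
    D[0] = C[0]
    back = np.zeros((n, dmax + 1), dtype=int)
    for i in range(1, n):                     # forward (Viterbi) pass
        total = D[i - 1][None, :] + penalty   # total[d, d'] over previous d'
        back[i] = np.argmin(total, axis=1)
        D[i] = C[i] + total[levels, back[i]]
    disp = np.empty(n, dtype=int)
    disp[-1] = int(np.argmin(D[-1]))
    for i in range(n - 2, -1, -1):            # backtrack the optimal path
        disp[i] = back[i + 1, disp[i + 1]]
    return disp

# Synthetic scanline: right-image phase and intensity shifted by 3 pixels
n, shift = 64, 3
x = np.arange(n)
phase_l, phase_r = 0.05 * x, 0.05 * (x + shift)
int_l, int_r = np.sin(0.3 * x), np.sin(0.3 * (x + shift))
disp = dp_scanline_disparity(phase_l, phase_r, int_l, int_r, dmax=6)
```

Fusing the two cues is what resolves the 2π ambiguity in spirit: phase alone repeats, but intensity disambiguates the candidate matches along the scanline.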

  18. METHODOLOGICAL APPROACH FOR MEASURING PRIORITY DBPS IN REVERSE OSMOSIS CONCENTRATED DRINKING WATER

    Science.gov (United States)

    Many disinfection by-products (DBPs) are formed when drinking water is chlorinated, but only a few are routinely measured or regulated. Various studies have revealed a plethora of DBPs for which sensitive and quantitative analytical methods have always been a major limiting facto...

  19. A new integrative methodology for desertification studies based on magnetic and short-lived radioisotope measurements

    International Nuclear Information System (INIS)

    Oldfield, F.; Higgitt, S.R.; Maher, B.A.; Appleby, P.G.; Scoullos, M.

    1986-01-01

    The use of mineral magnetic measurements and short-lived radioisotope studies with Pb-210 and Cs-137 is discussed within the ecosystem watershed conceptual framework. Used in conjunction with geomorphological, sedimentological, palaeoecological and geochemical techniques, these methods can form the core of an integrated multidisciplinary study of desertification and erosion processes on all relevant temporal and spatial scales. 30 refs.; 4 figs

  20. Cerebral blood flow measurements in cerebral vascular disease: methodological and clinical aspects

    International Nuclear Information System (INIS)

    Fieschi, C.; Lenzi, G.L.

    1982-01-01

    This paper is devoted mainly to studies performed on acute cerebral vascular disease with invasive techniques for the measurement of regional cerebral blood flow (rCBF). The principles of the rCBF method are outlined and the following techniques are described in detail: the xenon-133 inhalation method, the xenon-133 intravenous method, and emission tomography methods. (C.F.)

  1. Complete methodology on generating realistic wind speed profiles based on measurements

    DEFF Research Database (Denmark)

    Gavriluta, Catalin; Spataru, Sergiu; Mosincat, Ioan

    2012-01-01

    Wind modelling for medium and large time scales is poorly treated in the present literature. This paper presents methods for generating realistic wind speed profiles based on real measurements. The wind speed profile is divided into a low-frequency component (describing long-term variations...

  2. A methodology to measure the effectiveness of academic recruitment and turnover

    DEFF Research Database (Denmark)

    Abramo, Giovanni; D’Angelo, Ciriaco Andrea; Rosati, Francesco

    2016-01-01

    We propose a method to measure the effectiveness of the recruitment and turnover of professors, in terms of their research performance. The method presented is applied to the case of Italian universities over the period 2008–2012. The work then analyses the correlation between the indicators of eff...

  3. Difference in blood pressure measurements between arms: methodological and clinical implications.

    Science.gov (United States)

    Clark, Christopher E

    2015-01-01

    Differences in blood pressure measurements between arms are commonly encountered in clinical practice. If such differences are not excluded they can delay the diagnosis of hypertension and can lead to poorer control of blood pressure levels. Differences in blood pressure measurements between arms are associated cross-sectionally with other signs of vascular disease such as peripheral arterial disease or cerebrovascular disease. Differences are also associated prospectively with increased cardiovascular mortality and morbidity and all-cause mortality. The number of publications on the inter-arm difference is rising year on year, indicating a growing interest in the phenomenon. The prevalence of an inter-arm difference varies widely between reports and is correlated with the underlying cardiovascular risk of the population studied. Prevalence is also sensitive to the method of measurement used. This review discusses the prevalence of an inter-arm difference in different populations and addresses current best practice for the detection and measurement of a difference. The evidence for clinical and vascular associations of an inter-arm difference is presented in considering its emerging role as a novel risk factor for increased cardiovascular morbidity and mortality. Competing aetiological explanations for an inter-arm difference are explored, and gaps in our current understanding of this sign, along with areas in need of further research, are considered.
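As a simple illustration of handling paired readings, one might average the systolic difference across repeated simultaneous measurements and compare it against the ≥10 mmHg cut-off often quoted in the literature. The function, the data, and the threshold here are illustrative, not a clinical protocol:

```python
def interarm_difference(right_sbp, left_sbp, cutoff=10.0):
    """Mean systolic inter-arm difference (mmHg) from paired readings,
    flagged against a cut-off.  The >=10 mmHg threshold is a commonly
    quoted figure; treat it here as an illustrative assumption."""
    diffs = [r - l for r, l in zip(right_sbp, left_sbp)]
    mean_diff = sum(diffs) / len(diffs)
    return mean_diff, abs(mean_diff) >= cutoff

# Three repeated paired readings for a hypothetical patient
mean_diff, flag = interarm_difference([148, 152, 150], [134, 138, 137])
```

Averaging repeated pairs, rather than acting on a single pair, reflects the review's point that measured prevalence is sensitive to the measurement method.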

  4. Minimizing measurement uncertainties of coniferous needle-leaf optical properties, part I: methodological review

    NARCIS (Netherlands)

    Yanez Rausell, L.; Schaepman, M.E.; Clevers, J.G.P.W.; Malenovsky, Z.

    2014-01-01

    Optical properties (OPs) of non-flat narrow plant leaves, i.e., coniferous needles, are extensively used by the remote sensing community, in particular for calibration and validation of radiative transfer models at leaf and canopy level. Optical measurements of such small living elements are,

  5. Measuring method for impulse neutron scattering background in complicated ambient conditions

    International Nuclear Information System (INIS)

    Tang Zhangkui; Peng Taiping; Tang Zhengyuan; Liu Hangang; Hu Mengchun; Fan Juan

    2004-01-01

    This paper introduces a measuring method and a calculation formula for the impulse neutron scattering background in complicated ambient conditions. The experiment was done in the laboratory, and the factors affecting the measurement conclusions are analysed. (authors)

  6. Metabolic tumour volumes measured at staging in lymphoma: methodological evaluation on phantom experiments and patients

    Energy Technology Data Exchange (ETDEWEB)

    Meignan, Michel [Hopital Henri Mondor and Paris-Est University, Department of Nuclear Medicine, Creteil (France); Paris-Est University, Service de Medecine Nucleaire, EAC CNRS 7054, Hopital Henri Mondor AP-HP, Creteil (France); Sasanelli, Myriam; Itti, Emmanuel [Hopital Henri Mondor and Paris-Est University, Department of Nuclear Medicine, Creteil (France); Casasnovas, Rene Olivier [CHU Le Bocage, Department of Hematology, Dijon (France); Luminari, Stefano [University of Modena and Reggio Emilia, Department of Diagnostic, Clinic and Public Health Medicine, Modena (Italy); Fioroni, Federica [Santa Maria Nuova Hospital-IRCCS, Department of Medical Physics, Reggio Emilia (Italy); Coriani, Chiara [Santa Maria Nuova Hospital-IRCCS, Department of Radiology, Reggio Emilia (Italy); Masset, Helene [Henri Mondor Hospital, Department of Radiophysics, Creteil (France); Gobbi, Paolo G. [University of Pavia, Department of Internal Medicine and Gastroenterology, Fondazione IRCCS Policlinico San Matteo, Pavia (Italy); Merli, Francesco [Santa Maria Nuova Hospital-IRCCS, Department of Hematology, Reggio Emilia (Italy); Versari, Annibale [Santa Maria Nuova Hospital-IRCCS, Department of Nuclear Medicine, Reggio Emilia (Italy)

    2014-06-15

    The presence of a bulky tumour at staging on CT is an independent prognostic factor in malignant lymphomas. However, its prognostic value is limited in diffuse disease. Total metabolic tumour volume (TMTV) determined on 18F-FDG PET/CT could give a better evaluation of the total tumour burden and may help patient stratification. Different methods of TMTV measurement established in phantoms simulating lymphoma tumours were investigated and validated in 40 patients with Hodgkin lymphoma and diffuse large B-cell lymphoma. Data were processed by two nuclear medicine physicians in Reggio Emilia and Creteil. Nineteen phantoms filled with 18F-saline were scanned; these comprised spherical or irregular volumes from 0.5 to 650 cm³ with tumour-to-background ratios from 1.65 to 40. Volumes were measured with different SUVmax thresholds. In patients, TMTV was measured on PET at staging by two methods: volumes of individual lesions were measured using a fixed 41 % SUVmax threshold (TMTV41) and a variable, visually adjusted SUVmax threshold (TMTVvar). In phantoms, the 41 % threshold gave the best concordance between measured and actual volumes. Interobserver agreement was almost perfect. In patients, the agreement between the reviewers for TMTV41 measurement was substantial (ρc = 0.986, CI 0.97 - 0.99) and the difference between the means was not significant (212 ± 218 cm³ for Creteil vs. 206 ± 219 cm³ for Reggio Emilia, P = 0.65). By contrast, the agreement was poor for TMTVvar. There was a significant direct correlation between TMTV41 and normalized LDH (r = 0.652, CI 0.42 - 0.8, P < 0.001). Higher disease stages and bulky tumour were associated with higher TMTV41, but high TMTV41 could be found in patients with stage 1/2 or nonbulky tumour. Measurement of baseline TMTV in lymphoma using a fixed 41 % SUVmax threshold is reproducible and correlates with the other parameters for tumour mass evaluation.
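The fixed-threshold volume measurement itself is straightforward: keep the voxels at or above 41 % of the lesion's SUVmax and multiply the count by the voxel volume, summing over lesions for the total. The SUV array and voxel size below are synthetic:

```python
import numpy as np

def lesion_mtv(suv, voxel_volume_cm3, frac=0.41):
    """Metabolic tumour volume of one lesion: voxels with
    SUV >= frac * SUVmax (fixed 41 % threshold, as in the study),
    times the voxel volume in cm^3."""
    suvmax = suv.max()
    return np.count_nonzero(suv >= frac * suvmax) * voxel_volume_cm3

# Toy lesion: a bright core plus one voxel below the 41 % cut
lesion = np.zeros((10, 10, 10))
lesion[3:7, 3:7, 3:7] = 10.0     # 64 voxels at SUVmax = 10
lesion[2, 2, 2] = 3.0            # below 0.41 * 10 = 4.1, excluded
tmtv = lesion_mtv(lesion, voxel_volume_cm3=0.064)
```

For a patient, TMTV41 would be the sum of `lesion_mtv` over all segmented lesions, each thresholded at 41 % of its own SUVmax.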

  7. Technical Note: New methodology for measuring viscosities in small volumes characteristic of environmental chamber particle samples

    Directory of Open Access Journals (Sweden)

    L. Renbaum-Wolff

    2013-01-01

    Herein, a method for the determination of viscosities of small sample volumes is introduced, with important implications for the viscosity determination of particle samples from environmental chambers (used to simulate atmospheric conditions). The amount of sample needed is < 1 μl, and the technique is capable of determining viscosities (η) ranging between 10^-3 and 10^3 Pascal seconds (Pa s) in samples that cover a range of chemical properties, with real-time relative humidity and temperature control; hence, the technique should be well suited for determining the viscosities, under atmospherically relevant conditions, of particles collected from environmental chambers. In this technique, supermicron particles are first deposited on an inert hydrophobic substrate. Then, insoluble beads (~1 μm in diameter) are embedded in the particles. Next, a flow of gas is introduced over the particles, which generates a shear stress on the particle surfaces. The sample responds to this shear stress by generating internal circulations, which are quantified with an optical microscope by monitoring the movement of the beads. The rate of internal circulation is shown to be a function of particle viscosity but independent of the particle material for a wide range of organic and organic-water samples. A calibration curve is constructed from the experimental data that relates the rate of internal circulation to particle viscosity, and this calibration curve is successfully used to predict viscosities in multicomponent organic mixtures.
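The calibration-curve step can be sketched as a log-log fit, assuming (purely for illustration) that the bead circulation rate follows an inverse power law in viscosity. The numbers are synthetic, not the paper's calibration data:

```python
import numpy as np

# Hypothetical calibration standards: known viscosities (Pa s) and the
# circulation rates (1/s) measured on them, generated here from an
# assumed power law rate = a * eta**b with a = 2, b = -1.
viscosity = np.array([1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2, 1e3])
circ_rate = 2.0 / viscosity

# Fit log(rate) = log(a) + b * log(eta), then invert the calibration
b, log_a = np.polyfit(np.log(viscosity), np.log(circ_rate), 1)

def viscosity_from_rate(rate):
    """Predict viscosity of an unknown sample from its circulation rate."""
    return (rate / np.exp(log_a)) ** (1.0 / b)
```

An unknown particle's viscosity is then read off the curve from its observed circulation rate, exactly as the abstract describes for multicomponent mixtures.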

  8. Current status and methodological aspects on the measurement of glomerular filtration rate

    International Nuclear Information System (INIS)

    Froissart, M.; Hignette, C.; Kolar, P.; Prigent, A.; Paillard, M.

    1995-01-01

    Determination of the glomerular filtration rate (GFR) contributes to our understanding of kidney physiology and pathophysiology. Moreover, determination of GFR is of clinical importance in assessing the diagnosis and the progression of renal disease. The purpose of this article is to review the technical performance and results of GFR measurements, including the classical inulin clearance technique and more recent alternative clearance techniques using radioisotope-labelled filtration markers, bolus infusion and spontaneous bladder emptying. Some simplified techniques avoiding urine collection are also described. We conclude that estimation of GFR from renal and, in some cases, plasma clearances is accurate and more convenient than the classical inulin clearance technique. Such measurements of GFR should be included both in clinical practice and in clinical research. (authors). 80 refs., 5 figs., 1 tab
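The renal clearance on which these techniques rest is the classical formula C = U·V / P; for a marker that is freely filtered and neither secreted nor reabsorbed (such as inulin), the clearance equals the GFR. The numbers below are textbook-style illustrative values:

```python
def renal_clearance(urine_conc, urine_flow_ml_min, plasma_conc):
    """Classical renal clearance C = U * V / P, in ml/min.
    U and P must be in the same concentration units."""
    return urine_conc * urine_flow_ml_min / plasma_conc

# Hypothetical steady-state values: U = 60 mg/dl, V = 2 ml/min, P = 1 mg/dl
gfr = renal_clearance(60.0, 2.0, 1.0)
```

The simplified plasma-clearance techniques mentioned in the abstract replace the urine collection (the U·V term) with a model of the marker's disappearance from plasma after a bolus.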

  9. Critical experiments, measurements, and analyses to establish a crack arrest methodology for nuclear pressure vessel steels

    International Nuclear Information System (INIS)

    Hahn, G.T.

    1977-01-01

    Substantial progress was made in three important areas: crack propagation and arrest theory, two-dimensional dynamic crack propagation analyses, and a laboratory test method for the material property data base. The major findings were as follows: Measurements of run-arrest events lent support to the dynamic, energy-conservation theory of crack arrest. A two-dimensional, dynamic, finite-difference analysis, including inertia forces and thermal gradients, was developed. The analysis was successfully applied to run-arrest events in DCB (double-cantilever-beam) and SEN (single-edge-notched) test pieces. A simplified procedure for measuring K_D and K_Im values with ordinary and duplex DCB specimens was demonstrated. The procedure employs a dynamic analysis of the crack length at arrest and requires no special instrumentation. The new method was applied to "duplex" specimens to measure the large K_D values displayed by A533B steel above the nil-ductility temperature. K_D crack velocity curves and K_Im values of two heats of A533B steel were measured, along with the corresponding values of the plane-strain fracture toughness associated with static initiation (K_Ic), dynamic initiation (K_Id), and the static stress intensity at crack arrest (K_Ia). Possible relations among these toughness indices are identified. During the past year the principal investigators of the participating groups reached agreement on a crack arrest theory appropriate for the pressure vessel problem. 7 figures

  10. A Performance Measurement and Implementation Methodology in a Department of Defense CIM (Computer Integrated Manufacturing) Environment

    Science.gov (United States)

    1988-01-24

    vanes. The new facility is currently being called the Engine Blade/Vane Facility (EB/VF). There are three primary goals in automating this proc... ...earlier, the search led primarily into the areas of CIM Justification, Automation Strategies, Performance Measurement, and Integration issues. Of... ...of living, has been steadily eroding. One dangerous trend that has developed in keenly competitive world markets, says Rohan [33], has been for U.S...

  11. Bone mineral content measurement in small infants by single-photon absorptiometry: current methodologic issues

    International Nuclear Information System (INIS)

    Steichen, J.J.; Asch, P.A.; Tsang, R.C.

    1988-01-01

    Single-photon absorptiometry (SPA), developed in 1963 and adapted for infants by Steichen et al. in 1976, is an important tool to quantitate bone mineralization in infants. Studies of infants in which SPA was used include studies of fetal bone mineralization and postnatal bone mineralization in very low birth weight infants. The SPA technique has also been used as a research tool to investigate longitudinal bone mineralization and to study the effect of nutrition and disease processes such as rickets or osteopenia of prematurity. At present, it has little direct clinical application for diagnosing bone disease in single patients. The bones most often used to measure bone mineral content (BMC) are the radius, the ulna, and, less often, the humerus. The radius appears to be preferred as a suitable bone to measure BMC in infants. It is easily accessible; anatomic reference points are easily palpated and have a constant relationship to the radial mid-shaft site; soft tissue does not affect either palpation of anatomic reference points or BMC quantitation in vivo. The peripheral location of the radius minimizes body radiation exposure. Trabecular and cortical bone can be measured separately. Extensive background studies exist on radial BMC in small infants. Most important, the radius has a relatively long zone of constant BMC. Finally, SPA for BMC in the radius has a high degree of precision and accuracy. 61 references

  12. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    Directory of Open Access Journals (Sweden)

    Hamid Reza Marateb

    2014-01-01

    Background: Selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are studied using several medical examples. We present two ordinal-variable clustering examples, ordinal variables being more challenging in analysis, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: The sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: By using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is obtained. Moreover, descriptive and inferential statistics, in addition to the modeling approach, must be selected based on the scale of the variables.
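The performance assessment against the gold standard reduces to a confusion-matrix computation. The labels and counts below are made up for illustration and do not reproduce the study's figures:

```python
def binary_agreement(predicted, gold, positive):
    """Sensitivity, specificity and accuracy of predicted labels
    against a gold standard, for the given positive class."""
    tp = sum(p == positive and g == positive for p, g in zip(predicted, gold))
    tn = sum(p != positive and g != positive for p, g in zip(predicted, gold))
    fp = sum(p == positive and g != positive for p, g in zip(predicted, gold))
    fn = sum(p != positive and g == positive for p, g in zip(predicted, gold))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(gold)
    return sensitivity, specificity, accuracy

# Hypothetical cluster labels vs. clinically confirmed diagnoses
gold      = ["malignant"] * 10 + ["benign"] * 10
predicted = ["malignant"] * 9 + ["benign"] * 11   # one malignant case missed
sens, spec, acc = binary_agreement(predicted, gold, positive="malignant")
```

In the study this comparison is done between each clustering method's two groups and the clinically identified malignant/benign groups.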

  13. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    Science.gov (United States)

    Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

    2014-01-01

    Background: Selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are studied using several medical examples. We present two ordinal-variable clustering examples, ordinal variables being more challenging in analysis, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: The sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: By using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is obtained. Moreover, descriptive and inferential statistics, in addition to the modeling approach, must be selected based on the scale of the variables. PMID:24672565

  14. Statistical inference with quantum measurements: methodologies for nitrogen vacancy centers in diamond

    Science.gov (United States)

    Hincks, Ian; Granade, Christopher; Cory, David G.

    2018-01-01

    The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
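
The "biased coin obstructed by Poisson rates" model can be sketched numerically. This toy simulation assumes a two-rate model (dark rate a0, bright rate a1, both taken as calibrated) and uses a simple moment estimator rather than the paper's full maximum-likelihood or Bayesian machinery:

```python
# Minimal sketch (our own toy model, not the paper's full likelihood):
# each shot the spin is "bright" with probability p; photon counts are
# Poisson with rate a1 (bright) or a0 (dark). With calibrated a0, a1,
# a moment estimator inverts the mean count to recover p.
import math, random

random.seed(1)

def poisson(lam):
    # Knuth's multiplication method; fine for small rates
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= L:
            return k
        k += 1

a0, a1 = 2.0, 10.0        # dark / bright photon rates (assumed calibrated)
p_true, shots = 0.7, 20000
counts = [poisson(a1 if random.random() < p_true else a0)
          for _ in range(shots)]
mean = sum(counts) / shots
p_hat = (mean - a0) / (a1 - a0)   # moment estimator for the coin bias
print(abs(p_hat - p_true) < 0.02)
```

The frequentist and Bayes estimators discussed in the abstract refine this same inversion with proper likelihoods and priors.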

  15. Dual photon absorptiometry measurement of the lumbar bone mineral content. Methodology - Reproducibility - Normal values

    International Nuclear Information System (INIS)

    Braillon, P.; Duboeuf, F.; Delmas, P.D.; Meunier, P.J.

    1987-01-01

    Measurements were made with a DPA apparatus (Novo Lab 22a) on different phantoms and on volunteers in an attempt to evaluate the precision of the system. Reproducibility was found to be in the range of 0.98 to 4.10% for in vitro measurements, depending on the geometry of the phantoms used, and in the range of 1.6 to 2.94% for volunteers after repositioning. Secondly, the BMD in the lumbar spine of normal women and normal men was estimated. In control females, BMD is well fitted as a function of age by a cubic regression. The maximum BMD value occurs in this case at age 31.5, and the maximum rate of bone loss takes place at age 57. Total bone loss between age 31.5 and old age is about 32%. In control males, results are more scattered and are represented by a simple linear regression; the average mineral loss between 30 and 80 years of age is 11.5% in this area of measurement [fr
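
The cubic-regression analysis can be illustrated with invented coefficients, chosen so that the peak falls at age 31.5 and the inflection (maximum loss rate) at 57, as in the abstract; the remaining numbers, including the resulting total loss, are not the paper's:

```python
# hypothetical cubic coefficients (not the paper's fitted values)
c0, c1, c2, c3 = 0.9, 0.00779625, -1.71e-4, 1e-6

def bmd(a):
    # BMD as a cubic function of age a (years)
    return c0 + c1 * a + c2 * a * a + c3 * a ** 3

# peak BMD age: scan a fine age grid for the maximum
peak_age = max((a / 10 for a in range(200, 901)), key=bmd)
# maximum bone-loss rate sits at the inflection point, BMD''(a) = 0
inflection = -c2 / (3 * c3)
loss_pct = 100 * (bmd(peak_age) - bmd(80)) / bmd(peak_age)
print(round(peak_age, 1), round(inflection, 1), round(loss_pct, 1))
```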

  16. Methodological issues in systematic reviews of headache trials: adapting historical diagnostic classifications and outcome measures to present-day standards.

    Science.gov (United States)

    McCrory, Douglas C; Gray, Rebecca N; Tfelt-Hansen, Peer; Steiner, Timothy J; Taylor, Frederick R

    2005-05-01

    Recent efforts to make headache diagnostic classification and clinical trial methodology more consistent provide valuable advice to trialists generating new evidence on effectiveness of treatments for headache; however, interpreting older trials that do not conform to new standards remains problematic. Systematic reviewers seeking to utilize historical data can adapt currently recommended diagnostic classification and clinical trial methodological approaches to interpret all available data relative to current standards. In evaluating study populations, systematic reviewers can: (i) use available data to attempt to map study populations to diagnoses in the new International Classification of Headache Disorders; and (ii) stratify analyses based on the extent to which study populations are precisely specified. In evaluating outcome measures, systematic reviewers can: (i) summarize prevention studies using headache frequency, incorporating headache index in a stratified analysis if headache frequency is not available; (ii) summarize acute treatment studies using pain-free response as reported in directly measured headache improvement or headache severity outcomes; and (iii) avoid analysis of recurrence or relapse data not conforming to the sustained pain-free response definition.

  17. Comparing Classic and Interval Analytical Hierarchy Process Methodologies for Measuring Area-Level Deprivation to Analyze Health Inequalities.

    Science.gov (United States)

    Cabrera-Barona, Pablo; Ghorbanzadeh, Omid

    2018-01-16

    Deprivation indices are useful measures to study health inequalities. Different techniques are commonly applied to construct deprivation indices, including multi-criteria decision methods such as the analytical hierarchy process (AHP). The multi-criteria deprivation index for the city of Quito is an index in which indicators are weighted by applying the AHP. In this research, a variation of this index is introduced that is calculated using interval AHP methodology. Both indices are compared by applying logistic generalized linear models and multilevel models, considering self-reported health as the dependent variable and deprivation and self-reported quality of life as the independent variables. The obtained results show that the multi-criteria deprivation index for the city of Quito is a meaningful measure to assess neighborhood effects on self-reported health and that the alternative deprivation index using the interval AHP methodology more thoroughly represents the local knowledge of experts and stakeholders. These differences could support decision makers in improving health planning and in tackling health inequalities in more deprived areas.
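
For readers unfamiliar with the classic AHP step, here is a minimal sketch: the weights are the principal eigenvector of a reciprocal pairwise-comparison matrix, approximated here by power iteration. The 3x3 matrix is a textbook-style example, not the Quito study's, and the interval-AHP variant discussed above is not shown:

```python
# Sketch of classic AHP weighting (hypothetical 3-criterion pairwise
# comparison matrix): weights are the principal eigenvector of the
# reciprocal matrix, approximated by normalized power iteration.

A = [[1,     3,     5],
     [1 / 3, 1,     3],
     [1 / 5, 1 / 3, 1]]

w = [1 / 3] * 3
for _ in range(50):
    v = [sum(A[i][j] * w[j] for j in range(3)) for i in range(3)]
    s = sum(v)
    w = [x / s for x in v]          # renormalize each iteration

print([round(x, 3) for x in w])     # criterion weights, summing to 1
```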

  18. Comparison of noise power spectrum methodologies in measurements by using various electronic portal imaging devices in radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Son, Soon Yong [Dept. of Radiological Technology, Wonkwang Health Science University, Iksan (Korea, Republic of); Choi, Kwan Woo [Dept. of Radiology, Asan Medical Center, Seoul (Korea, Republic of); Jeong, Hoi Woun [Dept. of Radiological Technology, Baekseok Culture University College, Cheonan (Korea, Republic of); Kwon, Kyung Tae [Dep. of Radiological Technology, Dongnam Health University, Suwon (Korea, Republic of); Kim, Ki Won [Dept. of Radiology, Kyung Hee University Hospital at Gang-dong, Seoul (Korea, Republic of); Lee, Young Ah; Son, Jin Hyun; Min, Jung Whan [Shingu University College, Sungnam (Korea, Republic of)

    2016-03-15

    The noise power spectrum (NPS) is one of the most general methods for measuring the noise amplitude and the quality of an image acquired from a uniform radiation field. The purpose of this study was to compare different NPS methodologies at megavoltage X-ray energies. The NPS evaluation methods used in diagnostic radiology were applied to therapy imaging using the International Electro-technical Commission standard (IEC 62220-1). Various radiation therapy (RT) devices, such as TrueBeamTM (Varian), BEAMVIEWPLUS (Siemens), iViewGT (Elekta) and Clinac iX (Varian), were used. In order to measure the region of interest (ROI) of the NPS, we used the following four factors: the overlapping impact, the non-overlapping impact, the flatness and the penumbra. As for the NPS results, iViewGT (Elekta) had a higher amplitude of noise than BEAMVIEWPLUS (Siemens), TrueBeamTM (Varian) with flattening filter, Clinac iX aS1000 (Varian) and TrueBeamTM (Varian) flattening filter free. The present study revealed that various factors could be employed to produce megavoltage imaging (MVI) of the NPS and to serve as a baseline standard for the control of NPS methodologies in MVI.
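
Schematically, an NPS estimate averages squared Fourier amplitudes of mean-subtracted flat-field ROIs. The 1D toy below (white Gaussian noise, invented parameters; IEC 62220-1 prescribes far more detailed conditions) includes a Parseval sanity check:

```python
# Schematic 1D NPS estimate: NPS(f_k) = (dx / N) * <|DFT(roi - mean)|^2>
# averaged over ROIs. For white noise of unit variance the integrated
# spectrum should recover that variance (Parseval), a handy check.
import cmath, random

random.seed(0)
N, dx, n_rois = 32, 1.0, 100   # samples per ROI, pixel pitch, ROI count

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

nps = [0.0] * N
for _ in range(n_rois):
    roi = [random.gauss(0.0, 1.0) for _ in range(N)]   # flat-field noise
    m = sum(roi) / N
    X = dft([v - m for v in roi])                      # detrended ROI
    for k in range(N):
        nps[k] += abs(X[k]) ** 2 * dx / N / n_rois

# Parseval sanity check: integrating the NPS recovers the pixel variance
var_est = sum(nps) / (N * dx)
print(abs(var_est - 1.0) < 0.15)
```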

  19. Fiber-Optic Temperature and Pressure Sensors Applied to Radiofrequency Thermal Ablation in Liver Phantom: Methodology and Experimental Measurements

    Directory of Open Access Journals (Sweden)

    Daniele Tosi

    2015-01-01

    Full Text Available Radiofrequency thermal ablation (RFA) is a procedure aimed at interventional cancer care and is applied to the treatment of small- and midsize tumors in lung, kidney, liver, and other tissues. RFA generates a selective high-temperature field in the tissue; temperature values and their persistency are directly related to the mortality rate of tumor cells. Temperature measurement in up to 3–5 points, using electrical thermocouples, belongs to the present clinical practice of RFA and is the foundation of a physical model of the ablation process. Fiber-optic sensors allow extending the detection of biophysical parameters to a vast plurality of sensing points, using miniature and noninvasive technologies that do not alter the RFA pattern. This work addresses the methodology for optical measurement of temperature distribution and pressure using four different fiber-optic technologies: fiber Bragg gratings (FBGs), linearly chirped FBGs (LCFBGs), Rayleigh scattering-based distributed temperature systems (DTS), and extrinsic Fabry-Perot interferometry (EFPI). For each instrument, the methodology for ex vivo sensing, as well as experimental results, is reported, leading to the application of fiber-optic technologies in vivo. The possibility of using a fiber-optic sensor network, in conjunction with a suitable ablation device, can enable smart ablation procedures in which ablation parameters are dynamically adjusted.
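
The FBG readout itself is a linear wavelength-to-temperature conversion. The numbers below are generic textbook values (roughly 10 pm/°C sensitivity near 1550 nm), not calibrations from this work:

```python
# Back-of-envelope FBG temperature readout (typical textbook numbers,
# not calibrated to the paper's sensors): the measured Bragg wavelength
# shift maps linearly to the temperature rise during ablation.

lambda_b = 1550.0   # nm, nominal Bragg wavelength (assumed)
k_t = 0.010         # nm per deg C, assumed sensitivity (~10 pm/C)

def temp_rise(measured_nm):
    # linear readout: Bragg wavelength shift divided by sensitivity
    return (measured_nm - lambda_b) / k_t

print(round(temp_rise(1550.62), 1))   # a 0.62 nm shift
```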

  20. Guidelines for measuring the physical, chemical, and biological condition of wilderness ecosystems

    Science.gov (United States)

    Douglas G Fox; J. Christopher Bernabo; Betsy Hood

    1987-01-01

    Guidelines include a large number of specific measures to characterize the existing condition of wilderness resources. Measures involve the atmospheric environment, water chemistry and biology, geology and soils, and flora. Where possible, measures are coordinated with existing long-term monitoring programs. Application of the measures will allow more effective...

  1. Translation and linguistic validation of the Pediatric Patient-Reported Outcomes Measurement Information System measures into simplified Chinese using cognitive interviewing methodology.

    Science.gov (United States)

    Liu, Yanyan; Hinds, Pamela S; Wang, Jichuan; Correia, Helena; Du, Shizheng; Ding, Jian; Gao, Wen Jun; Yuan, Changrong

    2013-01-01

    The Pediatric Patient-Reported Outcomes Measurement Information System (PROMIS) measures were developed using modern measurement theory and tested in a variety of settings to assess the quality of life, function, and symptoms of children and adolescents experiencing a chronic illness and its treatment. Developed in English, this set of measures had not been translated into Chinese. The objective of this study was to develop the Chinese version of the Pediatric PROMIS measures (C-Ped-PROMIS), specifically 8 short forms, and to pretest the translated measures in children and adolescents through cognitive interviewing methodology. The C-Ped-PROMIS was developed following the standard Functional Assessment of Chronic Illness Therapy Translation Methodology. Bilingual teams from the United States and China reviewed the translation to develop a provisional version, which was then pretested with cognitive interview by probing 10 native Chinese-speaking children aged 8 to 17 years in China. The translation was finalized by the bilingual teams. Most items, response options, and instructions were well understood by the children, and some revisions were made to address patient's comments during the cognitive interview. The results indicated that the C-Ped-PROMIS items were semantically and conceptually equivalent to the original. Children aged 8 to 17 years in China were able to comprehend these measures and express their experience and feelings about illness or their life. The C-Ped-PROMIS is available for psychometric validation. Future work will be directed at translating the rest of the item banks, calibrating them and creating a Chinese final version of the short forms.

  2. Batch versus column modes for the adsorption of radioactive metal onto rice husk waste: conditions optimization through response surface methodology.

    Science.gov (United States)

    Kausar, Abida; Bhatti, Haq Nawaz; Iqbal, Munawar; Ashraf, Aisha

    2017-09-01

    Batch and column adsorption modes were compared for the adsorption of U(VI) ions using rice husk waste biomass (RHWB). Response surface methodology was employed for the optimization of the process variables, i.e., pH (A), adsorbent dose (B) and initial ion concentration (C), in batch mode. The B, C and C² terms affected the U(VI) adsorption significantly in batch mode. The developed quadratic model was validated on the basis of the regression coefficient as well as analysis of variance. The predicted and actual values were found to correlate well, with negligible residuals, and B, C and C² were significant terms. The column study was performed considering bed height, flow rate and initial metal ion concentration, and adsorption efficiency was evaluated through breakthrough curves and the bed depth service time and Thomas models. Adsorption was found to be dependent on bed height and initial U(VI) ion concentration, while increasing the flow rate decreased the adsorption capacity. The Thomas model fitted the U(VI) adsorption onto RHWB well. Results revealed that RHWB has the potential to remove U(VI) ions, and batch adsorption was found to be more efficient than the column mode.
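
The quadratic response-surface fit behind such an optimization can be sketched as ordinary least squares on a model with linear and squared terms. The design points and coefficients below are synthetic, not the paper's data:

```python
# Toy sketch of the response-surface step: fit y = b0 + b1*x1 + b2*x2
# + b22*x2^2 by ordinary least squares (synthetic adsorption data).

def solve(M, y):
    # Gauss-Jordan elimination with partial pivoting
    n = len(M)
    A = [row[:] + [y[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c and A[r][c]:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

true = [5.0, 2.0, -1.0, 0.3]           # b0, b1, b2, b22 (invented)
rows, ys = [], []
for x1 in (2, 4, 6):                   # e.g. adsorbent dose levels
    for x2 in (10, 30, 50, 70):        # e.g. ion concentration levels
        f = [1.0, x1, x2, x2 * x2]
        rows.append(f)
        ys.append(sum(t * v for t, v in zip(true, f)))

# normal equations X'X b = X'y
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
Xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(4)]
b = solve(XtX, Xty)
print(all(abs(a - t) < 1e-4 for a, t in zip(b, true)))
```

The significance testing (ANOVA) reported in the abstract would then be run on the residuals of such a fit.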

  3. LMTD Design Methodology Assessment of Spiral Tube Heat Exchanger under the S-CO2 cycle operating condition

    International Nuclear Information System (INIS)

    Jung, Hwa Young; Lee, Jeong Ik; Ahn, Yoon Han

    2013-01-01

    The advantages of the PCHE are compactness, endurance of high pressure differences, and high-temperature operation. However, the PCHE is quite expensive and its resistance to fast thermal cycling is questionable. In order to overcome this problem, the Korea Advanced Institute of Science and Technology (KAIST) research team is considering an alternative to the PCHE. Currently the KAIST research team is using a Spiral Tube Heat Exchanger (STHE) from Sentry Equipment Corp. as a pre-cooler in the S-CO2PE facility. A STHE is relatively cheap, while its operating pressure and temperature are acceptable for utilizing it as a pre-cooler. A STHE consists of spiral-shaped tubes (hot side, i.e. S-CO2) immersed in a shell (cold side, i.e. water). This study is aimed at determining whether the logarithmic mean temperature difference (LMTD) heat exchanger design methodology is acceptable for designing the S-CO2 cycle pre-cooler, because the LMTD method usually assumes a constant specific heat, whereas the pre-cooler in the S-CO2 cycle operates at the point nearest to the critical point, where a dramatic change in properties is expected. Experimentally obtained data are compared to the vendor-provided technical specification based on the LMTD method. The detailed specifications provided by the vendor are listed in Table 1

  4. LMTD Design Methodology Assessment of Spiral Tube Heat Exchanger under the S-CO{sub 2} cycle operating condition

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Hwa Young; Lee, Jeong Ik; Ahn, Yoon Han [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2013-05-15

    The advantages of the PCHE are compactness, endurance of high pressure differences, and high-temperature operation. However, the PCHE is quite expensive and its resistance to fast thermal cycling is questionable. In order to overcome this problem, the Korea Advanced Institute of Science and Technology (KAIST) research team is considering an alternative to the PCHE. Currently the KAIST research team is using a Spiral Tube Heat Exchanger (STHE) from Sentry Equipment Corp. as a pre-cooler in the SCO{sub 2}PE facility. A STHE is relatively cheap, while its operating pressure and temperature are acceptable for utilizing it as a pre-cooler. A STHE consists of spiral-shaped tubes (hot side, i.e. S-CO{sub 2}) immersed in a shell (cold side, i.e. water). This study is aimed at determining whether the logarithmic mean temperature difference (LMTD) heat exchanger design methodology is acceptable for designing the S-CO{sub 2} cycle pre-cooler, because the LMTD method usually assumes a constant specific heat, whereas the pre-cooler in the S-CO{sub 2} cycle operates at the point nearest to the critical point, where a dramatic change in properties is expected. Experimentally obtained data are compared to the vendor-provided technical specification based on the LMTD method. The detailed specifications provided by the vendor are listed in Table 1.
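
The LMTD arithmetic at issue is short enough to show directly. The temperatures below are illustrative, not facility data; the per-segment evaluation at the end is the usual workaround when specific heat varies strongly, as it does near the CO2 critical point:

```python
# LMTD sketch with illustrative counter-flow temperatures (not the
# SCO2PE facility's): Q = U * A * LMTD assumes constant specific heat.
import math

def lmtd(d1, d2):
    # log-mean of the terminal temperature differences
    return (d1 - d2) / math.log(d1 / d2) if d1 != d2 else d1

# hot CO2 side cools 75 -> 35 C; cold water side heats 20 -> 30 C
dt_in, dt_out = 75 - 30, 35 - 20            # 45 C and 15 C terminal dTs
print(round(lmtd(dt_in, dt_out), 2))

# near the critical point cp varies sharply, so a single LMTD is suspect;
# the usual check splits the exchanger and applies the formula per
# segment with local fluid properties
segments = [(45, 30), (30, 15)]             # terminal dTs of two segments
per_segment = [lmtd(a, b) for a, b in segments]
print([round(v, 2) for v in per_segment])
```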

  5. An EPR methodology for measuring the London penetration depth for the ceramic superconductors

    Science.gov (United States)

    Rakvin, B.; Mahl, T. A.; Dalal, N. S.

    1990-01-01

    The use of electron paramagnetic resonance (EPR) as a quick and easily accessible method for measuring the London penetration depth, lambda, for the high T(sub c) superconductors is discussed. The method utilizes the broadening of the EPR signal of a free radical adsorbed on the surface of the sample, due to the emergence of the magnetic flux lattice. The second moment of the EPR signal below T(sub c) is fitted to the Brandt equation for a simple triangular lattice. The precision of this method compares quite favorably with those of more standard methods such as muon spin rotation (μ+SR), neutron scattering, and magnetic susceptibility.

  6. Combining tracer flux ratio methodology with low-flying aircraft measurements to estimate dairy farm CH4 emissions

    Science.gov (United States)

    Daube, C.; Conley, S.; Faloona, I. C.; Yacovitch, T. I.; Roscioli, J. R.; Morris, M.; Curry, J.; Arndt, C.; Herndon, S. C.

    2017-12-01

    Livestock activity, enteric fermentation of feed and anaerobic digestion of waste, contributes significantly to the methane budget of the United States (EPA, 2016). Studies have questioned the reported magnitude of these methane sources (Miller et al., 2013), calling for more detailed research on agricultural animals (Hristov, 2014). Tracer flux ratio is an attractive experimental method to bring to this problem because it does not rely on estimates of atmospheric dispersion. Data collection occurred during one week at two dairy farms in central California (June 2016). The farms varied in size, layout, head count, and general operation. The tracer flux ratio method involves releasing ethane on-site with a known flow rate to serve as a tracer gas. Downwind mixed enhancements in ethane (from the tracer) and methane (from the dairy) were measured, and their ratio was used to infer the unknown methane emission rate from the farm. An instrumented van drove transects downwind of each farm on public roads while tracer gases were released on-site, employing the tracer flux ratio methodology to assess simultaneous methane and tracer gas plumes. Flying circles around each farm, a small instrumented aircraft made measurements to perform a mass-balance evaluation of methane. In the course of applying these two different methane quantification techniques, we were able to validate a third method: tracer flux ratio measured via aircraft. Ground-based tracer release rates were applied to the aircraft-observed methane-to-ethane ratios, yielding whole-site methane emission rates. Never before has the tracer flux ratio method been executed with aircraft measurements. Estimates from this new application closely resemble results from the standard ground-based technique to within their respective uncertainties. Incorporating this new dimension into the tracer flux ratio methodology provides additional context for local plume dynamics and validation of both ground- and flight-based data.
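
The tracer-ratio arithmetic reduces to one line once plume enhancements are integrated. All numbers below are invented for illustration; real analyses background-subtract, integrate across full transects, and propagate uncertainties:

```python
# The tracer-ratio arithmetic in a nutshell (made-up plume numbers):
# with ethane released at a known rate Q_t, the methane emission is
# Q_CH4 = Q_t * (dCH4 / dC2H6), using background-subtracted molar
# enhancements summed across the plume transect.

q_ethane = 30.0                      # tracer release rate (assumed units)
ch4_enh  = [0.0, 0.08, 0.22, 0.31, 0.18, 0.05, 0.0]    # ppm enhancement
c2h6_enh = [0.0, 0.02, 0.055, 0.078, 0.046, 0.012, 0.0]

ratio = sum(ch4_enh) / sum(c2h6_enh)
q_ch4 = q_ethane * ratio             # same units as the tracer release
print(round(ratio, 2), round(q_ch4, 1))
```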

  7. In situ measurement of heavy metals in water using portable EDXRF and APDC pre-concentration methodology

    International Nuclear Information System (INIS)

    Melquiades, Fabio L.; Parreira, Paulo S.; Appoloni, Carlos R.; Silva, Wislley D.; Lopes, Fabio

    2007-01-01

    With the objective of identifying and quantifying metals in water and obtaining results at the sampling place, an Energy Dispersive X-Ray Fluorescence (EDXRF) methodology with portable equipment was employed. In this work, metal concentration results are presented for water samples from two points in the city of Londrina. The analyses were performed in situ, measuring in natura water and samples pre-concentrated on membranes. The work consisted of using a portable X-ray tube to excite the samples and a Si-PIN detector with the standard data acquisition electronics to register the spectra. The samples were filtered through membranes to retain suspended particulate matter. The APDC precipitation methodology was then applied for sample pre-concentration, with subsequent filtering through membranes. For in natura samples, total iron concentrations of 254 ± 30 mg L-1 were found in the Capivara River and 63 ± 9 mg L-1 at Igapo Lake. For the membrane measurements, the results for suspended particulate matter in the Capivara River were, in mg L-1: 31.0 ± 2.5 (Fe), 0.17 ± 0.03 (Cu) and 0.93 ± 0.08 (Pb), and for dissolved iron 0.038 ± 0.004. For Igapo Lake only Fe was quantified: 1.66 ± 0.19 mg L-1 for suspended particulate iron and 0.79 ± 0.11 mg L-1 for dissolved iron. In 4 h of work in the field it was possible to filter 14 membranes and measure around 16 samples. The performance of the equipment was very good and the results are satisfactory for in situ measurements employing a portable instrument. (author)

  8. Tolerance limits and methodologies for IMRT measurement-based verification QA: Recommendations of AAPM Task Group No. 218.

    Science.gov (United States)

    Miften, Moyed; Olch, Arthur; Mihailidis, Dimitris; Moran, Jean; Pawlicki, Todd; Molineu, Andrea; Li, Harold; Wijesooriya, Krishni; Shi, Jie; Xia, Ping; Papanikolaou, Nikos; Low, Daniel A

    2018-04-01

    Patient-specific IMRT QA measurements are important components of processes designed to identify discrepancies between calculated and delivered radiation doses. Discrepancy tolerance limits are neither well defined nor consistently applied across centers. The AAPM TG-218 report provides a comprehensive review aimed at improving the understanding and consistency of these processes as well as recommendations for methodologies and tolerance limits in patient-specific IMRT QA. The performance of the dose difference/distance-to-agreement (DTA) and γ dose distribution comparison metrics is investigated. Measurement methods are reviewed and followed by a discussion of the pros and cons of each. Methodologies for absolute dose verification are discussed and new IMRT QA verification tools are presented. Literature on the expected or achievable agreement between measurements and calculations for different types of planning and delivery systems is reviewed and analyzed. Tests of vendor implementations of the γ verification algorithm employing benchmark cases are presented. Operational shortcomings that can reduce the γ tool accuracy and subsequent effectiveness for IMRT QA are described. Practical considerations including spatial resolution, normalization, dose threshold, and data interpretation are discussed. Published data on IMRT QA and the clinical experience of the group members are used to develop guidelines and recommendations on tolerance and action limits for IMRT QA. Steps to check failed IMRT QA plans are outlined. Recommendations on delivery methods, data interpretation, dose normalization, the use of γ analysis routines and choice of tolerance limits for IMRT QA are made with a focus on detecting differences between calculated and measured doses via the use of robust analysis methods and an in-depth understanding of IMRT verification metrics. The recommendations are intended to improve the IMRT QA process and establish consistent and comparable IMRT QA
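
As a concrete illustration of the γ metric the report analyzes, here is a bare-bones 1D version with invented dose profiles and 3%/2 mm global criteria; clinical tools work on 2D/3D distributions with interpolation, thresholding, and absolute-dose normalization:

```python
# Bare-bones 1D gamma computation (toy profiles, 3%/2 mm global
# criteria): for each reference point, gamma is the minimum combined
# dose-difference / distance penalty over all measured points.
import math

def gamma_1d(ref, meas, xs, dd=0.03, dta=2.0):
    # dd: dose-difference criterion (fraction of the global max dose)
    # dta: distance-to-agreement criterion, same units as xs (mm)
    dmax = max(ref)
    gammas = []
    for x, d in zip(xs, ref):
        g = min(math.sqrt(((d - m) / (dd * dmax)) ** 2 +
                          ((x - xm) / dta) ** 2)
                for xm, m in zip(xs, meas))
        gammas.append(g)
    return gammas

xs   = [0.0, 1.0, 2.0, 3.0, 4.0]           # positions, mm
ref  = [10.0, 50.0, 100.0, 50.0, 10.0]     # calculated dose (a.u.)
meas = [10.5, 52.0, 98.0, 49.0, 10.2]      # "measured" dose (a.u.)
gammas = gamma_1d(ref, meas, xs)
pass_rate = 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
print(pass_rate)   # prints 100.0
```

A point passes when γ ≤ 1; TG-218's tolerance and action limits are then statements about the passing rate.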

  9. Methodologically controlled variations in laboratory and field pH measurements in waterlogged soils

    DEFF Research Database (Denmark)

    Elberling, Bo; Matthiesen, Henning

    2007-01-01

    ... artefacts is critical, but the study includes agricultural and forest soils for comparison. At a waterlogged site, laboratory results were compared with three different field methods: calomel pH probes inserted into the soil from pits, pH measurements of soil solution extracted from the soil, and pH profiles obtained using a solid-state pH electrode pushed into the soil from the surface. Comparisons between in situ and laboratory methods revealed differences of more than 1 pH unit. The content of dissolved ions in soil solution and field observations of O2 and CO2 concentrations were used in the speciation model PHREEQE in order to predict gas exchange processes. Changes in pH in soil solution following equilibrium in the laboratory could be explained mainly by CO2 degassing. Only soil pH measured in situ, using either calomel or solid-state probes inserted directly into the soil, was not affected by gas exchange ...

  10. The organizational stress measure: an integrated methodology for assessing job-stress and targeting organizational interventions.

    Science.gov (United States)

    Spurgeon, Peter; Mazelan, Patti; Barwell, Fred

    2012-02-01

    This paper briefly describes the OSM (Organizational Stress Measure) which was developed over a decade ago and has evolved to become a well-established practical method not only for assessing wellbeing at work but also as a cost-effective strategy to tackle workplace stress. The OSM measures perceived organizational pressures and felt individual strains within the same instrument, and provides a rich and subtle picture of both the organizational culture and the personal perspectives of the constituent staff groups. There are many types of organizational pressure that may impact upon the wellbeing and potential effectiveness of staff including skill shortages, ineffective strategic planning and poor leadership, and these frequently result in reduced performance, absenteeism, high turnover and poor staff morale. These pressures may increase the probability of some staff reacting negatively and research with the OSM has shown that increased levels of strain for small clusters of staff may be a leading indicator of future organizational problems. One of the main benefits of using the OSM is the ability to identify 'hot-spots', where organizational pressures are triggering high levels of personal strain in susceptible clusters of staff. In this way, the OSM may act as an 'early warning alarm' for potential organizational problems.

  11. Practical appraisal of sustainable development-Methodologies for sustainability measurement at settlement level

    International Nuclear Information System (INIS)

    Moles, Richard; Foley, Walter; Morrissey, John; O'Regan, Bernadette

    2008-01-01

    This paper investigates the relationships between settlement size, functionality, geographic location and sustainable development. Analysis was carried out on a sample of 79 Irish settlements, located in three regional clusters. Two methods were selected to model the level of sustainability achieved in settlements, namely, Metabolism Accounting and Modelling of Material and Energy Flows (MA) and Sustainable Development Index Modelling. MA is a systematic assessment of the flows and stocks of material within a system defined in space and time. The metabolism of most settlements is essentially linear, with resources flowing through the urban system. The objective of this research on material and energy flows was to provide information that might aid in the development of a more circular pattern of urban metabolism, vital to sustainable development. In addition to MA, a set of forty indicators were identified and developed. These target important aspects of sustainable development: transport, environmental quality, equity and quality of life issues. Sustainability indices were derived through aggregation of indicators to measure dimensions of sustainable development. Similar relationships between settlement attributes and sustainability were found following both methods, and these were subsequently integrated to provide a single measure. Analysis identified those attributes of settlements preventing, impeding or promoting progress towards sustainability
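
The index-modelling step, normalizing indicators and aggregating them into a composite, can be sketched as follows; the settlements and indicator values are invented, and equal weighting is assumed:

```python
# Toy version of the index-modelling step (invented indicator values):
# min-max normalise each indicator across settlements, then average
# into a composite sustainability index per settlement.

indicators = {                      # settlement -> [transport, air, equity]
    "A": [12.0, 0.8, 55.0],
    "B": [30.0, 0.5, 70.0],
    "C": [21.0, 0.9, 40.0],
}

names = list(indicators)
cols = list(zip(*indicators.values()))   # one column per indicator
norm = []
for col in cols:
    lo, hi = min(col), max(col)
    norm.append([(v - lo) / (hi - lo) for v in col])

# equal-weight composite index per settlement
index = {n: sum(norm[k][i] for k in range(len(cols))) / len(cols)
         for i, n in enumerate(names)}
print({n: round(v, 3) for n, v in index.items()})
```

In practice the forty indicators mentioned above would each need a direction (higher-is-better or worse) and a weighting scheme before aggregation.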

  12. Theoretical and methodological approaches to the problem of students' health in algorithms of recreation measures.

    Directory of Open Access Journals (Sweden)

    Zaytzev V.P.

    2011-01-01

    Full Text Available The article discusses health and its basic constituents: physical, psychological and social. The physical development of man and his physical preparedness, physical form and training, physical activity and functional readiness are described. The opinions and views of scientists, teachers and doctors on defining the health of a person, including a student, are presented. All of these aspects are considered from the point of view of recreation measures. Definitions of recreation, physical recreation and other concepts of recreation systems are given. Historical information is presented both on the definitions of health and recreation and on the participation in this problem of the higher educational establishments of physical culture of Ukraine, Russia and Poland that are working on it.

  13. Methodological considerations for global analysis of cellular FLIM/FRET measurements

    Science.gov (United States)

    Abdul Rahim, Nur Aida; Pelet, Serge; Kamm, Roger D.; So, Peter T. C.

    2012-02-01

    Global algorithms can improve the analysis of fluorescence resonance energy transfer (FRET) measurements based on fluorescence lifetime microscopy. However, global analysis of FRET data is also susceptible to experimental artifacts. This work examines several common artifacts and suggests remedial experimental protocols. Specifically, we examined the accuracy of different methods for instrument response extraction and propose an adaptive method based on the mean lifetime of fluorescent proteins. We further examined the effects of image segmentation and a priori constraints on the accuracy of lifetime extraction. Methods to test the applicability of global analysis to cellular data are proposed and demonstrated. The accuracy of global fitting degrades with lower photon counts. By systematically tracking the effect of the minimum photon count on lifetime and FRET prefactors when carrying out global analysis, we demonstrate a correction procedure to recover the correct FRET parameters, allowing us to obtain protein interaction information even in dim cellular regions with photon counts as low as 100 per decay curve.
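
Global analysis in this setting means shared lifetimes across pixels with per-pixel amplitudes. The sketch below grid-searches two shared lifetimes over noiseless synthetic decays and solves the amplitudes linearly (a crude variable-projection scheme, not the authors' implementation):

```python
# Sketch of global bi-exponential analysis (synthetic decays): all
# pixels share lifetimes (t1, t2) while amplitudes vary per pixel, so
# we grid-search the shared lifetimes and solve amplitudes linearly.
import math

times = [0.2 * i for i in range(50)]            # ns
t1_true, t2_true = 2.5, 0.8                     # donor / FRET lifetimes
fracs = [0.2, 0.5, 0.8]                         # per-pixel FRET fraction
pixels = [[(1 - f) * math.exp(-t / t1_true) + f * math.exp(-t / t2_true)
           for t in times] for f in fracs]

def sse_for(t1, t2):
    total = 0.0
    for y in pixels:
        b1 = [math.exp(-t / t1) for t in times]
        b2 = [math.exp(-t / t2) for t in times]
        # 2x2 normal equations for the per-pixel amplitudes a1, a2
        s11 = sum(b * b for b in b1); s22 = sum(b * b for b in b2)
        s12 = sum(p * q for p, q in zip(b1, b2))
        r1 = sum(p * q for p, q in zip(b1, y))
        r2 = sum(p * q for p, q in zip(b2, y))
        det = s11 * s22 - s12 * s12
        a1 = (r1 * s22 - r2 * s12) / det
        a2 = (r2 * s11 - r1 * s12) / det
        total += sum((a1 * p + a2 * q - v) ** 2
                     for p, q, v in zip(b1, b2, y))
    return total

grid = [0.5 + 0.1 * i for i in range(31)]       # 0.5 .. 3.5 ns
best = min(((t1, t2) for t1 in grid for t2 in grid if t2 < t1),
           key=lambda tt: sse_for(*tt))
print(best)
```

With noisy, photon-limited decays this same objective is what degrades at low counts, motivating the correction procedure the abstract describes.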

  14. Methodology of functional and cost approach to improvement of a control system of corporation in modern conditions

    Directory of Open Access Journals (Sweden)

    Zlygostev A.N.

    2017-09-01

    Full Text Available According to the author, the corporate structure of economic entities is an objective need today, because the conditions created to accelerate the country's economic development often require the association of several legal entities and individuals conducting entrepreneurial activity in order to achieve the required result. At the same time, the corporation is not an independent legal form and its activity is not regulated by the Civil Code of the Russian Federation. Because of this, there is a set of various interrelations and relations among economic entities that are based on self-government and on the membership of the corporation's participants under long-term contractual conditions. This, in turn, predetermines the search for new, more effective forms and methods of forming and developing corporate management systems, including raising the level of their personnel potential.

  15. Development of a cognitive bias methodology for measuring low mood in chimpanzees

    Directory of Open Access Journals (Sweden)

    Melissa Bateson

    2015-06-01

    Full Text Available There is an ethical and scientific need for objective, well-validated measures of low mood in captive chimpanzees. We describe the development of a novel cognitive task designed to measure ‘pessimistic’ bias in judgments of expectation of reward, a cognitive marker of low mood previously validated in a wide range of species, and report training and test data from three common chimpanzees (Pan troglodytes). The chimpanzees were trained on an arbitrary visual discrimination in which lifting a pale grey paper cone was associated with reinforcement with a peanut, whereas lifting a dark grey cone was associated with no reward. The discrimination was trained by sequentially presenting the two cone types until significant differences in latency to touch the cone types emerged, and was confirmed by simultaneously presenting both cone types in choice trials. Subjects were subsequently tested on their latency to touch unrewarded cones of three intermediate shades of grey not previously seen. Pessimism was indicated by the similarity between the latency to touch intermediate cones and the latency to touch the trained, unreinforced, dark grey cones. Three subjects completed training and testing, two adult males and one adult female. All subjects learnt the discrimination (107–240 trials), and retained it during five sessions of testing. There was no evidence that latencies to lift intermediate cones increased over testing, as would have occurred if subjects learnt that these were never rewarded, suggesting that the task could be used for repeated testing of individual animals. There was a significant difference between subjects in their relative latencies to touch intermediate cones (pessimism index) that emerged following the second test session, and was not changed by the addition of further data. The most dominant male subject was least pessimistic, and the female most pessimistic. We argue that the task has the potential to be used to assess

  16. Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation

    Science.gov (United States)

    Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.

    2015-01-01

    Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice-accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional (3-D) features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-in. chord, two-dimensional (2-D) straight wing with NACA 23012 airfoil section. For six ice-accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 × 10^6 and a Mach number of 0.18 with an 18-in. chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For five of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting and laser-scan based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3 percent with corresponding differences in stall angle of approximately 1 deg or less. The aerodynamic simulation accuracy reported in this paper has demonstrated the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil. 

  17. Methodological review: measured and reported congruence between preferred and actual place of death.

    Science.gov (United States)

    Bell, C L; Somogyi-Zalud, E; Masaki, K H

    2009-09-01

    Congruence between preferred and actual place of death is an important palliative care outcome reported in the literature. We examined methods of measuring and reporting congruence to highlight variations impairing cross-study comparisons. Medline, PsycINFO, CINAHL, and Web of Science were systematically searched for clinical research studies examining patient preference and congruence as an outcome. Data were extracted into a matrix, including purpose, reported congruence, and method for eliciting preference. Studies were graded for quality. Using tables of preferred versus actual places of death, an overall congruence (total met preferences out of total preferences) and a kappa statistic of agreement were determined for each study. Twelve studies were identified. Percentage of congruence was reported using four different definitions. Ten studies provided a table or partial table of preferred versus actual deaths for each place. Three studies provided kappa statistics. No study achieved better than moderate agreement when analysed using kappa statistics. A study which elicited ideal preference reported the lowest agreement, while longitudinal studies reporting final preferred place of death yielded the highest agreement (moderate agreement). Two other studies of select populations also yielded moderate agreement. There is marked variation in methods of eliciting and reporting congruence, even among studies focused on congruence as an outcome. Cross-study comparison would be enhanced by the use of similar questions to elicit preference, tables of preferred versus actual places of death, and kappa statistics of agreement.
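The two agreement measures the review recommends can be computed directly from a preferred-versus-actual table. The sketch below uses a hypothetical 3×3 table (home, hospital, hospice): overall congruence is the diagonal fraction, and Cohen's kappa corrects it for chance agreement.

```python
import numpy as np

def congruence_and_kappa(table):
    """table[i][j] = number of patients who preferred place i and
    died in place j (same place ordering on both axes). Overall
    congruence is the diagonal fraction of all deaths; Cohen's
    kappa corrects that fraction for chance agreement."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_observed = np.trace(table) / n
    # Chance agreement from the marginal (row x column) distributions
    p_chance = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return p_observed, kappa

# Hypothetical table: rows = preferred, cols = actual (home, hospital, hospice)
table = [[30, 10,  5],
         [ 8, 20,  2],
         [ 4,  3, 18]]
congruence, kappa = congruence_and_kappa(table)
print(f"congruence = {congruence:.2f}, kappa = {kappa:.2f}")  # 0.68, 0.51
```

The gap between the two numbers (0.68 versus 0.51 here) is exactly why the review argues that percentage congruence alone overstates agreement and kappa should be reported alongside it.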

  18. Measuring P availability in soils fertilized with water-soluble P fertilizers using 32P methodologies

    International Nuclear Information System (INIS)

    McLaughlin, M.J.

    2002-01-01

    Isotope exchange kinetics was used in conjunction with standard procedures for assessing soil P status in soils fertilized with soluble phosphatic fertilizers. Soil samples were collected before fertilizer application in year one from 23 of the 30 sites of the National Reactive Phosphate Rock project. Soil phosphorus test values were plotted against indices of pasture response to applied fertilizer, to assess the effectiveness of the various soil tests in predicting site responsiveness to applied fertilizer. Isotopically exchangeable P was only weakly related to other measures of available P, with resin P having the best relationship with E values. In some samples, very large values for isotopically exchangeable P (E values) were determined relative to the P extractable by all reagents. Examination of the data, however, revealed that all the samples with large E values relative to extractable P had very low equilibrium concentrations of solution P and high buffering capacities. The best soil test, Bray 1, could account for only 50% of the variation in plant responsiveness to applied fertilizer, with the Olsen and resin tests slightly worse at 41% and the isotopic procedure at 39%. (author)

  19. A methodology to measure cervical vertebral bone maturation in a sample from low-income children.

    Science.gov (United States)

    Aguiar, Luciana Barreto Vieira; Caldas, Maria de Paula; Haiter Neto, Francisco; Ambrosano, Glaucia Maria Bovi

    2013-01-01

    This study evaluated the applicability of the regression method for determining vertebral age developed by Caldas et al. (2007) by testing this method in children from low-income families in a rural zone. The sample comprised cephalometric and hand-wrist radiographs of 76 boys and 64 girls aged 7.0 to 14.9 years living in a medium-sized city in the desert region of northeastern Brazil, with an HDI of 0.678. The C3 and C4 vertebrae were traced and measured on cephalometric radiographs to estimate bone age. The average age, average hand-wrist age and average error estimated for girls and boys were, respectively, 10.62 and 10.44 years, 11.28 and 10.57 years, and 1.42 and 1.18 years. Based on these results, the formula proposed by Caldas et al. (2007) was not applicable to the studied population, and new multiple regression models were developed to obtain the children's vertebral bone age accurately.

  20. The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: an international Delphi study.

    Science.gov (United States)

    Mokkink, Lidwine B; Terwee, Caroline B; Patrick, Donald L; Alonso, Jordi; Stratford, Paul W; Knol, Dirk L; Bouter, Lex M; de Vet, Henrica C W

    2010-05-01

    The aim of the COSMIN study (COnsensus-based Standards for the selection of health status Measurement INstruments) was to develop a consensus-based checklist to evaluate the methodological quality of studies on measurement properties. We present the COSMIN checklist and the agreement of the panel on the items of the checklist. A four-round Delphi study was performed with international experts (psychologists, epidemiologists, statisticians and clinicians). Of the 91 invited experts, 57 agreed to participate (63%). Panel members were asked to rate their (dis)agreement with each proposal on a five-point scale. Consensus was considered to be reached when at least 67% of the panel members indicated 'agree' or 'strongly agree'. Consensus was reached on the inclusion of the following measurement properties: internal consistency, reliability, measurement error, content validity (including face validity), construct validity (including structural validity, hypotheses testing and cross-cultural validity), criterion validity, responsiveness, and interpretability. The latter was not considered a measurement property. The panel also reached consensus on how these properties should be assessed. The resulting COSMIN checklist could be useful when selecting a measurement instrument, peer-reviewing a manuscript, designing or reporting a study on measurement properties, or for educational purposes.
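The Delphi consensus rule described above is simple to operationalize. A sketch, with hypothetical panel ratings on the five-point scale (the ratings are invented, not the study's data):

```python
def consensus_reached(ratings, threshold=0.67):
    """ratings: Likert scores (1-5) from the panel members for one
    proposal. Consensus, as defined in the study, requires at least
    67% of members to rate 'agree' (4) or 'strongly agree' (5)."""
    agree = sum(1 for r in ratings if r >= 4)
    return agree / len(ratings) >= threshold

# Hypothetical round of ratings from ten panel members
print(consensus_reached([5, 4, 4, 3, 5, 4, 2, 5, 4, 4]))  # 8/10 agree -> True
```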

  1. The Ocean Colour Climate Change Initiative: I. A Methodology for Assessing Atmospheric Correction Processors Based on In-Situ Measurements

    Science.gov (United States)

    Muller, Dagmar; Krasemann, Hajo; Brewin, Robert J. W.; Deschamps, Pierre-Yves; Doerffer, Roland; Fomferra, Norman; Franz, Bryan A.; Grant, Mike G.; Groom, Steve B.; Melin, Frederic; hide

    2015-01-01

    The Ocean Colour Climate Change Initiative intends to provide a long-term time series of ocean colour data and investigate the detectable climate impact. A reliable and stable atmospheric correction procedure is the basis for ocean colour products of the necessary high quality. In order to guarantee an objective selection from a set of four atmospheric correction processors, the common validation strategy of comparing in-situ and satellite-derived water-leaving reflectance spectra is extended by a ranking system. In principle, statistical parameters such as root mean square error and bias, together with measures of goodness of fit, are transformed into relative scores that rank the quality of the algorithms under study. The sensitivity of these scores to the selected database has been assessed by a bootstrapping exercise, which allows identification of the uncertainty in the scoring results. Although the presented methodology is intended to be used in an algorithm selection process, this paper focusses on the scope of the methodology rather than the properties of the individual processors.
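The bootstrapping exercise mentioned above amounts to resampling the match-up database with replacement and recomputing the statistic each time, so that the spread of the resampled values estimates the uncertainty of the score. The data, seed, and error magnitudes below are synthetic; only the resampling pattern reflects the approach.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(in_situ, satellite):
    """Root mean square error between matched spectra values."""
    return np.sqrt(np.mean((satellite - in_situ) ** 2))

def bootstrap_stat(in_situ, satellite, stat=rmse, n_boot=1000):
    """Resample the match-up database with replacement and recompute
    the statistic, returning its bootstrap mean and spread. The
    spread indicates how sensitive a score is to the database."""
    n = len(in_situ)
    values = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample with replacement
        values.append(stat(in_situ[idx], satellite[idx]))
    return float(np.mean(values)), float(np.std(values))

# Synthetic match-ups: reflectances plus a small processor error
in_situ = rng.normal(0.010, 0.003, 200)
satellite = in_situ + rng.normal(0.0, 0.001, 200)
mean_rmse, rmse_sd = bootstrap_stat(in_situ, satellite)
```

In a selection exercise like the one described, each processor's bootstrap distribution of scores would be compared; two processors whose distributions overlap heavily cannot be reliably ranked on that statistic.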

  2. Investigation of Radiation Protection Methodologies for Radiation Therapy Shielding Using Monte Carlo Simulation and Measurement

    Science.gov (United States)

    Tanny, Sean

    The advent of high-energy linear accelerators for dedicated medical use in the 1950s by Henry Kaplan and the Stanford University physics department began a revolution in radiation oncology. Today, linear accelerators are the standard of care for modern radiation therapy and can generate high-energy beams that produce tens of Gy per minute at isocenter. This creates a need for a large amount of shielding material to properly protect members of the public and hospital staff. Standardized vault designs and guidance on the shielding properties of various materials are provided by the National Council on Radiation Protection (NCRP) Report 151. However, physicists are seeking ways to minimize the footprint and volume of shielding material needed, which leads to the use of non-standard vault configurations and less-studied materials, such as high-density concrete. The University of Toledo Dana Cancer Center has utilized both of these methods to minimize the cost and spatial footprint of the requisite radiation shielding. To ensure a safe work environment, computer simulations were performed to verify the attenuation properties and shielding workloads produced by a variety of situations where standard recommendations and guidance documents were insufficient. This project studies two areas of concern that are not addressed by NCRP 151: the radiation shielding workload for a vault door with a non-standard design, and the attenuation properties of high-density concrete for both photon and neutron radiation. Simulations were performed using the Monte Carlo code Monte Carlo N-Particle 5 (MCNP5), produced by Los Alamos National Laboratory (LANL). Measurements were performed using a shielding test port designed into the maze of the Varian Edge treatment vault.

  3. Discriminant content validity: a quantitative methodology for assessing content of theory-based measures, with illustrative applications.

    Science.gov (United States)

    Johnston, Marie; Dixon, Diane; Hart, Jo; Glidewell, Liz; Schröder, Carin; Pollard, Beth

    2014-05-01

    In studies involving theoretical constructs, it is important that measures have good content validity and that there is not contamination of measures by content from other constructs. While reliability and construct validity are routinely reported, to date, there has not been a satisfactory, transparent, and systematic method of assessing and reporting content validity. In this paper, we describe a methodology of discriminant content validity (DCV) and illustrate its application in three studies. Discriminant content validity involves six steps: construct definition, item selection, judge identification, judgement format, single-sample test of content validity, and assessment of discriminant items. In three studies, these steps were applied to a measure of illness perceptions (IPQ-R) and control cognitions. The IPQ-R performed well with most items being purely related to their target construct, although timeline and consequences had small problems. By contrast, the study of control cognitions identified problems in measuring constructs independently. In the final study, direct estimation response formats for theory of planned behaviour constructs were found to have as good DCV as Likert format. The DCV method allowed quantitative assessment of each item and can therefore inform the content validity of the measures assessed. The methods can be applied to assess content validity before or after collecting data to select the appropriate items to measure theoretical constructs. Further, the data reported for each item in Appendix S1 can be used in item or measure selection. Statement of contribution What is already known on this subject? There are agreed methods of assessing and reporting construct validity of measures of theoretical constructs, but not their content validity. Content validity is rarely reported in a systematic and transparent manner. What does this study add? 
The paper proposes discriminant content validity (DCV), a systematic and transparent method

  4. Social and economic well-being in the conditions of the urban space: the evolution of methodological approaches in the historical urban studies

    Directory of Open Access Journals (Sweden)

    Ageev Ilya

    2016-01-01

    Full Text Available A city as a type of human settlement is characterized by high population density, well-developed infrastructure and comfortable living conditions. At the same time, a city is a source of social problems arising from high population density, limited resources and conflicts between the indigenous population and newcomers. The article analyzes the development of research on the city and assesses the contribution of historical urban studies to developing solutions to contemporary problems of urban space. The methodological resources of historical urban studies make it possible to explore the city fully as a set of historically interconnected spaces and social processes. Analysis of the problem field of historical urban studies at various stages of its formation makes it possible to trace the evolution of ideas about the city as an object of scientific knowledge, to identify future prospects for research on the conditions of Russian urban development, and to improve the comfort of living in cities.

  5. Methodologic considerations in the measurement of glycemic index: glycemic response to rye bread, oatmeal porridge, and mashed potato.

    Science.gov (United States)

    Hätönen, Katja A; Similä, Minna E; Virtamo, Jarmo R; Eriksson, Johan G; Hannila, Marja-Leena; Sinkko, Harri K; Sundvall, Jouko E; Mykkänen, Hannu M; Valsta, Liisa M

    2006-11-01

    Methodologic choices affect measures of the glycemic index (GI). The effects on GI values of blood sampling site, reference food type, and the number of repeat tests have been insufficiently determined. The objective was to study the effect of methodologic choices on GI values. Comparisons were made between venous and capillary blood sampling and between glucose and white bread as the reference food. The number of tests needed for the reference food was assessed. Rye bread, oatmeal porridge, and instant mashed potato were used as the test foods. Twelve healthy volunteers were served each test food once and both reference foods 3 times at 1-wk intervals in a random order after they had fasted overnight. Capillary and venous blood samples were drawn at intervals for 3 h after each study meal. GIs and their CVs based on capillary samples were lower than those based on venous samples. Two tests of glucose solution as the reference provided stable capillary GIs for the test foods. The capillary GIs did not differ significantly when white bread was used as the reference 1, 2, or 3 times, but the variation was lower when tests were performed 2 and 3 times. Capillary GIs with white bread as the reference were 1.3 times as high as those with glucose as the reference. The capillary GIs of rye bread, oatmeal porridge, and mashed potato were 77, 74, and 80, respectively, with glucose as the reference. Capillary blood sampling should be used in the measurement of GI, and reference tests with glucose or white bread should be performed at least twice.
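A GI measurement of the kind described here reduces to incremental areas under the blood-glucose curve: GI is 100 times the test food's incremental AUC divided by the reference food's. The sketch below uses the standard trapezoidal incremental AUC, ignoring area below the fasting baseline; the glucose responses are invented for illustration, not the study's data.

```python
def incremental_auc(times, glucose):
    """Incremental area under the blood-glucose curve (trapezoidal
    rule), counting only area above the fasting baseline, as is
    standard in glycemic index work."""
    baseline = glucose[0]
    inc = [max(g - baseline, 0.0) for g in glucose]
    auc = 0.0
    for i in range(1, len(times)):
        auc += (inc[i] + inc[i - 1]) / 2 * (times[i] - times[i - 1])
    return auc

def glycemic_index(times, test, reference):
    """GI = 100 x (test iAUC) / (reference iAUC)."""
    return 100 * incremental_auc(times, test) / incremental_auc(times, reference)

# Hypothetical capillary responses (mmol/L) sampled over 0-120 min
t           = [0, 15, 30, 45, 60, 90, 120]
glucose_ref = [5.0, 7.5, 8.5, 7.8, 7.0, 6.0, 5.2]  # glucose solution
rye_bread   = [5.0, 6.8, 7.6, 7.2, 6.6, 5.8, 5.1]  # test food
print(round(glycemic_index(t, rye_bread, glucose_ref)))  # 76
```

The study's finding that white-bread-referenced GIs run about 1.3 times higher than glucose-referenced ones follows directly from this ratio definition: white bread has a smaller reference iAUC than a glucose solution.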

  6. Measurement and modeling the coefficient of restitution of char particles under simulated entrained flow gasifier conditions

    Science.gov (United States)

    Gibson, LaTosha M.

    predict the coefficient of restitution (COR), the ratio of the rebound velocity to the impact velocity, which is a necessary boundary condition for Discrete Phase Models. However, particle-wall impact models do not use the actual geometries of char particles or the motion imparted to char particles by gasifier operating conditions. This work attempts to include the surface geometry and rotation of the particles. To meet the objectives of this work, the general methodology involved (1) determining the likelihood of a particle becoming entrapped, (2) assessing the limitations of particle-wall impact models for the COR through cold-flow experiments in order to adapt them to the non-ideal conditions (surface and particle geometry) within a gasifier, (3) determining how to account for the influence of the carbon and ash composition on the sticking probability of size fractions and specific gravities within a particle size distribution (PSD), within the scope of particle-wall impact models, and (4) using a methodology that quantifies the sticking probability (as a criterion or parameter) to predict the partitioning of a PSD into slag and fly ash based on the proximate analysis. In this study, sensitivity analysis ruled out the scenario of a particle becoming entrapped within the slag layer. Cold-flow eductor experiments were performed to measure the COR. Results showed a variation in the COR as a function of rebound angle due to the rotation of particles leaving the eductor prior to impact. The particles were then simply dropped in "drop" experiments (without the eductor) to determine the influence of sphericity on particle rotation and therefore on the COR. The results showed that, in addition to surface irregularities, the particle shape and its orientation prior to impacting the target surface contributed to this variation of the COR as a function of rebound angle.
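The basic quantity in the record above has a one-line definition. A sketch, together with a normal-component variant that is commonly used for oblique impacts (the velocities and angles below are invented for illustration):

```python
import math

def coefficient_of_restitution(v_impact, v_rebound):
    """COR as defined above: rebound speed over impact speed
    (0 = perfectly sticking, 1 = perfectly elastic)."""
    return v_rebound / v_impact

def normal_cor(v_impact, v_rebound, angle_in_deg, angle_out_deg):
    """For oblique impacts it is common to form the COR from only
    the velocity components normal to the target surface; angles
    are measured from the surface plane."""
    vn_in = v_impact * math.sin(math.radians(angle_in_deg))
    vn_out = v_rebound * math.sin(math.radians(angle_out_deg))
    return vn_out / vn_in

print(round(coefficient_of_restitution(2.0, 1.2), 2))  # 0.6
print(round(normal_cor(2.0, 1.2, 45, 60), 2))          # 0.73
```

The record's observation that the COR varies with rebound angle for rotating, non-spherical particles is precisely a failure of the simple speed-ratio definition; the normal-component form makes the angle dependence explicit.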

  7. Local conditional entropy in measure for covers with respect to a fixed partition

    Science.gov (United States)

    Romagnoli, Pierre-Paul

    2018-05-01

    In this paper we introduce two measure theoretical notions of conditional entropy for finite measurable covers conditioned to a finite measurable partition and prove that they are equal. Using this we state a local variational principle with respect to the notion of conditional entropy defined by Misiurewicz (1976 Stud. Math. 55 176–200) for the case of open covers. This in particular extends the work done in Romagnoli (2003 Ergod. Theor. Dynam. Syst. 23 1601–10), Glasner and Weiss (2006 Handbook of Dynamical Systems vol 1B (Amsterdam: Elsevier)) and Huang et al (2006 Ergod. Theor. Dynam. Syst. 26 219–45).

  8. Safety analysis methodology for Chinshan nuclear power plant spent fuel pool under Fukushima-like accident condition

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Hao-Tzu [Institute of Nuclear Energy Research, Atomic Energy Council, Taoyuan, Taiwan (China)]; Li, Wan-Yun; Wang, Jong-Rong; Tseng, Yung-Shin; Chen, Hsiung-Chih; Shih, Chunkuan; Chen, Shao-Wen [National Tsing Hua Univ., HsinChu, Taiwan (China). Inst. of Nuclear Engineering and Science]

    2017-03-15

    Chinshan nuclear power plant (NPP), a BWR/4 plant, is the first NPP in Taiwan. After the Fukushima NPP disaster, concern for the safety of NPPs in Taiwan has increased. Therefore, in order to assess the safety of the Chinshan NPP spent fuel pool (SFP), INER (Institute of Nuclear Energy Research, Atomic Energy Council, R.O.C.) performed a safety analysis of the Chinshan NPP SFP using the TRACE, MELCOR, CFD, and FRAPTRAN codes. There were two main steps in this research. The first step was the establishment of the Chinshan NPP SFP models, followed by transient analysis under the SFP cooling-system-failure condition (a Fukushima-like accident); a sensitivity study of the timing of water spray was also performed. The next step was fuel rod performance analysis using FRAPTRAN and the TRACE results. Finally, an animation model of the Chinshan NPP SFP was presented using the animation function of SNAP with the MELCOR analysis results.

  9. Evaluation of validity and reliability of a methodology for measuring human postural attitude and its relation to temporomandibular joint disorders

    Science.gov (United States)

    Fernández, Ramón Fuentes; Carter, Pablo; Muñoz, Sergio; Silva, Héctor; Venegas, Gonzalo Hernán Oporto; Cantin, Mario; Ottone, Nicolás Ernesto

    2016-01-01

    INTRODUCTION Temporomandibular joint disorders (TMJDs) are caused by several factors such as anatomical, neuromuscular and psychological alterations. A relationship has been established between TMJDs and postural alterations, a type of anatomical alteration. An anterior position of the head requires hyperactivity of the posterior neck region and shoulder muscles to prevent the head from falling forward. This compensatory muscular function may cause fatigue, discomfort and trigger point activation. To our knowledge, a method for assessing human postural attitude in more than one plane has not been reported. Thus, the aim of this study was to design a methodology to measure the external human postural attitude in frontal and sagittal planes, with proper validity and reliability analyses. METHODS The variable postures of 78 subjects (36 men, 42 women; age 18–24 years) were evaluated. The postural attitudes of the subjects were measured in the frontal and sagittal planes, using an acromiopelvimeter, grid panel and Fox plane. RESULTS The method we designed for measuring postural attitudes had adequate reliability and validity, both qualitatively and quantitatively, based on Cohen’s Kappa coefficient (> 0.87) and Pearson’s correlation coefficient (r = 0.824, > 80%). CONCLUSION This method exhibits adequate metrical properties and can therefore be used in further research on the association of human body posture with skeletal types and TMJDs. PMID:26768173

  10. Measurement of environmental impacts of telework adoption amidst change in complex organizations. AT&T survey methodology and results

    Energy Technology Data Exchange (ETDEWEB)

    Atkyns, Robert; Blazek, Michele; Roitz, Joseph [AT&T, 179 Bothin Road, 94930 Fairfax, CA (United States)]

    2002-10-01

    Telecommuting practices and their environmental and organizational performance impacts have stimulated research across academic disciplines. Although telecommuting trends and impact projections are reported, few true longitudinal studies involving large organizations have been conducted. Published studies typically lack the research design elements to control a major confounding variable: rapid and widespread organizational change. Yet social science 'Best Practices' and market research industry quality control procedures exist that can help manage organizational change effects and other common sources of measurement error. In 1992, AT&T established a formal, corporate-wide telecommuting policy. A research and statistical modeling initiative was implemented to measure how flexible work arrangements reduce automotive emissions. Annual employee surveys were begun in 1994. As telecommuting benefits have been increasingly recognized within AT&T, the essential construct has been redefined as 'telework.' The survey's scope has expanded to address broader organizational issues and provide guidance to multiple internal constituencies. This paper focuses upon the procedures used to reliably measure the adoption of telework practices and model their environmental impact, and contrasts those procedures with other, less reliable methodologies.

  11. The equipment for low radioactivity measurements in industrial and field conditions

    International Nuclear Information System (INIS)

    Malik, R.; Owczarczyk, A.; Szpilowski, S.; Zenczykiewicz, Z.

    1992-01-01

    Equipment for low-radioactivity measurements under industrial and field conditions has been developed. Three scintillation detectors operate in a coincidence system, with their scintillation crystals separated from one another by lead shielding. The entire measuring system is housed in a lead container with a lead cover. The measuring vessel fills practically all of the free volume of the lead container, and its shape ensures the best possible measurement geometry. (author). 3 figs

  12. Effects of drop size and measuring condition on static contact angle measurement on a superhydrophobic surface with goniometric technique

    International Nuclear Information System (INIS)

    Seo, Kwangseok; Kim, Minyoung; Kim, Do Hyun; Ahn, Jeong Keun

    2015-01-01

    It is not a simple task to measure the contact angle of a water drop on a superhydrophobic surface with the sessile drop method, because the roll-off angle is very low. Usually, the contact angle of a water drop on a superhydrophobic surface is measured by fixing the drop in place with intentional defects on the surface or with a needle. We examined the effects of drop size and measuring condition, such as the use of a needle or defects, on static contact angle measurements on a superhydrophobic surface. Results showed that the contact angles on a superhydrophobic surface remain almost constant within intrinsic measurement error unless a wetting transition occurs during the measurement. We expect that this study will provide a deeper understanding of the nature of the contact angle and more convenient measurement of contact angles on superhydrophobic surfaces.

  13. Influence of wind conditions on wind turbine loads and measurement of turbulence using lidars

    NARCIS (Netherlands)

    Sathe, A.R.

    2012-01-01

    Variations in wind conditions influence the loads on wind turbines significantly. In order to determine these loads it is important that the external conditions are well understood. Wind lidars are well developed nowadays to measure wind profiles upwards from the surface. But how turbulence can be

  14. On the Ground or in the Air? A Methodological Experiment on Crop Residue Cover Measurement in Ethiopia.

    Science.gov (United States)

    Kosmowski, Frédéric; Stevenson, James; Campbell, Jeff; Ambel, Alemayehu; Haile Tsegay, Asmelash

    2017-10-01

    Maintaining permanent coverage of the soil using crop residues is an important and commonly recommended practice in conservation agriculture. Measuring this practice is an essential step in improving knowledge about the adoption and impact of conservation agriculture. Different data collection methods can be implemented to capture the field level crop residue coverage for a given plot, each with its own implication on survey budget, implementation speed and respondent and interviewer burden. In this paper, six alternative methods of crop residue coverage measurement are tested among the same sample of rural households in Ethiopia. The relative accuracy of these methods are compared against a benchmark, the line-transect method. The alternative methods compared against the benchmark include: (i) interviewee (respondent) estimation; (ii) enumerator estimation visiting the field; (iii) interviewee with visual-aid without visiting the field; (iv) enumerator with visual-aid visiting the field; (v) field picture collected with a drone and analyzed with image-processing methods and (vi) satellite picture of the field analyzed with remote sensing methods. Results of the methodological experiment show that survey-based methods tend to underestimate field residue cover. When quantitative data on cover are needed, the best estimates are provided by visual-aid protocols. For categorical analysis (i.e., >30% cover or not), visual-aid protocols and remote sensing methods perform equally well. Among survey-based methods, the strongest correlates of measurement errors are total farm size, field size, distance, and slope. Results deliver a ranking of measurement options that can inform survey practitioners and researchers.
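The line-transect benchmark against which the six alternative methods were scored can be sketched as a simple point count along a stretched line; the 20-mark transect below is hypothetical, and the 30% cut-off is the categorical threshold mentioned in the record.

```python
def line_transect_cover(touches):
    """Line-transect benchmark: stretch a marked line (e.g. 100
    evenly spaced marks) across the field and record at each mark
    whether it lies over a piece of crop residue. Estimated cover
    is the fraction of marks touching residue."""
    return sum(touches) / len(touches)

# Hypothetical 20-mark transect: 1 = residue under the mark, 0 = bare soil
marks = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0]
cover = line_transect_cover(marks)
print(f"{cover:.0%}")        # 50%
exceeds = cover > 0.30       # categorical "adequate cover" classification
```

In practice several transects per plot are averaged; the survey-based methods in the record would then be judged by how closely their estimates track this benchmark value.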

  15. On the Ground or in the Air? A Methodological Experiment on Crop Residue Cover Measurement in Ethiopia

    Science.gov (United States)

    Kosmowski, Frédéric; Stevenson, James; Campbell, Jeff; Ambel, Alemayehu; Haile Tsegay, Asmelash

    2017-10-01

    Maintaining permanent coverage of the soil using crop residues is an important and commonly recommended practice in conservation agriculture. Measuring this practice is an essential step in improving knowledge about the adoption and impact of conservation agriculture. Different data collection methods can be implemented to capture the field level crop residue coverage for a given plot, each with its own implication on survey budget, implementation speed and respondent and interviewer burden. In this paper, six alternative methods of crop residue coverage measurement are tested among the same sample of rural households in Ethiopia. The relative accuracy of these methods are compared against a benchmark, the line-transect method. The alternative methods compared against the benchmark include: (i) interviewee (respondent) estimation; (ii) enumerator estimation visiting the field; (iii) interviewee with visual-aid without visiting the field; (iv) enumerator with visual-aid visiting the field; (v) field picture collected with a drone and analyzed with image-processing methods and (vi) satellite picture of the field analyzed with remote sensing methods. Results of the methodological experiment show that survey-based methods tend to underestimate field residue cover. When quantitative data on cover are needed, the best estimates are provided by visual-aid protocols. For categorical analysis (i.e., >30% cover or not), visual-aid protocols and remote sensing methods perform equally well. Among survey-based methods, the strongest correlates of measurement errors are total farm size, field size, distance, and slope. Results deliver a ranking of measurement options that can inform survey practitioners and researchers.
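The comparison logic described above — bias of an alternative method against the line-transect benchmark, plus categorical agreement at a 30% cover threshold — can be sketched as follows. The per-plot cover values are hypothetical illustrations, not data from the study.

```python
# Hypothetical sketch: comparing an alternative residue-cover estimate
# against a line-transect benchmark, as in the experiment's design.
def bias(estimates, benchmark):
    """Mean signed error; negative values indicate underestimation."""
    return sum(e - b for e, b in zip(estimates, benchmark)) / len(benchmark)

def categorical_agreement(estimates, benchmark, threshold=30.0):
    """Share of plots classified the same way (>threshold% cover or not)."""
    hits = sum((e > threshold) == (b > threshold)
               for e, b in zip(estimates, benchmark))
    return hits / len(benchmark)

# Hypothetical per-plot cover percentages (not from the study).
transect = [12.0, 35.0, 48.0, 22.0, 60.0]     # benchmark
respondent = [8.0, 28.0, 40.0, 15.0, 55.0]    # survey-based estimate

print(bias(respondent, transect))              # negative -> underestimation
print(categorical_agreement(respondent, transect))
```

A negative bias reproduces the paper's qualitative finding that survey-based methods tend to underestimate cover, while the categorical score mirrors the >30%-cover comparison used for the remote sensing methods.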

  16. Optimizing the conditions for the microwave-assisted direct liquefaction of Ulva prolifera for bio-oil production using response surface methodology

    International Nuclear Information System (INIS)

    Liu, Junhai; Zhuang, Yingbin; Li, Yan; Chen, Limei; Guo, Jingxue; Li, Demao; Ye, Naihao

    2013-01-01

    Microwave-assisted direct liquefaction (MADL) of Ulva prolifera was performed in ethylene glycol (EG) using sulfuric acid (H2SO4) as a catalyst. Response surface methodology (RSM) based on a central composite rotatable design (CCRD) was employed to optimize the conditions of three independent variables (catalyst content, solvent-to-feedstock ratio and temperature) for the liquefaction yield. The bio-oil was analyzed by elemental analysis, Fourier transform infrared spectroscopy (FT-IR) and gas chromatography–mass spectrometry (GC–MS). The maximum liquefaction yield was 93.17%, obtained under a microwave power of 600 W for 30 min at 165 °C with a solvent-to-feedstock ratio of 18.87:1 and 4.93% sulfuric acid. The bio-oil was mainly composed of phthalic acid esters, alkenes and fatty acid methyl esters with chains from C16 to C20. - Highlights: • Ulva prolifera was converted to bio-oil through microwave-assisted direct liquefaction. • Response surface methodology was used to optimize the liquefaction conditions. • A maximum liquefaction yield of 93.17 wt% bio-oil was obtained. • The bio-oil was composed of carboxylic acids and esters

  17. A new methodology for non-contact accurate crack width measurement through photogrammetry for automated structural safety evaluation

    International Nuclear Information System (INIS)

    Jahanshahi, Mohammad R; Masri, Sami F

    2013-01-01

    In mechanical, aerospace and civil structures, cracks are important defects that can cause catastrophes if neglected. Visual inspection is currently the predominant method for crack assessment, but this approach is tedious, labor-intensive, subjective and highly qualitative. An inexpensive alternative to current monitoring methods is a robotic system that could perform autonomous crack detection and quantification. Toward this goal, several image-based crack detection approaches have been developed; however, crack thickness quantification, an essential element of a reliable structural condition assessment, has not been sufficiently investigated. In this paper, a new contact-less crack quantification methodology, based on computer vision and image processing concepts, is introduced and evaluated against a crack quantification approach previously developed by the authors. The proposed approach utilizes depth perception to quantify crack thickness and, unlike most previous studies, needs no scale attachment to the region under inspection, which makes it ideal for incorporation with autonomous or semi-autonomous mobile inspection systems. Validation tests are performed to evaluate the performance of the proposed approach, and the results show that it outperforms the previously developed one. (paper)

  18. Organic and total mercury determination in sediments by cold vapor atomic absorption spectrometry: methodology validation and uncertainty measurements

    Directory of Open Access Journals (Sweden)

    Robson L. Franklin

    2012-01-01

    Full Text Available The purpose of the present study was to validate a method for organic Hg determination in sediment. The procedure for organic Hg was adapted from the literature: the organomercurial compounds were extracted with dichloromethane in acid medium, with subsequent destruction of organic compounds by bromine chloride. Total Hg was determined according to USEPA methodology 3051A. Mercury quantification for both methodologies was then performed by CVAAS. Methodology validation was verified by analyzing certified reference materials for total Hg and methylmercury. The uncertainties for both methodologies were calculated. A quantification limit of 3.3 µg kg-1 was found for organic Hg by CVAAS.

  19. The Six-Minute Walk Test in Chronic Pediatric Conditions: A Systematic Review of Measurement Properties

    NARCIS (Netherlands)

    Bart Bartels; Janke de Groot; Caroline Terwee

    2013-01-01

    Background The Six-Minute Walk Test (6MWT) is increasingly being used as a functional outcome measure for chronic pediatric conditions. Knowledge about its measurement properties is needed to determine whether it is an appropriate test to use. Purpose The purpose of this study was to systematically

  20. The six-minute walk test in chronic pediatric conditions: a systematic review of measurement properties.

    NARCIS (Netherlands)

    Bartels, B.; Groot, J.F. de; Terwee, C.B.

    2013-01-01

    Background: The Six-Minute Walk Test (6MWT) is increasingly being used as a functional outcome measure for chronic pediatric conditions. Knowledge about its measurement properties is needed to determine whether it is an appropriate test to use. Purpose: The purpose of this study was to

  1. Vector-valued measure and the necessary conditions for the optimal control problems of linear systems

    International Nuclear Information System (INIS)

    Xunjing, L.

    1981-12-01

    The vector-valued measure defined by the well-posed linear boundary value problems is discussed. The maximum principle of the optimal control problem with non-convex constraint is proved by using the vector-valued measure. Especially, the necessary conditions of the optimal control of elliptic systems is derived without the convexity of the control domain and the cost function. (author)

  2. Development and validation of method for heterocyclic compounds in wine: optimization of HS-SPME conditions applying a response surface methodology.

    Science.gov (United States)

    Burin, Vívian Maria; Marchand, Stéphanie; de Revel, Gilles; Bordignon-Luiz, Marilde T

    2013-12-15

    Considering the importance of heterocyclic compounds for wine flavor, this study proposes a new, rapid and solvent-free method to quantify different classes of heterocyclic compounds in wines, such as furans, thiophenes, thiazoles and pyrazines, which are products of the Maillard reaction. The use of a central composite design and response surface methodology to determine the best conditions allows the optimum combination of analytical variables (pH, NaCl and extraction time) to be identified. The validation was carried out using several types of wine as matrices. The method shows satisfactory repeatability (2.7%); heterocyclic compounds were determined, mainly in red wines. © 2013 Elsevier B.V. All rights reserved.

  3. Investigation of optimal conditions for production of highly crystalline nanocellulose with increased yield via novel Cr(III)-catalyzed hydrolysis: Response surface methodology.

    Science.gov (United States)

    Chen, You Wei; Lee, Hwei Voon; Abd Hamid, Sharifah Bee

    2017-12-15

    For the first time, a highly efficient Cr(NO3)3 catalysis system was proposed to optimize the yield and crystallinity of the nanocellulose end product. A five-level, three-factor central composite design coupled with response surface methodology was employed to elucidate interactions between three design factors, namely reaction temperature (x1), reaction time (x2) and concentration of Cr(NO3)3 (x3), over a broad range of process conditions and to determine their effect on crystallinity index and product yield. The developed models predicted a maximum nanocellulose yield of 87% at optimum process conditions of 70.6°C, 1.48 h, and 0.48 M Cr(NO3)3. At these conditions, the obtained nanocellulose presented a high crystallinity index (75.3%) and a spider-web-like interconnected network morphology with an average width of 31.2 ± 14.3 nm. In addition, the yielded nanocellulose rendered a higher thermal stability than the original cellulosic source and is expected to be widely used as a reinforcement agent in bio-nanocomposite materials. Copyright © 2017 Elsevier Ltd. All rights reserved.
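The central composite design / response surface workflow described above can be sketched generically: fit a second-order polynomial in the coded design factors by least squares, then locate the stationary point from the fitted coefficients. The data below are synthetic and the factor names illustrative; this is not a reconstruction of the study's measurements.

```python
import numpy as np

# Minimal response-surface sketch: fit y = b0 + b1*x1 + b2*x2
#   + b11*x1^2 + b22*x2^2 + b12*x1*x2 and solve grad(y) = 0.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 30)          # coded factor 1 (e.g., temperature)
x2 = rng.uniform(-1, 1, 30)          # coded factor 2 (e.g., catalyst conc.)
# Synthetic quadratic response with a known optimum at (0.3, -0.2).
y = 85 - 5 * (x1 - 0.3) ** 2 - 4 * (x2 + 0.2) ** 2

# Design matrix for the full second-order model.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point of the fitted surface: solve H @ x = -g.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
opt = np.linalg.solve(H, g)
print(opt)   # approximately [0.3, -0.2]
```

In a real study the coded optimum would then be transformed back to natural units (temperature, time, molarity) and checked experimentally, as the authors report doing.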

  4. Analysis of fuel rod behaviour within a rod bundle of a pressurized water reactor under the conditions of a loss of coolant accident (LOCA) using probabilistic methodology

    International Nuclear Information System (INIS)

    Sengpiel, W.

    1980-12-01

    The assessment of fuel rod behaviour under PWR LOCA conditions aims at the evaluation of the peak cladding temperatures and the (final) maximum circumferential cladding strains. Moreover, the estimation of the amount of possible coolant channel blockages within a rod bundle is of special interest, as large coplanar clad strains of adjacent rods may result in strong local reductions of coolant channel areas. Coolant channel blockages of large radial extent may impair the long-term coolability of the corresponding rods. A model has been developed to describe these accident consequences using probabilistic methodology. This model is applied to study the behaviour of fuel rods under accident conditions following the double-ended pipe rupture between coolant pump and pressure vessel in the primary system of a 1300 MW(el) PWR. Specifically, a rod bundle consisting of 236 fuel rods subjected to severe thermal and mechanical loading is considered. The results obtained indicate that plastic clad deformations with circumferential clad strains of more than 30% cannot be excluded for hot rods of the reference bundle. However, coplanar coolant channel blockages of significant extent seem to be probable within that bundle only under certain boundary conditions which are assumed to be pessimistic. (orig./RW) [de

  5. Evaluation of optimum conditions for pachyman encapsulated in poly(D,L-lactic acid nanospheres by response surface methodology and results of a related in vitro study

    Directory of Open Access Journals (Sweden)

    Zheng S

    2016-09-01

    Full Text Available Sisi Zheng, Li Luo, Ruonan Bo, Zhenguang Liu, Jie Xing, Yale Niu, Yuanliang Hu, Jiaguo Liu, Deyun Wang Institute of Traditional Chinese Veterinary Medicine, College of Veterinary Medicine, Nanjing Agricultural University, Nanjing, People’s Republic of China Abstract: This study aimed to optimize the preparation conditions of pachyman (PHY)-loaded poly(D,L-lactic acid) (PLA) (PHYP) nanospheres by response surface methodology, explore their characteristics, and assess their effects on splenic lymphocytes. Double emulsion solvent evaporation was used to synthesize PHYP nanospheres, and the optimal preparation conditions were identified as a poloxamer 188 (F68) concentration (w/v) of 0.33%, a PLA concentration of 30 mg/mL, and a PLA-to-drug ratio (w/w) of 10.25:1, giving the highest encapsulation efficiency, calculated to be 59.10%. PHYP nanospheres had a spherical shape with a smooth surface and uniform size, an evident sustained-release effect and relative stability. Splenic lymphocytes are crucial and multifunctional cells in the immune system, and their immunological properties could be enhanced significantly by PHYP treatment. This study confirmed that PHY encapsulated in PLA nanospheres had comparatively stable properties and exerted obvious immune enhancement. Keywords: PHYP, optimal preparation condition, RSM, in vitro study

  6. Using a model of the performance measures in Soft Systems Methodology (SSM) to take action: a case study in health care

    NARCIS (Netherlands)

    Kotiadis, K.; Tako, A.; Rouwette, E.A.J.A.; Vasilakis, C.; Brennan, J.; Gandhi, P.; Wegstapel, H.; Sagias, F.; Webb, P.

    2013-01-01

    This paper uses a case study of a multidisciplinary colorectal cancer team in health care to explain how a model of performance measures can lead to debate and action in Soft Systems Methodology (SSM). This study gives a greater emphasis and role to the performance measures than currently given in

  7. Evaluation of body condition score measured throughout lactation as an indicator of fertility in dairy cattle

    OpenAIRE

    Banos, G; Brotherstone, S; Coffey, MP

    2004-01-01

    Body condition score (BCS) records of primiparous Holstein cows were analyzed both as a single measure per animal and as repeated measures per sire of cow. The former resulted in a single, average, genetic evaluation for each sire, and the latter resulted in separate genetic evaluations per day of lactation. Repeated measure analysis yielded genetic correlations of less than unity between days of lactation, suggesting that BCS may not be the same trait across lactation. Differences between da...

  8. Aircraft and ground vehicle friction measurements obtained under winter runway conditions

    Science.gov (United States)

    Yager, Thomas J.

    1989-01-01

    Tests with specially instrumented NASA B-737 and B-727 aircraft together with several different ground friction measuring devices have been conducted for a variety of runway surface types and wetness conditions. This effort is part of the Joint FAA/NASA Aircraft/Ground Vehicle Runway Friction Program aimed at obtaining a better understanding of aircraft ground handling performance under adverse weather conditions, and defining relationships between aircraft and ground vehicle tire friction measurements. Aircraft braking performance on dry, wet, snow-, and ice-covered runway conditions is discussed together with ground vehicle friction data obtained under similar runway conditions. For the wet, compacted snow- and ice-covered runway conditions, the relationship between ground vehicles and aircraft friction data is identified. The influence of major test parameters on friction measurements such as speed, test tire characteristics, and surface contaminant-type are discussed. The test results indicate that use of properly maintained and calibrated ground vehicles for monitoring runway friction conditions should be encouraged particularly under adverse weather conditions.

  9. Characterization of the emissions impacts of hybrid excavators with a portable emissions measurement system (PEMS)-based methodology.

    Science.gov (United States)

    Cao, Tanfeng; Russell, Robert L; Durbin, Thomas D; Cocker, David R; Burnette, Andrew; Calavita, Joseph; Maldonado, Hector; Johnson, Kent C

    2018-04-13

    Hybrid engine technology is a potentially important strategy for reducing tailpipe greenhouse gas (GHG) emissions and other pollutants that is now being implemented for off-road construction equipment. The goal of this study was to evaluate the emissions and fuel consumption impacts of electric-hybrid excavators using a Portable Emissions Measurement System (PEMS)-based methodology. In this study, three hybrid and four conventional excavators were studied for both real-world activity patterns and tailpipe emissions. Activity data were obtained using engine control module (ECM) and global positioning system (GPS) logged data, coupled with interviews, historical records, and video. These activity data were used to develop a test cycle with seven modes representing different types of excavator work. Emissions data were collected over this test cycle using a PEMS. The results indicated the HB215 hybrid excavator provided a significant reduction in tailpipe carbon dioxide (CO2) emissions (from -13 to -26%), but increased diesel particulate matter (PM) (+26 to +27%) when compared to a similar model conventional excavator over the same duty cycle. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Validation of the PROMIS® measures of self-efficacy for managing chronic conditions.

    Science.gov (United States)

    Gruber-Baldini, Ann L; Velozo, Craig; Romero, Sergio; Shulman, Lisa M

    2017-07-01

    The Patient-Reported Outcomes Measurement Information System® (PROMIS®) was designed to develop, validate, and standardize item banks to measure key domains of physical, mental, and social health in chronic conditions. This paper reports the calibration and validation testing of the PROMIS Self-Efficacy for Managing Chronic Conditions measures. The PROMIS Self-Efficacy for Managing Chronic Conditions item banks comprise five domains, Self-Efficacy for Managing: Daily Activities, Symptoms, Medications and Treatments, Emotions, and Social Interactions. Banks were calibrated in 1087 subjects from two data sources: 837 patients with chronic neurologic conditions (epilepsy, multiple sclerosis, neuropathy, Parkinson disease, and stroke) and 250 subjects from an online Internet sample of adults with general chronic conditions. Scores were compared with one legacy scale, the Self-Efficacy for Managing Chronic Disease 6-Item scale (SEMCD6), and five PROMIS short forms: Global Health (Physical and Mental), Physical Function, Fatigue, Depression, and Anxiety. The sample was 57% female, mean age = 53.8 (SD = 14.7), 76% white, 21% African American, 6% Hispanic, and 76% with greater than high school education. Full item banks were created for each domain. All measures had good internal consistency and correlated well with SEMCD6 (r = 0.56-0.75). Significant correlations were seen between the Self-Efficacy measures and other PROMIS short forms (r > 0.38). The newly developed PROMIS Self-Efficacy for Managing Chronic Conditions measures include five domains of self-efficacy that were calibrated across diverse chronic conditions and show good internal consistency and cross-sectional validity.

  11. The Role of Condition-Specific Preference-Based Measures in Health Technology Assessment.

    Science.gov (United States)

    Rowen, Donna; Brazier, John; Ara, Roberta; Azzabi Zouraq, Ismail

    2017-12-01

    A condition-specific preference-based measure (CSPBM) is a measure of health-related quality of life (HRQOL) that is specific to a certain condition or disease and that can be used to obtain the quality adjustment weight of the quality-adjusted life-year (QALY) for use in economic models. This article provides an overview of the role and the development of CSPBMs, and presents a description of existing CSPBMs in the literature. The article also provides an overview of the psychometric properties of CSPBMs in comparison with generic preference-based measures (generic PBMs), and considers the advantages and disadvantages of CSPBMs in comparison with generic PBMs. CSPBMs typically include dimensions that are important for that condition but may not be important across all patient groups. There are a large number of CSPBMs across a wide range of conditions, and these vary from covering a wide range of dimensions to more symptomatic or uni-dimensional measures. Psychometric evidence is limited but suggests that CSPBMs offer an advantage in more accurate measurement of milder health states. The mean change and standard deviation can differ for CSPBMs and generic PBMs, and this may impact on incremental cost-effectiveness ratios. CSPBMs have a useful role in HTA where a generic PBM is not appropriate, sensitive or responsive. However, due to issues of comparability across different patient groups and interventions, their usage in health technology assessment is often limited to conditions where it is inappropriate to use a generic PBM or sensitivity analyses.
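Since a CSPBM's role is to supply the quality-adjustment weight of the QALY, the underlying arithmetic is simply utility times time, summed over health states. A minimal sketch with hypothetical utility weights (not drawn from any specific measure):

```python
# Illustrative QALY arithmetic: a preference-based measure supplies a
# utility weight per health state (1.0 = full health, 0.0 = dead);
# QALYs are utility x duration, summed over states occupied.
def qalys(states):
    """states: iterable of (utility_weight, years) tuples."""
    return sum(u * t for u, t in states)

# Hypothetical profile: two years in a moderate state at utility 0.6,
# then three years in a milder state at utility 0.8.
print(qalys([(0.6, 2.0), (0.8, 3.0)]))   # about 3.6 QALYs
```

The choice between a CSPBM and a generic PBM changes only where the utility weights come from; the incremental cost-effectiveness ratio then compares differences in these QALY totals across interventions.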

  12. In situ and laboratory measurements of very low permeability in the Tournemine argilites (Aveyron). Comparison of methodologies and scale effect

    International Nuclear Information System (INIS)

    Boisson, J.Y.; Cabrera, J.

    1998-01-01

    At the request of the Institut de Protection et de Surete Nucleaire (IPSN - Institute of Nuclear Safety and Protection), ANTEA visited the Tournemire site (Aveyron) to carry out a hydraulic characterization of the 200 m-thick Toarcian and Domerian formations accessible by tunnel. Permeability measurements were made using the borehole pulse-test method, either over the whole hole or perpendicular to more permeable fractured zones. The tests yielded an approximate value for the hydraulic head and an order of magnitude for the permeability at the 1 to 10 metre scale (10^-11 to 10^-13 m/s). A borehole was then equipped for a long-duration (6 months) measurement of the hydraulic head in the rock body. Laboratory measurements were made on 4 cm-diameter core samples taken from different boreholes. The tests, carried out under triaxial stress, required preliminary saturation-consolidation of the test samples. By applying steady-state flow or a hydraulic pulse, it was possible to measure a permeability on the order of 10^-14 m/s for the matrix of the clayey material. The difference between laboratory and in situ values is explained by the presence of fractures in the rock body. Moreover, it seems that the hydraulic conditions of measurement in the field around the hole could have an influence on the final result. (authors)

  13. Identification of voltage stability condition of a power system using measurements of bus variables

    Directory of Open Access Journals (Sweden)

    Durlav Hazarika

    2014-12-01

    Full Text Available Several online methods have been proposed for investigating the voltage stability condition of an interconnected power system using measurements of voltage and current phasors at a bus. For this purpose, phasor measurement units (PMUs) are used. A PMU is a device which measures the electrical waves on an electrical network, using a common time source (reference bus) for synchronisation. This study proposes a method for online monitoring of the voltage stability condition of a power system using measurements of bus variables, namely (i) real power, (ii) reactive power and (iii) bus voltage magnitude at a bus. The measurements of real power, reactive power and bus voltage magnitude could be extracted/captured from a smart energy meter. The financial involvement for implementation of the proposed method would be significantly lower compared with the PMU-based method.

  14. Measuring the payback of research activities: a feasible ex-post evaluation methodology in epidemiology and public health.

    Science.gov (United States)

    Aymerich, Marta; Carrion, Carme; Gallo, Pedro; Garcia, Maria; López-Bermejo, Abel; Quesada, Miquel; Ramos, Rafel

    2012-08-01

    Most ex-post evaluations of research funding programs are based on bibliometric methods and, although this approach has been widely used, it examines only one facet of a project's impact, namely scientific productivity. More comprehensive models of payback assessment of research activities are designed for large-scale projects with extensive funding. The purpose of this study was to design and implement a methodology for the ex-post evaluation of small-scale projects that would take into account both the fulfillment of projects' stated objectives and wider benefits to society as payback measures. We used a two-phase ex-post approach to appraise impact for 173 small-scale projects funded in 2007 and 2008 by a Spanish network center for research in epidemiology and public health. In the internal phase we used a questionnaire to query the principal investigator (PI) on the outcomes as well as actual and potential impact of each project; in the external phase we sent a second questionnaire to external reviewers with the aim of assessing (by peer review) the performance of each individual project. Overall, 43% of the projects were rated as having completed their objectives "totally", and 40% "considerably". The research activities funded were reported by PIs as socially beneficial, with their greatest impact being on research capacity (50% of payback to society) and on knowledge translation (above 11%). The method proposed showed a good discriminating ability that makes it possible to measure, reliably, the extent to which a project's objectives were met as well as the degree to which the project contributed to enhancing the group's scientific performance and its social payback. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Towards Uniform Accelerometry Analysis: A Standardization Methodology to Minimize Measurement Bias Due to Systematic Accelerometer Wear-Time Variation

    Directory of Open Access Journals (Sweden)

    Tarun R. Katapally, Nazeem Muhajarine

    2014-06-01

    Full Text Available Accelerometers are predominantly used to objectively measure the entire range of activity intensities – sedentary behaviour (SED), light physical activity (LPA) and moderate to vigorous physical activity (MVPA). However, studies consistently report results without accounting for systematic accelerometer wear-time variation (within and between participants), jeopardizing the validity of these results. This study describes the development of a standardization methodology to understand and minimize measurement bias due to wear-time variation. Accelerometry is generally conducted over seven consecutive days, with participants' data being commonly considered 'valid' only if wear-time is at least 10 hours/day. However, even within 'valid' data, there could be systematic wear-time variation. To explore this variation, accelerometer data from the Smart Cities, Healthy Kids study (www.smartcitieshealthykids.com) were analyzed descriptively and with repeated measures multivariate analysis of variance (MANOVA). Subsequently, a standardization method was developed, where case-specific observed wear-time is controlled to an analyst-specified time period. Next, case-specific accelerometer data are interpolated to this controlled wear-time to produce standardized variables. To understand discrepancies owing to wear-time variation, all analyses were conducted pre- and post-standardization. Descriptive analyses revealed systematic wear-time variation, both between and within participants. Pre- and post-standardized descriptive analyses of SED, LPA and MVPA revealed a persistent and often significant trend of wear-time's influence on activity. SED was consistently higher on weekdays before standardization; however, this trend was reversed post-standardization. Even though MVPA was significantly higher on weekdays both pre- and post-standardization, the magnitude of this difference decreased post-standardization. Multivariable analyses with standardized SED, LPA and
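One simple way to realize the standardization idea described above — controlling observed wear-time to an analyst-specified period and interpolating activity to it — is a linear rescaling of each day's activity minutes. The variable names and the 600-minute controlled wear-time below are illustrative assumptions, not the study's exact protocol.

```python
# Sketch: rescale a participant-day's SED/LPA/MVPA minutes to a common
# controlled wear-time so wear-time variation does not bias comparisons.
def standardize(minutes_by_intensity, observed_wear, controlled_wear=600.0):
    """Linearly interpolate activity minutes to a controlled wear-time
    (default 600 min = 10 h/day, a common validity threshold)."""
    scale = controlled_wear / observed_wear
    return {k: v * scale for k, v in minutes_by_intensity.items()}

# One hypothetical participant-day with 720 min of observed wear.
day = {"SED": 480.0, "LPA": 180.0, "MVPA": 60.0}
print(standardize(day, observed_wear=720.0))
```

Rescaling preserves each day's intensity composition while removing the wear-time difference, which is why pre/post-standardization comparisons can reverse apparent weekday/weekend trends as the authors report.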

  16. Optimization of process condition for the preparation of amine-impregnated activated carbon developed for CO2 capture and applied to methylene blue adsorption by response surface methodology.

    Science.gov (United States)

    Das, Dipa; Meikap, Bhim C

    2017-10-15

    The present research describes the optimal adsorption conditions for methylene blue (MB). The adsorbent used here was monoethanolamine-impregnated activated carbon (MEA-AC) prepared from green coconut shell. Response surface methodology (RSM) is the multivariate statistical technique used for the optimization of the process variables. A central composite design was used to determine the effect of activation temperature, activation time and impregnation ratio on MB removal. The percentage (%) of MB adsorbed by MEA-AC was evaluated as the response of the system. A quadratic model was developed for the response. From the analysis of variance, the factor most influential on the experimental design response was identified. The optimum conditions for the preparation of MEA-AC from green coconut shells are an activation temperature of 545.6°C, an activation time of 41.64 min and an impregnation ratio of 0.33, achieving a maximum removal efficiency of 98.21%. At the same optimum parameters, the % MB removal from textile-industry effluent was examined and found to be 96.44%.

  17. Measurement properties of tools used to assess depression in adults with and without autism spectrum conditions: A systematic review.

    Science.gov (United States)

    Cassidy, S A; Bradley, L; Bowen, E; Wigham, S; Rodgers, J

    2018-01-23

    Depression is the most commonly experienced mental health condition in adults with autism spectrum conditions (ASC). However, it is unclear what tools are currently being used to assess depression in ASC, or whether tools need to be adapted for this group. This systematic review therefore aimed to identify tools used to assess depression in adults with and without ASC, and then evaluate these tools for their appropriateness and measurement properties. Medline, PsychINFO and Web of Knowledge were searched for studies of depression in: (a) adults with ASC, without co-morbid intellectual disability; and (b) adults from the general population without co-morbid conditions. Articles examining the measurement properties of these tools were then searched for using a methodological filter in PubMed, and the quality of the evidence was evaluated using the COSMIN checklist. Twelve articles were identified which utilized three tools to assess depression in adults with ASC, but only one article which assessed the measurement properties of one of these tools was identified and thus evaluated. Sixty-four articles were identified which utilized five tools to assess depression in general population adults, and fourteen articles had assessed the measurement properties of these tools. Overall, two tools were found to be robust in their measurement properties in the general population: the Beck Depression Inventory (BDI-II) and the Patient Health Questionnaire (PHQ-9). Crucially, only one study was identified from the COSMIN search, which showed weak evidence in support of the measurement properties of the BDI-II in an ASC sample. Implications for effective measurement of depression in ASC are discussed. Autism Res 2018. © 2018 The Authors Autism Research published by International Society for Autism Research and Wiley Periodicals, Inc. Depression is the most common mental health problem experienced by adults with autism. However, the current study found very limited evidence

  18. The Metal-Halide Lamp Under Varying Gravity Conditions Measured by Emission and Laser Absorption Spectroscopy

    Science.gov (United States)

    Flikweert, A. J.; Nimalasuriya, T.; Kroesen, G. M. W.; Haverlag, M.; Stoffels, W. W.

    2009-11-01

    Diffusive and convective processes in the metal-halide lamp cause an unwanted axial colour segregation. Convection is induced by gravity. To understand the flow phenomena in the arc discharge lamp, it has been investigated under normal laboratory conditions, micro-gravity (ISS and parabolic flights) and hyper-gravity (parabolic flights at 2 g; centrifuge at 1 g-10 g). The measurement techniques are webcam imaging, emission spectroscopy and laser absorption spectroscopy. This paper aims to give an overview of the effect of different artificial gravity conditions on the lamp and compares the results from the three measurement techniques.

  19. Identification of complex model thermal boundary conditions based on exterior temperature measurement

    International Nuclear Information System (INIS)

    Lu Jianming; Ouyang Guangyao; Zhang Ping; Rong Bojun

    2012-01-01

    Combining the advantages of finite element software for temperature field analysis with a multivariate function optimization algorithm, a feasible method based on exterior temperature measurements was proposed to obtain the thermal boundary conditions required for temperature field analysis. The thermal boundary conditions can be obtained from only a few temperature measurement values. Taking the identification of the convection heat transfer coefficient of a high-power-density diesel engine cylinder head as an example, the calculation results show that when the temperature measurement error was less than 0.5 ℃, the maximum relative error was less than 2%. It is shown that the new method is feasible (authors)
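
    The inverse identification described above can be illustrated with a minimal sketch: a hypothetical 1D plane-wall resistance model whose interior convection coefficient is recovered by bisection from a single exterior temperature reading. All geometry, material values and function names below are illustrative assumptions, not the paper's model.

```python
# Sketch: identify an interior convection coefficient h from an exterior
# surface-temperature measurement, using a hypothetical 1D steady-state
# thermal-resistance model (all numbers are illustrative).

def exterior_surface_temp(h, T_in=600.0, T_amb=25.0,
                          L=0.02, k=50.0, h_out=15.0):
    """Exterior surface temperature of a plane wall [degC] for a trial h."""
    q = (T_in - T_amb) / (1.0 / h + L / k + 1.0 / h_out)  # heat flux, W/m^2
    return T_amb + q / h_out

def identify_h(T_s_meas, lo=1.0, hi=5000.0, tol=1e-6):
    """Bisection on h: exterior temperature rises monotonically with h."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if exterior_surface_temp(mid) < T_s_meas:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Synthetic "measurement" generated with a known h, then recovered:
h_true = 850.0
T_meas = exterior_surface_temp(h_true)
h_est = identify_h(T_meas)
print(round(h_est, 2))  # → 850.0
```

    In the paper's multivariate setting the scalar bisection would be replaced by an optimizer over several boundary-condition parameters, with a finite element solver in place of the resistance model.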

  20. Reconstruction of photon number conditioned states using phase randomized homodyne measurements

    International Nuclear Information System (INIS)

    Chrzanowski, H M; Assad, S M; Bernu, J; Hage, B; Lam, P K; Symul, T; Lund, A P; Ralph, T C

    2013-01-01

    We experimentally demonstrate the reconstruction of a photon number conditioned state without using a photon number discriminating detector. By using only phase randomized homodyne measurements, we reconstruct up to the three photon subtracted squeezed vacuum state. The reconstructed Wigner functions of these states show regions of pronounced negativity, signifying the non-classical nature of the reconstructed states. The techniques presented allow for complete characterization of the role of a conditional measurement on an ensemble of states, and might prove useful in systems where photon counting still proves technically challenging. (paper)

  1. Optimisation of the measurement protocols of 129I and 129I/127I. Methodology establishment for the measurement in environmental matrices

    International Nuclear Information System (INIS)

    Frechou, C.

    2000-01-01

    129I is a natural long-lived isotope with a half-life of 15.7 million years, also produced artificially in nuclear power plants. It is released in the liquid and gaseous effluents of nuclear fuel reprocessing plants. 129I is integrated in all biological compartments at different activity levels, depending on their distance from the emission source and their ability to metabolize iodine. The performances of the different 129I and 129I/127I measurement techniques available (radiochemical neutron activation analysis, accelerator mass spectrometry, direct γ-X spectrometry and liquid scintillation) were evaluated. The associated radiochemical preparation steps of the first two techniques were optimized and adapted to the characteristics of the major environmental matrices. In a first step, the radiochemical protocols were developed and validated. In a second step, intercomparison exercises were conducted on various environmental samples presenting different 129I activity levels. They showed good agreement between the results given by the three techniques on different environmental matrices with activities between 0.2 and 200 Bq·kg⁻¹ dry weight. As a conclusion, a methodology for the measurement of 129I and the 129I/127I ratio in environmental samples is proposed. It includes a decision diagram taking into account the characteristics of the matrices, the detection limits and the turnaround time. A study of the losses of 129I during the calcination of an alga was conducted by direct γ-X spectrometry, and application studies were performed to measure 129I levels in different biological compartments from various locations: interspecific variation of 129I activity in different species of seaweed from the French Channel coast under the relative influence of La Hague, 129I levels in bovine thyroids from the Cotentin area, and 129I in plant samples collected around the nuclear reprocessing plant of Marcoule. (author)

  2. Effect of Complex Working Conditions on Nurses Who Exert Coercive Measures in Forensic Psychiatric Care.

    Science.gov (United States)

    Gustafsson, Niclas; Salzmann-Erikson, Martin

    2016-09-01

    Nurses who exert coercive measures on patients within psychiatric care are emotionally affected. However, research on their working conditions and environment is limited. The purpose of the current study was to describe nurses' experiences and thoughts concerning the exertion of coercive measures in forensic psychiatric care. The investigation was a qualitative interview study using unstructured interviews; data were analyzed with inductive content analysis. Results described participants' thoughts and experiences of coercive measures from four main categories: (a) acting against the patients' will, (b) reasoning about ethical justifications, (c) feelings of compassion, and (d) the need for debriefing. The current study illuminates the working conditions of nurses who exert coercive measures in clinical practice with patients who have a long-term relationship with severe symptomatology. The findings are important to further discuss how nurses and leaders can promote a healthier working environment. [Journal of Psychosocial Nursing and Mental Health Services, 54(9), 37-43.]. Copyright 2016, SLACK Incorporated.

  3. The Efficiency of Repressive Anti-Corruption Measures in Conditions of High-Level Corruption

    OpenAIRE

    Abramov Fedir V.

    2017-01-01

    The article is aimed at determining the efficiency of repressive anti-corruption measures in conditions of high-level corruption. It is shown that the formal rules regulating the use of repressive methods of countering corruption are characterized by a significant level of target inefficiency of formal rules. Resulting from ignorance as to the causes of both the occurrence and spread of corruption – the inefficiency of the current formal rules – repressive anti-corruption measures are fundamen...

  4. BER-3.2 report: Methodology for justification and optimization of protective measures including a case study

    International Nuclear Information System (INIS)

    Hedemann Jensen, P.; Sinkko, K.; Walmod-Larsen, O.; Gjoerup, H.L.; Salo, A.

    1992-07-01

    This report is part of the Nordic BER-3 project's work to propose and harmonize Nordic intervention levels for countermeasures in case of nuclear accidents. It focuses on the methodology for justification and optimization of protective measures in a reactor accident situation with a large release of fission products to the environment. The down-wind situation is very complicated, and the dose to the exposed population is almost unpredictable. The task of the radiation protection experts, advising the decision makers on the doses averted by each action at hand in the situation, is complicated. That of the decision makers is certainly more so: on behalf of the society they represent, they must decide whether to follow the advice of their radiation protection experts or to add further arguments, economic or political (or personal), to their considerations before their decisions are taken. Two analysis methods available for handling such situations, cost-benefit analysis and multi-attribute utility analysis, are described in principle and are utilized in a case study: the impacts of a Chernobyl-like accident on the Swedish island of Gotland in the Baltic Sea are analyzed with regard to the acute consequences. The use of the intervention principles found in international guidance (IAEA 91, ICRP 91), which can be summarized as the principles of justification, optimization and avoidance of unacceptable doses, is described. How to handle more intangible factors of a psychological or political character is indicated. (au) (6 tabs., 3 ills., 17 refs.)
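
    In its simplest cost-benefit form, the justification principle invoked above reduces to comparing the monetary value of the averted collective dose against the cost of the countermeasure. A minimal sketch follows; the monetary value per person-sievert (alpha) and the example numbers are illustrative assumptions, not values from the report.

```python
# Sketch of the cost-benefit test behind the justification principle:
# a protective measure is justified when the monetary value of the
# collective dose it averts exceeds its total cost.

ALPHA = 100_000.0  # EUR per person-Sv, assumed illustrative policy value

def justified(averted_dose_person_sv, cost_eur, alpha=ALPHA):
    """Return True when the averted-dose benefit outweighs the cost."""
    return alpha * averted_dose_person_sv > cost_eur

# Example: a countermeasure averting 50 person-Sv at a cost of 2 MEUR
print(justified(50.0, 2_000_000.0))  # True: benefit 5 MEUR > cost 2 MEUR
```

    Multi-attribute utility analysis generalizes this by adding weighted terms for the intangible factors mentioned in the abstract rather than a single monetary criterion.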

  5. Experimental measurements of the solubility of technetium under near-field conditions

    International Nuclear Information System (INIS)

    Pilkington, N.J.; Wilkins, J.D.

    1988-05-01

    The solubility of technetium in contact with hydrated technetium dioxide under near-field conditions has been measured experimentally. The values obtained were changed little by a change in pH or in the filtration method used. The presence of organic degradation products increased slightly the solution concentration of technetium. (author)

  6. Measuring the iron spectral opacity in solar conditions using a double ablation front scheme

    Energy Technology Data Exchange (ETDEWEB)

    Colaitis, A. [Centre Lasers Intenses et Applications, Talence (France); CEA/DRF/IRFU/DAp, CEA Saclay (France); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Univ. of Rochester, NY (United States). Lab. for Laser Energetics; Ducret, J. E. [Centre Lasers Intenses et Applications, Talence (France); CEA/DRF/IRFU/DAp, CEA Saclay (France); Turck-Chieze, S [CEA/DRF/IRFU/DAp, CEA Saclay (France); Pennec, M L [CEA/DRF/IRFU/DAp, CEA Saclay (France); CEA/DIF, Arpajon (France); Blancard, C [CEA/DIF, Arpajon (France)

    2018-01-22

    We propose a new method to achieve hydrodynamic conditions relevant for the investigation of the radiation transport properties of the plasma at the base of the solar convection zone. The method is designed in the framework of opacity measurements with high-power lasers and exploits the temporal and spatial stability of hydrodynamic parameters in counter-propagating Double Ablation Front (DAF) structures.

  7. Vegetation relevés and soil measurements in the Netherlands: the Ecological Conditions Database (EC)

    NARCIS (Netherlands)

    Wamelink, G.W.W.; Adrichem, van M.H.C.; Dobben, van H.F.; Frissel, J.Y.; Held, den M.E.; Joosten, V.; Malinowska, A.H.; Slim, P.A.; Wegman, R.M.A.

    2012-01-01

    Since its establishment around 1990, the Ecological Conditions Database (EC; GIVD ID EU-00-006) has been accumulating vegetation relevés from the Netherlands, each accompanied by at least one abiotic soil measurement (e.g. pH or nutrient availability). On 1-1-2010, the database contained 8,229 relevés.

  8. The measurement of interplanetary scintillations in conditions of strong radio interference

    International Nuclear Information System (INIS)

    Duffett-Smith, P.J.

    1980-01-01

    Observations of interplanetary scintillations (IPS) are often severely limited by interference from man-made transmissions within the receiver pass-band. A new method of measuring IPS is described which can give useful data even in conditions of bad interference. (author)

  9. Analysis of the environmental conditions at Gale Crater from MSL/REMS measurements

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, G.; Torre-Juarez, M. de la; Vicente-Retortillo, A.; Kemppinen, O.; Renno, N.; Lemmon, M.

    2016-07-01

    The environmental conditions at Gale Crater during the first 1160 sols of the Mars Science Laboratory (MSL) mission are assessed using measurements taken by the Rover Environmental Monitoring Station (REMS) on board the MSL Curiosity rover. REMS is a suite of sensors developed to assess the environmental conditions along the rover traverse. In particular, REMS has been measuring atmospheric pressure, atmospheric and ground temperature, relative humidity, UV radiation flux and wind speed. Here we analyze the highest-confidence processed data for atmospheric pressure, atmospheric and ground temperature, and relative humidity. In addition, we estimate the daily UV irradiation at the surface of Gale Crater using dust opacity values derived from the Mastcam instrument. REMS is still in operation, but it has already provided the most comprehensive coverage of surface environmental conditions recorded by a spacecraft landed on Mars. (Author)

  10. Measurement of Survival Time in Brachionus Rotifers: Synchronization of Maternal Conditions.

    Science.gov (United States)

    Kaneko, Gen; Yoshinaga, Tatsuki; Gribble, Kristin E; Welch, David M; Ushio, Hideki

    2016-07-22

    Rotifers are microscopic cosmopolitan zooplankton used as models in ecotoxicological and aging studies due to their several advantages such as short lifespan, ease of culture, and parthenogenesis that enables clonal culture. However, caution is required when measuring their survival time as it is affected by maternal age and maternal feeding conditions. Here we provide a protocol for powerful and reproducible measurement of the survival time in Brachionus rotifers following a careful synchronization of culture conditions over several generations. Empirically, poor synchronization results in early mortality and a gradual decrease in survival rate, thus resulting in weak statistical power. Indeed, under such conditions, calorie restriction (CR) failed to significantly extend the lifespan of B. plicatilis although CR-induced longevity has been demonstrated with well-synchronized rotifer samples in past and present studies. This protocol is probably useful for other invertebrate models, including the fruitfly Drosophila melanogaster and the nematode Caenorhabditis elegans, because maternal age effects have also been reported in these species.

  11. Concurrent measurement of "real-world" stress and arousal in individuals with psychosis: assessing the feasibility and validity of a novel methodology.

    Science.gov (United States)

    Kimhy, David; Delespaul, Philippe; Ahn, Hongshik; Cai, Shengnan; Shikhman, Marina; Lieberman, Jeffrey A; Malaspina, Dolores; Sloan, Richard P

    2010-11-01

    Psychosis has repeatedly been suggested to be affected by increases in stress and arousal. However, there is a dearth of evidence supporting the temporal link between stress, arousal, and psychosis during "real-world" functioning. This paucity of evidence may stem from limitations of current research methodologies. Our aim is to test the feasibility and validity of a novel methodology designed to measure concurrent stress and arousal in individuals with psychosis during "real-world" daily functioning. Twenty patients with psychosis completed a 36-hour ambulatory assessment of stress and arousal. We used the experience sampling method with palm computers to assess stress (10 times per day, 10 AM to 10 PM) along with concurrent ambulatory measurement of cardiac autonomic regulation using a Holter monitor. The clocks of the palm computer and Holter monitor were synchronized, allowing the temporal linking of the stress and arousal data. We used power spectral analysis to determine the parasympathetic contributions to autonomic regulation and sympathovagal balance during the 5 minutes before and after each experience sample. Patients completed 79% of the experience samples (75% with valid concurrent arousal data). Momentary increases in stress were inversely correlated with concurrent parasympathetic activity (ρ = -.27). The results support the feasibility and validity of our methodology in individuals with psychosis. The methodology offers a novel way to study in high time resolution the concurrent, "real-world" interactions between stress, arousal, and psychosis. The authors discuss the methodology's potential applications and future research directions.
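
    The power spectral analysis step can be sketched as follows: high-frequency (HF, 0.15-0.40 Hz) power of the beat-to-beat (RR) interval series indexes parasympathetic activity, and the ratio of low-frequency (LF) to HF power is a common sympathovagal-balance index. The resampling rate, band limits and synthetic tachogram below are illustrative assumptions, not the study's processing pipeline.

```python
import numpy as np

# Sketch of HRV band-power computation: resample the RR tachogram to an
# even grid, take a periodogram, and integrate the LF and HF bands.

FS = 4.0  # resampling rate for the tachogram, Hz (assumed)

def band_powers(rr_intervals):
    """rr_intervals: sequence of beat-to-beat intervals in seconds."""
    rr = np.asarray(rr_intervals, dtype=float)
    beat_times = np.cumsum(rr)
    # Evenly resample the irregularly sampled RR series so an FFT applies
    t = np.arange(beat_times[0], beat_times[-1], 1.0 / FS)
    rr_even = np.interp(t, beat_times, rr)
    rr_even -= rr_even.mean()                      # remove DC component
    psd = np.abs(np.fft.rfft(rr_even)) ** 2 / len(rr_even)
    freqs = np.fft.rfftfreq(len(rr_even), d=1.0 / FS)
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf, hf

# Synthetic tachogram: 0.9 s mean RR with a 0.3 Hz (respiratory-band)
# oscillation, so HF power should dominate LF power.
t_beat = np.cumsum(np.full(600, 0.9))
rr = 0.9 + 0.05 * np.sin(2 * np.pi * 0.3 * t_beat)
lf, hf = band_powers(rr)
print(hf > lf)  # True: the HF band dominates this rhythm
```

    An LF/HF ratio computed per 5-minute window before and after each experience sample would give the sympathovagal index that the study correlates with momentary stress.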

  12. Quantitative approach for optimizing e-beam condition of photoresist inspection and measurement

    Science.gov (United States)

    Lin, Chia-Jen; Teng, Chia-Hao; Cheng, Po-Chung; Sato, Yoshishige; Huang, Shang-Chieh; Chen, Chu-En; Maruyama, Kotaro; Yamazaki, Yuichiro

    2018-03-01

    The severe process margin in advanced technology nodes of semiconductor devices is controlled by e-beam metrology and e-beam inspection systems using scanning electron microscopy (SEM) images. With SEM, larger-area images with higher image quality are required to collect massive amounts of data for metrology and to detect defects over a large area for inspection. Although photoresist is one of the critical process steps in semiconductor device manufacturing, observing photoresist patterns in SEM images is troublesome, especially in the case of large images. The charging effect of e-beam irradiation on photoresist patterns degrades image quality, causes CD variation in metrology, and makes it difficult to continue defect inspection over a long time for a large area. In this study, we established a quantitative approach for optimizing the e-beam condition with the "Die to Database" algorithm of NGR3500 on photoresist patterns to minimize the charging effect, and we enhanced the performance of measurement and inspection on photoresist patterns by using the optimized e-beam condition. NGR3500 is a geometry verification system based on the "Die to Database" algorithm, which compares SEM images with design data [1]. By comparing SEM image and design data, key performance indicators (KPIs) of the SEM image such as sharpness, S/N, gray-level variation in the FOV and image shift can be retrieved. These KPIs were analyzed under different e-beam conditions, consisting of landing energy, probe current, scanning speed and scanning method, and the best e-beam condition could be achieved with maximum image quality, maximum scanning speed and minimum image shift. Through this quantitative approach to optimizing the e-beam condition, we could observe the dependence of photoresist charging on the SEM condition. By using the optimized e-beam condition, measurement could be continued stably on photoresist patterns for over 24 hours, and the KPIs of the SEM image confirmed the image quality during measurement.

  13. Multi-Population Invariance with Dichotomous Measures: Combining Multi-Group and MIMIC Methodologies in Evaluating the General Aptitude Test in the Arabic Language

    Science.gov (United States)

    Sideridis, Georgios D.; Tsaousis, Ioannis; Al-harbi, Khaleel A.

    2015-01-01

    The purpose of the present study was to extend the model of measurement invariance by simultaneously estimating invariance across multiple populations in the dichotomous instrument case using multi-group confirmatory factor analytic and multiple indicator multiple causes (MIMIC) methodologies. Using the Arabic version of the General Aptitude Test…

  14. Modelling and measurement of wear particle flow in a dual oil filter system for condition monitoring

    DEFF Research Database (Denmark)

    Henneberg, Morten; Eriksen, René Lynge; Fich, Jens

    2016-01-01

    The quantity of wear particles in gear oil is analysed with respect to system running conditions. It is shown that the model fits the data in terms of the startup "particle burst" phenomenon, quasi-stationary conditions during operation, and clean-up filtration when placed out of operation. In order to establish ... boundary condition for the particle burst phenomenon, the release of wear particles from a pleated mesh filter is measured in a test rig and included in the model. The findings show that a dual filter model, with the startup phenomenon included, can describe trends in the wear particle flow observed in the gear ... particle generation is made possible by model parameter estimation and identification of an unintended lack of filter change. The model may also be used to optimise system and filtration performance, and to enable continuous condition monitoring.

  15. Investigation of Seepage Meter Measurements in Steady Flow and Wave Conditions.

    Science.gov (United States)

    Russoniello, Christopher J; Michael, Holly A

    2015-01-01

    Water exchange between surface water and groundwater can modulate or generate ecologically important fluxes of solutes across the sediment-water interface. Seepage meters can directly measure fluid flux, but mechanical resistance and surface water dynamics may lead to inaccurate measurements. Tank experiments were conducted to determine the effects of mechanical resistance on measurement efficiency and the occurrence of directional asymmetry that could lead to erroneous net flux measurements. Seepage meter efficiency was high (average of 93%) and consistent for inflow and outflow under steady flow conditions. Wave effects on seepage meter measurements were investigated in a wave flume. Seepage meter net flux measurements averaged 0.08 cm/h, greater than the expected net-zero flux but significantly less than the theoretical wave-driven unidirectional discharge or recharge. Calculations of unidirectional flux from pressure measurements (Darcy flux) and theory matched well for a ratio of wavelength to water depth less than 5, but not when this ratio was greater. Both were higher than seepage meter measurements of unidirectional flux made with one-way valves. Discharge averaged 23% greater than recharge in both seepage meter measurements and Darcy calculations of unidirectional flux. Removal of the collection bag reduced this net discharge. The presence of a seepage meter reduced the amplitude of pressure signals at the bed and resulted in a nearly uniform pressure distribution beneath the seepage meter. These results show that seepage meters may provide accurate measurements of both discharge and recharge under steady flow conditions and illustrate the potential measurement errors associated with dynamic wave environments. © 2014, National Ground Water Association.

  16. The manometric sorptomat—an innovative volumetric instrument for sorption measurements performed under isobaric conditions

    International Nuclear Information System (INIS)

    Kudasik, Mateusz

    2016-01-01

    The present paper discusses the concept of measuring the sorption process by means of the volumetric method, developed in such a way as to allow measurements to be performed under isobaric conditions. On the basis of this concept, a prototype sorption instrument was built: the manometric sorptomat. The paper provides a detailed description of the idea behind the instrument and of the way it works. In order to evaluate the usefulness of the device for sorption measurements carried out under laboratory conditions, comparative studies were conducted in which the results of sorption measurements obtained with the developed instrument were compared with the results obtained with a reference device. The objects of comparison were the sorption capacities of hard coal samples, calculated on the basis of the established courses of the methane sorption process. The results were regarded as compatible if the compared values fell within the range of the measurement uncertainty of the two devices. For the comparative studies, fifteen granular samples of hard coal, representing the 0.20-0.25 mm grain fraction and coming from various mines of the Upper Silesian Coal Basin, were used. After comparing the results obtained with the original manometric sorptomat with the results obtained with the gravimetric reference device, it was observed that the compatibility of the sorption capacity measurements was over 90%, based on the defined criterion of measurement compatibility. (paper)
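
    The compatibility criterion used in the comparison, results agreeing when they fall within the combined measurement uncertainty of the two instruments, can be sketched in a few lines; the numeric values below are illustrative, not the paper's data.

```python
# Sketch of a measurement-compatibility check: two results agree when
# their difference lies within the sum of the two uncertainty ranges.

def compatible(value_a, unc_a, value_b, unc_b):
    """True when |a - b| falls within the combined uncertainty."""
    return abs(value_a - value_b) <= unc_a + unc_b

# Illustrative sorption capacities (e.g. cm3/g) with instrument uncertainties
print(compatible(18.2, 0.6, 18.9, 0.5))  # True: |Δ| = 0.7 <= 1.1
```

    Applying this test to each of the fifteen sample pairs and counting the agreements yields the compatibility percentage reported in the abstract.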

  17. Possibilities of crack mouth opening displacement (CMOD) measurement under boiling and pressurized water reactor conditions

    International Nuclear Information System (INIS)

    Ehling, W.

    1984-01-01

    Fracture mechanics investigations carried out so far under laboratory conditions cover only part of the material stresses, as effects which occur in nuclear power stations in particular, such as corrosion and radioactive radiation, are largely left out of account. Therefore, experiments including these effects were recently carried out in autoclaves, in test rigs simulating reactors (HRD experimental plant) and in experimental reactors. An important parameter of experimental fracture mechanics is the measurement of the crack opening displacement (COD). In the laboratory, the crack opening is measured on standard samples with so-called clip gauges (transducers based on strain gauges, which convert the mechanical deformation of springs into electrical signals). It was therefore sensible to use these high-temperature strain gauges (HTD) for the development of a displacement measuring system for pressurized water and boiling water reactor conditions. (orig.)

  18. Using Electronic Health Record Data to Measure Care Quality for Individuals with Multiple Chronic Medical Conditions.

    Science.gov (United States)

    Bayliss, Elizabeth A; McQuillan, Deanna B; Ellis, Jennifer L; Maciejewski, Matthew L; Zeng, Chan; Barton, Mary B; Boyd, Cynthia M; Fortin, Martin; Ling, Shari M; Tai-Seale, Ming; Ralston, James D; Ritchie, Christine S; Zulman, Donna M

    2016-09-01

    To inform the development of a data-driven measure of quality care for individuals with multiple chronic conditions (MCCs) derived from an electronic health record (EHR). Qualitative study using focus groups, interactive webinars, and a modified Delphi process. Research department within an integrated delivery system. The webinars and Delphi process included 17 experts in clinical geriatrics and primary care, health policy, quality assessment, health technology, and health system operations. The focus group included 10 individuals aged 70-87 with three to six chronic conditions selected from a random sample of individuals aged 65 and older with three or more chronic medical conditions. Through webinars and the focus group, input was solicited on constructs representing high-quality care for individuals with MCCs. A working list was created of potential measures representing these constructs. Using a modified Delphi process, experts rated the importance of each possible measure and the feasibility of implementing each measure using EHR data. High-priority constructs reflected processes rather than outcomes of care. High-priority constructs that were potentially feasible to measure included assessing physical function, depression screening, medication reconciliation, annual influenza vaccination, outreach after hospital admission, and documented advance directives. High-priority constructs that were less feasible to measure included goal setting and shared decision-making, identifying drug-drug interactions, assessing social support, timely communication with patients, and other aspects of good customer service. Lower-priority domains included pain assessment, continuity of care, and overuse of screening or laboratory testing. High-quality MCC care should be measured using meaningful process measures rather than outcomes. Although some care processes are currently extractable from electronic data, capturing others will require adapting and applying technology to

  19. Preliminary comparative assessment of PM10 hourly measurement results from new monitoring stations type using stochastic and exploratory methodology and models

    Science.gov (United States)

    Czechowski, Piotr Oskar; Owczarek, Tomasz; Badyda, Artur; Majewski, Grzegorz; Rogulski, Mariusz; Ogrodnik, Paweł

    2018-01-01

    The paper presents key issues from the preliminary stage of a proposed extended equivalence assessment of measurement results from new portable devices: the comparability of hourly PM10 concentration series with reference station measurement results using statistical methods. The article presents the technical aspects of the new portable meters. The emphasis is placed on a methodological concept for assessing the comparability of the results using stochastic and exploratory methods. The concept is based on the observation that a simple comparison of result series in the time domain is insufficient; the regularities should be compared in three complementary fields of statistical modeling: time, frequency and space. The proposal is based on modeling results for five annual series of measurement results from the new mobile devices and from the WIOS (Provincial Environmental Protection Inspectorate) reference station located in the city of Nowy Sacz. The obtained results indicate both the completeness of the comparison methodology and the high correspondence between the results of the new measurement devices and the reference.
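
    The idea that equivalence should be checked in more than the time domain can be sketched as follows: two synthetic "hourly PM10" series are compared both as raw time series and through their amplitude spectra. The series, gain bias and noise levels below are illustrative assumptions, not the paper's data or models.

```python
import numpy as np

# Sketch: compare a portable sensor against a reference station in two
# of the three proposed fields, the time domain and the frequency domain.

def time_domain_r(a, b):
    """Pearson correlation of the two concentration series."""
    return float(np.corrcoef(a, b)[0, 1])

def frequency_domain_r(a, b):
    """Correlation of the one-sided FFT amplitude spectra."""
    sa = np.abs(np.fft.rfft(a - np.mean(a)))
    sb = np.abs(np.fft.rfft(b - np.mean(b)))
    return float(np.corrcoef(sa, sb)[0, 1])

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
daily = 20 + 10 * np.sin(2 * np.pi * hours / 24)        # diurnal PM10 cycle
reference = daily + rng.normal(0, 2, hours.size)
portable = 1.05 * daily + rng.normal(0, 2, hours.size)  # small gain bias

print(round(time_domain_r(reference, portable), 3))
print(round(frequency_domain_r(reference, portable), 3))
```

    A spatial-field comparison (the third field named in the abstract) would additionally require series from several station locations.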

  20. Critical experiments, measurements and analyses to establish a crack arrest methodology for nuclear pressure vessel steels. First annual progress report

    International Nuclear Information System (INIS)

    Hahn, G.T.; Gehlen, P.C.; Hoagland, R.G.; Kanninen, M.F.; Popelar, C.; Rosenfield, A.R.; deCampos, V.S.

    1975-08-01

    The one-dimensional, Timoshenko beam-on-a-generalized elastic foundation treatment has been extended to contoured-DCB specimens and to the conditions attending tensile loading in an ordinary testing machine. Preliminary calculations show that the crack propagation and arrest events in contoured DCB specimens are very similar to those calculated for regular DCB-specimens for comparable initiation conditions. In both cases the calculated K/sub Ia/-values are between 44 and 100 percent of K/sub ID,min/ and show a systematic variation with the initiation K/sub Q/-level. In contrast with stiff wedge loading, which favors a continuous event, the calculations for rectangular and contoured DCB specimens in series with an idealized testing machine load train display one or more halts and restarts before the final arrest. A series of experiments designed to distinguish between the K/sub D/ and K/sub a/ approaches to predicting crack arrest are described. Studies of the effect of side grooves in rectangular DCB specimens confirm that grooves with depths representing up to 60 percent of the cross section have no significant effect on either K/sub ID/ or K/sub Ia/ measurements. (auth)

  1. Decision for counting condition of radioactive waste activities measuring by Ludlum detector

    International Nuclear Information System (INIS)

    Bambang-Purwanto

    2000-01-01

    Radioactive waste must be measured for activity before being released to the environment. The measurement is important so that the waste can be assigned the proper management route: waste with activity above the threshold limit value must be processed, while waste with activity below the threshold limit value can be released to the environment. Activity measurements of solid and liquid radioactive waste were carried out with (total, β, γ) Ludlum detectors connected to a Mode-1000 scaler counter. Before the solid waste activity was measured, the optimal counting conditions were determined: sample weight 3.5 gram, heating temperature of 125 °C and heating time of 60 minutes. The measured activities range from (0.68-0.71)×10⁻¹ μCi/gram for the total detector, (0.24-0.25)×10⁻¹ μCi/gram for the β detector and (0.35-0.37) μCi/gram for the γ detector

  2. Optimizing culture conditions for production of intra and extracellular inulinase and invertase from Aspergillus niger ATCC 20611 by response surface methodology (RSM).

    Science.gov (United States)

    Dinarvand, Mojdeh; Rezaee, Malahat; Foroughi, Majid

    The aim of this study was to obtain a model that maximizes growth and production of inulinase and invertase by Aspergillus niger ATCC 20611, employing response surface methodology (RSM). RSM with a five-variable, three-level central composite design (CCD) was employed to optimize the medium composition. Results showed that the experimental data could be appropriately fitted to a second-order polynomial model with a coefficient of determination (R²) of more than 0.90 for all responses. This model adequately explained the data variation and represented the actual relationships between the parameters and responses. The pH and temperature of the cultivation medium were the most significant variables, while the effects of inoculum size and agitation speed were slightly lower. Intracellular and extracellular inulinase and invertase production and biomass content increased 10-32 fold in the medium optimized by RSM (pH 6.5, temperature 30 °C, inoculum size 6% (v/v) and agitation speed 150 rpm) compared with the medium optimized through the one-factor-at-a-time method. The process development and intensification for simultaneous production of intra- and extracellular inulinase (exo- and endo-inulinase) and invertase from A. niger could be used for industrial applications. Copyright © 2017 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.
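
    The second-order polynomial fit at the core of RSM can be sketched as an ordinary least-squares fit of a two-factor quadratic surface followed by an R² check. The factor ranges and response data below are synthetic, with an optimum planted near the reported pH 6.5 and 30 °C; they are not the study's measurements.

```python
import numpy as np

# Sketch of an RSM second-order model for two factors (pH, temperature):
# y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2

def design_matrix(x1, x2):
    """Second-order polynomial design matrix for two factors."""
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1 * x2, x1 ** 2, x2 ** 2])

rng = np.random.default_rng(1)
ph = rng.uniform(5.0, 8.0, 30)
temp = rng.uniform(25.0, 35.0, 30)
# Hypothetical enzyme-activity response, quadratic around (6.5, 30), noisy
y = 100 - 8 * (ph - 6.5) ** 2 - 0.5 * (temp - 30) ** 2 \
    + rng.normal(0, 1, 30)

X = design_matrix(ph, temp)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
r2 = 1 - residuals.var() / y.var()
print(r2 > 0.90)  # True: a well-fitted second-order model, as in the paper
```

    A full five-variable CCD works the same way, with the design matrix extended to all linear, interaction and squared terms, and the fitted surface then maximized to locate the optimal medium composition.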

  3. A New Methodology for Open Pit Slope Design in Karst-Prone Ground Conditions Based on Integrated Stochastic-Limit Equilibrium Analysis

    Science.gov (United States)

    Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui

    2016-07-01

    Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises drill core data collection, karst cave stochastic model generation, SLIDE simulation and bisection-method optimization. Borehole investigations are performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution model, whereas the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and the carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
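
    The two sampling steps named above are standard Monte Carlo techniques. A minimal sketch (function names are hypothetical and the bounded test density stands in for the non-standard carbonatite distribution; the KCSMG code itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_cave_length(mean_length, n):
    """Inverse-transform sampling of a negative exponential distribution:
    F(x) = 1 - exp(-x/mean)  =>  x = -mean * ln(1 - U)."""
    u = rng.random(n)
    return -mean_length * np.log(1.0 - u)

def sample_acceptance_rejection(pdf, x_max, pdf_max, n):
    """Acceptance-rejection sampling for a length distribution that
    follows no standard model (as reported for the carbonatite lengths).
    `pdf` is any density supported on [0, x_max] and bounded by `pdf_max`."""
    out = []
    while len(out) < n:
        x = rng.uniform(0.0, x_max)       # propose uniformly on the support
        if rng.random() * pdf_max <= pdf(x):
            out.append(x)                 # accept in proportion to the density
    return np.array(out)
```

    Sampled lengths would then be assembled into a stochastic cave model and passed to the limit-equilibrium slope analysis.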

  4. Conditionally reprogrammed cells (CRC) methodology does not allow the in vitro expansion of patient-derived primary and metastatic lung cancer cells.

    Science.gov (United States)

    Sette, Giovanni; Salvati, Valentina; Giordani, Ilenia; Pilozzi, Emanuela; Quacquarini, Denise; Duranti, Enrico; De Nicola, Francesca; Pallocca, Matteo; Fanciulli, Maurizio; Falchi, Mario; Pallini, Roberto; De Maria, Ruggero; Eramo, Adriana

    2018-07-01

    Availability of tumor and non-tumor patient-derived models would promote the development of more effective therapeutics for non-small cell lung cancer (NSCLC). Recently, the conditionally reprogrammed cells (CRC) methodology demonstrated exceptional potential for the expansion of epithelial cells from patient tissues. However, the possibility of expanding patient-derived lung cancer cells using CRC protocols is controversial. Here, we used the CRC approach to expand cells from non-tumoral and tumor biopsies of patients with primary or metastatic NSCLC as well as pulmonary metastases of colorectal or breast cancers. CRC cultures were obtained from both tumor and non-malignant tissues with extraordinarily high efficiency. Tumor cells were tracked in vitro through tumorigenicity assays and monitoring of tumor-specific genetic alterations and marker expression. Cultures were composed of EpCAM+ lung epithelial cells lacking tumorigenic potential. NSCLC biopsy-derived cultures rapidly lost patient-specific genetic mutations or tumor antigens. Similarly, pulmonary metastases of colon or breast cancer generated CRC cultures of lung epithelial cells. All CRC cultures examined displayed epithelial lung stem cell phenotype and function. In contrast, brain metastatic lung cancer biopsies failed to generate CRC cultures. In conclusion, patient-derived primary and metastatic lung cancer cells were negatively selected under CRC conditions, limiting the expansion to non-malignant lung epithelial stem cells from either tumor or non-tumor tissue sources. Thus, the CRC approach cannot be applied for direct therapeutic testing of patient lung tumor cells, as the tumor-derived CRC cultures are composed of (non-tumoral) airway basal cells. © 2018 UICC.

  5. Patient empowerment in long-term conditions: development and preliminary testing of a new measure

    Science.gov (United States)

    2013-01-01

    Background Patient empowerment is viewed by policy makers and health care practitioners as a mechanism to help patients with long-term conditions better manage their health and achieve better outcomes. However, assessing the role of empowerment is dependent on effective measures of empowerment. Although many measures of empowerment exist, no measure has been developed specifically for patients with long-term conditions in the primary care setting. This study presents preliminary data on the development and validation of such a measure. Methods We conducted two empirical studies. Study one was an interview study to understand empowerment from the perspective of patients living with long-term conditions. Qualitative analysis identified dimensions of empowerment, and the qualitative data were used to generate items relating to these dimensions. Study two was a cross-sectional postal study involving patients with different types of long-term conditions recruited from general practices. The survey was conducted to test and validate our new measure of empowerment. Factor analysis and regression were performed to test scale structure, internal consistency and construct validity. Results Sixteen predominantly elderly patients with different types of long-term conditions described empowerment in terms of five dimensions (identity, knowledge and understanding, personal control, personal decision-making, and enabling other patients). One hundred and ninety-seven survey responses were received from mainly older white females, with relatively low levels of formal education, with the majority retired from paid work. Almost half of the sample reported cardiovascular, joint or diabetes long-term conditions. Factor analysis identified a three-factor solution (positive attitude and sense of control, knowledge and confidence in decision making, and enabling others), although the structure lacked clarity. A total empowerment score across all items showed acceptable levels of internal
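
    Internal consistency of such a scale is conventionally summarized by Cronbach's alpha. A minimal sketch of that computation (the abstract does not name the exact statistic, so treating it as alpha is an assumption):

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency of a multi-item scale.
    `items`: (n_respondents, n_items) array of item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1.0 - item_var / total_var)
```

    Values near 1 indicate items measuring a common construct; values near 0 indicate unrelated items.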

  6. Absolute gravity measurements at three sites characterized by different environmental conditions using two portable ballistic gravimeters

    Science.gov (United States)

    Greco, Filippo; Biolcati, Emanuele; Pistorio, Antonio; D'Agostino, Giancarlo; Germak, Alessandro; Origlia, Claudio; Del Negro, Ciro

    2015-03-01

    The performance of two absolute gravimeters at three different sites in Italy between 2009 and 2011 is presented. The measurements of the gravity acceleration g were performed using the absolute gravimeters Micro-g LaCoste FG5#238 and the INRiM prototype IMGC-02, which represent the state of the art in ballistic gravimeter technology (relative uncertainty of a few parts in 10⁹). For the comparison, the measured g values were reported at the same height by means of the vertical gravity gradient estimated at each site with relative gravimeters. The consistency and reliability of the gravity observations, as well as the performance and efficiency of the instruments, were assessed by measurements made at sites characterized by different logistics and environmental conditions. Furthermore, the various factors affecting the measurements and their uncertainty were thoroughly investigated. The measurements showed good agreement, with the minimum and maximum differences being 4.0 and 8.3 μGal. The normalized errors are much lower than 1, ranging between 0.06 and 0.45, confirming the compatibility of the results. This excellent agreement can be attributed to several factors, including the good working order of the gravimeters and the correct setup and use of the instruments in different conditions. These results can contribute to the standardization of absolute gravity surveys, particularly for applications in geophysics, volcanology and other branches of geosciences, enabling a good trade-off between uncertainty and efficiency of gravity measurements.
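
    The normalized error quoted above is the standard degree-of-equivalence statistic for comparing two instruments: the difference between the two results divided by the combined uncertainty. A minimal sketch (the numeric values in the check are illustrative, not the paper's data):

```python
import math

def normalized_error(g1, u1, g2, u2):
    """Normalized error between two measurements g1, g2 with
    (expanded) uncertainties u1, u2:
        En = |g1 - g2| / sqrt(u1**2 + u2**2)
    |En| < 1 indicates the two results are compatible."""
    return abs(g1 - g2) / math.sqrt(u1 ** 2 + u2 ** 2)
```

    Applied to the two gravimeters at each site, En values of 0.06-0.45 (well below 1) are what the abstract reports as confirming compatibility.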

  7. Development of a methodology to measure the effect of ergot alkaloids on forestomach motility using real-time wireless telemetry

    Science.gov (United States)

    Egert, Amanda; Klotz, James; McLeod, Kyle; Harmon, David

    2014-10-01

    The objectives of these experiments were to characterize rumen motility patterns of cattle fed once daily using a real-time wireless telemetry system, determine when to measure rumen motility with this system, and determine the effect of ruminal dosing of ergot alkaloids on rumen motility. Ruminally cannulated Holstein steers (n = 8) were fed a basal diet of alfalfa cubes once daily. Rumen motility was measured by monitoring real-time pressure changes within the rumen using wireless telemetry and pressure transducers. Experiment 1 consisted of three 24-h rumen pressure collections beginning immediately after feeding. Data were recorded, stored, and analyzed using iox2 software and the rhythmic analyzer. All motility variables differed (P content samples were taken on d 15. Baseline (P = 0.06) and peak (P = 0.04) pressure were lower for E+ steers. Water intake tended (P = 0.10) to be less for E+ steers during the first 8-h period after feeding. The E+ seed treatment at this dosage under thermoneutral conditions did not significantly affect rumen motility, ruminal fill, or dry matter of rumen contents.

  8. The bounds on tracking performance utilising a laser-based linear and angular sensing and measurement methodology for micro/nano manipulation

    International Nuclear Information System (INIS)

    Clark, Leon; Shirinzadeh, Bijan; Tian, Yanling; Zhong, Yongmin

    2014-01-01

    This paper presents an analysis of the tracking performance of a planar three degrees of freedom (DOF) flexure-based mechanism for micro/nano manipulation, utilising a tracking methodology for the measurement of coupled linear and angular motions. The methodology permits trajectories over a workspace with large angular range through the reduction of geometric errors. However, when combining this methodology with feedback control systems, the accuracy of performed manipulations can only be stated within the bounds of the uncertainties in measurement. The dominant sources of error and uncertainty within each sensing subsystem are therefore identified, which leads to a formulation of the measurement uncertainty in the final system outputs, in addition to methods of reducing their magnitude. Specific attention is paid to the analysis of the vision-based subsystem utilised for the measurement of angular displacement. Furthermore, a feedback control scheme is employed to minimise tracking errors, and the coupling of certain measurement errors is shown to have a detrimental effect on the controller operation. The combination of controller tracking errors and measurement uncertainty provides the bounds on the final tracking performance. (paper)

  9. Methodology and measures for preventing unacceptable flow-accelerated corrosion thinning of pipelines and equipment of NPP power generating units

    Science.gov (United States)

    Tomarov, G. V.; Shipkov, A. A.; Lovchev, V. N.; Gutsev, D. F.

    2016-10-01

    Problems of metal flow-accelerated corrosion (FAC) in the pipelines and equipment of the condensate-feeding and wet-steam paths of NPP power-generating units (PGU) are examined. The goals, objectives, and main principles of the methodology for implementing an integrated program of AO Concern Rosenergoatom for preventing unacceptable FAC thinning and increasing the operational flow-accelerated corrosion resistance of NPP equipment and pipelines (EaP) are formulated (hereinafter, the Program). The role and potential of Russian software packages for evaluating and predicting the FAC rate are shown in solving practical problems of timely detection of unacceptable FAC thinning in elements of the EaP of the secondary circuit of NPP PGUs. Information is given concerning the structure, properties, and functions of the software systems for plant personnel support in the monitoring and planning of the in-service inspection of FAC thinning of elements of pipelines and equipment of the secondary circuit of NPP PGUs, which have been created and implemented at some Russian NPPs equipped with VVER-1000, VVER-440, and BN-600 reactors. It is noted that one of the most important practical results of the personnel-support software packages with respect to flow-accelerated corrosion is the identification of elements at risk of intense local FAC thinning. Examples are given of successful practice at some Russian NPPs in using these software systems for early detection of secondary-circuit pipeline elements with FAC thinning close to an unacceptable level. Intermediate results of work on the Program are presented, and new tasks set in 2012 as part of the updated Program are outlined. The prospects of the developed methods and tools within the Program measures at the design and construction stages of NPP PGUs are discussed. The main directions of the work on solving the problems of flow

  10. Experimental measurement of compressibility coefficients of synthetic sandstone in hydrostatic conditions

    International Nuclear Information System (INIS)

    Asaei, H; Moosavi, M

    2013-01-01

    For the characterization of the mechanical behavior of porous media in elastic conditions, the theory of poroelasticity is used. The number of poroelastic coefficients is greater in elastic conditions because of the complexity of porous media. The laboratory measurement of poroelastic coefficients needs a system that can control and measure the variables of poroelasticity. In this paper, experimental measurements of these coefficients are presented. Laboratory tests are performed using a system designed by the authors. Laboratory hydrostatic tests are performed on cylindrical samples in drained, pore pressure loading, undrained and dry conditions. Compressibilities (bulk and pore compressibility), effective stress and Skempton coefficients are measured by these tests. Samples are made of a composition (sand and cement) and are made by a compaction process synthetically. Calibration tests are performed for the setup to identify possible errors in the system and to correct the results of the main tests. This is done by performing similar compressibility tests at each stress level on a cylindrical steel sample (5.47 mm in diameter) with a longitudinal hole along it (hollow cylinder). A steel sample is used to assume an incompressible sample. The results of the tests are compared with the theory of poroelasticity and the obtained graphs and their errors are analyzed. This study shows that the results of the drained and pore pressure loading tests are compatible with poroelastic formulation, while the undrained results have errors because of extra fluid volume in the pore pressure system and calibration difficulties. (paper)

  11. A Novel Approach to Measuring Muscle Mechanics in Vehicle Collision Conditions

    Directory of Open Access Journals (Sweden)

    Simon Krašna

    2017-06-01

    The aim of the study was to evaluate a novel approach to measuring neck muscle load and activity in vehicle collision conditions. A series of sled tests was performed on 10 healthy volunteers at three severity levels to simulate low-severity frontal impacts. Electrical activity (electromyography, EMG) and muscle mechanical tension were measured bilaterally on the upper trapezius. A novel mechanical contraction (MC) sensor was used to measure the tension on the muscle surface. The neck extensor loads were estimated based on the inverse dynamics approach. The results showed a strong linear correlation (Pearson's coefficient = 0.821) between the estimated neck muscle load and the muscle tension measured with the MC sensor. The peak of the estimated neck muscle force lagged the peak MC sensor signal by 0.2 ± 30.6 ms on average, compared with an average delay of 61.8 ± 37.4 ms relative to the peak EMG signal. The observed differences between the EMG and MC sensor signals indicate that the MC sensor offers additional insight into the analysis of neck muscle load and activity in impact conditions. This approach enables a more detailed assessment of the muscle-tendon complex load of a vehicle occupant in pre-impact and impact conditions.
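
    The two quantities reported above, a Pearson correlation between signals and a delay between signal peaks, are straightforward to compute. A minimal sketch (function names are hypothetical; the synthetic Gaussian pulses in the check merely stand in for force and sensor traces):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson linear correlation between two equally sampled signals."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def peak_delay_ms(sig_a, sig_b, fs):
    """Delay of sig_a's peak relative to sig_b's peak, in milliseconds,
    for signals sampled at fs Hz."""
    return (np.argmax(sig_a) - np.argmax(sig_b)) * 1000.0 / fs
```

    Averaging the per-trial peak delays (and their spread) over all sled runs yields figures of the form "0.2 ± 30.6 ms" reported in the abstract.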

  12. Measuring coral calcification under ocean acidification: methodological considerations for the 45Ca-uptake and total alkalinity anomaly technique

    Directory of Open Access Journals (Sweden)

    Stephanie Cohen

    2017-09-01

    As the oceans become less alkaline due to rising CO2 levels, deleterious consequences are expected for calcifying corals. Predicting how coral calcification will be affected by on-going ocean acidification (OA) requires an accurate assessment of CaCO3 deposition and an understanding of the relative importance that decreasing calcification and/or increasing dissolution play for the overall calcification budget of individual corals. Here, we assessed the compatibility of the 45Ca-uptake and total alkalinity (TA) anomaly techniques as measures of gross and net calcification (GC and NC, respectively) to determine coral calcification at pHT 8.1 and 7.5. Considering the differing buffering capacity of seawater at both pH values, we were also interested in how strongly coral calcification alters the seawater carbonate chemistry under prolonged incubation in sealed chambers, potentially interfering with physiological functioning. Our data indicate that NC estimates by TA are erroneously ∼5% and ∼21% higher than GC estimates from 45Ca for ambient and reduced pH, respectively. Considering also previous data, we show that the consistent discrepancy between both techniques across studies is not constant, but largely depends on the absolute value of CaCO3 deposition. Deriving rates of coral dissolution from the difference between NC and GC was not possible, and we advocate a more direct approach for the future by simultaneously measuring skeletal calcium influx and efflux. Substantial changes in carbonate system parameters for incubation times beyond two hours in our experiment demonstrate the necessity to test and optimize experimental incubation setups when measuring coral calcification in closed systems, especially under OA conditions.
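
    The TA anomaly estimate of net calcification rests on the 2:1 stoichiometry between alkalinity loss and CaCO3 deposition: each mole of CaCO3 precipitated removes two equivalents of alkalinity. A minimal sketch (parameter names and the normalization to surface area are assumptions for illustration, not the authors' exact protocol):

```python
def net_calcification(ta_initial, ta_final, seawater_kg, hours, area_cm2):
    """Total alkalinity anomaly technique.
    TA in umol/kg; returns net calcification in umol CaCO3 per cm^2 per hour.
    NC = -(delta TA)/2 * seawater mass, normalized to time and coral area."""
    delta_ta = ta_final - ta_initial           # umol/kg (negative if calcifying)
    nc_umol = -0.5 * delta_ta * seawater_kg    # umol CaCO3 deposited
    return nc_umol / (hours * area_cm2)
```

    The 45Ca technique instead measures gross calcium influx into the skeleton directly; comparing the two outputs is exactly the GC-versus-NC comparison the study performs.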

  13. Measuring cognitive task demands using dual task methodology, subjective self-ratings, and expert judgments : A Validation Study

    NARCIS (Netherlands)

    Révész, Andrea; Michel, Marije; Gilabert, Roger

    2016-01-01

    This study explored the usefulness of dual-task methodology, self-ratings, and expert judgements in assessing task-generated cognitive demands as a way to provide validity evidence for manipulations of task complexity. The participants were 96 students and 61 ESL teachers. The students, 48 English

  14. Measuring the Differences between Traditional Learning and Game-Based Learning Using Electroencephalography (EEG) Physiologically Based Methodology

    Science.gov (United States)

    Chen, Ching-Huei

    2017-01-01

    Students' cognitive states can reflect a learning experience that results in engagement in an activity. In this study, we used electroencephalography (EEG) physiologically based methodology to evaluate students' levels of attention and relaxation, as well as their learning performance within a traditional and game-based learning context. While no…

  15. Measuring Cognitive Task Demands Using Dual-Task Methodology, Subjective Self-Ratings, and Expert Judgments: A Validation Study

    Science.gov (United States)

    Revesz, Andrea; Michel, Marije; Gilabert, Roger

    2016-01-01

    This study explored the usefulness of dual-task methodology, self-ratings, and expert judgments in assessing task-generated cognitive demands as a way to provide validity evidence for manipulations of task complexity. The participants were 96 students and 61 English as a second language (ESL) teachers. The students, 48 English native speakers and…

  16. Importance of methodological standardization for the ektacytometric measures of red blood cell deformability in sickle cell anemia

    NARCIS (Netherlands)

    Renoux, Céline; Parrow, Nermi; Faes, Camille; Joly, Philippe; Hardeman, Max; Tisdale, John; Levine, Mark; Garnier, Nathalie; Bertrand, Yves; Kebaili, Kamila; Cuzzubbo, Daniela; Cannas, Giovanna; Martin, Cyril; Connes, Philippe

    2016-01-01

    Red blood cell (RBC) deformability is severely decreased in patients with sickle cell anemia (SCA), which plays a role in the pathophysiology of the disease. However, investigation of RBC deformability from SCA patients demands careful methodological considerations. We assessed RBC deformability by

  17. Measurements of void fraction in a heated tube in the rewetting conditions

    International Nuclear Information System (INIS)

    Freitas, R.L.

    1983-01-01

    The methods of void fraction measurement by transmission and scattering of cold, thermal and epithermal neutrons were studied with cylindrical aluminium pieces simulating the steam. A wide range of void fractions found in the wet zone was examined, and particular attention was given to the sensitivity of the method, mainly at high void fractions. Several aspects of the measurement techniques were analyzed, such as the effect of the radial phase distribution, neutron energy, water temperature, and the axial void gradient. The thermal neutron scattering measurement technique was used to measure the axial profile of void fraction in a steady two-phase flow in which the pressure, mass velocity and heat flux are representative of rewetting conditions. Experimental results are presented and compared with different void fraction models. (E.G.) [pt
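
    For the transmission method, a one-beam exponential-attenuation model relates the void fraction to the logarithms of the count rates recorded through the channel. A minimal sketch of this textbook relation (not necessarily the calibration actually used in the study):

```python
import math

def void_fraction(count, count_full, count_empty):
    """Single-beam attenuation relation used in neutron/gamma densitometry.
    With exponential attenuation, the void fraction linearly interpolates
    the log-counts between the liquid-full and the empty (all-gas) channel:
        alpha = ln(N / N_full) / ln(N_empty / N_full)
    `count`        : transmitted count rate through the two-phase mixture
    `count_full`   : count rate with the channel full of liquid (alpha = 0)
    `count_empty`  : count rate with the channel empty / all gas (alpha = 1)."""
    return math.log(count / count_full) / math.log(count_empty / count_full)
```

    The two reference counts are obtained from calibration runs, which is the role the cylindrical aluminium mock-ups play in the experiments above.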

  18. Measurement of stress distributions in truck tyre contact patch in real rolling conditions

    Science.gov (United States)

    Anghelache, Gabriel; Moisescu, Raluca

    2012-12-01

    Stress distributions on three orthogonal directions have been measured across the contact patch of truck tyres using the complex measuring system that contains a transducer assembly with 30 sensing elements placed in the road surface. The measurements have been performed in straight line, in real rolling conditions. Software applications for calibration, data acquisition, and data processing were developed. The influence of changes in inflation pressure and rolling speed on the shapes and sizes of truck tyre contact patch has been shown. The shapes and magnitudes of normal, longitudinal, and lateral stress distributions, measured at low speed, have been presented and commented. The effect of wheel toe-in and camber on the stress distribution results was observed. The paper highlights the impact of the longitudinal tread ribs on the shear stress distributions. The ratios of stress distributions in the truck tyre contact patch have been computed and discussed.

  19. Measurement of grid spacer's enhanced droplet cooling under reflood condition in a PWR by LDA

    International Nuclear Information System (INIS)

    Lee, S.L.; Sheen, H.J.; Cho, S.K.; Issapour, I.; Hua, S.Q.

    1984-01-01

    Reported is an experiment designed for the measurements of grid spacer's enhanced droplet cooling under reflood condition at elevated temperatures in a steam environment. The flow channel consists of a simulated 1.60m-long pressurized water reactor (PWR) fuel rod bundle of 2 x 2 electrically heated rods. Embedded thermocouples are used to measure the rod cladding temperature at various axial levels and an unshielded Chromel-Alumel thermocouple sheathed by a small Inconel tube is traversed in the center of the subchannel to measure the temperatures of the water and steam coolant phases at various levels. The droplet dynamics across the grid spacer is directly obtained by a special laser-Doppler anemometry technique for the in situ simultaneous measurement of velocity and size of droplets through two observation windows on the test channel, one immediately before and one immediately after the grid spacer. Some results are presented and analyzed

  20. Differences in displayed pump flow compared to measured flow under varying conditions during simulated cardiopulmonary bypass.

    LENUS (Irish Health Repository)

    Hargrove, M

    2008-07-01

    Errors in blood flow delivery due to shunting have been reported to reduce flow by, potentially, up to 40-83% during cardiopulmonary bypass. The standard roller pump measures revolutions per minute and, using a calibration factor for the tubing size, calculates and displays flow accordingly. We compared displayed roller-pump flow with ultrasonically measured flow to ascertain whether measured flow correlated with the heart-lung pump flow reading. Flows were compared under varying conditions of pump run duration, temperature, viscosity, arterial/venous loop configuration, occlusiveness, outlet pressure, use of silicone or polyvinyl chloride (PVC) tubing in the roller race, different tubing diameters, and use of a venous vacuum-drainage device.
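
    The displayed-versus-measured comparison reduces to simple arithmetic. A minimal sketch (the calibration factor and flow values are illustrative; actual factors depend on tubing size and occlusion setting):

```python
def displayed_flow_lpm(rpm, ml_per_rev):
    """Roller pump display: flow = revolutions per minute times the
    tubing calibration factor (mL displaced per revolution), in L/min."""
    return rpm * ml_per_rev / 1000.0

def flow_error_percent(displayed_lpm, measured_lpm):
    """Shortfall of ultrasonically measured flow relative to the
    flow shown on the heart-lung pump display, as a percentage."""
    return 100.0 * (displayed_lpm - measured_lpm) / displayed_lpm
```

    A shunt or an under-occlusive roller leaves the rpm (and hence the display) unchanged while the delivered flow falls, which is how errors of the magnitude cited above can go unnoticed.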

  1. Performance of New and Upgraded Detectors for Luminosity and Beam Condition Measurement at CMS

    CERN Document Server

    Leonard, Jessica Lynn

    2015-01-01

    The beam monitoring and luminosity systems of the CMS experiment are enhanced by several new and upgraded sub-detectors to match the challenges of the LHC operation and physics program at increased energy and higher luminosity. A dedicated pixelated luminosity telescope is installed for a fast and precise luminosity measurement. This detector measures coincidences between several three-layer telescopes of silicon pixel detectors to arrive at luminosity for each colliding LHC bunch pair. An upgraded fast beam conditions monitor measures the particle flux using single crystalline diamond sensors. It is equipped with a dedicated front-end ASIC produced in 130 nm CMOS technology. The excellent time resolution is used to separate collision products from machine induced background, thus serving as online luminosity measurement. A new beam-halo monitor at larger radius exploits Cerenkov light from fused silica to provide direction sensitivity and excellent time resolution to separate incoming and outgoing particles....

  2. Quantum reversibility is relative, or does a quantum measurement reset initial conditions?

    Science.gov (United States)

    Zurek, Wojciech H

    2018-07-13

    I compare the role of information in classical and quantum dynamics by examining the relation between information flows in measurements and the ability of observers to reverse evolutions. I show that in Newtonian dynamics reversibility is unaffected by the observer's retention of the information about the measurement outcome. By contrast, even though quantum dynamics is unitary and hence reversible, reversing a quantum evolution that led to a measurement becomes, in principle, impossible for an observer who keeps the record of its outcome. Thus, quantum irreversibility can result from the information gain rather than just its loss, rather than just an increase of the (von Neumann) entropy. Recording the outcome of the measurement resets, in effect, the initial conditions within the observer's (branch of) the Universe. Nevertheless, I also show that the observer's friend, an agent who knows what measurement was successfully carried out and can confirm that the observer knows the outcome but resists his curiosity and does not find out the result, can, in principle, undo the measurement. This relativity of quantum reversibility sheds new light on the origin of the arrow of time and elucidates the role of information in classical and quantum physics. Quantum discord appears as a natural measure of the extent to which dissemination of information about the outcome affects the ability to reverse the measurement. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Author(s).

  3. Estimation of Body Weight from Body Size Measurements and Body Condition Scores in Dairy Cows

    DEFF Research Database (Denmark)

    Enevoldsen, Carsten; Kristensen, T.

    1997-01-01

    The objective of this study was to evaluate the use of hip height and width, body condition score, and relevant demographic information to predict body weight (BW) of dairy cows. Seven regression models were developed from data from 972 observations of 554 cows. Parity, hip height, hip width, and body condition score were consistently associated with BW. The coefficients of multiple determination varied from 80 to 89%. The number of significant terms and the parameter estimates of the models differed markedly among groups of cows. Apparently, these differences were due to breed and feeding regimen. Results from this study indicate that a reliable model for estimating BW of very different dairy cows maintained in a wide range of environments can be developed using body condition score, demographic information, and measurements of hip height and hip width. However, for management purposes
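
    A regression of this kind can be sketched with least squares. The predictor names follow the abstract; the function name and the synthetic coefficients in the check are invented for illustration:

```python
import numpy as np

def fit_bw_model(hip_height, hip_width, condition_score, parity, bw):
    """Least-squares fit of body weight (BW) on hip height, hip width,
    body condition score and parity; returns (coefficients, R^2)."""
    X = np.column_stack([np.ones(len(bw)), hip_height, hip_width,
                         condition_score, parity])
    beta, *_ = np.linalg.lstsq(X, bw, rcond=None)
    resid = bw - X @ beta
    r2 = 1.0 - (resid ** 2).sum() / ((bw - bw.mean()) ** 2).sum()
    return beta, r2
```

    Fitting such a model separately per breed/feeding group reproduces the study's observation that coefficients differ markedly among groups even when overall R² stays in the 80-89% range.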

  4. Electrode size and boundary condition independent measurement of the effective piezoelectric coefficient of thin films

    Directory of Open Access Journals (Sweden)

    M. Stewart

    2015-02-01

    The determination of the piezoelectric coefficient of thin films using interferometry is hindered by bending contributions. Using finite element analysis (FEA) simulations, we show that the Lefki and Dormans approximations using either single- or double-beam measurements cannot be used with finite top electrode sizes. We introduce a novel method for characterising piezoelectric thin films which uses a differential measurement over the discontinuity at the electrode edge as an internal reference, thereby eliminating bending contributions. This step height is shown to be electrode-size and boundary-condition independent. An analytical expression is derived which gives good agreement with FEA predictions of the step height.

  5. Measurement of Cue-Induced Craving in Human Methamphetamine- Dependent Subjects New Methodological Hopes for Reliable Assessment of Treatment Efficacy

    Directory of Open Access Journals (Sweden)

    Zahra Alam Mehrjerdi

    2011-09-01

    Methamphetamine (MA) is a highly addictive psychostimulant drug with serious impacts on individuals at various levels. Exposure to methamphetamine-associated cues in the laboratory can elicit measurable craving and autonomic reactivity in most individuals with methamphetamine dependence, and this cue reactivity can model how craving would result in continued drug-seeking behaviors and relapse in real environments, but study of this notion is still limited. In this brief article, the authors review laboratory-based studies of cue-induced craving in human methamphetamine-dependent subjects. Craving for methamphetamine is elicited by a variety of methods in the laboratory, such as paraphernalia, verbal and visual cues, and imagery scripts. We review studies applying different cues as the main methods of craving induction in laboratory settings. The reviewed literature provides strong evidence that craving for methamphetamine in laboratory conditions is significantly evoked by different cues. Cue-induced craving has important treatment and clinical implications for psychotherapists and clinicians, considering the role of induced craving in evoking an intense desire or urge to use methamphetamine during or after a period of a successful craving-prevention program. Elicited craving for methamphetamine in laboratory conditions is significantly influenced by methamphetamine-associated cues and results in a rapid craving response toward methamphetamine use. This can serve as a core element of laboratory-based assessment of treatment efficacy for methamphetamine-dependent patients. In addition, laboratory settings for studying craving can bridge the gap between less reliable preclinical animal-model studies and budget-demanding randomized clinical trials.

  6. Comparative measurements of proton dechanneling in silicon under channeling, blocking and double alignment conditions

    International Nuclear Information System (INIS)

    Kerkow, H.; Pietsch, H.; Taeubner, F.

    1980-01-01

    Backscattering yields of 300 keV protons are measured under channeling (χ_ch), blocking (χ_bl) and double alignment (χ_da) conditions on (111) silicon crystals. It was established that the relation χ_ch·χ_bl = χ_da is fulfilled within an experimental error of 10% for clean surfaces as well as for vacuum-deposited layers on the crystal surface. (author)

  7. Distant Measurement of Plethysmographic Signal in Various Lighting Conditions Using Configurable Frame-Rate Camera

    Directory of Open Access Journals (Sweden)

    Przybyło Jaromir

    2016-12-01

    Videoplethysmography is currently recognized as a promising noninvasive heart rate measurement method, advantageous for ubiquitous monitoring of humans in natural living conditions. Although the method is considered for application in several areas including telemedicine, sports and assisted living, its dependence on lighting conditions and camera performance has not yet been investigated sufficiently. In this paper we report on research into various image acquisition aspects, including the lighting spectrum, frame rate and compression. In the experimental part, we recorded five video sequences under various lighting conditions (fluorescent artificial light, dim daylight, infrared light, incandescent light bulb) using a programmable frame-rate camera, with a pulse oximeter as the reference. For video sequence-based heart rate measurement we implemented a pulse detection algorithm based on the power spectral density, estimated using Welch's technique. The results showed that lighting conditions and selected video camera settings, including compression and the sampling frequency, influence the heart rate detection accuracy. The average heart rate error varies from 0.35 beats per minute (bpm) for fluorescent light to 6.6 bpm for dim daylight.
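As a rough illustration of the detection approach the abstract describes, the dominant peak of a Welch power spectral density estimate can be mapped to a heart rate. The pulse band, segment length, and the synthetic signal below are assumptions for the sketch, not values from the paper:

```python
import numpy as np
from scipy.signal import welch

def estimate_heart_rate(signal, fs, band=(0.75, 3.0)):
    """Heart rate (bpm) from the dominant spectral peak in the pulse band,
    with the PSD estimated by Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 512))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[mask][np.argmax(psd[mask])]

# Synthetic example: a 72 bpm (1.2 Hz) pulse buried in noise, 30 fps camera
fs = 30.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 1.2 * t) + 0.5 * rng.standard_normal(t.size)
print(estimate_heart_rate(signal, fs))   # close to 72 bpm, within spectral resolution
```

Frequency resolution here is fs/nperseg ≈ 0.06 Hz (about 3.5 bpm), which bounds how precisely the peak can be located.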

  8. Air-water flow measurement for ERVC conditions by LIF/PIV

    International Nuclear Information System (INIS)

    Yoon, Jong Woong; Jeong, Yong Hoon

    2016-01-01

    Critical heat flux (CHF) of the external reactor vessel wall is a safety limit that indicates the integrity of the reactor vessel during such a situation. Many researchers have conducted CHF experiments under IVR-ERVC conditions. However, the flow velocity field, an important factor in the CHF mechanism, has not been studied sufficiently in IVR-ERVC situations. In this study, flow measurements under IVR-ERVC conditions, including the velocity vector field and the liquid velocity, were performed. An air-water two-phase flow loop simulating IVR-ERVC conditions was set up, and the liquid velocity field was measured by the LIF/PIV technique. The experiment was conducted with and without air injection. For the air-water flow experiment, the liquid velocity outside the two-phase boundary layer became higher and the two-phase boundary layer became thinner as the mass flux increased. The velocity data obtained in this study are expected to improve CHF correlations for IVR-ERVC situations.

  9. Air-water flow measurement for ERVC conditions by LIF/PIV

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jong Woong; Jeong, Yong Hoon [KAIST, Daejeon (Korea, Republic of)

    2016-05-15

    Critical heat flux (CHF) of the external reactor vessel wall is a safety limit that indicates the integrity of the reactor vessel during such a situation. Many researchers have conducted CHF experiments under IVR-ERVC conditions. However, the flow velocity field, an important factor in the CHF mechanism, has not been studied sufficiently in IVR-ERVC situations. In this study, flow measurements under IVR-ERVC conditions, including the velocity vector field and the liquid velocity, were performed. An air-water two-phase flow loop simulating IVR-ERVC conditions was set up, and the liquid velocity field was measured by the LIF/PIV technique. The experiment was conducted with and without air injection. For the air-water flow experiment, the liquid velocity outside the two-phase boundary layer became higher and the two-phase boundary layer became thinner as the mass flux increased. The velocity data obtained in this study are expected to improve CHF correlations for IVR-ERVC situations.

  10. Specific absorption rate determination of magnetic nanoparticles through hyperthermia measurements in non-adiabatic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Coïsson, M. [INRIM, strada delle Cacce 91, 10135 Torino (Italy); Barrera, G. [INRIM, strada delle Cacce 91, 10135 Torino (Italy); University of Torino, Chemistry Department, via P. Giuria 7, 10125 Torino (Italy); Celegato, F.; Martino, L.; Vinai, F. [INRIM, strada delle Cacce 91, 10135 Torino (Italy); Martino, P. [Politronica srl, via Livorno 60, 10144 Torino (Italy); Ferraro, G. [Center for Space Human Robotics, Istituto Italiano di Tecnologia - IIT, corso Trento 21, 10129 Torino (Italy); Tiberto, P. [INRIM, strada delle Cacce 91, 10135 Torino (Italy)

    2016-10-01

    An experimental setup for magnetic hyperthermia operating in non-adiabatic conditions is described. A thermodynamic model that takes into account the heat exchanged by the sample with the surrounding environment is developed. A suitable calibration procedure is proposed that allows the experimental validation of the model. The specific absorption rate can then be accurately determined from the measurement of the sample temperature alone at the equilibrium steady state. The setup and the measurement procedure represent a simplification with respect to other systems requiring calorimeters or careful corrections for heat flow. Two families of magnetic nanoparticles, one superparamagnetic and one characterised by larger sizes and static hysteresis, have been characterised as a function of field intensity, and specific absorption rate and intrinsic loss power have been obtained. - Highlights: • Development and thermodynamic modelling of a hyperthermia setup operating in non-adiabatic conditions. • Calibration of the experimental setup and validation of the model. • Accurate measurement of specific absorption rate and intrinsic loss power in non-adiabatic conditions.
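The steady-state heat balance underlying such a setup can be sketched as follows. The loss coefficient and sample values are hypothetical; in practice the coefficient would come from the calibration procedure the authors propose:

```python
def sar_from_steady_state(T_eq, T_env, k_loss, m_np):
    """Specific absorption rate (W/kg) from a non-adiabatic steady state.

    At equilibrium the absorbed field power balances the heat lost to the
    environment:  P = k_loss * (T_eq - T_env),  so  SAR = P / m_np.
    k_loss (W/K) must be obtained from a separate calibration.
    """
    return k_loss * (T_eq - T_env) / m_np

# Hypothetical example: 8 K steady-state rise, 5 mW/K losses, 1 mg of particles
sar = sar_from_steady_state(T_eq=305.0, T_env=297.0, k_loss=5e-3, m_np=1e-6)
print(sar)   # ~4e4 W/kg, i.e. 40 W/g
```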

  11. Chronic condition self-management surveillance: what is and what should be measured?

    Science.gov (United States)

    Ruiz, Sarah; Brady, Teresa J; Glasgow, Russell E; Birkel, Richard; Spafford, Michelle

    2014-06-19

    The rapid growth in chronic disease prevalence, in particular the prevalence of multiple chronic conditions, poses a significant and increasing burden on the health of Americans. Maximizing the use of proven self-management (SM) strategies is a core goal of the US Department of Health and Human Services. Yet, there is no systematic way to assess how much SM or self-management support (SMS) is occurring in the United States. The purpose of this project was to identify appropriate concepts or measures to incorporate into national SM and SMS surveillance. A multistep process was used to identify candidate concepts, assess existing measures, and select high-priority concepts for further development. A stakeholder survey, an environmental scan, subject matter expert feedback, and a stakeholder priority-setting exercise were all used to select the high-priority concepts for development. The stakeholder survey gathered feedback on 32 candidate concepts; 9 concepts were endorsed by more than 66% of respondents. The environmental scan indicated few existing measures that adequately reflected the candidate concepts, and those that were identified were generally specific to a defined condition and not gathered on a population basis. On the basis of the priority setting exercises and environmental scan, we selected 1 concept from each of 5 levels of behavioral influence for immediate development as an SM or SMS indicator. The absence of any available measures to assess SM or SMS across the population highlights the need to develop chronic condition SM surveillance that uses national surveys and other data sources to measure national progress in SM and SMS.

  12. Solubility measurement of iron-selenium compounds under reducing conditions. Research document

    International Nuclear Information System (INIS)

    Kitamura, Akira; Shibata, Masahiro

    2003-03-01

    The chemical behavior of selenium (Se), one of the important elements for the performance assessment of geological disposal of high-level radioactive waste, was investigated under reducing, iron-containing conditions. A washing procedure for an iron diselenide (FeSe₂(cr)) reagent with acidic and basic solutions (0.1 and 1 M HCl and 1 M NaOH) was carried out to purify the FeSe₂ reagent, which was considered to be the solubility-limiting solid for Se under geological disposal conditions. Furthermore, the solubility of FeSe₂(cr) was measured in alkaline solution (pH 11-13) under reducing conditions (Eh vs. SHE: -0.4 to 0 V), and thermodynamic data on equilibrium reactions between Se in solution and the Se precipitate were obtained. The dependence of the solubility values on pH and redox potential (Eh, vs. the standard hydrogen electrode) was best interpreted as indicating that the solubility-limiting solid was not FeSe₂(cr) but Se(cr), and that the aqueous species was SeO₃²⁻ under the present experimental conditions. The equilibrium constant between Se(cr) and SeO₃²⁻ at zero ionic strength was determined and compared with literature values. The chemical behavior of Se under geological disposal conditions is discussed. (author)

  13. Event-related potential components as measures of aversive conditioning in humans.

    Science.gov (United States)

    Bacigalupo, Felix; Luck, Steven J

    2018-04-01

    For more than 60 years, the gold standard for assessing aversive conditioning in humans has been the skin conductance response (SCR), which arises from the activation of the peripheral nervous system. Although the SCR has been proven useful, it has some properties that impact the kinds of questions it can be used to answer. In particular, the SCR is slow, reaching a peak 4-5 s after stimulus onset, and it decreases in amplitude after a few trials (habituation). The present study asked whether the late positive potential (LPP) of the ERP waveform could be a useful complementary method for assessing aversive conditioning in humans. The SCR and LPP were measured in an aversive conditioning paradigm consisting of three blocks in which one color was paired with a loud noise (CS+) and other colors were not paired with the noise (CS-). Participants also reported the perceived likelihood of being exposed to the noise for each color. Both SCR and LPP were significantly larger on CS+ trials than on CS- trials. However, SCR decreased steeply after the first conditioning block, whereas LPP and self-reports were stable over blocks. These results indicate that the LPP can be used to assess aversive conditioning and has several useful properties: (a) it is a direct response of the central nervous system, (b) it is fast, with an onset latency of 300 ms, (c) it does not habituate over time. © 2017 Society for Psychophysiological Research.

  14. Methodological study on exposure date of Tiankeng by AMS measurement of in situ produced cosmogenic {sup 36}Cl

    Energy Technology Data Exchange (ETDEWEB)

    Dong Kejun [China Institute of Atomic Energy, P.O. Box 275(50), Beijing 102413 (China); Li Shizhuo [China Institute of Atomic Energy, P.O. Box 275(50), Beijing 102413 (China); CNNC China North Nuclear Fuel Company Ltd., Baotou 014035 (China); He Ming [China Institute of Atomic Energy, P.O. Box 275(50), Beijing 102413 (China); Sasa, Kimikazu [Tandem Accelerator Complex, University of Tsukuba, Tsukuba, Ibaraki 305-8577 (Japan); Matsushi, Yuki [Disaster Prevention Research Institute, Kyoto University (Japan); Huang Baojian [Institute of Karst Geology, Chinese Academy of Geological Sciences, Guilin 541004 (China); Ruan Xiangdong; Guan Yongjing [College of Physics Science and Technology, Guangxi University, Nanning 530004 (China); Takahashi, Tsutomu [Tandem Accelerator Complex, University of Tsukuba, Tsukuba, Ibaraki 305-8577 (Japan); Sueki, Keisuke [Graduate School of Pure and Applied Sciences, University of Tsukuba (Japan); Li Chaoli; Wu Shaoyong [China Institute of Atomic Energy, P.O. Box 275(50), Beijing 102413 (China); Wang Xianggao [China Institute of Atomic Energy, P.O. Box 275(50), Beijing 102413 (China); Institute of Karst Geology, Chinese Academy of Geological Sciences, Guilin 541004 (China); Shen Hongtao [China Institute of Atomic Energy, P.O. Box 275(50), Beijing 102413 (China); College of Physics and Technology, Guangxi Normal University, Guilin 541004 (China); Nagashima, Yasuo [Tandem Accelerator Complex, University of Tsukuba, Tsukuba, Ibaraki 305-8577 (Japan); Jiang Shan, E-mail: jiangs@ciae.ac.cn [China Institute of Atomic Energy, P.O. Box 275(50), Beijing 102413 (China)

    2013-01-15

    Tiankeng is a typical karst relief of the late Quaternary Period. Studies of the exposure ages of Tiankeng are very important in geographical research to elucidate the formation conditions, the development process, and the features of biological species. ³⁶Cl in the surface layer of the rupture cross-section of a Tiankeng is largely produced by the cosmogenic high-energy neutron induced reactions ⁴⁰Ca(n, αp) and ³⁹K(n, α), and has accumulated since the formation of the Tiankeng. The low-energy neutron reaction ³⁵Cl(n, γ) contributes a small portion of the ³⁶Cl. In this work, the concentration of cosmogenic ³⁶Cl in rock samples taken from Dashiwei Tiankeng, Leye County, Guangxi Zhuang Autonomous Region, China, was measured jointly by the Accelerator Mass Spectrometry (AMS) laboratories of CIAE and the University of Tsukuba in an effort to estimate the formation time (or exposure age) of the Tiankeng. The results show that the exposure age of Dashiwei Tiankeng is about 26 ± 9.6 ka (without erosion correction). The sampling strategy and procedures, experimental set-up, and preliminary results are presented in detail.
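The standard zero-erosion exposure-age inversion implied by such a measurement can be sketched as below. The production rate is a hypothetical placeholder, since the site-specific scaling is not given in the abstract:

```python
import math

CL36_HALF_LIFE = 3.01e5                 # years
LAMBDA = math.log(2) / CL36_HALF_LIFE   # 36Cl decay constant, 1/yr

def exposure_age(N, P):
    """Exposure age (years) from a cosmogenic 36Cl concentration N (atoms/g)
    and a local production rate P (atoms/g/yr), neglecting erosion:
        N = (P / lambda) * (1 - exp(-lambda * t))
    """
    return -math.log(1.0 - LAMBDA * N / P) / LAMBDA

# Hypothetical values chosen only to illustrate the inversion round-trip
P = 20.0                                              # atoms/g/yr, site-specific
N = P * (1 - math.exp(-LAMBDA * 26e3)) / LAMBDA       # concentration after 26 ka
print(f"{exposure_age(N, P) / 1e3:.1f} ka")           # 26.0 ka
```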

  15. Optimization of Microwave-Assisted Extraction Conditions for Five Major Bioactive Compounds from Flos Sophorae Immaturus (Cultivars of Sophora japonica L.) Using Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Jin-Liang Liu

    2016-03-01

    Microwave-assisted extraction was applied to extract rutin, quercetin, genistein, kaempferol, and isorhamnetin from Flos Sophorae Immaturus. Six independent variables, namely solvent type, particle size, extraction frequency, liquid-to-solid ratio, microwave power, and extraction time, were examined. Response surface methodology using a central composite design was employed to optimize the experimental conditions (liquid-to-solid ratio, microwave power, and extraction time), based on the results of single-factor tests, for extracting the five major components of Flos Sophorae Immaturus. Experimental data were fitted to a second-order polynomial equation using multiple regression analysis and were also analyzed using appropriate statistical methods. The optimal extraction conditions were as follows: extraction solvent, 100% methanol; particle size, 100 mesh; extraction frequency, 1; liquid-to-solid ratio, 50:1; microwave power, 287 W; and extraction time, 80 s. A rapid and sensitive ultra-high performance liquid chromatography method coupled with electrospray ionization quadrupole time-of-flight tandem mass spectrometry (ESI-Q-TOF MS/MS) was developed and validated for the simultaneous determination of rutin, quercetin, genistein, kaempferol, and isorhamnetin in Flos Sophorae Immaturus. Chromatographic separation was accomplished on a Kinetex C18 column (100 mm × 2.1 mm, 2.6 μm) at 40 °C within 5 min. The mobile phase consisted of 0.1% aqueous formic acid and acetonitrile (71:29, v/v), with isocratic elution at a flow rate of 0.35 mL/min. The constituents of Flos Sophorae Immaturus were simultaneously identified by ESI-Q-TOF MS/MS in multiple reaction monitoring mode. In the quantitative analysis, all calibration curves showed good linearity (R² > 0.999) within the tested ranges, and mean recoveries ranged from 96.0216% to 101.0601%. The precision determined through intra- and inter-day studies showed an RSD of <2.833%.
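The second-order polynomial fit at the heart of response surface methodology can be sketched as follows, on synthetic data with a known optimum. The factor names, surface, and numbers are illustrative, not the paper's data:

```python
import numpy as np

def quadratic_design(X):
    """Design matrix for a full second-order (RSM) model in the columns of X."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]              # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]         # pure quadratic terms
    cols += [X[:, i] * X[:, j]                       # two-factor interactions
             for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

# Synthetic yield surface with a known optimum at (0.2, -0.1, 0.5) in coded units
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(40, 3))   # coded liquid/solid ratio, power, time
y = (10 - 2 * (X[:, 0] - 0.2) ** 2 - 3 * (X[:, 1] + 0.1) ** 2
     - 1 * (X[:, 2] - 0.5) ** 2) + 0.05 * rng.standard_normal(40)

beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)

# Stationary point of the fitted quadratic: solve grad = 0, i.e. b + 2 B x = 0
b = beta[1:4]
B = np.diag(beta[4:7]).astype(float)
B[0, 1] = B[1, 0] = beta[7] / 2
B[0, 2] = B[2, 0] = beta[8] / 2
B[1, 2] = B[2, 1] = beta[9] / 2
x_opt = np.linalg.solve(-2 * B, b)
print(np.round(x_opt, 2))   # close to the true optimum (0.2, -0.1, 0.5)
```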

  16. Methodology for setup and data processing of mobile air quality measurements to assess the spatial variability of concentrations in urban environments

    International Nuclear Information System (INIS)

    Van Poppel, Martine; Peters, Jan; Bleux, Nico

    2013-01-01

    A case study is presented to illustrate a methodology for mobile monitoring in urban environments. A dataset of UFP, PM2.5 and BC concentrations was collected. We show that repeated mobile measurements can give insight into the spatial variability of pollutants across different micro-environments in a city. Streets of contrasting traffic intensity showed concentrations increased by a factor of 2-3 for UFP and BC; an increase was also observed for PM2.5. The first quartile (P25) of the mobile measurements in an urban background zone appears to be a good estimate of the urban background concentration. The local component of the pollutant concentrations was determined by background correction. The use of background correction reduced the number of runs needed to obtain representative results. The results presented are a first attempt to establish a methodology for the setup and data processing of mobile air quality measurements to assess the spatial variability of concentrations in urban environments. -- Highlights: ► Mobile measurements are used to assess the variability of air pollutants in urban environments. ► PM2.5, BC and UFP concentrations are presented for zones with different traffic characteristics. ► A methodology for background correction based on the mobile measurements is presented. ► The background concentration is estimated as the 25th percentile of the urban background data. ► The minimum number of runs for a representative estimate is reduced after background correction. -- This paper shows that the spatial variability of air pollutants in an urban environment can be assessed by a mobile monitoring methodology including background correction.
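The P25-based background correction can be sketched in a few lines; the concentrations below are hypothetical:

```python
import numpy as np

def local_component(run_concentrations, background_zone_concentrations):
    """Background-correct mobile air quality measurements.

    The urban background is estimated as the 25th percentile (P25) of the
    measurements collected in a background zone and subtracted from each
    street-level value; what remains is the local (traffic) component.
    """
    background = np.percentile(background_zone_concentrations, 25)
    return np.asarray(run_concentrations) - background, background

# Hypothetical UFP-like concentrations (particles/cm^3)
background_zone = [8000, 9500, 7000, 12000, 8500, 7600, 15000, 9000]
street_run = [25000, 40000, 18000]
local, bg = local_component(street_run, background_zone)
print(bg, local)
```

Using a low quantile rather than the mean keeps occasional local plumes inside the background zone from inflating the background estimate.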

  17. Measurement of 2D vector magnetic properties under the distorted flux density conditions

    International Nuclear Information System (INIS)

    Urata, Shinya; Todaka, Takashi; Enokizono, Masato; Maeda, Yoshitaka; Shimoji, Hiroyasu

    2006-01-01

    Under distorted flux density conditions it is very difficult to evaluate the field intensity, because there is no criterion for the measurement. In the linear approximation, the measured field intensity waveform (MFI) is compared with the linear synthesis of the field intensity waveform (LSFI) at each frequency, and it is shown that they are not in good agreement at higher induction. In this paper, we examined the 2D vector magnetic properties excited by a distorted flux density consisting of the 1st (fundamental frequency: 50 Hz), 3rd, and 5th harmonics. An improved linear synthesis of the field intensity waveform (ILSFI) is proposed as a new method of estimating the field intensity, instead of the conventional linear synthesis (LSFI). The usefulness of the proposed ILSFI is demonstrated by comparison with the measured results.

  18. Measurement of 2D vector magnetic properties under the distorted flux density conditions

    Energy Technology Data Exchange (ETDEWEB)

    Urata, Shinya [Department of Electrical and Electronic Engineering, Faculty of Engineering, Oita University, 700 Dannoharu, Oita 870-1192 (Japan)]. E-mail: urata@mag.eee.oita-u.ac.jp; Todaka, Takashi [Department of Electrical and Electronic Engineering, Faculty of Engineering, Oita University, 700 Dannoharu, Oita 870-1192 (Japan); Enokizono, Masato [Department of Electrical and Electronic Engineering, Faculty of Engineering, Oita University, 700 Dannoharu, Oita 870-1192 (Japan); Maeda, Yoshitaka [Department of Electrical and Electronic Engineering, Faculty of Engineering, Oita University, 700 Dannoharu, Oita 870-1192 (Japan); Shimoji, Hiroyasu [Department of Electrical and Electronic Engineering, Faculty of Engineering, Oita University, 700 Dannoharu, Oita 870-1192 (Japan)

    2006-09-15

    Under distorted flux density conditions it is very difficult to evaluate the field intensity, because there is no criterion for the measurement. In the linear approximation, the measured field intensity waveform (MFI) is compared with the linear synthesis of the field intensity waveform (LSFI) at each frequency, and it is shown that they are not in good agreement at higher induction. In this paper, we examined the 2D vector magnetic properties excited by a distorted flux density consisting of the 1st (fundamental frequency: 50 Hz), 3rd, and 5th harmonics. An improved linear synthesis of the field intensity waveform (ILSFI) is proposed as a new method of estimating the field intensity, instead of the conventional linear synthesis (LSFI). The usefulness of the proposed ILSFI is demonstrated by comparison with the measured results.
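The linear superposition idea behind the LSFI can be sketched as follows: under a linearity assumption, H(t) for a distorted flux density is the sum of the H waveforms associated with each harmonic measured separately. The amplitudes and phases here are illustrative, not measured values:

```python
import numpy as np

def linear_synthesis(t, harmonics, f0=50.0):
    """Linear synthesis of a field intensity waveform (LSFI).

    harmonics: list of (order, amplitude, phase) tuples for the
    1st, 3rd, 5th, ... harmonic H components; f0 is the fundamental (Hz).
    """
    return sum(a * np.sin(2 * np.pi * order * f0 * t + ph)
               for order, a, ph in harmonics)

t = np.linspace(0, 0.02, 400, endpoint=False)   # one 50 Hz period
H = linear_synthesis(t, [(1, 100.0, 0.0), (3, 20.0, 0.3), (5, 8.0, -0.1)])
```

The abstract's point is precisely that this superposition breaks down at higher induction, which motivates the improved synthesis (ILSFI).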

  19. Power and loads for wind turbines in yawed conditions. Analysis of field measurements and aerodynamic predictions

    Energy Technology Data Exchange (ETDEWEB)

    Boorsma, K. [ECN Wind Energy, Petten (Netherlands)

    2012-11-15

    A description is given of the work carried out within the framework of the FLOW (Far and Large Offshore Wind) project on single-turbine performance in yawed flow conditions. To this end, both field measurements and calculations with an aerodynamic code are analyzed. The rotors of horizontal-axis wind turbines follow changes in the wind direction for optimal performance, because the power is expected to decrease for badly oriented rotors. Insight into the effects of the yaw angle on performance is therefore important for optimizing the yaw control of each individual turbine. The effect of misalignment on the performance and loads of a single 2.5 MW wind turbine during normal operation is investigated, based on measurements at the ECN Wind Turbine Test Site Wieringermeer (EWTW) from December 2004 until April 2009. The influence of yaw is also studied using a design code, and the results from this design code are compared with wind tunnel measurements.
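A commonly used first-order model for the power loss under yaw, not taken from this report itself, is P = P0·cosⁿ(γ), where n = 3 follows from the reduced axial wind speed component and field data often suggest lower effective exponents:

```python
import math

def yawed_power(p0, yaw_deg, exponent=3.0):
    """First-order estimate of turbine power under yaw misalignment:
    P = P0 * cos^n(yaw). n = 3 is the idealized value; measured effective
    exponents are often lower (roughly 1.8-2.5)."""
    return p0 * math.cos(math.radians(yaw_deg)) ** exponent

# A 2.5 MW turbine at 15 degrees of misalignment
print(round(yawed_power(2.5e6, 15.0) / 1e6, 3), "MW")   # 2.253 MW
```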

  20. Measurement of total risk of spontaneous abortion: the virtue of conditional risk estimation

    DEFF Research Database (Denmark)

    Modvig, J; Schmidt, L; Damsgaard, M T

    1990-01-01

    The concepts, methods, and problems of measuring spontaneous abortion risk are reviewed. The problems touched on include the process of pregnancy verification, the changes in risk with gestational age and maternal age, and the presence of induced abortions. Methods used in studies of spontaneous abortion risk include biochemical assays as well as the life table technique, although the latter appears in two different forms; the consequences of using either of these are discussed. It is concluded that no study design so far is appropriate for measuring the total risk of spontaneous abortion from early conception to the end of the 27th week. It is proposed that pregnancy may be considered to consist of two or three specific periods and that different study designs should concentrate on measuring the conditional risk within each period. A careful estimate using this principle leads to an estimate of total...
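The period-based proposal implies a simple composition rule: a pregnancy reaches the end of the window only if it survives every period, so per-period conditional risks combine multiplicatively. A minimal sketch, with hypothetical per-period risks:

```python
from math import prod

def total_risk(conditional_risks):
    """Combine per-period conditional risks of spontaneous abortion into a
    total risk: total = 1 - prod(1 - r_i), i.e. one minus the probability of
    surviving every period."""
    return 1.0 - prod(1.0 - r for r in conditional_risks)

# Hypothetical conditional risks for early, middle and late periods
print(round(total_risk([0.20, 0.05, 0.02]), 4))   # 0.2552
```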

  1. Emerging technologies to measure neighborhood conditions in public health: implications for interventions and next steps.

    Science.gov (United States)

    Schootman, M; Nelson, E J; Werner, K; Shacham, E; Elliott, M; Ratnapradipa, K; Lian, M; McVay, A

    2016-06-23

    Adverse neighborhood conditions play an important role beyond individual characteristics. There is increasing interest in identifying specific characteristics of the social and built environments that adversely affect health outcomes. Most research has assessed such exposures via self-reported instruments or census data. Potential threats in the local environment may be subject to short-term changes that can only be measured with more nimble technology. The advent of new technologies may offer new opportunities to obtain geospatial data about neighborhoods that circumvent the limitations of traditional data sources. This overview describes the utility, validity and reliability of selected emerging technologies to measure neighborhood conditions for public health applications. It also describes next steps for future research and opportunities for interventions. The paper presents an overview of the literature on measurement of the built and social environment in public health (Google Street View, webcams, crowdsourcing, remote sensing, social media, unmanned aerial vehicles, and lifespace) and on location-based interventions. Emerging technologies such as Google Street View, social media, drones, webcams, and crowdsourcing may serve as effective and inexpensive tools to measure the ever-changing environment. Georeferenced social media responses may help identify where to target intervention activities, and also passively evaluate their effectiveness. Future studies should measure exposure across key time points during the life-course as part of the exposome paradigm and integrate various types of data sources to measure environmental contexts. By harnessing these technologies, public health research can not only monitor populations and the environment but also intervene using novel strategies to improve the public health.

  2. Systematic measurements of opacity dependence on temperature, density, and atomic number at stellar interior conditions

    Science.gov (United States)

    Nagayama, Taisuke

    2017-10-01

    Model predictions for iron opacity are notably different from measurements performed at matter conditions similar to those at the boundary between the solar radiation and convection zones. The calculated iron opacities have narrower spectral lines, weaker quasi-continuum at short wavelength, and deeper opacity windows than the measurements. If correct, these measurements help resolve a decade-old problem in solar physics. A key question is therefore: what is responsible for the model-data discrepancy? The answer is complex because the experiments are challenging and opacity theories depend on multiple entangled physical processes, such as the completeness and accuracy of atomic states, line broadening, contributions from myriad transitions from excited states, and multi-photon absorption processes. To help determine the cause of this discrepancy, a systematic study of opacity variation with temperature, density, and atomic number is underway. Measurements of chromium, iron, and nickel opacities have been performed at two different temperatures and densities. The collection of measured opacities provides constraints on hypotheses to explain the discrepancy. We will discuss