WorldWideScience

Sample records for bandwidth measurement methodologies

  1. 47 CFR 15.35 - Measurement detector functions and bandwidths.

    Science.gov (United States)

    2010-10-01

... are based on the use of measurement instrumentation employing an average detector function. Unless... in terms of the average value of the emission, and pulsed operation is employed, the measurement...

  2. Estimating Bandwidth Requirements using Flow-level Measurements

    NARCIS (Netherlands)

    Bruyère, P.; de Oliveira Schmidt, R.; Sperotto, Anna; Sadre, R.; Pras, Aiko

    Bandwidth provisioning is an important task of network management, done with the aim of meeting desired levels of quality of service. Current provisioning practices are mostly based on rules of thumb and use coarse traffic measurements, which may lead to problems of under- and over-dimensioning of ...
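    Although this record is truncated, measurement-based provisioning in this research line typically dimensions a link as the mean traffic rate plus a variance-driven safety margin. The sketch below illustrates that idea; the Gaussian-based safety factor and all figures are assumptions for illustration, not the authors' exact rule.

        # Minimal sketch of mean-plus-variance bandwidth provisioning
        # (illustrative; the exact rule is not given in this truncated record).
        import math

        def provision_capacity(mean_rate_bps, variance_bps2, epsilon=0.01):
            """Dimension a link as mean traffic rate plus a safety margin.

            epsilon is the tolerated probability that traffic exceeds capacity;
            the sqrt(2*ln(1/eps)) factor assumes roughly Gaussian traffic.
            """
            safety = math.sqrt(2.0 * math.log(1.0 / epsilon) * variance_bps2)
            return mean_rate_bps + safety

        # Example: 400 Mbit/s mean, 50 Mbit/s standard deviation, 1% overflow target.
        print(provision_capacity(400e6, (50e6) ** 2) / 1e6, "Mbit/s")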

  3. Wide bandwidth transimpedance amplifier for extremely high sensitivity continuous measurements.

    Science.gov (United States)

    Ferrari, Giorgio; Sampietro, Marco

    2007-09-01

    This article presents a wide bandwidth transimpedance amplifier based on the series connection of an integrator and a differentiator stage, with an additional feedback loop that discharges the standing current from the device under test (DUT). This ensures an unlimited measuring time, compared with switched-discharge configurations, while maintaining a large signal amplification over the full bandwidth. The amplifier shows a flat response from 0.6 Hz to 1.4 MHz, the capability to operate with leakage currents from the DUT as high as tens of nanoamperes, and a rail-to-rail dynamic range for sinusoidal current signals independent of the DUT leakage current. A monitor output of the stationary current is also available to track slow experimental drifts. The circuit is ideal for noise spectral and impedance measurements of nanodevices and biomolecules in the presence of a physiological medium, and in all cases where high-sensitivity current measurements are required, such as in scanning probe microscopy systems.

  4. Ionospheric Coherence Bandwidth Measurements in the Lower VHF Frequency Range

    Science.gov (United States)

    Suszcynsky, D. M.; Light, M. E.; Pigue, M. J.

    2015-12-01

    The United States Department of Energy's Radio Frequency Propagation (RFProp) experiment consists of a satellite-based radio receiver suite to study various aspects of trans-ionospheric signal propagation and detection in four frequency bands, 2 - 55 MHz, 125 - 175 MHz, 365 - 415 MHz and 820 - 1100 MHz. In this paper, we present simultaneous ionospheric coherence bandwidth and S4 scintillation index measurements in the 32 - 44 MHz frequency range collected during the ESCINT equatorial scintillation experiment. 40-MHz continuous wave (CW) and 32 - 44 MHz swept frequency signals were transmitted simultaneously to the RFProp receiver suite from the Reagan Test Site at Kwajalein Atoll in the Marshall Islands (8.7° N, 167.7° E) in three separate campaigns during the 2014 and 2015 equinoxes. Results show coherence bandwidths as small as ~ 1 kHz for strong scintillation (S4 > 0.7) and indicate a high degree of ionospheric variability and irregularity on 10-m spatial scales. Spread-Doppler clutter effects arising from preferential ray paths to the satellite due to refraction off isolated density irregularities are also observed and are dominant at low elevation angles. The results are compared to previous measurements and available scaling laws.
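    The S4 index quoted above has a standard definition in terms of received intensity samples, S4 = sqrt((⟨I²⟩ − ⟨I⟩²)/⟨I⟩²). A minimal sketch with synthetic data (illustrative only; exponentially distributed intensity corresponds to Rayleigh fading, for which S4 approaches 1):

        # Sketch: computing the S4 scintillation index from received-signal
        # intensity samples (standard definition; the data here is synthetic).
        import numpy as np

        def s4_index(intensity):
            i = np.asarray(intensity, dtype=float)
            return np.sqrt((np.mean(i**2) - np.mean(i)**2) / np.mean(i)**2)

        rng = np.random.default_rng(0)
        weak = 1.0 + 0.1 * rng.standard_normal(10_000)   # weak scintillation
        strong = rng.exponential(1.0, 10_000)            # Rayleigh fading, S4 -> 1
        print(f"weak: {s4_index(weak):.2f}, strong: {s4_index(strong):.2f}")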

  5. Bandwidth trading under misaligned objectives: decentralized measurement-based control

    NARCIS (Netherlands)

    M.R.H. Mandjes (Michel); M. Ramakrishnan

    2006-01-01

    This paper studies the interplay between a profit-maximizing network and a number of users competing for the finite bandwidth on each link. In our setting, the objectives of the network and the users are ‘misaligned’, in that the prices that optimize the network’s profit do not maximize ...

  6. Bandwidth trading under misaligned objectives: decentralized measurement-based control

    NARCIS (Netherlands)

    Mandjes, M.R.H.; Ramakrishnan, M.

    2008-01-01

    This paper studies the interplay between a profit-maximizing network and a number of users competing for the finite bandwidth on each link. In our setting, the objectives of the network and the users are ‘misaligned’, in that the prices that optimize the network’s profit do not maximize the ...

  7. 47 CFR 2.1049 - Measurements required: Occupied bandwidth.

    Science.gov (United States)

    2010-10-01

    ... equal to 0.5 percent of the total mean power radiated by a given emission shall be measured under the... negative peaks and tested under the conditions specified in § 73.128 in part 73 of the FCC rules for AM... deviation of the transmitter is measured when a test signal consisting of a band of random noise extending...

  8. Synthetic Pulse Dilation - PMT Model for high bandwidth gamma measurements

    Science.gov (United States)

    Geppert-Kleinrath, H.; Herrmann, H. W.; Kim, Y. H.; Zylstra, A. B.; Meaney, K. D.; Lopez, F. E.; Khater, H.; Horsfield, C. J.; Gales, S.; Leatherland, A.; Hilsabeck, T.; Kilkenny, J. D.; Hares, J. D.; Dymoke-Bradshaw, T.; Milnes, J.

    2017-10-01

    The Cherenkov mechanism used in Gas Cherenkov Detectors (GCD) is exceptionally fast. However, the temporal resolution of GCDs, such as the Gamma Reaction History diagnostic (GRH), is limited by the current state-of-the-art photomultiplier tube (PMT) to 100 ps. The new pulse dilation PMT (PD-PMT) for NIF allows a temporal resolution comparable to that of the gas cell, or about 10 ps. Enhanced resolution will contribute to the quest for ignition in a crucial way through precision measurement of reaction history and areal density (ρR) history, leading to better constrained models. Features such as the onset of alpha heating, shock reverberations, and burn truncation due to dynamically evolving failure modes will become visible for the first time. PD-PMT will be deployed on GCD-3 at NIF in 2018. Our synthetic PD-PMT model evaluates the capabilities of these future measurements, as well as minimum yield requirements for measurements performed in a well at 3.9 m from target chamber center (TCC) and within a diagnostic inserter at 0.2 m from TCC.

  9. Radon flux measurement methodologies

    International Nuclear Information System (INIS)

    Nielson, K.K.; Rogers, V.C.

    1984-01-01

    Five methods for measuring radon fluxes are evaluated: the accumulator can, a small charcoal sampler, a large-area charcoal sampler, the "Big Louie" charcoal sampler, and the charcoal tent sampler. An experimental comparison of the five flux measurement techniques was also conducted. Excellent agreement was obtained between the measured radon fluxes and fluxes predicted from radium and emanation measurements.

  10. Phase loop bandwidth measurements on the advanced photon source 352 MHz rf systems

    International Nuclear Information System (INIS)

    Horan, D.; Nassiri, A.; Schwartz, C.

    1997-01-01

    Phase loop bandwidth tests were performed on the Advanced Photon Source storage ring 352-MHz rf systems. These measurements were made using the HP3563A Control Systems Analyzer, with the rf systems running at 30 kilowatts into each of the storage ring cavities, without stored beam. An electronic phase shifter was used to inject approximately 14 degrees of stimulated phase shift into the low-level rf system, which produced measurable response voltage in the feedback loops without upsetting normal rf system operation. With the PID (proportional-integral-differential) amplifier settings at the values used during accelerator operation, the measurement data revealed that the 3-dB response for the cavity sum and klystron power-phase loops is approximately 7 kHz and 45 kHz, respectively, with the cavities the primary bandwidth-limiting factor in the cavity-sum loop. Data were taken at various PID settings until the loops became unstable. Crosstalk between the two phase loops was measured.
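    A minimal sketch of the post-processing such a measurement implies: locating the -3 dB point of a measured loop magnitude response. The data here are synthetic (a single pole at 7 kHz, like the cavity-sum loop); the function name and all values are illustrative.

        # Sketch: estimating the -3 dB bandwidth from a measured loop frequency
        # response, as one would from control-system-analyzer data (synthetic here).
        import numpy as np

        def bandwidth_3db(freq_hz, mag_db):
            """Return the first frequency where the response falls 3 dB below DC."""
            target = mag_db[0] - 3.0
            idx = np.argmax(mag_db <= target)            # first crossing
            # linear interpolation between the bracketing points
            f1, f2 = freq_hz[idx - 1], freq_hz[idx]
            m1, m2 = mag_db[idx - 1], mag_db[idx]
            return f1 + (target - m1) * (f2 - f1) / (m2 - m1)

        f = np.logspace(2, 5, 400)                       # 100 Hz .. 100 kHz
        mag = -10 * np.log10(1 + (f / 7e3) ** 2)         # first-order pole at 7 kHz
        print(f"{bandwidth_3db(f, mag) / 1e3:.1f} kHz")  # ~7 kHz, like the cavity-sum loop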

  11. Methodology for building confidence measures

    Science.gov (United States)

    Bramson, Aaron L.

    2004-04-01

    This paper presents a generalized methodology for propagating known or estimated levels of individual source document truth reliability to determine the confidence level of a combined output. Initial document certainty levels are augmented by (i) combining the reliability measures of multiple sources, (ii) incorporating the truth reinforcement of related elements, and (iii) incorporating the importance of the individual elements in determining the probability of truth for the whole. The result is a measure of confidence in system output based on establishing links among the truth values of inputs. This methodology was developed for application to a multi-component situation awareness tool under development at the Air Force Research Laboratory in Rome, New York. Determining how improvements in data quality and the variety of documents collected affect the probability of a correct situational detection helps optimize the performance of the tool overall.
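    One simple way to combine independent source reliabilities, shown below, is a noisy-OR; the paper's exact formulas are not given in this record, so this is an illustrative stand-in only.

        # Sketch: noisy-OR combination of independent source reliabilities --
        # one common way to propagate per-document truth probabilities to a
        # combined confidence (not necessarily the paper's formula).
        from math import prod

        def combined_confidence(reliabilities):
            """P(at least one independent source is truthful)."""
            return 1.0 - prod(1.0 - p for p in reliabilities)

        print(combined_confidence([0.6, 0.7, 0.5]))   # 0.94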

  12. ±25 ppm repeatable measurement of trapezoidal pulses with 5 MHz bandwidth

    CERN Document Server

    Arpaia, Pasquale; Cerqueira Bastos, Miguel; Martino, Michele

    2015-01-01

    High-quality measurements of pulses are nowadays widely used in fields such as radars, pulsed lasers, electromagnetic pulse generators, and particle accelerators. Whilst the literature is mainly focused on fast systems for the nanosecond regime with relaxed metrological requirements, this paper addresses the high-performance measurement of slower pulses in the microsecond regime. In particular, an experimental proof-of-concept is given for a 15 MS/s, ±25 ppm repeatable acquisition system to characterize the flat-top of 3 ms rise-time trapezoidal pulses. The system exploits a 5 MHz bandwidth circuit for analogue signal processing based on the concept of flat-top removal. The requirements, as well as the conceptual and physical designs, are illustrated. Simulation results aimed at assessing the circuit performance are also presented. Finally, an experimental case study on the characterization of a pulsed power supply for the klystron modulators of the Compact Linear Collider (CLIC) under study at CERN is reported. In ...

  13. Network Bandwidth Utilization Forecast Model on High Bandwidth Network

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl; Sim, Alex

    2014-07-07

    With the increasing number of geographically distributed scientific collaborations and the growth in data size, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.

  14. Network bandwidth utilization forecast model on high bandwidth networks

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wuchert (William) [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-03-30

    With the increasing number of geographically distributed scientific collaborations and the growth in data size, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
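    A minimal sketch of the STL + ARIMA approach named above, applied to a synthetic diurnal link-utilization series using statsmodels. The decomposition period, ARIMA order, and data are illustrative assumptions, not the authors' settings.

        # Sketch of STL decomposition plus ARIMA on the remainder, as named in
        # the abstract (synthetic hourly utilization data; parameters illustrative).
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.seasonal import STL
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        hours = pd.date_range("2014-01-01", periods=24 * 28, freq="h")
        util = 50 + 20 * np.sin(2 * np.pi * np.arange(len(hours)) / 24) \
                  + rng.normal(0, 3, len(hours))             # % link utilization
        series = pd.Series(util, index=hours)

        stl = STL(series, period=24).fit()                    # trend/seasonal/resid
        model = ARIMA(stl.resid, order=(1, 0, 1)).fit()       # model the remainder
        resid_fc = model.forecast(24)
        seasonal_fc = stl.seasonal.iloc[-24:].to_numpy()      # repeat last daily cycle
        # persist the last trend value as a simple trend forecast
        forecast = stl.trend.iloc[-1] + seasonal_fc + resid_fc.to_numpy()
        print(forecast[:4])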

  15. In-circuit-measurement of parasitic elements in high gain high bandwidth low noise transimpedance amplifiers.

    Science.gov (United States)

    Cochems, P; Kirk, A; Zimmermann, S

    2014-12-01

    Parasitic elements play an important role in the development of every high performance circuit. In the case of high gain, high bandwidth transimpedance amplifiers, the most important parasitic elements are parasitic capacitances at the input and in the feedback path, which significantly influence the stability, the frequency response, and the noise of the amplifier. As these parasitic capacitances range from a few picofarads down to only a few femtofarads, it is nearly impossible to measure them accurately using traditional LCR meters. Unfortunately, they also cannot be easily determined from the transfer function of the transimpedance amplifier, as it contains several overlapping effects and its measurement is only possible when the circuit is already stable. Therefore, we developed an in-circuit measurement method utilizing minimal modifications to the input stage in order to measure its parasitic capacitances directly and with unconditional stability. Furthermore, using the data acquired with this measurement technique, we both proposed a model for the complicated frequency response of high value thick film resistors as they are used in high gain transimpedance amplifiers and optimized our transimpedance amplifier design.

  16. Bandwidth based methodology for designing a hybrid energy storage system for a series hybrid electric vehicle with limited all electric mode

    Science.gov (United States)

    Shahverdi, Masood

    The cost and fuel economy of hybrid electric vehicles (HEVs) depend significantly on the power-train energy storage system (ESS). A series HEV with a minimal all-electric mode (AEM) permits minimizing the size and cost of the ESS. Pursuing this minimal-size strategy, this manuscript introduces a bandwidth-based methodology for designing an efficient ESS. First, for a mid-size reference vehicle, a parametric study is carried out over various minimal-size ESSs, both hybrid (HESS) and non-hybrid (ESS), to find the highest fuel economy. The results show that a specific type of high-power battery with 4.5 kWh capacity can be selected as the winning candidate for further minimization. In a second study, following the twin goals of maximizing fuel economy (FE) and improving consumer acceptance, a sports-car-class series HEV (SHEV) was considered as a potential application requiring even more ESS minimization. The challenge with this vehicle is to reduce the ESS size below 4.5 kWh, because the available space allocation is only one quarter, by volume, of the battery allowance in the mid-size study. Therefore, an advanced bandwidth-based controller is developed that allows a hybridized Subaru BRZ model to be realized with a light ESS; the result is a SHEV with 1.13 kWh of ESS capacity. In a third study, the objective is to find optimum SHEV designs under the minimal-AEM assumption that cover the design space between the fuel economies of the mid-size car study and the sports car study. Maximizing FE while minimizing ESS cost is more aligned with customer acceptance in the current state of the market. The techniques applied to manage power flow between the energy sources of the power-train significantly affect the results of this optimization. A Pareto frontier over ESS cost and FE for a SHEV with limited AEM is introduced, using an advanced bandwidth-based control strategy teamed with duty-ratio control. This controller ...
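    The core idea of a bandwidth-based design is a frequency split of the power demand: a low-pass filter assigns the slow component to the energy-dense source and the fast remainder to a high-power buffer. The sketch below illustrates this; the cutoff frequency, signals, and names are illustrative assumptions, not values from the thesis.

        # Sketch of a bandwidth-based power split: a low-pass filter sends the
        # slow component of the drive-cycle power demand to the energy-dense
        # source and the fast remainder to the high-power buffer.
        import numpy as np
        from scipy.signal import butter, filtfilt

        dt = 0.1                                   # s, drive-cycle sample period
        t = np.arange(0, 600, dt)
        demand_kw = 20 + 15 * np.sin(2 * np.pi * t / 120) \
                      + 10 * np.random.default_rng(2).standard_normal(t.size)

        b, a = butter(2, 0.01, fs=1 / dt)          # 0.01 Hz split frequency
        slow_kw = filtfilt(b, a, demand_kw)        # engine/generator share
        fast_kw = demand_kw - slow_kw              # battery/ultracap share
        print(f"peak fast power handled by buffer: {np.abs(fast_kw).max():.1f} kW")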

  17. Design of the corona current measurement sensor with wide bandwidth under dc ultra-high-voltage environment

    International Nuclear Information System (INIS)

    Liu, Yingyi; Yuan, Haiwen; Yang, Qinghua; Cui, Yong

    2011-01-01

    Research on corona discharge is key to realizing ultra-high-voltage (UHV) power transmission. This paper proposes a new sampling-resistance sensor to measure dc UHV corona current over a wide band. By designing the structural and distributed parameters of the sensor, both the UHV dielectric breakdown performance and the wide-band measuring characteristics of the sensor are satisfied. A high-voltage discharge test shows that the designed sensor can work in a 1200 kV dc environment without the occurrence of corona discharge. A frequency characteristic test shows that the measuring bandwidth of the sensor is improved from the current 4.5 MHz to 20 MHz. Test results on an actual dc UHV transmission line demonstrate that the sensor can accurately measure the corona current in the dc UHV environment.

  18. Free space broad-bandwidth tunable laser diode based on Littman configuration for 3D profile measurement

    Science.gov (United States)

    Shirazi, Muhammad Faizan; Kim, Pilun; Jeon, Mansik; Kim, Chang-Seok; Kim, Jeehyun

    2018-05-01

    We developed a tunable laser diode for an optical coherence tomography system that can perform three-dimensional profile measurement using an area-scanning technique. The tunable laser diode is designed using an Eagleyard laser diode with a galvano filter, and the Littman free-space configuration is used to realize laser operation. The linewidth and bandwidth of this source are 0.27 nm (∼110 GHz) and 43 nm, respectively, at a center wavelength of 860 nm. The output power is 20 mW at an operating current of 150 mA. A step-height target is imaged using a wide-area scanning system to show the measurement accuracy of the proposed tunable laser diode. A TEM grid is also imaged to measure the topography and thickness of the sample with the proposed tunable laser diode.

  19. The impact of methodology in innovation measurement

    Energy Technology Data Exchange (ETDEWEB)

    Wilhelmsen, L.; Bugge, M.; Solberg, E.

    2016-07-01

    Innovation surveys and rankings such as the Community Innovation Survey (CIS) and the Innovation Union Scoreboard (IUS) have developed into influential diagnostic tools that are often used to categorize countries according to their innovation performance and to legitimise innovation policies. Although a number of ongoing processes seek to improve existing frameworks for measuring innovation, there are large methodological differences across countries in the way innovation is measured. This causes great uncertainty regarding (a) the coherence between data from innovation surveys, (b) the actual innovativeness of the economy, and (c) the validity of research based on innovation data. Against this background, we explore empirically how different survey methods for measuring innovation affect reported innovation performance. The analysis is based on a statistical exercise comparing the results from three different methodological versions of the same survey for measuring innovation in the business enterprise sector in Norway. We find striking differences in reported innovation performance depending on how the surveys are carried out methodologically. The paper concludes that reported innovation performance is highly sensitive to, and strongly conditioned by, methodological context. This calls for increased caution and awareness in data collection and in research based on innovation data, not least in the aggregation of data and cross-country comparisons. (Author)

  20. Broad Bandwidth Metamaterial Antireflection Coatings for Measurement of the Cosmic Microwave Background

    Data.gov (United States)

    National Aeronautics and Space Administration — The Cosmic Microwave Background (CMB) contains a number of faint signals that, if measured, could revolutionize our understanding of the Universe and fundamental...

  1. Improved-Bandwidth Transimpedance Amplifier

    Science.gov (United States)

    Chapsky, Jacob

    2009-01-01

    The widest-bandwidth operational amplifier available, with the best voltage and current noise characteristics, is considered for transimpedance amplifier (TIA) applications where wide bandwidth is required to handle fast-rising input signals (as in time-of-flight measurement). The amplifier added inside the TIA feedback loop can be configured to have a voltage gain slightly lower than the bandwidth reduction factor.

  2. Extending the Bandwidth of a Superdirective First-Order Probe for Spherical Near-Field Antenna Measurements

    DEFF Research Database (Denmark)

    Kim, Oleksiy S.

    2015-01-01

    ... This contribution shows that the very narrow frequency bandwidth peculiar to superdirective antennas can be extended to practical values by proper design of the array elements as well as by relaxing the maximum-directivity condition, while keeping |µ| = 1 modes dominant in the radiation spectrum of the antenna...

  3. A low-power tool for measuring acceleration, pressure, and temperature (APT) with wide dynamic range and bandwidth

    Science.gov (United States)

    Heesemann, Martin; Davis, Earl E.; Paros, Jerome; Johnson, Greg; Meldrum, Robert; Scherwath, Martin; Mihaly, Steven

    2017-04-01

    We present a new tool that facilitates the study of inter-related geodetic, geodynamic, seismic, and oceanographic phenomena. It incorporates a temperature compensated tri-axial accelerometer developed by Quartz Seismic Sensors, Inc., a pressure sensor built by Paroscientific Inc., and a low-power, high-precision frequency counter developed by Bennest Enterprises Ltd. and built by RBR, Ltd. The sensors are housed in a 7 cm o.d. titanium pressure case designed for use to full ocean depths (withstands more than 20 km of water pressure). Sampling intervals are programmable from 0.08 s to 1 hr; standard memory can store up to 130 million samples; total power consumption is roughly 115 mW when operating continuously and proportionately lower when operating intermittently (e.g., 2 mW average at 1 sample per min). Serial and USB communications protocols allow a variety of autonomous and cable-connection options. Measurement precision of the order of 10⁻⁸ of full scale (e.g., pressure equivalent to 4000 m water depth, acceleration = ±3 g) allows observations of pressure and acceleration variations of 0.4 Pa and 0.3 μm s⁻². Long-term variations in vertical acceleration are sensitive to displacement through the gravity gradient down to a level of roughly 2 cm, and variations in horizontal acceleration are sensitive to tilt down to a level of 0.03 μrad. With the large dynamic ranges, high sensitivities and broad bandwidth (6 Hz to DC), ground motion associated with microseisms, strong and weak seismic ground motion, tidal loading, and slow and rapid geodynamic deformation - all normally studied using disparate instruments - can be observed with a single tool. Installation in the marine environment is accomplished by pushing the tool roughly 1 m vertically below the seafloor with a submersible or remotely operated vehicle, with no profile remaining above the seafloor to cause current-induced noise. The weight of the tool is designed to match the sediment it displaces to ...
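    The quoted resolutions follow directly from 10⁻⁸ of the stated full scales; a quick arithmetic check (seawater weight density of 1025 kg/m³ × g is an assumption for illustration):

        # Quick check that the quoted resolutions are 1e-8 of full scale.
        rho_g = 1025 * 9.81            # seawater weight density, Pa per metre
        p_full = 4000 * rho_g          # full-scale pressure, ~40 MPa at 4000 m
        a_full = 3 * 9.81              # full-scale acceleration, +/-3 g

        print(f"pressure resolution: {1e-8 * p_full:.2f} Pa")                 # ~0.4 Pa
        print(f"acceleration resolution: {1e-8 * a_full * 1e6:.2f} um/s^2")   # ~0.3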

  4. Methodological Challenges in Measuring Child Maltreatment

    Science.gov (United States)

    Fallon, Barbara; Trocme, Nico; Fluke, John; MacLaurin, Bruce; Tonmyr, Lil; Yuan, Ying-Ying

    2010-01-01

    Objective: This article reviewed the different surveillance systems used to monitor the extent of reported child maltreatment in North America. Methods: Key measurement and definitional differences between the surveillance systems are detailed, along with their potential impact on the measured rate of victimization. The infrastructure…

  5. Relative Hazard and Risk Measure Calculation Methodology

    International Nuclear Information System (INIS)

    Stenner, Robert D.; Strenge, Dennis L.; Elder, Matthew S.; Andrews, William B.; Walton, Terry L.

    2003-01-01

    The RHRM equations, as represented in the methodology and code presented in this report, are primarily a collection of key factors normally used in risk assessment that are relevant to understanding the hazards and risks associated with projected mitigation, cleanup, and risk management activities. The RHRM code has broad application potential. For example, it can be used to compare one mitigation, cleanup, or risk management activity with another, instead of comparing each only to the fixed baseline. If the appropriate source-term data are available, it can be used in its non-ratio form to estimate absolute values of the associated controlling hazards and risks. These estimated values can then be examined to help understand which mitigation, cleanup, or risk management activities address the higher-hazard conditions and risk-reduction potential at a site. Graphics can be generated from these absolute controlling hazard and risk values to compare these high-hazard, high-risk-reduction-potential conditions graphically. If the RHRM code is used in this manner, care must be taken to specifically define and qualify (e.g., identify which factors were considered and which ones tended to drive the hazard and risk estimates) the resultant absolute controlling hazard and risk values.

  6. A methodology to measure the degree of managerial innovation

    OpenAIRE

    Ayhan, Mustafa Batuhan; Oztemel, Ercan

    2014-01-01

    Purpose: The main objective of this study is to introduce the concept of managerial innovation and to propose a quantitative methodology to measure the degree of managerial innovation capability by analyzing the evolution of the techniques used for management functions. Design/methodology/approach: The methodology mainly focuses on the different techniques used for each management function, namely Planning, Organizing, Leading, Controlling, and Coordinating. These functions are studied and the...

  7. Chemical Industry Bandwidth Study

    Energy Technology Data Exchange (ETDEWEB)

    none,

    2006-12-01

    The Chemical Bandwidth Study provides a snapshot of potentially recoverable energy losses during chemical manufacturing. The advantage of this study is the use of "exergy" analysis as a tool for pinpointing inefficiencies.

  8. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  9. Glass Industry Bandwidth Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rue, David M. [Gas Technology Inst., Des Plaines, IL (United States)

    2006-07-01

    This is a study on energy use and potential savings, or "bandwidth" study, for several glassmaking processes. Intended to provide a realistic estimate of the potential amount of energy that can be saved in an industrial process, the "bandwidth" refers to the difference between the amount of energy that would be consumed in a process using commercially available technology versus the minimum amount of energy needed to achieve those same results.

  10. Industrial Glass Bandwidth Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rue, David M. [Gas Technology Inst., Des Plaines, IL (United States); Servaites, James [Gas Technology Inst., Des Plaines, IL (United States); Wolf, Warren [Gas Technology Inst., Des Plaines, IL (United States)

    2007-08-01

    This is a study on energy use and potential savings, or "bandwidth" study, for several glassmaking processes. Intended to provide a realistic estimate of the potential amount of energy that can be saved in an industrial process, the "bandwidth" refers to the difference between the amount of energy that would be consumed in a process using commercially available technology versus the minimum amount of energy needed to achieve those same results.

  11. Methodology for performing measurements to release material from radiological control

    International Nuclear Information System (INIS)

    Durham, J.S.; Gardner, D.L.

    1993-09-01

    This report describes the existing and proposed methodologies for performing measurements of contamination prior to releasing material for uncontrolled use at the Hanford Site. The technical basis for the proposed methodology, a modification to the existing contamination survey protocol, is also described. The modified methodology, which includes a large-area swipe followed by a statistical survey, can be used to survey material that is unlikely to be contaminated for release to controlled and uncontrolled areas. The material evaluation procedure that is used to determine the likelihood of contamination is also described.

  12. A methodology to measure the degree of managerial innovation

    Directory of Open Access Journals (Sweden)

    Mustafa Batuhan Ayhan

    2014-01-01

    Purpose: The main objective of this study is to introduce the concept of managerial innovation and to propose a quantitative methodology to measure the degree of managerial innovation capability by analyzing the evolution of the techniques used for management functions. Design/methodology/approach: The methodology mainly focuses on the different techniques used for each management function, namely Planning, Organizing, Leading, Controlling, and Coordinating. These functions are studied and the different techniques used for them are listed. Since the techniques used for these management functions evolve over time due to technological and social changes, a methodology is required to measure the degree of managerial innovation capability. This competency is measured through an analysis performed to point out which techniques are used for each of these functions. Findings: To check the validity and applicability of this methodology, it is implemented in a manufacturing company. Based on the results of the implementation, enhancements are suggested to the company for each function, to help it survive in changing managerial conditions. Research limitations/implications: The primary limitation of this study is the implementation area. Although the study is implemented in just a single manufacturing company, the same methodology can be applied to measure the managerial innovation capabilities of other manufacturing companies. Moreover, the model is ready to be adapted to different sectors, although it is mainly prepared for the manufacturing sector. Originality/value: Although innovation management is widely studied, managerial innovation is a new concept, introduced to measure the capability to meet the changes that occur in managerial functions. In brief, this methodology aims to be a pioneer in the field of managerial innovation regarding the evolution of management functions. Therefore it is expected to lead to more studies inspecting the progress of...

  13. Terahertz-bandwidth coherence measurements of a quantum dash laser in passive and active mode-locking operation.

    Science.gov (United States)

    Martin, Eamonn; Watts, Regan; Bramerie, Laurent; Shen, Alexandre; Gariah, Harry; Blache, Fabrice; Lelarge, Francois; Barry, Liam

    2012-12-01

    This research carries out coherence measurements of a 42.7 GHz quantum dash (QDash) semiconductor laser when passively, electrically, and optically mode-locked. Coherence of the spectral lines from the mode-locked laser is determined by examining the radio frequency beat-tone linewidth as the mode spacing is increased up to 1.1 THz. Electric-field measurements of the QDash laser are also presented, from which a comparison between experimental results and accepted theory for coherence in passively mode-locked lasers has been performed.

  14. Measuring the Quality of Publications : New Methodology and Case Study

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; van Groenendaal, W.J.H.

    2000-01-01

    In practice, it is important to evaluate the quality of research in order to make decisions on tenure, funding, and so on. This article develops a methodology using citations to measure the quality of journals, proceedings, and book publishers. (Citations are also used by the Science and Social...

  15. Risk importance measures in the dynamic flowgraph methodology

    International Nuclear Information System (INIS)

    Tyrväinen, T.

    2013-01-01

    This paper presents new risk importance measures applicable to a dynamic reliability analysis approach with multi-state components. Dynamic reliability analysis methods are needed because traditional methods, such as fault tree analysis, can describe a system's dynamic behaviour only in a limited manner. The dynamic flowgraph methodology (DFM) is an approach used for analysing systems with time dependencies and feedback loops. The aim of DFM is to identify the root causes of a top event, usually representing the system's failure. Components of DFM models are analysed at discrete time points and can have multiple states. Traditional risk importance measures developed for static and binary logic are not applicable to DFM as such. Some importance measures have previously been developed for DFM, but their ability to describe how components contribute to the top event is fairly limited. This paper formulates dynamic risk importance measures that quantify the importance of component states and take the time aspect of DFM into account in a logical way that supports the interpretation of results. The dynamic risk importance measures are developed as generalisations of the Fussell-Vesely importance and the risk increase factor. Highlights: • New risk importance measures are developed for the dynamic flowgraph methodology. • Dynamic risk importance measures are formulated for states of components. • An approach to handle failure modes of a component in DFM is presented. • Dynamic risk importance measures take failure times into account. • A component's influence on the system's reliability can be analysed in detail.
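    For reference, the two classical static measures that the paper generalises can be written as small functions. The sketch below shows the standard textbook definitions, not the paper's dynamic, multi-state versions; all numbers are illustrative.

        # Classical static definitions generalised by the paper (for reference).
        def fussell_vesely(p_top, p_top_without_component):
            """Fraction of top-event probability involving the component's failures."""
            return (p_top - p_top_without_component) / p_top

        def risk_increase_factor(p_top_given_failed, p_top):
            """Relative top-event probability with the component assumed failed."""
            return p_top_given_failed / p_top

        print(fussell_vesely(1e-4, 4e-5))          # 0.6
        print(risk_increase_factor(5e-3, 1e-4))    # 50.0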

  16. Determining a hopping polaron's bandwidth from its Seebeck coefficient: Measuring the disorder energy of a non-crystalline semiconductor

    International Nuclear Information System (INIS)

    Emin, David

    2016-01-01

    Charge carriers that execute multi-phonon hopping generally interact strongly enough with phonons to form polarons. A polaron's sluggish motion is linked to slowly shifting atomic displacements that severely reduce the intrinsic width of its transport band. Here a means to estimate hopping polarons' bandwidths from Seebeck-coefficient measurements is described. The magnitudes of semiconductors' Seebeck coefficients are usually quite large (>k/|q| = 86 μV/K) near room temperature. However, in accord with the third law of thermodynamics, Seebeck coefficients must vanish at absolute zero. Here, the transition of the Seebeck coefficient of hopping polarons to its low-temperature regime is investigated. The temperature and sharpness of this transition depend on the concentration of carriers and on the width of their transport band. This feature provides a means of estimating the width of a polaron's transport band. Since the intrinsic broadening of polaron bands is very small, less than the characteristic phonon energy, the net widths of polaron transport bands in disordered semiconductors approach the energetic disorder experienced by their hopping carriers, their disorder energy

  17. Methodology of modeling and measuring computer architectures for plasma simulations

    Science.gov (United States)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation on computers and the difficulties with currently available computers is given. Using an analysis and measurement methodology, SARA, the control flow and data flow of the particle simulation model REM2-1/2D are exemplified. After recursive refinements, the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization can be configured to reach the computation bound of an application problem. A sequential simulation model, an array/pipeline simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems that have an implicitly parallel nature.

  18. Methodology for interpretation of fissile mass flow measurements

    International Nuclear Information System (INIS)

    March-Leuba, J.; Mattingly, J.K.; Mullens, J.A.

    1997-01-01

    This paper describes a non-intrusive measurement technique to monitor the mass flow rate of fissile material in gaseous or liquid streams. The fissile mass flow monitoring system determines the fissile mass flow rate from two independent measurements: (1) a time delay along a given length of pipe, which is inversely proportional to the fissile material flow velocity, and (2) an amplitude measurement, which is proportional to the fissile concentration (e.g., grams of ²³⁵U per unit length of pipe). Development of this flow monitor was first funded by DOE/NE in September 1995, and an initial experimental demonstration by ORNL was described at the 37th INMM meeting held in July 1996. The methodology was chosen by DOE/NE for implementation in November 1996; it has been implemented in hardware and software and is ready for installation. This paper describes the methodology used to interpret the data measured by the fissile mass flow monitoring system and the models used to simulate the transport of fission fragments from the source location to the detectors.
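    The two measurements combine in a simple way: the time delay gives the flow velocity over a known pipe length, and multiplying by the measured linear concentration yields the mass flow. A minimal sketch with purely illustrative numbers:

        # Sketch: velocity from the time delay over a known pipe length,
        # mass flow as linear concentration x velocity (illustrative numbers).
        def fissile_mass_flow(pipe_length_m, delay_s, grams_u235_per_m):
            velocity = pipe_length_m / delay_s            # m/s, from time delay
            return grams_u235_per_m * velocity            # g/s of 235U

        print(f"{fissile_mass_flow(3.0, 1.5, 0.02):.3f} g/s")   # 0.040 g/s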

  19. The statistical bandwidth of Butterworth filters

    Science.gov (United States)

    Davy, J. L.; Dunn, I. P.

    1987-06-01

    The precision of standard architectural acoustic measurements is a function of the statistical bandwidth of the band-pass filters used in the measurements. The International and United States Standards on octave and fractional-octave-band filters, which specify the band-pass filters used in architectural acoustics measurements, give the effective bandwidth but unfortunately not the statistical bandwidth of the filters. Both Standards are currently being revised, and both revisions require the use of Butterworth filter characteristics. In this paper it is shown theoretically that the ratio of statistical bandwidth to effective bandwidth for an nth-order Butterworth band-pass filter is 2n/(2n-1). This is verified experimentally for third-octave third-order Butterworth band-pass filters. It is also shown experimentally that this formula is approximately correct for some non-Butterworth third-octave third-order band-pass filters. Because of the importance of Butterworth filters in the revised Standards, the theory of Butterworth filters is reviewed and the formulae for Butterworth filters given in both revised Standards are derived.
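    The 2n/(2n-1) ratio is easy to check numerically for the low-pass Butterworth prototype |H(f)|² = 1/(1 + f^(2n)), using the definitions B_eff = ∫|H|²df / max|H|² and B_stat = (∫|H|²df)² / ∫|H|⁴df; the band-pass ratio quoted in the paper is the same. A minimal sketch:

        # Numerical check of the statistical-to-effective bandwidth ratio for
        # an nth-order Butterworth low-pass prototype.
        import numpy as np
        from scipy.integrate import quad

        for n in (1, 2, 3):
            h2 = lambda f: 1.0 / (1.0 + f ** (2 * n))      # |H|^2
            h4 = lambda f: h2(f) ** 2                      # |H|^4
            i2, _ = quad(h2, 0, np.inf)
            i4, _ = quad(h4, 0, np.inf)
            b_eff = i2                                     # peak |H|^2 is 1
            b_stat = i2 ** 2 / i4
            print(n, b_stat / b_eff, 2 * n / (2 * n - 1))  # two columns agree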

  20. THE MEASUREMENT METHODOLOGY IMPROVEMENT OF THE HORIZONTAL IRREGULARITIES IN PLAN

    Directory of Open Access Journals (Sweden)

    O. M. Patlasov

    2015-08-01

    Purpose. The track superstructure (TSS) includes structures for which the standard approach to deciding on their future operation is not entirely correct or applicable. In particular, this concerns track sections that change their geometric parameters quite rapidly: the radius of curvature, the angle of rotation, and the like. Examples of such TSS sections include crossovers, part of which lies within the so-called connecting part, which substantially changes curvature over a rather short length. Estimating the position of such a structure in plan on the basis of the existing technique (by the difference in adjacent bending versines) is virtually impossible. It is therefore proposed to supplement and improve the methodology for assessing the position of a curve in plan based on the difference in adjacent versines. Methodology. The possible options for measuring horizontal curves in plan were analyzed, and the most adequate method, which does not contradict established standards as to its applicability, was determined, taking into account ease of measurement and calculation. Findings. Qualitative and quantitative verification of the proposed and existing methods showed very good agreement of the measurement results. This gives grounds to assert that the methodology can be recommended to track maintenance workers for the assessment of horizontal irregularities in plan, not only for curves but also within the connecting part of crossovers. Originality. The existing method for evaluating the geometric position of curves in plan was improved; it does not create new regulations, and all results are evaluated against existing norms. Practical value. The proposed technique makes it possible, without creating a new regulatory framework, to build on the existing one while expanding the boundaries of its application. This method can be used not only for ordinary curves...

  1. Indoor radon measurements and methodologies in Latin American countries

    International Nuclear Information System (INIS)

    Canoba, A.; Lopez, F.O.; Arnaud, M.I.; Oliveira, A.A.; Neman, R.S.; Hadler, J.C.; Iunes, P.J.; Paulo, S.R.; Osorio, A.M.; Aparecido, R.; Rodriguez, C.; Moreno, V.; Vasquez, R.; Espinosa, G.; Golzarri, J.I.; Martinez, T.; Navarrete, M.; Cabrera, I.; Segovia, N.; Pena, P.; Tamez, E.; Pereyra, P.; Lopez-Herrera, M.E.; Sajo-Bohus, L.

    2001-01-01

    According to current international guidelines concerning environmental problems, it is necessary to evaluate and know indoor radon levels, especially since most of the natural radiation dose to man comes from radon gas and its progeny. Several countries have established national institutions and national programs for the study of radon and its connection with lung cancer risk and public health. The aim of this work is to present the indoor radon measurements and the detection methods used in different regions of Latin America (LA), in countries such as Argentina, Brazil, Ecuador, Mexico, Peru and Venezuela. This study shows that passive radon devices based on alpha-particle nuclear track methodology (NTM) are among the most widely used methods in LA for long-term indoor radon measurements, CR-39, LR-115 and Makrofol being the most commonly used detector materials. The participating institutions and the radon level measurements in the different countries are presented in this contribution.

  2. Methodology for measurement in schools and kindergartens: experiences

    International Nuclear Information System (INIS)

    Fotjikova, I.; Navratilova Rovenska, K.

    2015-01-01

    In more than 1500 schools and preschool facilities, long-term radon measurement was carried out in the last 3 y. The negative effect of thermal retrofitting on the resulting long-term radon averages is evident. In some of the facilities, low ventilation rates and correspondingly high radon levels were found, so it was recommended to change ventilation habits. However, some of the facilities had high radon levels due to its ingress from soil gas. Technical measures should be undertaken to reduce radon exposure in this case. The paper presents the long-term experiences with the two-stage measurement methodology for investigation of radon levels in school and preschool facilities and its possible improvements. (authors)

  3. Ultrahigh bandwidth signal processing

    DEFF Research Database (Denmark)

    Oxenløwe, Leif Katsuo

    2016-01-01

    Optical time lenses have proven to be very versatile for advanced optical signal processing. Based on a controlled interplay between dispersion and phase modulation by e.g. four-wave mixing, the processing is phase-preserving, and hence useful for all types of data signals, including coherent multi-level modulation formats. This has enabled processing of phase-modulated spectrally efficient data signals, such as orthogonal frequency division multiplexed (OFDM) signals. In that case, a spectral telescope system was used, using two time lenses with different focal lengths (chirp rates), yielding a spectral magnification ... regeneration. These operations require a broad-bandwidth nonlinear platform, and novel photonic integrated nonlinear platforms like aluminum gallium arsenide nano-waveguides used for 1.28 Tbaud optical signal processing will be described.

  4. Fast Faraday Cup With High Bandwidth

    Science.gov (United States)

    Deibele, Craig E [Knoxville, TN]

    2006-03-14

    A circuit card stripline Fast Faraday cup quantitatively measures the picosecond time structure of a charged particle beam. The stripline configuration maintains signal integrity, and stitching of the stripline increases the bandwidth. A calibration procedure ensures the measurement of the absolute charge and time structure of the charged particle beam.

  5. Methodological aspects of EEG and Body dynamics measurements during motion.

    Directory of Open Access Journals (Sweden)

    Pedro Reis

    2014-03-01

    EEG involves the recording, analysis, and interpretation of voltages recorded on the human scalp that originate from brain grey matter. EEG is one of the preferred methods for studying and understanding the processes that underlie behavior, because it is relatively cheap, easy to wear, lightweight, and has high temporal resolution. In terms of behavior, this encompasses actions, such as movements, performed in response to the environment. However, there are methodological difficulties when recording EEG during movement, such as movement artifacts. Thus, most studies of the human brain have examined activations during static conditions. This article attempts to compile and describe relevant methodological solutions that have emerged for measuring body and brain dynamics during motion. These descriptions cover suggestions for avoiding and reducing motion artifacts, along with hardware, software, and techniques for synchronously recording EEG, EMG, kinematics, kinetics, and eye movements during motion. Additionally, we present various recording systems, EEG electrodes, caps, and methods for determining real/custom electrode positions. We conclude that it is possible to record and analyze synchronized brain and body dynamics related to movement or exercise tasks.

  6. Cutter Connectivity Bandwidth Study

    Science.gov (United States)

    2002-10-01

    The goal of this study was to determine how much bandwidth is required for cutters to meet emerging data transfer requirements. The Cutter Connectivity Business Solutions Team, with guidance from the Commandant's Innovation Council, sponsored this study. Today, many Coast Guard administrative and business functions are conducted via electronic means. Although our larger cutters can establish part-time connectivity using commercial satellite communications (SATCOM) while underway, there are numerous complaints about poor application performance. Additionally, smaller cutters do not have any standard means of underway connectivity. The R&D study shows that the most important factor affecting web performance and enterprise applications onboard cutters was latency: the time it takes the signal to reach the satellite and come back down through space. The latency due to the use of higher-orbit satellites causes poor application performance and inefficient use of expensive SATCOM links. To improve performance, the Coast Guard must (1) reduce latency by using alternate communications links such as low-earth-orbit satellites, (2) tailor applications to the SATCOM link, and/or (3) optimize the protocols used for data communication to minimize the time required by present applications to establish communications between the user and the host systems.
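    Latency dominates because un-tuned TCP throughput is bounded by window size divided by round-trip time, regardless of raw link bandwidth. A back-of-the-envelope comparison (figures illustrative; 64 KB is the classic un-scaled TCP window):

        # Why latency dominates: TCP throughput <= window / RTT, so a
        # geostationary hop (~550 ms round trip) throttles transfers no
        # matter how much raw bandwidth the link has.
        def max_tcp_throughput_kbps(window_bytes, rtt_s):
            return window_bytes * 8 / rtt_s / 1e3

        for name, rtt in [("GEO satellite", 0.55), ("LEO satellite", 0.05)]:
            print(f"{name}: {max_tcp_throughput_kbps(65535, rtt):.0f} kbit/s")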

  7. 47 CFR 74.535 - Emission and bandwidth.

    Science.gov (United States)

    2010-10-01

    ... digital modulation in paragraph (a) of this section, the resolution bandwidth (BRES) of the measuring...), adjusted upward to the nearest greater resolution bandwidth available on the measuring equipment. In all... frequency energy outside the assigned channel. Upon notice by the FCC to the station licensee that...

  8. Relative Hazard and Risk Measure Calculation Methodology Rev 1

    International Nuclear Information System (INIS)

    Stenner, Robert D.; White, Michael K.; Strenge, Dennis L.; Aaberg, Rosanne L.; Andrews, William B.

    2000-01-01

    This report documents the methodology used to calculate the relative hazard and risk measure results for the DOE complex-wide risk profiles; the methodology is applied to major site risk profiles. In February 1997, the Center for Risk Excellence (CRE) was created and charged as a technical, field-based partner to the Office of Science and Risk Policy (EM-52). One of the initial charges to the CRE is to assist the sites in the development of "site risk profiles." These profiles are relatively short summaries (periodically updated) that present a broad perspective on the major risk-related challenges facing the respective site. The risk profiles are intended to serve as a high-level communication tool for interested internal and external parties, to enhance understanding of these risk-related challenges. The risk profiles for each site have been designed to qualitatively present the following information: (1) a brief overview of the site, (2) a brief discussion of the historical mission of the site, (3) a quote from the site manager indicating the site's commitment to risk management, (4) a listing of the site's top risk-related challenges, (5) a brief discussion and detailed table presenting the site's current risk picture, (6) a brief discussion and detailed table presenting the site's future risk-reduction picture, and (7) graphic illustrations of the projected management of the relative hazards at the site. The graphic illustrations were included to provide the reader with a high-level mental picture to associate with the qualitative information presented in the risk profile. Inclusion of these graphic illustrations presented the CRE with the challenge of how to fold this high-level qualitative risk information into a system that produces a numeric result depicting the relative change in hazard associated with each major risk management action, so that it could be presented graphically. This report presents the methodology developed...

  9. Methodological considerations for measuring glucocorticoid metabolites in feathers

    Science.gov (United States)

    Berk, Sara A.; McGettrick, Julie R.; Hansen, Warren K.; Breuner, Creagh W.

    2016-01-01

    In recent years, researchers have begun to use corticosteroid metabolites in feathers (fCORT) as a metric of stress physiology in birds. However, there remain substantial questions about how to measure fCORT most accurately. Notably, small samples contain artificially high amounts of fCORT per millimetre of feather (the small sample artefact). Furthermore, it appears that fCORT is correlated with circulating plasma corticosterone only when levels are artificially elevated by the use of corticosterone implants. Here, we used several approaches to address current methodological issues with the measurement of fCORT. First, we verified that the small sample artefact exists across species and feather types. Second, we attempted to correct for this effect by increasing the amount of methanol relative to the amount of feather during extraction. We consistently detected more fCORT per millimetre or per milligram of feather in small samples than in large samples even when we adjusted methanol:feather concentrations. We also used high-performance liquid chromatography to identify hormone metabolites present in feathers and measured the reactivity of these metabolites against the most commonly used antibody for measuring fCORT. We verified that our antibody is mainly identifying corticosterone (CORT) in feathers, but other metabolites have significant cross-reactivity. Lastly, we measured faecal glucocorticoid metabolites in house sparrows and correlated these measurements with corticosteroid metabolites deposited in concurrently grown feathers; we found no correlation between faecal glucocorticoid metabolites and fCORT. We suggest that researchers should be cautious in their interpretation of fCORT in wild birds and should seek alternative validation methods to examine species-specific relationships between environmental challenges and fCORT. PMID:27335650

  10. Evaluation of electrical broad bandwidth impedance spectroscopy as a tool for body composition measurement in cows in comparison with body measurements and the deuterium oxide dilution method.

    Science.gov (United States)

    Schäff, C T; Pliquett, U; Tuchscherer, A; Pfuhl, R; Görs, S; Metges, C C; Hammon, H M; Kröger-Koch, C

    2017-05-01

    Body fatness and the degree of body fat mobilization in cows vary enormously during their reproduction cycle and influence energy partitioning and metabolic adaptation. The objective of the study was to test bioelectrical impedance spectroscopy (BIS) as a method for predicting fat depot mass (FDM) in living cows, where FDM is defined as the sum of subcutaneous, omental, mesenteric, retroperitoneal, and carcass fat mass. Bioelectrical impedance spectroscopy is compared with the prediction of FDM from the deuterium oxide (DO) dilution method and from body conformation measurements. Charolais × Holstein Friesian (HF; n = 18; 30 d in milk) crossbred cows and 2 HF (lactating and nonlactating) cows were assessed by body conformation measurements, BIS, and the DO dilution method. The BCS of the cows was a mean of 3.68 (SE 0.64). For the DO dilution method, a bolus of 0.23 g/kg BW of DO (60 atom%) was intravenously injected, deuterium (D) enrichment was analyzed in plasma and whey by stable isotope mass spectrometry, and total body water content was calculated. Impedance measurement was performed using a 4-electrode interface and a time-domain measurement system consisting of a voltage/current converter for applying the current stimulus and an amplifier for monitoring the voltage across the sensor electrodes. For the BIS, we used complex impedances over three frequency decades, which deliver information on intra- and extracellular water and the capacity of cell membranes. Impedance data (resistance of the extra- and intracellular space, cell membrane capacity, and phase angle) were extracted (1) by a simple curve fit to extract the resistance at direct current and at high frequency and (2) by using an electrical equivalent circuit. Cows were slaughtered 7 d after the BIS and D enrichment measurements and dissected for the measurement of FDM. Multiple linear regression analyses were performed to predict FDM from data obtained from body conformation measurements, BIS, and D enrichment, and applied...
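    BIS spectra of this kind are commonly summarized with the Cole model, Z(f) = R∞ + (R0 − R∞)/(1 + (jf/fc)^α), whose parameters map onto extra-/intracellular resistance and membrane capacity. The sketch below fits that model to synthetic data; the paper's own equivalent-circuit details are not given in this record, so the model choice and all values are illustrative assumptions.

        # Sketch: fitting the Cole model often used to summarize BIS spectra
        # to synthetic impedance data (all parameters illustrative).
        import numpy as np
        from scipy.optimize import least_squares

        def cole(f, r0, rinf, fc, alpha):
            return rinf + (r0 - rinf) / (1 + (1j * f / fc) ** alpha)

        f = np.logspace(3, 6, 40)                       # 1 kHz .. 1 MHz
        z_true = cole(f, 600.0, 300.0, 5e4, 0.8)
        z_meas = z_true + np.random.default_rng(3).normal(0, 1, f.size)

        def residuals(p):
            z = cole(f, *p)
            return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

        fit = least_squares(residuals, x0=[500, 200, 1e4, 0.7])
        print(fit.x)    # ~[600, 300, 5e4, 0.8]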

  11. Measurements of Gluconeogenesis and Glycogenolysis: A Methodological Review.

    Science.gov (United States)

    Chung, Stephanie T; Chacko, Shaji K; Sunehag, Agneta L; Haymond, Morey W

    2015-12-01

    Gluconeogenesis is a complex metabolic process that involves multiple enzymatic steps regulated by myriad factors, including substrate concentrations, the redox state, activation and inhibition of specific enzyme steps, and hormonal modulation. At present, the most widely accepted technique to determine gluconeogenesis is by measuring the incorporation of deuterium from the body water pool into newly formed glucose. However, several techniques using radioactive and stable isotope labels have been used to quantitate the contribution and regulation of gluconeogenesis in humans. Each method has its advantages, methodological assumptions, and set of propagated errors. In this review, we examine the strengths and weaknesses of the most commonly used stable isotope methods to measure gluconeogenesis in vivo. We discuss the advantages and limitations of each method and summarize the applicability of these measurements in understanding normal and pathophysiological conditions. © 2015 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.

  12. Measurement of Quality of Life I. A Methodological Framework

    Directory of Open Access Journals (Sweden)

    Soren Ventegodt

    2003-01-01

    Despite the widespread acceptance of quality of life (QOL) as the ideal guideline in healthcare and clinical research, serious conceptual and methodological problems continue to plague this area. In an attempt to remedy this situation, we propose seven criteria that a quality-of-life concept must meet to provide a sound basis for investigation by questionnaire. The seven criteria or desiderata are: (1) an explicit definition of quality of life; (2) a coherent philosophy of human life from which the definition is derived; (3) a theory that operationalizes the philosophy by specifying unambiguous, nonoverlapping, and jointly exhaustive questionnaire items; (4) response alternatives that permit a fraction-scale interpretation; (5) technical checks of reproducibility; (6) meaningfulness to investigators, respondents, and users; and (7) an overall aesthetic appeal of the questionnaire. These criteria have guided the design of a validated 5-item generic, global quality-of-life questionnaire (QOL5), and a validated 317-item generic, global quality-of-life questionnaire (SEQOL), administered to a well-documented birth cohort of 7,400 Danes born in 1959–1961, as well as to a reference sample of 2,500 Danes. Presented in outline, the underlying integrative quality-of-life (IQOL) theory is a meta-theory. To illustrate the seven criteria at work, we show the extent to which they are satisfied by one of the eight component theories. Next, two sample results of our investigation are presented: satisfaction with one's sex life has the expected covariation with one's quality of life, and so does mother's smoking during pregnancy, albeit to a much smaller extent. It is concluded that the methodological framework presented has proved helpful in designing a questionnaire that is capable of yielding acceptably valid and reliable measurements of global and generic quality of life.

  13. Experiences in Traceroute and Bandwidth Change Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Logg, C

    2004-06-23

    SLAC has been studying end-to-end WAN bandwidth availability and achievability for 2.5 years via IEPM-BW [1]. IEPM-BW performs network-intensive tests every 90 minutes. Based on that experience we have also developed a lightweight available-bandwidth measurement tool (ABwE [2]) that can make a measurement within a second. We are now extending this to a WAN measurement and detection system (IEPM-LITE) aimed at more quickly detecting and troubleshooting network performance problems and at being friendlier to lower-performance paths. IEPM-LITE uses ping, forward traceroutes, and ABwE sensors to monitor, in close to real time, Round Trip Times (RTT), changes in available bandwidth, and routes to and from target hosts. This paper discusses the experiences, techniques and algorithms used to detect and report on significant traceroute and bandwidth changes. The ultimate aim is to develop a lightweight WAN network performance monitoring system that can detect, in near real time, significant changes and generate alerts.

  14. Experiences in Traceroute and Bandwidth Change Analysis

    International Nuclear Information System (INIS)

    Logg, C

    2004-01-01

    SLAC has been studying end-to-end WAN bandwidth availability and achievability for 2.5 years via IEPM-BW [1]. IEPM-BW performs network-intensive tests every 90 minutes. Based on that experience we have also developed a lightweight available-bandwidth measurement tool (ABwE [2]) that can make a measurement within a second. We are now extending this to a WAN measurement and detection system (IEPM-LITE) aimed at more quickly detecting and troubleshooting network performance problems and at being friendlier to lower-performance paths. IEPM-LITE uses ping, forward traceroutes, and ABwE sensors to monitor, in close to real time, Round Trip Times (RTT), changes in available bandwidth, and routes to and from target hosts. This paper discusses the experiences, techniques and algorithms used to detect and report on significant traceroute and bandwidth changes. The ultimate aim is to develop a lightweight WAN network performance monitoring system that can detect, in near real time, significant changes and generate alerts.
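
    The change-detection idea described in these two records can be illustrated with a toy example. The sketch below is not the IEPM-LITE or ABwE algorithm; it is a simple windowed-mean drop detector, with the window size and threshold chosen arbitrarily.

        # Toy step-change detector for an available-bandwidth time series.
        def detect_drops(samples, window=10, drop_fraction=0.3):
            """Flag indices where a recent window's mean falls more than
            drop_fraction below the mean of the preceding window."""
            alerts = []
            for i in range(2 * window, len(samples) + 1):
                before = samples[i - 2 * window:i - window]
                after = samples[i - window:i]
                mean_before = sum(before) / window
                mean_after = sum(after) / window
                if mean_before > 0 and (mean_before - mean_after) / mean_before > drop_fraction:
                    alerts.append(i - window)  # first sample of the degraded window
            return alerts

        # Example: a path that degrades from ~90 Mbit/s to ~50 Mbit/s
        series = [90, 91, 89, 92, 90, 88, 91, 90, 89, 92,
                  52, 50, 51, 49, 53, 50, 52, 51, 50, 49]
        print(detect_drops(series))  # -> [10]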

  15. Advanced quantitative measurement methodology in physics education research

    Science.gov (United States)

    Wang, Jing

    parts. The first part involves the comparison between item response theory (IRT) and classical test theory (CTT). The two theories both provide test item statistics for educational inferences and decisions. Both are applied to Force Concept Inventory data obtained from students enrolled in The Ohio State University. Effort was made to examine the similarities and differences between the two theories, and possible explanations for the differences. The study suggests that item response theory is more sensitive to the context and conceptual features of the test items than classical test theory. The IRT parameters provide a better measure than CTT parameters for the educational audience to investigate item features. The second part of the dissertation is on the measure of association for binary data. In quantitative assessment, binary data is often encountered because of its simplicity. The currently popular measures of association fail under some extremely unbalanced conditions, yet the occurrence of these conditions is not rare in educational data. Two popular association measures, Pearson's correlation and the tetrachoric correlation, are examined. A new method, model-based association, is introduced, and an educational testing constraint is discussed. The existing popular methods are compared with the model-based association measure with and without the constraint. Connections between the value of association and the context and conceptual features of questions are discussed in detail. Results show that all the methods have their advantages and disadvantages, and special attention to the test and data conditions is necessary. The last part of the dissertation is focused on exploratory factor analysis (EFA). The theoretical advantages of EFA are discussed. Typical misunderstandings and misusage of EFA are explored. The EFA is performed on Lawson's Classroom Test of Scientific Reasoning (LCTSR), a widely used assessment of scientific reasoning skills. The
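
    The failure of popular association measures on unbalanced binary tables, mentioned in the second part, is easy to demonstrate. The sketch below compares the phi coefficient (Pearson's correlation on binary data) with the classical cosine approximation to the tetrachoric correlation; a full tetrachoric estimate would fit a bivariate normal, and the counts here are invented.

        # Compare phi (Pearson on binary data) with an approximate tetrachoric
        # correlation on 2x2 tables [[a, b], [c, d]] of item-response counts.
        import math

        def phi_coefficient(a, b, c, d):
            num = a * d - b * c
            den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
            return num / den

        def tetrachoric_approx(a, b, c, d):
            # Edwards' cosine approximation; degenerate cells handled crudely.
            if b == 0 or c == 0:
                return 1.0
            return math.cos(math.pi / (1 + math.sqrt((a * d) / (b * c))))

        # A balanced table vs. an extremely unbalanced one
        for table in [(40, 10, 10, 40), (95, 4, 1, 0)]:
            print(table, round(phi_coefficient(*table), 3),
                  round(tetrachoric_approx(*table), 3))

    On the balanced table the two measures are of comparable size (0.6 vs. 0.81); on the unbalanced table phi is near zero while the approximation saturates at -1, the kind of disagreement the dissertation examines.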

  16. Adaptability of laser diffraction measurement technique in soil physics methodology

    Science.gov (United States)

    Barna, Gyöngyi; Szabó, József; Rajkai, Kálmán; Bakacsi, Zsófia; Koós, Sándor; László, Péter; Hauk, Gabriella; Makó, András

    2016-04-01

    There are intentions all around the world to harmonize soils' particle size distribution (PSD) data obtained by laser diffractometer measurements (LDM) with those of the sedimentation techniques (pipette or hydrometer methods). Unfortunately, depending on the applied methodology (e.g. type of pre-treatment, kind of dispersant, etc.), PSDs from the sedimentation methods (which follow different standards) are dissimilar and can hardly be harmonized with each other either. A need therefore arose to build a database containing PSD values measured by the pipette method according to the Hungarian standard (MSZ-08. 0205: 1978) and by LDM according to a widespread and widely used procedure. In our current publication the first results of the statistical analysis of the new and growing PSD database are presented: 204 soil samples measured with the pipette method and LDM (Malvern Mastersizer 2000, HydroG dispersion unit) were compared. Applying the usual size limits to the LDM, the clay fraction was highly underestimated and the silt fraction overestimated compared to the pipette method. Consequently, soil texture classes determined from the LDM measurements differ significantly from the results of the pipette method. Following previous surveys, and to optimize the fit between the two datasets, the clay/silt boundary for the LDM was changed. Comparing the PSD results of the pipette method with those of the LDM, the modified size limits gave higher similarities for the clay and silt fractions. Extending the upper size limit of the clay fraction from 0.002 to 0.0066 mm, and correspondingly changing the lower size limit of the silt fraction, makes the pipette method and LDM more easily comparable. With the modified limit, higher correlations were also found between clay content and water vapour adsorption and specific surface area. Texture classes were also found to be less dissimilar. The difference between the results of the two kinds of PSD measurement method could be further reduced by taking into account other routinely analyzed soil parameters
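
    The effect of moving the clay/silt boundary can be reproduced directly from a cumulative LDM curve. The sketch below interpolates a hypothetical cumulative distribution at the standard 0.002 mm limit and at the proposed 0.0066 mm limit; the curve values are invented, not data from the paper.

        # Re-reading clay content from a cumulative laser-diffraction PSD when
        # the clay/silt boundary is moved from 0.002 mm to 0.0066 mm.
        import numpy as np

        # Particle size (mm) vs. cumulative % finer (hypothetical LDM curve)
        size = np.array([0.0005, 0.001, 0.002, 0.0066, 0.02, 0.05, 0.25, 2.0])
        cum_finer = np.array([4.0, 9.0, 16.0, 31.0, 55.0, 78.0, 95.0, 100.0])

        def percent_finer(limit_mm):
            # Interpolate on log10(size), conventional for PSD curves
            return float(np.interp(np.log10(limit_mm), np.log10(size), cum_finer))

        print(f"clay (< 0.002 mm):  {percent_finer(0.002):.1f} %")   # 16.0 %
        print(f"clay (< 0.0066 mm): {percent_finer(0.0066):.1f} %")  # 31.0 %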

  17. Speciated arsenic in air: measurement methodology and risk assessment considerations.

    Science.gov (United States)

    Lewis, Ari S; Reid, Kim R; Pollock, Margaret C; Campleman, Sharan L

    2012-01-01

    Accurate measurement of arsenic (As) in air is critical to providing a more robust understanding of arsenic exposures and associated human health risks. Although there is extensive information available on total arsenic in air, less is known on the relative contribution of each arsenic species. To address this data gap, the authors conducted an in-depth review of available information on speciated arsenic in air. The evaluation included the type of species measured and the relative abundance, as well as an analysis of the limitations of current analytical methods. Despite inherent differences in the procedures, most techniques effectively separated arsenic species in the air samples. Common analytical techniques such as inductively coupled plasma mass spectrometry (ICP-MS) and/or hydride generation (HG)- or graphite furnace (GF)-atomic absorption spectrometry (AAS) were used for arsenic measurement in the extracts, and provided some of the most sensitive detection limits. The current analysis demonstrated that, despite limited comparability among studies due to differences in seasonal factors, study duration, sample collection methods, and analytical methods, research conducted to date is adequate to show that arsenic in air is mainly in the inorganic form. Reported average concentrations of As(III) and As(V) ranged up to 7.4 and 10.4 ng/m³, respectively, with As(V) being more prevalent than As(III) in most studies. Concentrations of the organic methylated arsenic compounds are negligible (in the pg/m³ range). However, because of the variability in study methods and measurement methodology, the authors were unable to determine the variation in arsenic composition as a function of source or particulate matter (PM) fraction. In this work, the authors include the implications of arsenic speciation in air on potential exposure and risks. The authors conclude that it is important to synchronize sample collection, preparation, and analytical techniques in order to generate

  18. Methodology for Measuring the Complexity of Enterprise Information Systems

    Directory of Open Access Journals (Sweden)

    Ilja Holub

    2016-07-01

    The complexity of enterprise information systems is currently a challenge faced not only by IT professionals and project managers, but also by the users of such systems. Current methodologies and frameworks used to design and implement information systems do not specifically deal with the issue of their complexity and, apart from a few exceptions, do not attempt to simplify it at all. This article presents the author's own methodology for managing complexity, which can be used to complement any other methodology and which helps limit the growth of complexity. It introduces its own definition and metric of complexity, defined as the sum of entities of the individual UML models of the given system, which are selected according to the MMDIS methodology so as to consistently describe all relevant content dimensions of the system. The main objective is to propose a methodology to manage information system complexity and to verify it in practice on a real-life SAP implementation project.
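
    The metric defined above reduces to a simple sum once the UML models and their entities are enumerated. A minimal sketch, with invented model names and counts:

        # Complexity as the sum of entities across the UML models selected
        # to describe the system (model names and counts are illustrative).
        uml_models = {
            "use-case diagram": {"actors": 12, "use cases": 38},
            "class diagram": {"classes": 143, "associations": 210},
            "deployment diagram": {"nodes": 9, "artifacts": 17},
        }

        complexity = sum(
            count for entities in uml_models.values() for count in entities.values()
        )
        print("system complexity:", complexity)  # 429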

  19. Reconstitution of Low Bandwidth Reaction History

    International Nuclear Information System (INIS)

    May, M.; Clancy, T.; Fittinghoff, D.; Gennaro, P.; Hagans, K.; Halvorson, G.; Lowry, M.; Perry, T.; Roberson, P.; Smith, D.; Teruya, A.; Blair, J.; Davis, B.; Hunt, E.; Emkeit, B.; Galbraith, J.; Kelly, B.; Montoya, R.; Nickel, G.; Ogle, J.; Wilson, K.; Wood, M.

    2004-01-01

    The goal of the Test Readiness Program is to transition to a 24-month test readiness posture and, if approved, to move to an 18-month posture. One of the key components of the Test Readiness Program necessary to meet this goal is the reconstitution of the important diagnostics. Since the end of nuclear testing, the ability to field diagnostics on a nuclear test has deteriorated. Reconstitution of diagnostics before those who had experience in nuclear testing retire or leave is essential to achieving a shorter test readiness posture. Also, the data recording systems have not been used since the end of testing. This report documents the reconstitution of one vital diagnostic for FY04: the low bandwidth reaction history diagnostic. Reaction history is one of the major diagnostics that has been used on all LLNL and LANL tests since the early days of nuclear testing. Reaction history refers to measuring the time history of the gamma and neutron output from a nuclear test. This gives direct information on the nuclear reactions taking place in the device. The reaction history measurements are among the prime measurements the nuclear weapon scientists use to validate their models of device performance. All tests currently under consideration require the reaction history diagnostic. Thus moving to a shorter test readiness posture requires reconstituting the ability to make reaction history measurements. Reconstitution of reaction history was planned in two steps. Reaction history measurements that have been used in the past can be broadly placed into two categories. The most common type of reaction history, and the one that has been performed on virtually all nuclear tests, is termed low bandwidth reaction history. This measurement has a time response that is limited by the bandpass of kilometer-length coaxial cables. When higher bandwidth has been required for specific measurements, fiber optic techniques have been used. This is referred to as high-bandwidth

  20. Physically Connected Stacked Patch Antenna Design with 100% Bandwidth

    KAUST Repository

    Klionovski, Kirill; Shamim, Atif

    2017-01-01

    Typically, stacked patch antennas are parasitically coupled and provide larger bandwidth than a single patch antenna. Here, we show a stacked patch antenna design where square patches with semi-circular cutouts are physically connected to each other. This arrangement provides 100% bandwidth from 23.9–72.2 GHz with consistent high gain (5 dBi or more) across the entire bandwidth. In another variation, a single patch loaded with a superstrate provides 83.5% bandwidth from 25.6–62.3 GHz. The mechanism of bandwidth enhancement is explained through electromagnetic simulations. Measured reflection coefficient, radiation patterns and gain results confirm the extremely wideband performance of the design.

  1. Physically Connected Stacked Patch Antenna Design with 100% Bandwidth

    KAUST Repository

    Klionovski, Kirill

    2017-11-01

    Typically, stacked patch antennas are parasitically coupled and provide larger bandwidth than a single patch antenna. Here, we show a stacked patch antenna design where square patches with semi-circular cutouts are physically connected to each other. This arrangement provides 100% bandwidth from 23.9–72.2 GHz with consistent high gain (5 dBi or more) across the entire bandwidth. In another variation, a single patch loaded with a superstrate provides 83.5% bandwidth from 25.6–62.3 GHz. The mechanism of bandwidth enhancement is explained through electromagnetic simulations. Measured reflection coefficient, radiation patterns and gain results confirm the extremely wideband performance of the design.
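
    As a check, the quoted percentages follow from the standard fractional-bandwidth definition applied to the reported band edges:

        FBW = \frac{2\,(f_H - f_L)}{f_H + f_L}
            = \frac{2\,(72.2 - 23.9)}{72.2 + 23.9}
            = \frac{96.6}{96.1} \approx 100.5\,\%

    and, for the superstrate-loaded single patch, 2(62.3 - 25.6)/(62.3 + 25.6) ≈ 83.5%.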

  2. Measuring Instruments Control Methodology Performance for Analog Electronics Remote Labs

    Directory of Open Access Journals (Sweden)

    Unai Hernandez-Jayo

    2012-12-01

    This paper presents work that has been developed in parallel to the VISIR project. Its objective is to present the results of the validation processes that have been carried out to check the control methodology. This method has been developed with the aim of being independent of the instruments of the labs.

  3. Scanner image methodology (SIM) to measure dimensions of leaves ...

    African Journals Online (AJOL)

    A scanner image methodology was used to determine plant dimensions, such as leaf area, length and width. The values obtained using SIM were compared with those recorded by the LI-COR leaf area meter. Bias, linearity, reproducibility and repeatability (R&R) were evaluated for SIM. Different groups of leaves were ...

  4. High-bandwidth memory interface

    CERN Document Server

    Kim, Chulwoo; Song, Junyoung

    2014-01-01

    This book provides an overview of recent advances in memory interface design at both the architecture and circuit levels. Coverage includes signal integrity and testing, TSV interface, high-speed serial interface including equalization, ODT, pre-emphasis, wide I/O interface including crosstalk, skew cancellation, and clock generation and distribution. Trends for further bandwidth enhancement are also covered.   • Enables readers with minimal background in memory design to understand the basics of high-bandwidth memory interface design; • Presents state-of-the-art techniques for memory interface design; • Covers memory interface design at both the circuit level and system architecture level.

  5. High-Bandwidth Dynamic Full-Field Profilometry for Nano-Scale Characterization of MEMS

    International Nuclear Information System (INIS)

    Chen, L-C; Huang, Y-T; Chang, P-B

    2006-01-01

    The article describes an innovative optical interferometric methodology to deliver dynamic surface profilometry with a measurement bandwidth up to 10 MHz or higher and a vertical resolution down to 1 nm. Previous work using stroboscopic microscopic interferometry for dynamic characterization of micro (opto)electromechanical systems (M(O)EMS) has been limited in measurement bandwidth, mainly to within a couple of MHz. For high resonant mode analysis, the stroboscopic light pulse is insufficiently short to capture the moving fringes from the dynamic motion of the detected structure. In view of this need, a microscopic prototype based on white-light stroboscopic interferometry with an innovative light superposition strategy was developed to achieve dynamic full-field profilometry with a high measurement bandwidth up to 10 MHz or higher. The system primarily consists of an optical microscope, on which are integrated a Mirau interferometric objective with an embedded piezoelectric vertical translator, a high-power LED light module with dual operation modes, and a light-synchronizing electronics unit. An AFM micro-cantilever beam was measured to verify the system's capability for accurate characterization of the dynamic behaviour of the device. The full-field seventh-mode vibration at a frequency of 3.7 MHz was fully characterized, with nano-scale vertical resolution over a vertical measurement range of tens of micrometres.

  6. High-Bandwidth Dynamic Full-Field Profilometry for Nano-Scale Characterization of MEMS

    Energy Technology Data Exchange (ETDEWEB)

    Chen, L-C [Graduate Institute of Automation Technology, National Taipei University of Technology, 1 Sec. 3 Chung-Hsiao East Rd., Taipei, 106, Taiwan (China); Huang, Y-T [Graduate Institute of Automation Technology, National Taipei University of Technology, 1 Sec. 3 Chung-Hsiao East Rd., Taipei, 106, Taiwan (China); Chang, P-B [Graduate Institute of Mechanical and Electrical Engineering, National Taipei University of Technology, 1 Sec. 3 Chung-Hsiao East Rd., Taipei, 106, Taiwan (China)

    2006-10-15

    The article describes an innovative optical interferometric methodology to deliver dynamic surface profilometry with a measurement bandwidth up to 10 MHz or higher and a vertical resolution down to 1 nm. Previous work using stroboscopic microscopic interferometry for dynamic characterization of micro (opto)electromechanical systems (M(O)EMS) has been limited in measurement bandwidth, mainly to within a couple of MHz. For high resonant mode analysis, the stroboscopic light pulse is insufficiently short to capture the moving fringes from the dynamic motion of the detected structure. In view of this need, a microscopic prototype based on white-light stroboscopic interferometry with an innovative light superposition strategy was developed to achieve dynamic full-field profilometry with a high measurement bandwidth up to 10 MHz or higher. The system primarily consists of an optical microscope, on which are integrated a Mirau interferometric objective with an embedded piezoelectric vertical translator, a high-power LED light module with dual operation modes, and a light-synchronizing electronics unit. An AFM micro-cantilever beam was measured to verify the system's capability for accurate characterization of the dynamic behaviour of the device. The full-field seventh-mode vibration at a frequency of 3.7 MHz was fully characterized, with nano-scale vertical resolution over a vertical measurement range of tens of micrometres.
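
    The difficulty of strobing at these frequencies can be seen from a simple estimate. Assuming the common rule of thumb (an assumption here, not a figure from the article) that the light pulse must be at most about a tenth of the vibration period to freeze the fringes:

        t_{pulse} \lesssim \frac{T}{10} = \frac{1}{10 f}
        \qquad f = 10\,\mathrm{MHz} \Rightarrow t_{pulse} \lesssim 10\,\mathrm{ns};
        \qquad f = 3.7\,\mathrm{MHz} \Rightarrow t_{pulse} \lesssim 27\,\mathrm{ns}

    Stroboscopic sources with longer pulses smear the fringes at these mode frequencies, which is the couple-of-MHz limitation the white-light superposition strategy addresses.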

  7. Bandwidth tunable amplifier for recording biopotential signals.

    Science.gov (United States)

    Hwang, Sungkil; Aninakwa, Kofi; Sonkusale, Sameer

    2010-01-01

    This paper presents a low-noise, low-power, bandwidth-tunable amplifier for bio-potential signal recording applications. By employing a depletion-mode pMOS transistor in diode configuration as a tunable sub-pA current source to adjust the resistance of the MOS-bipolar pseudo-resistor, the bandwidth is adjusted without any need for a separate band-pass filter stage. For high CMRR, PSRR and dynamic range, a fully differential structure is used in the design of the amplifier. The amplifier achieves a midband gain of 39.8 dB with a tunable high-pass cutoff frequency ranging from 0.1 Hz to 300 Hz. The amplifier is fabricated in a 0.18 µm CMOS process and occupies 0.14 mm² of chip area. A three-electrode ECG measurement is performed using the proposed amplifier to show its feasibility for low-power, compact wearable ECG monitoring applications.
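
    The tuning principle can be made concrete with the first-order high-pass relation between the pseudo-resistor and the coupling capacitor. The capacitance below is an assumed, illustrative value, not one from the paper:

        f_{HP} = \frac{1}{2\pi R C}
        \qquad f_{HP} = 0.1\,\mathrm{Hz},\; C = 10\,\mathrm{pF}
        \;\Rightarrow\; R = \frac{1}{2\pi \cdot 0.1 \cdot 10 \times 10^{-12}} \approx 1.6 \times 10^{11}\,\Omega

    Reaching sub-hertz cutoffs therefore requires resistances in the hundreds of gigaohms, which is why a sub-pA-biased pseudo-resistor is used rather than a physical resistor.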

  8. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    Energy Technology Data Exchange (ETDEWEB)

    Tarifeño-Saldivia, Ariel, E-mail: atarifeno@cchen.cl, E-mail: atarisal@gmail.com; Pavez, Cristian; Soto, Leopoldo [Comisión Chilena de Energía Nuclear, Casilla 188-D, Santiago (Chile); Center for Research and Applications in Plasma Physics and Pulsed Power, P4, Santiago (Chile); Departamento de Ciencias Fisicas, Facultad de Ciencias Exactas, Universidad Andres Bello, Republica 220, Santiago (Chile); Mayer, Roberto E. [Instituto Balseiro and Centro Atómico Bariloche, Comisión Nacional de Energía Atómica and Universidad Nacional de Cuyo, San Carlos de Bariloche R8402AGP (Argentina)

    2014-01-15

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by standard nuclear electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.

  9. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    International Nuclear Information System (INIS)

    Tarifeño-Saldivia, Ariel; Pavez, Cristian; Soto, Leopoldo; Mayer, Roberto E.

    2014-01-01

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by standard nuclear electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.
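
    The central estimate in this methodology, common to both records above, can be sketched as follows. This is a simplification of the full statistical model: it infers the number of detected events from the burst's accumulated charge and the single-event charge statistics obtained in pulse-mode calibration, and all numbers are illustrative.

        # Estimate the number of detected neutrons N from accumulated charge Q,
        # given the single-event charge distribution from pulse-mode calibration.
        import math

        q_mean = 0.82e-12   # mean single-event charge, C (from calibration)
        q_std = 0.21e-12    # std. dev. of single-event charge, C
        Q_burst = 4.1e-9    # accumulated charge from the burst, C

        N_hat = Q_burst / q_mean
        # With i.i.d. single-event charges and Poisson-like counting, the two
        # variance contributions combine as N * (1 + (q_std/q_mean)^2).
        sigma_N = math.sqrt(N_hat * (1 + (q_std / q_mean) ** 2))

        print(f"estimated detected events: {N_hat:.0f} +/- {sigma_N:.0f}")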

  10. New methodology of measurement the unsteady thermal cooling of objects

    Science.gov (United States)

    Winczek, Jerzy

    2018-04-01

    The measurement of unsteady thermal turbulent flow affects many domains, such as heat energy and manufacturing technologies. The study focuses on the analysis of the current state of the problem, an overview of design solutions and methods for measuring non-stationary thermal phenomena, and the presentation and choice of an adequate cylinder design, together with the development of a method to measure and calculate the basic quantities that characterize the process of heat exchange on the model surface.

  11. Review of high bandwidth fiber optics radiation sensors

    International Nuclear Information System (INIS)

    Lyons, P.B.

    1985-01-01

    This paper summarizes the use of fiber optic or guided optical systems as radiation sensors. It is limited to passive systems wherein electrical power is not required at the sensor location. However, electrically powered light sources, receivers and/or recorders may still be required for detection and data storage in sensor system operation. The paper emphasizes sensor technologies that permit high bandwidth measurements of transient radiation levels, and also discusses several low bandwidth applications. 60 refs

  12. Measurement of testosterone in human sexuality research: methodological considerations.

    Science.gov (United States)

    van Anders, Sari M; Goldey, Katherine L; Bell, Sarah N

    2014-02-01

    Testosterone (T) and other androgens are incorporated into an increasingly wide array of human sexuality research, but there are a number of issues that can affect or confound research outcomes. This review addresses various methodological issues relevant to research design in human studies with T; unaddressed, these issues may introduce unwanted noise, error, or conceptual barriers to interpreting results. Topics covered are (1) social and demographic factors (gender and sex; sexual orientations and sexual diversity; social/familial connections and processes; social location variables), (2) biological rhythms (diurnal variation; seasonality; menstrual cycles; aging and menopause), (3) sample collection, handling, and storage (saliva vs. blood; sialogogues, saliva, and tubes; sampling frequency, timing, and context; shipping samples), (4) health, medical issues, and the body (hormonal contraceptives; medications and nicotine; health conditions and stress; body composition, weight, and exercise), and (5) incorporating multiple hormones. Detailing a comprehensive set of important issues and relevant empirical evidence, this review provides a starting point for best practices in human sexuality research with T and other androgens that may be especially useful for those new to hormone research.

  13. Measuring service line competitive position. A systematic methodology for hospitals.

    Science.gov (United States)

    Studnicki, J

    1991-01-01

    To mount a broad effort aimed at improving their competitive position for some service or group of services, hospitals have begun to pursue product line management techniques. A few hospitals have even reorganized completely under the product line framework. The benefits include focusing accountability for operations and results, facilitating coordination between departments and functions, stimulating market segmentation, and promoting rigorous examination of new and existing programs. As part of its strategic planning process, a suburban Baltimore hospital developed a product line management methodology with six basic steps: (1) define the service lines (which they did by grouping all existing diagnosis-related groups into 35 service lines), (2) determine the contribution of each service line to total inpatient volume, (3) determine trends in service line volumes (by comparing data over time), (4) derive a useful comparison group (competing hospitals or groups of hospitals with comparable size, scope of services, payer mix, and financial status), (5) review multiple time frames, and (6) summarize the long- and short-term performance of the hospital's service lines to focus further analysis. This type of systematic and disciplined analysis can become part of a permanent strategic intelligence program. When hospitals have such a program in place, their market research, planning, budgeting, and operations will be tied together in a true management decision support system.

  14. RAID Disk Arrays for High Bandwidth Applications

    Science.gov (United States)

    Moren, Bill

    1996-01-01

    High bandwidth applications require large amounts of data transferred to/from storage devices at extremely high data rates. Further, these applications are often 'real time', in which access to the storage device must take place on the schedule of the data source, not the storage. A good example is a satellite downlink - the volume of data is quite large and the data rates quite high (dozens of MB/sec). Further, a telemetry downlink must take place while the satellite is overhead. A storage technology which is ideally suited to these types of applications is redundant arrays of independent disks (RAID). RAID storage technology, while offering differing methodologies for a variety of applications, supports the performance and redundancy required in real-time applications. Of the various RAID levels, RAID-3 is the only one which provides high data transfer rates under all operating conditions, including after a drive failure.

  15. Aqueduct: a methodology to measure and communicate global water risks

    Science.gov (United States)

    Gassert, Francis; Reig, Paul

    2013-04-01

    , helping non-expert audiences better understand and evaluate risks facing water users. This presentation will discuss the methodology used to combine the indicator values into aggregated risk scores and lessons learned from working with diverse audiences in academia, development institutions, and the public and private sectors.

  16. Measurement methodology of natural radioactivity in the thermal establishments

    International Nuclear Information System (INIS)

    Ameon, R.; Robe, M.C.

    2004-11-01

    Thermal baths have been identified as an activity liable to expose workers to ionizing radiation from natural sources of radon and radon-220. The new regulations oblige these facilities to carry out radioactivity measurements. The principal exposure pathways are inhalation of radon and its daughters, exposure to gamma radiation, and ingestion of radioelements in thermal waters. IRSN proposes two methods for measuring natural radioactivity in application of the regulations on the protection of the public and workers. Some principles for reducing exposure to radon are recalled. (N.C.)

  17. Laser scattering methodology for measuring particulates in the air

    Directory of Open Access Journals (Sweden)

    Carlo Giglioni

    2009-03-01

    A description is given of the laser scattering method for measuring PM10, PM2.5 and PM1 dusts in confined environments (museums, libraries, archives, art galleries, etc.). Such equipment presents many advantages in comparison with that currently in use, from both an analytic and a functional point of view.

  18. Sensible organizations: technology and methodology for automatically measuring organizational behavior.

    Science.gov (United States)

    Olguin Olguin, Daniel; Waber, Benjamin N; Kim, Taemie; Mohan, Akshay; Ara, Koji; Pentland, Alex

    2009-02-01

    We present the design, implementation, and deployment of a wearable computing platform for measuring and analyzing human behavior in organizational settings. We propose the use of wearable electronic badges capable of automatically measuring the amount of face-to-face interaction, conversational time, physical proximity to other people, and physical activity levels in order to capture individual and collective patterns of behavior. Our goal is to be able to understand how patterns of behavior shape individuals and organizations. By using on-body sensors in large groups of people for extended periods of time in naturalistic settings, we have been able to identify, measure, and quantify social interactions, group behavior, and organizational dynamics. We deployed this wearable computing platform in a group of 22 employees working in a real organization over a period of one month. Using these automatic measurements, we were able to predict employees' self-assessments of job satisfaction and their own perceptions of group interaction quality by combining data collected with our platform and e-mail communication data. In particular, the total amount of communication was predictive of both of these assessments, and betweenness in the social network exhibited a high negative correlation with group interaction satisfaction. We also found that physical proximity and e-mail exchange had a negative correlation of r = -0.55 (p < 0.01), which has far-reaching implications for past and future research on social networks.
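
    The network step described above can be illustrated with standard tools. The sketch below builds a small interaction graph, computes betweenness centrality, and correlates it with satisfaction scores; the data are invented, and networkx is assumed here only as a convenient stand-in for the study's own analysis pipeline.

        # Betweenness centrality from an interaction graph, correlated with
        # self-reported satisfaction (all data invented for illustration).
        import networkx as nx
        from statistics import correlation  # Python 3.10+

        G = nx.Graph()
        G.add_edges_from([
            ("ana", "bo"), ("ana", "cy"), ("bo", "cy"),
            ("cy", "dee"), ("dee", "ed"), ("cy", "ed"),
        ])

        betweenness = nx.betweenness_centrality(G)
        satisfaction = {"ana": 4.1, "bo": 4.3, "cy": 3.2, "dee": 3.8, "ed": 4.0}

        people = sorted(G.nodes)
        print(correlation([betweenness[p] for p in people],
                          [satisfaction[p] for p in people]))

    In this toy graph "cy" bridges the two clusters, has the highest betweenness and the lowest satisfaction, so the correlation comes out negative, echoing the finding reported above.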

  19. Methodological Issues in Measures of Imitative Reaction Times

    Science.gov (United States)

    Aicken, Michael D.; Wilson, Andrew D.; Williams, Justin H. G.; Mon-Williams, Mark

    2007-01-01

    Ideomotor (IM) theory suggests that observing someone else perform an action activates an internal motor representation of that behaviour within the observer. Evidence supporting the case for an ideomotor theory of imitation has come from studies that show imitative responses to be faster than the same behavioural measures performed in response to…

  20. Investigation of an Error Theory for Conjoint Measurement Methodology.

    Science.gov (United States)

    1983-05-01

    (Nygren, 1982; Srinivasan and Shocker, 1973a, 1973b; Ullrich and Cummins, 1973; Takane, Young, and de Leeuw, 1980; Young, 1972). [...] procedures as a diagnostic tool. Specifically, they used the computed STRESS value and a measure of fit they called PRECAP that could be obtained

  1. Flux Measurements in Trees: Methodological Approach and Application to Vineyards

    Directory of Open Access Journals (Sweden)

    Francesca De Lorenzi

    2008-03-01

    In this paper a review of two sap flow methods for measuring transpiration in vineyards is presented. The objective of this work is to examine the potential for detecting transpiration in trees in response to environmental stresses, particularly high concentrations of ozone (O3) in the troposphere. The methods described are the stem heat balance and the thermal dissipation probe; advantages and disadvantages of each method are detailed. Applications of both techniques are shown in two large commercial vineyards in Southern Italy (Apulia and Sicily) under a semi-arid climate. Sap flow techniques allow transpiration to be measured at the plant scale, and an upscaling procedure is necessary to calculate transpiration at the whole-stand level. Here a general technique to link the value of transpiration at the plant level to the canopy value is presented, based on experimental relationships between transpiration and biometric characteristics of the trees. In both vineyards transpiration measured by sap flow methods compares well with evapotranspiration measured by micrometeorological techniques at the canopy scale. Moreover, the soil evaporation component has been quantified. In conclusion, comments are given on the suitability of the sap flow methods for studying the interactions between trees and ozone.
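
    The upscaling procedure mentioned above can be sketched as a two-step calculation: fit a relationship between per-plant transpiration and a biometric variable on the sampled vines, then apply it to the stand. The values, and the choice of leaf area as the biometric variable, are illustrative assumptions.

        # Upscale sap-flow transpiration from sampled vines to the stand.
        import numpy as np

        leaf_area = np.array([2.1, 3.4, 4.0, 5.2, 6.3])  # m^2 per vine (sampled)
        sap_flow = np.array([1.3, 2.0, 2.5, 3.1, 3.9])   # kg H2O/vine/day

        a, b = np.polyfit(leaf_area, sap_flow, 1)        # T = a*LA + b

        vines_per_ha = 4000
        mean_leaf_area = 4.1                             # m^2, from a stand survey
        stand_T = vines_per_ha * (a * mean_leaf_area + b)  # kg/ha/day
        # 1 mm of water = 10,000 kg/ha, so divide to express as mm/day
        print(f"stand transpiration: {stand_T:.0f} kg/ha/day "
              f"= {stand_T / 10000.0:.2f} mm/day")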

  2. The relevance of segments reports – measurement methodology

    Directory of Open Access Journals (Sweden)

    Tomasz Zimnicki

    2017-09-01

    The segment report is one of the areas of financial statements, and it obliges a company to provide information about the economic situation in each of its activity areas. The article evaluates the change of segment reporting standards from IAS 14R to IFRS 8 in the context of relevance. It presents the construction of a measure which allows the relevance of segment disclosures to be determined. The created measure was used to study periodical reports published by companies listed on the main market of the Warsaw Stock Exchange for three reporting periods - 2008, 2009 and 2013. Based on the research results, it was found that the change of segment reporting standards from IAS 14R to IFRS 8 was legitimate in the context of relevance.

  3. Improving the Bandwidth Selection in Kernel Equating

    Science.gov (United States)

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
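
    The proposed selector is the classical Silverman rule, which replaces the penalty-minimization search with a closed-form bandwidth. The sketch below is the generic textbook rule on a continuous sample; the authors presumably adapt it to the discretized score distributions used in kernel equating.

        # Silverman's rule of thumb: h = 0.9 * min(sigma, IQR/1.34) * n^(-1/5)
        import numpy as np

        def silverman_bandwidth(x):
            x = np.asarray(x, dtype=float)
            sigma = x.std(ddof=1)
            iqr = np.subtract(*np.percentile(x, [75, 25]))
            return 0.9 * min(sigma, iqr / 1.34) * len(x) ** (-0.2)

        rng = np.random.default_rng(0)
        scores = rng.normal(50, 10, size=2000)  # illustrative score sample
        print(silverman_bandwidth(scores))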

  4. Spectral reflectance measurement methodologies for TUZ Golu field campaign

    CSIR Research Space (South Africa)

    Boucher, Y

    2011-07-01

    ... panel. However, it is possible to take this into account in the uncertainty budget. 2.2. Instrumentation and sampling area: All of the teams except INPE used a FieldSpec ASD spectroradiometer. In this case, the user has to choose the aperture... of the objective and the ASD configuration (the number of elementary spectra averaged to get one measurement, here typically 10, and the number of dark current acquisitions, here typically 25). The spectroradiometer must also be optimized from time to time...

  5. Performance Evaluation and Measurement of the Organization in Strategic Analysis and Control: Methodological Aspects

    OpenAIRE

    Živan Ristić; Neđo Balaban

    2006-01-01

    Information acquired by measurement and evaluation is a necessary condition for good decision-making in strategic management. This work deals with: (a) methodological aspects of evaluation (kinds of evaluation, metaevaluation) and measurement (the supposition of isomorphism in measurement, kinds and levels of measurement, errors in measurement, and the basic characteristics of measurement); (b) evaluation and measurement of the potential and accomplishments of the organization in the Kaplan-Norton perspect...

  6. Methodological issues in measures of imitative reaction times.

    Science.gov (United States)

    Aicken, Michael D; Wilson, Andrew D; Williams, Justin H G; Mon-Williams, Mark

    2007-04-01

    Ideomotor (IM) theory suggests that observing someone else perform an action activates an internal motor representation of that behaviour within the observer. Evidence supporting the case for an ideomotor theory of imitation has come from studies that show imitative responses to be faster than the same behavioural measures performed in response to spatial cues. In an attempt to replicate these findings, we manipulated the salience of the visual cue and found that we could reverse the advantage of the imitative cue over the spatial cue. We suggest that participants utilised a simple visuomotor mechanism to perform all aspects of this task, with performance being driven by the relative visual salience of the stimuli. Imitation is a more complex motor skill that would constitute an inefficient strategy for rapid performance.

  7. [Measuring nursing care times--methodologic and documentation problems].

    Science.gov (United States)

    Bartholomeyczik, S; Hunstein, D

    2001-08-01

    The time needed for nursing care is an important measurement as a basis for financing care. In Germany the Long Term Care Insurance (LTCI) reimburses nursing care depending on the time family caregivers need to complete selected activities. As a basis for assessment, the LTCI recommends certain time ranges for these activities, which are wholly compensatory. The purpose is to enhance assessment justice and comparability. Using the example of a German research project that investigated the duration of these activities and the reasons for differences, questions are raised about definition and interpretation problems. There are definition problems, since caring activities, especially in private households, are almost never performed as clearly defined modules. Moreover, different activities are often performed simultaneously. However, the most important question is what exactly time figures can say about the essentials of nursing care.

  8. Methodological NMR imaging developments to measure cerebral perfusion

    International Nuclear Information System (INIS)

    Pannetier, N.

    2010-12-01

    This work focuses on acquisition techniques and physiological models that allow characterization of cerebral perfusion by MRI. The arterial input function (AIF), on which many models are based, is measured by an optical imaging technique at the carotid artery in rats. The reproducibility and repeatability of the AIF are discussed and a model function is proposed. We then compare two techniques for measuring the vessel size index (VSI) in rats bearing a glioma. The reference technique, using a USPIO contrast agent (CA), is compared with the dynamic approach, which estimates this parameter during the passage of a bolus of Gd. The latter technique has the advantage of being usable clinically. The results obtained at 4.7 T by both approaches are similar, and the use of VSI in clinical protocols at high field is strongly encouraged. The mechanisms involved (R1 and R2* relaxivities) were then studied using a multi-gradient-echo approach. A multi-echo spiral sequence is developed and a method that allows refocusing between each echo is presented. This sequence is used to characterize the impact of R1 effects during the passage of two successive injections of Gd. Finally, we developed a tool for simulating the NMR signal on a 2D geometry, taking into account the permeability of the BBB and the diffusion of the CA in the interstitial space. At short TE, the effect of diffusion on the signal is negligible. In contrast, the effects of diffusion and permeability may be separated at long echo times. Finally we show that during extravasation of the CA, the local magnetic field homogenization due to the decrease of the magnetic susceptibility difference at vascular interfaces is quickly balanced by the perturbations induced by the increase of the magnetic susceptibility difference at cellular interfaces in the extravascular compartment. (author)

  9. Changing methodology for measuring airborne radioactive discharges from nuclear facilities

    International Nuclear Information System (INIS)

    Glissmeyer, J.A.; Ligotke, M.W.

    1995-05-01

    The US Environmental Protection Agency (USEPA) requires that measurements of airborne radioactive discharges from nuclear facilities be performed following outdated methods contained in the American National Standards Institute (ANSI) N13.1-1969 Guide to Sampling Airborne Radioactive Materials in Nuclear Facilities. Improved methods are being introduced via two paths. First, the ANSI standard is being revised, and second, EPA's equivalency-granting process is being used to implement new technology on a case-by-case or broad basis. The ANSI standard is being revised by a working group under the auspices of the Health Physics Society Standards Committee. The revised standard includes updated methods based on current technology and a performance-based approach to design. The performance-based standard will present new challenges, especially in the area of performance validation. Progress in revising the standard is discussed. The US Department of Energy recently received approval from the USEPA for an alternate approach to complying with air-sampling regulations. The alternate approach is similar to the revised ANSI standard. New design tools include new types of sample extraction probes and a model for estimating line losses for particles and radioiodine. Wind tunnel tests are being performed on various sample extraction probes for use at small stacks. The data show that single-point sampling probes are superior to ANSI N13.1-1969-style multiple-point sample extraction probes

  10. Measurement of plasma adenosine concentration: methodological and physiological considerations

    International Nuclear Information System (INIS)

    Gewirtz, H.; Brown, P.; Most, A.S.

    1987-01-01

    This study tested the hypothesis that measurements of plasma adenosine concentration made on samples of blood obtained in dipyridamole and EHNA (i.e., stopping solution) may be falsely elevated as a result of ongoing in vitro production and accumulation of adenosine during sample processing. Studies were performed with samples of anticoagulated blood obtained from anesthetized domestic swine. The adenosine concentration of ultrafiltered plasma was determined by HPLC. The following parameters were evaluated: (i) rate of clearance of [3H]adenosine added to plasma, (ii) endogenous adenosine concentration of matched blood samples obtained in stopping solution alone, stopping solution plus EDTA, and perchloric acid (PCA), (iii) plasma and erythrocyte endogenous adenosine concentration in nonhemolyzed samples, and (iv) plasma adenosine concentration of samples hemolyzed in the presence of stopping solution alone or stopping solution plus EDTA. We observed that (i) greater than or equal to 95% of [3H]adenosine added to plasma is removed from it by formed elements of the blood in less than 20 s, (ii) plasma adenosine concentration of samples obtained in stopping solution alone is generally 10-fold greater than that of matched samples obtained in stopping solution plus EDTA, (iii) deliberate mechanical hemolysis of blood samples obtained in stopping solution alone resulted in substantial augmentation of plasma adenosine levels in comparison with matched nonhemolyzed specimens; addition of EDTA to stopping solution prevented this, and (iv) adenosine content of blood samples obtained in PCA agreed closely with the sum of plasma and erythrocyte adenosine content of samples obtained in stopping solution plus EDTA.

  11. VAR Methodology Used for Exchange Risk Measurement and Prevention

    Directory of Open Access Journals (Sweden)

    Florentina Balu

    2006-05-01

    In this article we discuss one of the modern risk measuring techniques, Value-at-Risk (VaR). Currently central banks in major money centers, under the auspices of the BIS Basle Committee, adopt the VaR system to evaluate the market risk of their supervised banks. Bank regulators ask all commercial banks to report VaRs with their internal models. Value at risk (VaR) is a powerful tool for assessing market risk, but it also imposes a challenge. Its power is its generality. Unlike market risk metrics such as the Greeks, duration and convexity, or beta, which are applicable to only certain asset categories or certain sources of market risk, VaR is general. It is based on the probability distribution for a portfolio’s market value. VaR calculates the maximum loss expected (or worst-case scenario) on an investment, over a given time period and given a specified degree of confidence. There are three methods by which VaR can be calculated: the historical simulation, the variance-covariance method and the Monte Carlo simulation. The variance-covariance method is easiest because you need to estimate only two factors: average return and standard deviation. However, it assumes returns are well-behaved according to the symmetrical normal curve and that historical patterns will repeat into the future. The historical simulation improves on the accuracy of the VaR calculation, but requires more computational data; it also assumes that “past is prologue”. The Monte Carlo simulation is complex, but has the advantage of allowing users to tailor ideas about future patterns that depart from historical patterns.

  12. VAR Methodology Used for Exchange Risk Measurement and Prevention

    Directory of Open Access Journals (Sweden)

    Ion Stancu

    2006-03-01

    In this article we discuss one of the modern risk measuring techniques, Value-at-Risk (VaR). Currently central banks in major money centers, under the auspices of the BIS Basle Committee, adopt the VaR system to evaluate the market risk of their supervised banks. Bank regulators ask all commercial banks to report VaRs with their internal models. Value at risk (VaR) is a powerful tool for assessing market risk, but it also imposes a challenge. Its power is its generality. Unlike market risk metrics such as the Greeks, duration and convexity, or beta, which are applicable to only certain asset categories or certain sources of market risk, VaR is general. It is based on the probability distribution for a portfolio’s market value. VaR calculates the maximum loss expected (or worst-case scenario) on an investment, over a given time period and given a specified degree of confidence. There are three methods by which VaR can be calculated: the historical simulation, the variance-covariance method and the Monte Carlo simulation. The variance-covariance method is easiest because you need to estimate only two factors: average return and standard deviation. However, it assumes returns are well-behaved according to the symmetrical normal curve and that historical patterns will repeat into the future. The historical simulation improves on the accuracy of the VaR calculation, but requires more computational data; it also assumes that “past is prologue”. The Monte Carlo simulation is complex, but has the advantage of allowing users to tailor ideas about future patterns that depart from historical patterns.
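
    The variance-covariance method described in both records reduces to a closed-form expression under the normality assumption. A minimal sketch with illustrative figures:

        # Parametric (variance-covariance) VaR: V * (z_c * sigma - mu).
        from statistics import NormalDist

        portfolio_value = 1_000_000.0  # currency units
        mu = 0.0005                    # mean daily return
        sigma = 0.012                  # daily return volatility
        confidence = 0.99

        z = NormalDist().inv_cdf(confidence)
        var_1d = portfolio_value * (z * sigma - mu)
        print(f"1-day 99% VaR: {var_1d:,.0f}")

        # 10-day horizon under i.i.d. returns (square-root-of-time rule)
        var_10d = portfolio_value * (z * sigma * 10 ** 0.5 - mu * 10)
        print(f"10-day 99% VaR: {var_10d:,.0f}")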

  13. Methodologies for Measuring Judicial Performance: The Problem of Bias

    Directory of Open Access Journals (Sweden)

    Jennifer Elek

    2014-12-01

    Concerns about gender and racial bias in the survey-based evaluations of judicial performance common in the United States have persisted for decades. Consistent with a large body of basic research in the psychological sciences, recent studies confirm that the results from these JPE surveys are systematically biased against women and minority judges. In this paper, we explain the insidious manner in which performance evaluations may be biased, describe some techniques that may help to reduce expressions of bias in judicial performance evaluation surveys, and discuss the potential problem such biases may pose in other common methods of performance evaluation used in the United States and elsewhere. We conclude by highlighting the potential adverse consequences of judicial performance evaluation programs that rely on biased measurements.

  14. Methodological possibilities for using the electron and ion energy balance in thermospheric complex measurements

    International Nuclear Information System (INIS)

    Serafimov, K.B.; Serafimova, M.K.

    1991-01-01

    A combination of ground-based measurements for determining basic thermospheric characteristics is proposed. An expression for the energy transport between components of space plasma is also derived and discussed within the framework of the presented methodology, which can be divided into the following major sections: 1) application of ionosonde measurements, absorption measurements, and TEC measurements using Faraday rotation or the differential Doppler effect; 2) ground-based airglow measurements; 3) airglow and plasma satellite measurements. 9 refs

  15. Very broad bandwidth klystron amplifiers

    Science.gov (United States)

    Faillon, G.; Egloff, G.; Farvet, C.

    Large surveillance radars use transmitters at peak power levels of around one MW and average levels of a few kW, and possibly several tens of kW, in S band, or even C band. In general, the amplification stage of these transmitters is a microwave power tube, frequently a klystron. Although designers often turn to klystrons because of their good peak and average power capabilities, they still see them as narrow band amplifiers, undoubtedly because of their resonant cavities which, at first sight, would seem highly selective. But, with the progress of recent years, it has now become quite feasible to use these tubes in installations requiring bandwidths in excess of 10 - 12 percent, and even 15 percent, at 1 MW peak for example, in S-band.

  16. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    International Nuclear Information System (INIS)

    Tarifeño-Saldivia, Ariel; Pavez, Cristian; Soto, Leopoldo; Mayer, Roberto E

    2015-01-01

This work introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from detection of the burst of neutrons. An improvement of more than one order of magnitude in the accuracy of a paraffin wax moderated ³He-filled tube is obtained by using this methodology with respect to previous calibration methods. (paper)
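
    The statistical model itself is given in the paper; as a rough numerical illustration of the idea, the sketch below estimates the number of detected events as the burst charge divided by the mean single-event charge obtained from a pulse-mode calibration, then scales by a detection efficiency. All names and values are illustrative assumptions, not the paper's data.

```python
import math

def neutron_yield(Q_burst, q_mean, q_std, n_cal, efficiency):
    """Estimate neutron burst yield from the charge accumulated by a
    proportional counter, given a pulse-mode calibration of the mean
    single-event charge. All values here are illustrative."""
    n_detected = Q_burst / q_mean                 # estimated number of detected events
    # relative uncertainty: Poisson counting term plus calibration term
    rel_u = math.sqrt(1.0 / n_detected + (q_std / q_mean) ** 2 / n_cal)
    yield_est = n_detected / efficiency           # scale by detection efficiency
    return yield_est, yield_est * rel_u

y, u = neutron_yield(Q_burst=2.4e-9, q_mean=1.6e-13, q_std=4e-14,
                     n_cal=5000, efficiency=1.2e-4)
print(f"yield = {y:.3e} +/- {u:.3e} neutrons")
```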

  17. Radionuclide measurements, via different methodologies, as tool for geophysical studies on Mt. Etna

    Energy Technology Data Exchange (ETDEWEB)

    Morelli, D., E-mail: daniela.morelli@ct.infn.it [Dipartimento di Fisica e Astronomia, Universita di Catania, via S. Sofia, 64 I-95123 Catania (Italy); Istituto Nazionale di Fisica Nucleare- Sezione di Catania, via S. Sofia, 64 I-95123 Catania (Italy); Imme, G. [Dipartimento di Fisica e Astronomia, Universita di Catania, via S. Sofia, 64 I-95123 Catania (Italy); Istituto Nazionale di Fisica Nucleare- Sezione di Catania, via S. Sofia, 64 I-95123 Catania (Italy); Altamore, I.; Cammisa, S. [Dipartimento di Fisica e Astronomia, Universita di Catania, via S. Sofia, 64 I-95123 Catania (Italy); Giammanco, S. [Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Catania, Piazza Roma, 2, I-95123 Catania (Italy); La Delfa, S. [Dipartimento di Scienze Geologiche, Universita di Catania, Corso Italia,57 I-95127 Catania (Italy); Mangano, G. [Dipartimento di Fisica e Astronomia, Universita di Catania, via S. Sofia, 64 I-95123 Catania (Italy); Neri, M. [Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Catania, Piazza Roma, 2, I-95123 Catania (Italy); Patane, G. [Dipartimento di Scienze Geologiche, Universita di Catania, Corso Italia,57 I-95127 Catania (Italy)

    2011-10-01

Natural radioactivity measurements represent an interesting tool to study geodynamical events or soil geophysical characteristics. In this direction we carried out, over recent years, several radionuclide monitoring campaigns in both the volcanic and tectonic areas of eastern Sicily. In particular we report in-soil radon investigations in a tectonic area, including both laboratory and in-site measurements, applying three different methodologies based on both active and passive detection systems. The active detection devices consisted of solid-state silicon detectors equipped in portable systems for short-time measurements and for long-time monitoring. The passive technique consisted of solid-state nuclear track detectors (SSNTD), CR-39 type, and allowed integrated measurements. The performances of the three methodologies were compared according to different kinds of monitoring. In general the results obtained with the three methodologies seem to be in agreement with each other and reflect the tectonic settings of the investigated area.

  18. Development of a field measurement methodology for studying the thermal indoor environment in hybrid GEOTABS buildings

    DEFF Research Database (Denmark)

    Kazanci, Ongun Berk; Khovalyg, Dolaana; Olesen, Bjarne W.

    2018-01-01

The three demonstration buildings were an office building in Luxembourg, an elderly care home in Belgium, and an elementary school in the Czech Republic. All of these buildings are equipped with hybrid GEOTABS systems; however, they vary in size and function, which requires a unique measurement methodology for studying them. These buildings already have advanced Building Management Systems (BMS); however, a more detailed measurement plan was needed for the purposes of the project to document the current performance of these systems regarding thermal indoor environment and energy performance, and to be able to document the improvements after the implementation of the MPC. This study provides the details of the developed field measurement methodology for each of these buildings to study the indoor environmental quality (IEQ) in detail. The developed measurement methodology can be applied to other…

  19. Radionuclide measurements, via different methodologies, as tool for geophysical studies on Mt. Etna

    International Nuclear Information System (INIS)

    Morelli, D.; Imme, G.; Altamore, I.; Cammisa, S.; Giammanco, S.; La Delfa, S.; Mangano, G.; Neri, M.; Patane, G.

    2011-01-01

Natural radioactivity measurements represent an interesting tool to study geodynamical events or soil geophysical characteristics. In this direction we carried out, over recent years, several radionuclide monitoring campaigns in both the volcanic and tectonic areas of eastern Sicily. In particular we report in-soil radon investigations in a tectonic area, including both laboratory and in-site measurements, applying three different methodologies based on both active and passive detection systems. The active detection devices consisted of solid-state silicon detectors equipped in portable systems for short-time measurements and for long-time monitoring. The passive technique consisted of solid-state nuclear track detectors (SSNTD), CR-39 type, and allowed integrated measurements. The performances of the three methodologies were compared according to different kinds of monitoring. In general the results obtained with the three methodologies seem to be in agreement with each other and reflect the tectonic settings of the investigated area.

  20. Methodology of clinical measures of healthcare quality delivered to patients with cardiovascular diseases

    Directory of Open Access Journals (Sweden)

    Posnenkova O.M.

    2014-03-01

Full Text Available We present the results of implementing the methodology proposed by the American College of Cardiology and the American Heart Association (ACC/AHA) for the development of Russian clinical quality measures for patients with arterial hypertension, coronary heart disease and chronic heart failure. The created quality measures cover the key elements of medical care that directly influence the clinical outcomes of treatment.

  1. Controlling Laser Plasma Instabilities Using Temporal Bandwidth

    Science.gov (United States)

    Tsung, Frank; Weaver, J.; Lehmberg, R.

    2016-10-01

We are performing particle-in-cell simulations using the code OSIRIS to study the effects of laser plasma interactions in the presence of temporal bandwidth under conditions relevant to current and future experiments on the NIKE laser. Our simulations show that, for sufficiently large bandwidth (where the inverse bandwidth is comparable with the linear growth time), the saturation level and the distribution of hot electrons can be affected by the addition of temporal bandwidth (which can be accomplished in experiments using beam smoothing techniques such as ISI). We will quantify these effects and investigate higher dimensional effects such as laser speckles. This work is supported by DOE and NRL.

  2. Optimal filter bandwidth for pulse oximetry

    Science.gov (United States)

    Stuban, Norbert; Niwayama, Masatsugu

    2012-10-01

    Pulse oximeters contain one or more signal filtering stages between the photodiode and microcontroller. These filters are responsible for removing the noise while retaining the useful frequency components of the signal, thus improving the signal-to-noise ratio. The corner frequencies of these filters affect not only the noise level, but also the shape of the pulse signal. Narrow filter bandwidth effectively suppresses the noise; however, at the same time, it distorts the useful signal components by decreasing the harmonic content. In this paper, we investigated the influence of the filter bandwidth on the accuracy of pulse oximeters. We used a pulse oximeter tester device to produce stable, repetitive pulse waves with digitally adjustable R ratio and heart rate. We built a pulse oximeter and attached it to the tester device. The pulse oximeter digitized the current of its photodiode directly, without any analog signal conditioning. We varied the corner frequency of the low-pass filter in the pulse oximeter in the range of 0.66-15 Hz by software. For the tester device, the R ratio was set to R = 1.00, and the R ratio deviation measured by the pulse oximeter was monitored as a function of the corner frequency of the low-pass filter. The results revealed that lowering the corner frequency of the low-pass filter did not decrease the accuracy of the oxygen level measurements. The lowest possible value of the corner frequency of the low-pass filter is the fundamental frequency of the pulse signal. We concluded that the harmonics of the pulse signal do not contribute to the accuracy of pulse oximetry. The results achieved by the pulse oximeter tester were verified by human experiments, performed on five healthy subjects. The results of the human measurements confirmed that filtering out the harmonics of the pulse signal does not degrade the accuracy of pulse oximetry.
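
    As an illustration of the experiment described above, the sketch below builds synthetic red and infrared PPG signals with equal AC/DC ratios (so the true R ratio is 1.00), low-pass filters both channels with a Butterworth filter, and computes R for several corner frequencies. The signal parameters and the peak-to-peak AC estimate are assumptions made for this sketch, not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

np.random.seed(0)
fs = 100.0                                  # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
f_pulse = 1.2                               # 72 bpm fundamental

def ppg(ac, dc):
    """Synthetic PPG: fundamental plus two harmonics plus noise."""
    s = ac * (np.sin(2 * np.pi * f_pulse * t)
              + 0.3 * np.sin(2 * np.pi * 2 * f_pulse * t)
              + 0.1 * np.sin(2 * np.pi * 3 * f_pulse * t))
    return dc + s + 0.01 * np.random.randn(t.size)

def r_ratio(red, ir, corner_hz):
    """Low-pass both channels, then compute R = (AC/DC)_red / (AC/DC)_ir."""
    b, a = butter(2, corner_hz / (fs / 2))
    red_f, ir_f = filtfilt(b, a, red), filtfilt(b, a, ir)
    ac = lambda x: x.max() - x.min()        # crude peak-to-peak AC estimate
    return (ac(red_f) / red_f.mean()) / (ac(ir_f) / ir_f.mean())

red, ir = ppg(0.02, 1.0), ppg(0.02, 1.0)    # equal AC/DC ratios -> true R = 1.00
for fc in (1.5, 5.0, 15.0):                 # corner frequencies to compare
    print(fc, round(r_ratio(red, ir, fc), 3))
```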

  3. The Methodology of Doppler-Derived Central Blood Flow Measurements in Newborn Infants

    Directory of Open Access Journals (Sweden)

    Koert A. de Waal

    2012-01-01

    Full Text Available Central blood flow (CBF measurements are measurements in and around the heart. It incorporates cardiac output, but also measurements of cardiac input and assessment of intra- and extracardiac shunts. CBF can be measured in the central circulation as right or left ventricular output (RVO or LVO and/or as cardiac input measured at the superior vena cava (SVC flow. Assessment of shunts incorporates evaluation of the ductus arteriosus and the foramen ovale. This paper describes the methodology of CBF measurements in newborn infants. It provides a brief overview of the evolution of Doppler ultrasound blood flow measurements, basic principles of Doppler ultrasound, and an overview of all used methodology in the literature. A general guide for interpretation and normal values with suggested cutoffs of CBFs are provided for clinical use.

  4. On semidefinite programming bounds for graph bandwidth

    NARCIS (Netherlands)

    de Klerk, E.; Nagy, M.; Sotirov, R.

    2013-01-01

In this paper, we propose two new lower bounds on graph bandwidth and cyclic bandwidth based on semidefinite programming (SDP) relaxations of the quadratic assignment problem. We compare the new bounds with two other SDP bounds reported in [A. Blum, G. Konjevod, R. Ravi, and S. Vempala, …].
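
    For readers unfamiliar with the quantity being bounded: the bandwidth of a graph is the minimum, over all orderings of its vertices, of the largest index difference across an edge. The brute-force sketch below illustrates the definition on a small graph; it is exactly this factorial blow-up that motivates tractable lower bounds such as the SDP relaxations studied in the paper.

```python
from itertools import permutations

def bandwidth(n, edges):
    """Exact graph bandwidth by brute force over all vertex labelings.
    Only feasible for small n; SDP bounds exist to avoid this blow-up."""
    best = n
    for perm in permutations(range(n)):       # perm[v] = label of vertex v
        width = max(abs(perm[u] - perm[v]) for u, v in edges)
        best = min(best, width)
    return best

# 6-cycle: the bandwidth of a cycle is 2
edges = [(i, (i + 1) % 6) for i in range(6)]
print(bandwidth(6, edges))   # -> 2
```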

  5. Directing Traffic: Managing Internet Bandwidth Fairly

    Science.gov (United States)

    Paine, Thomas A.; Griggs, Tyler J.

    2008-01-01

    Educational institutions today face budgetary restraints and scarce resources, complicating the decision of how to allot bandwidth for campus network users. Additionally, campus concerns over peer-to-peer networking (specifically outbound Internet traffic) have increased because of bandwidth and copyright issues. In this article, the authors…

  6. 47 CFR 2.202 - Bandwidths.

    Science.gov (United States)

    2010-10-01

The necessary bandwidth is expressed by three numerals and one letter; the letter occupies the position of the decimal point and represents the unit of bandwidth (e.g., 8 kHz is written 8K00). Examples from the rule include: sound broadcasting, double-sideband, speech and music of the quality desired (modulating frequency M = 4000 Hz), necessary bandwidth 8000 Hz = 8 kHz, designated 8K00A3E; and sound broadcasting, single-sideband, speech and music of the quality desired (M = 4000 Hz), necessary bandwidth 4000 Hz = 4 kHz, designated 4K00R3E.

  7. Bandwidth-on-demand motion control

    NARCIS (Netherlands)

    Van Loon, S.J.L.M.; Hunnekens, B.G.B.; Simon, A.S.; van de Wouw, N.; Heemels, W.P.M.H.

    2018-01-01

In this brief, we introduce a 'bandwidth-on-demand' variable-gain control (VGC) strategy that allows for a varying bandwidth of the feedback controller. The proposed VGC can achieve improved performance, given time-varying, reference-dependent performance requirements, compared with linear fixed-gain controllers.

  8. Measurement of the porosity of amorphous materials by gamma ray transmission methodology

    International Nuclear Information System (INIS)

    Pottker, Walmir Eno; Appoloni, Carlos Roberto

    2000-01-01

In this work, the measurement of the total porosity of TRe soil, Berea sandstone rocks and porous ceramic samples is presented. For the determination of the total porosity, the Archimedes method (conventional) and the gamma-ray transmission methodology were employed. The porosity measurement using the gamma methodology has a significant advantage with respect to the conventional method due to its fast and non-destructive determination, and also because it supplies results with a finer characterization at small scales with respect to the heterogeneity of the porosity. The conventional methodology presents good results only for homogeneous samples. The experimental set-up for the gamma-ray transmission technique consisted of a ²⁴¹Am source (59.53 keV), a NaI(Tl) scintillation detector, collimators, an XYZ micrometric table and standard gamma spectrometry electronics connected to a multichannel analyser. (author)
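
    A minimal sketch of the underlying physics, assuming the Beer-Lambert law: the transmitted photopeak intensity gives the bulk density of the sample, and the total porosity follows from comparison with the solid (particle) density. All numerical values below are illustrative, not the paper's measurements.

```python
import math

def total_porosity(I, I0, mu_m, x, rho_solid):
    """Total porosity from gamma-ray transmission (Beer-Lambert law).
    I, I0     : transmitted / incident photopeak intensities (counts/s)
    mu_m      : mass attenuation coefficient of the solid at ~59.5 keV (cm^2/g)
    x         : sample thickness along the beam (cm)
    rho_solid : particle (solid) density of the material (g/cm^3)"""
    rho_bulk = -math.log(I / I0) / (mu_m * x)   # bulk density from attenuation
    return 1.0 - rho_bulk / rho_solid

# Illustrative numbers only (not from the paper):
print(total_porosity(I=5200.0, I0=12000.0, mu_m=0.25, x=2.0, rho_solid=2.65))
```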

  9. Modeling the Effect of Bandwidth Allocation on Network Performance

    African Journals Online (AJOL)

... The proposed model showed improved performance for CDMA networks, but further increase in the bandwidth did not benefit the network; (iii) a reliability measure such as the spectral efficiency is therefore useful to overcome the limitation in (ii). Keywords: Coverage Capacity, CDMA, Mobile Network, Network Throughput ...

  10. Evaluation of methodological aspects of digestibility measurements in ponies fed different haylage to concentrate ratios

    NARCIS (Netherlands)

    Schaafstra, F J W C; van Doorn, D A; Schonewille, J T; van Riet, M M J; Hendriks, W H

    2017-01-01

    Methodological aspects of digestibility measurements were studied in four Welsh pony geldings consuming haylage-based diets with increasing proportions of a pelleted concentrate according to a 4×4 Latin square design experiment. Ponies were fed four experimental, iso-energetic (net energy (NE)

  11. Evaluation of methodological aspects of digestibility measurements in ponies fed different haylage to concentrate ratios

    NARCIS (Netherlands)

    Schaafstra, F.J.W.C.; Doorn, van D.A.; Schonewille, J.T.; Riet, van M.M.J.; Visser, P.; Blok, M.C.; Hendriks, W.H.

    2017-01-01

    Methodological aspects of digestibility measurements were studied in four Welsh pony geldings consuming haylage-based diets with increasing proportions of a pelleted concentrate according to a 4×4 Latin square design experiment. Ponies were fed four experimental, iso-energetic (net energy (NE)

  12. Measuring hand hygiene compliance rates in different special care settings: a comparative study of methodologies

    Directory of Open Access Journals (Sweden)

    Thyago Pereira Magnus

    2015-04-01

    Conclusions: Hand hygiene compliance was reasonably high in these units, as measured by direct observation. However, a lack of correlation with results obtained by other methodologies brings into question the validity of direct observation results, and suggests that periodic audits using other methods may be needed.

  13. Translating Oral Health-Related Quality of Life Measures: Are There Alternative Methodologies?

    Science.gov (United States)

    Brondani, Mario; He, Sarah

    2013-01-01

    Translating existing sociodental indicators to another language involves a rigorous methodology, which can be costly. Free-of-charge online translator tools are available, but have not been evaluated in the context of research involving quality of life measures. To explore the value of using online translator tools to develop oral health-related…

  14. The Self-Concept. Volume 1, A Review of Methodological Considerations and Measuring Instruments. Revised Edition.

    Science.gov (United States)

    Wylie, Ruth C.

    This volume of the revised edition describes and evaluates measurement methods, research designs, and procedures which have been or might appropriately be used in self-concept research. Working from the perspective that self-concept or phenomenal personality theories can be scientifically investigated, methodological flaws and questionable…

  15. Covariance methodology applied to uncertainties in I-126 disintegration rate measurements

    International Nuclear Information System (INIS)

    Fonseca, K.A.; Koskinas, M.F.; Dias, M.S.

    1996-01-01

The covariance methodology applied to uncertainties in ¹²⁶I disintegration rate measurements is described. Two different coincidence systems were used due to the complex decay scheme of this radionuclide. The parameters involved in the determination of the disintegration rate in each experimental system present correlated components. In this case, the conventional statistical methods to determine the uncertainties (law of propagation) result in wrong values for the final uncertainty. Therefore, use of the covariance matrix methodology is necessary. The data from both systems were combined taking into account all possible correlations between the partial uncertainties. (orig.)
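
    As a generic illustration of why the full covariance matrix is needed, the sketch below propagates correlated input uncertainties through a simplified stand-in model using the law of propagation with covariances (variance = J·Σ·Jᵀ). The model, values and correlation are assumptions made for this sketch, not the ¹²⁶I measurement system.

```python
import numpy as np

# Disintegration rate N0 = Nc / (eff_beta * p_gamma) -- a simplified stand-in
# for the coincidence-counting model; values and correlations are illustrative.
x = np.array([1.25e4, 0.62, 0.85])          # [coincidence rate, beta eff., gamma prob.]

def model(v):
    nc, eff, pg = v
    return nc / (eff * pg)

# Jacobian of the model, evaluated numerically
eps = 1e-6
J = np.array([(model(x + eps * np.eye(3)[i]) - model(x)) / eps for i in range(3)])

# Covariance matrix of the inputs: diagonal variances plus one correlated pair
u = np.array([60.0, 0.008, 0.004])          # standard uncertainties
corr = np.eye(3); corr[1, 2] = corr[2, 1] = 0.5   # efficiency terms correlated
cov = np.outer(u, u) * corr

variance = J @ cov @ J                      # law of propagation with covariances
print(model(x), np.sqrt(variance))          # ignoring corr[1,2] would understate this
```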

  16. Methodology of ionizing radiation measurement, from x-ray equipment, for radiation protection

    International Nuclear Information System (INIS)

    Caballero, Katia C.S.; Borges, Jose C.

    1996-01-01

Most X-ray beams used for diagnostics involve short exposure times (milliseconds); exceptions are those used in fluoroscopy. The measuring instruments (area monitors with ionization chambers or Geiger tubes) used in hospitals and clinics generally have characteristic response times that are not well suited to the temporal length of such X-ray beams. Our objective was to analyse commercially available instruments and to prepare a measurement methodology for direct and secondary beams, in order to evaluate protection barriers for beams used in diagnostic radiology installations. (author)

  17. Theoretical framework and methodological development of common subjective health outcome measures in osteoarthritis: a critical review

    Directory of Open Access Journals (Sweden)

    Johnston Marie

    2007-03-01

Full Text Available Abstract Subjective measures involving clinician ratings or patient self-assessments have become recognised as an important tool for the assessment of health outcome. The value of a health outcome measure is usually assessed by a psychometric evaluation of its reliability, validity and responsiveness. However, psychometric testing involves an accumulation of evidence and has recognised limitations. It has been suggested that an evaluation of how well a measure has been developed would be a useful additional criterion in assessing the value of a measure. This paper explored the theoretical background and methodological development of subjective health status measures commonly used in osteoarthritis research. Fourteen subjective health outcome measures commonly used in osteoarthritis research were examined. Each measure was explored on the basis of its (i) theoretical framework (was there a definition of what was being assessed, and was it part of a theoretical model?) and (ii) methodological development (what was the scaling strategy, how were the items generated and reduced, what was the response format, and what was the scoring method?). Only the AIMS, SF-36 and WHOQOL defined what they were assessing (i.e. the construct of interest), and no measure assessed was part of a theoretical model. None of the clinician report measures appeared to have implemented a scaling procedure or described the rationale for the items selected or the scoring system. Of the patient self-report measures, the AIMS, MPQ, OXFORD, SF-36, WHOQOL and WOMAC appeared to follow a standard psychometric scaling method. The DRP and EuroQol used alternative scaling methods. The review highlighted the general lack of theoretical framework for both clinician report and patient self-report measures. This review also drew attention to the wide variation in the methodological development of commonly used measures in OA. While, in general, the patient self-report measures had good methodological…

  18. The Influence of Measurement Methodology on the Accuracy of Electrical Waveform Distortion Analysis

    Science.gov (United States)

    Bartman, Jacek; Kwiatkowski, Bogdan

    2018-04-01

The present paper covers a review of documents that specify measurement methods for voltage waveform distortion. It also presents measurement stages of waveform components that are uncommon in the classic fundamentals of electrotechnics and signal theory, including the creation process of groups and subgroups of harmonics and interharmonics. Moreover, the paper discusses selected distortion factors of periodic waveforms and presents analyses that compare the values of these distortion indices. The measurements were carried out in cycle-by-cycle mode and the measurement methodology used complies with the IEC 61000-4-7 norm. The studies showed significant discrepancies between the values of the analyzed parameters.
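
    To make the grouping idea concrete: IEC 61000-4-7 analyses ten fundamental cycles at a time (giving a 5 Hz bin spacing in a 50 Hz system) and aggregates the FFT bins around each harmonic into harmonic groups. The sketch below, with an illustrative test waveform, follows that grouping rule; it is a simplified reading of the norm, not the authors' measurement chain.

```python
import numpy as np

fs, f1, cycles = 10_000.0, 50.0, 10          # sampling rate, fundamental, window length
N = int(fs * cycles / f1)                    # 10-cycle window -> 5 Hz bin spacing
t = np.arange(N) / fs

# Test waveform: fundamental plus 5th and 7th harmonics and an interharmonic
u = (230 * np.sqrt(2) * np.sin(2 * np.pi * f1 * t)
     + 12 * np.sin(2 * np.pi * 5 * f1 * t)
     + 8 * np.sin(2 * np.pi * 7 * f1 * t)
     + 2 * np.sin(2 * np.pi * 282.5 * t))

C = np.abs(np.fft.rfft(u)) / N * np.sqrt(2)  # RMS amplitude of each 5 Hz bin

def harmonic_group(n):
    """RMS harmonic group G_n per the IEC 61000-4-7 grouping rule (50 Hz system):
    full weight for the 9 bins around bin 10n, half weight for the two edge bins."""
    k = 10 * n
    g2 = 0.5 * C[k - 5] ** 2 + np.sum(C[k - 4:k + 5] ** 2) + 0.5 * C[k + 5] ** 2
    return np.sqrt(g2)

thd = np.sqrt(sum(harmonic_group(n) ** 2 for n in range(2, 41))) / harmonic_group(1)
print(f"group THD = {100 * thd:.2f} %")
```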

  19. Characteristic Rain Events: A Methodology for Improving the Amenity Value of Stormwater Control Measures

    DEFF Research Database (Denmark)

    Smit Andersen, Jonas; Lerer, Sara Maria; Backhaus, Antje

    2017-01-01

Local management of rainwater using stormwater control measures (SCMs) is gaining increased attention as a sustainable alternative and supplement to traditional sewer systems. Besides offering added utility values, many SCMs also offer a great potential for added amenity values. One way of achieving amenity value is to stage the rainwater and thus bring it to the attention of the public. We present here a methodology for creating a selection of rain events that can help bridge between engineering and landscape architecture when dealing with staging of rainwater. The methodology uses quantitative and statistical methods to select Characteristic Rain Events (CREs) for a range of frequent return periods: weekly, bi-weekly, monthly, bi-monthly, and a single rarer event occurring only every 1–10 years. The methodology for selecting CREs is flexible and can be adjusted to any climatic settings…

  20. Methodology to measure strains at high temperatures using electrical strain gages with free filaments

    International Nuclear Information System (INIS)

    Atanazio Filho, Nelson N.; Gomes, Paulo T. Vida; Scaldaferri, Denis H.B.; Silva, Luiz L. da; Rabello, Emerson G.; Mansur, Tanius R.

    2013-01-01

An experimental methodology for measuring strains at high temperatures is shown in this work. In order to perform the measurements, electrical strain gages with free filaments were attached to a stainless steel 304 beam with specific cements. The beam has a triangular shape and a constant thickness, so the strain is the same along its length. Unless the beam surface is carefully prepared, the strain gage attachment is not effective. The results shown are for temperatures ranging from 20 deg C to 300 deg C, but the experimental methodology could be used to measure strains at temperatures up to 900 deg C. Analytical calculations based on solid mechanics were used to verify the strain gage electrical installation and the measured strains. At first, beam strains as a function of temperature were plotted. After that, beam strains with different weights were plotted as a function of temperature. The results shown allow concluding that the experimental methodology is trustworthy for measuring strains at temperatures up to 300 deg C. (author)

  1. High-frequency measurements of aeolian saltation flux: Field-based methodology and applications

    Science.gov (United States)

    Martin, Raleigh L.; Kok, Jasper F.; Hugenholtz, Chris H.; Barchyn, Thomas E.; Chamecki, Marcelo; Ellis, Jean T.

    2018-02-01

    Aeolian transport of sand and dust is driven by turbulent winds that fluctuate over a broad range of temporal and spatial scales. However, commonly used aeolian transport models do not explicitly account for such fluctuations, likely contributing to substantial discrepancies between models and measurements. Underlying this problem is the absence of accurate sand flux measurements at the short time scales at which wind speed fluctuates. Here, we draw on extensive field measurements of aeolian saltation to develop a methodology for generating high-frequency (up to 25 Hz) time series of total (vertically-integrated) saltation flux, namely by calibrating high-frequency (HF) particle counts to low-frequency (LF) flux measurements. The methodology follows four steps: (1) fit exponential curves to vertical profiles of saltation flux from LF saltation traps, (2) determine empirical calibration factors through comparison of LF exponential fits to HF number counts over concurrent time intervals, (3) apply these calibration factors to subsamples of the saltation count time series to obtain HF height-specific saltation fluxes, and (4) aggregate the calibrated HF height-specific saltation fluxes into estimates of total saltation fluxes. When coupled to high-frequency measurements of wind velocity, this methodology offers new opportunities for understanding how aeolian saltation dynamics respond to variability in driving winds over time scales from tens of milliseconds to days.
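
    A minimal sketch of the four calibration steps, with made-up numbers standing in for the field data: fit the exponential flux profile to the low-frequency trap measurements, derive a flux-per-count calibration factor at the sensor height, apply it to the high-frequency counts, and rescale to the total (vertically-integrated) flux via the fitted profile shape.

```python
import numpy as np
from scipy.optimize import curve_fit

# Step 1: exponential fit q(z) = q0*exp(-z/zq) to a low-frequency (LF) trap profile
z = np.array([0.05, 0.10, 0.20, 0.30, 0.50])   # trap heights (m)
q = np.array([42.0, 28.0, 13.5, 6.0, 1.4])     # LF saltation flux per height (g/m^2/s)
expo = lambda z, q0, zq: q0 * np.exp(-z / zq)
(q0, zq), _ = curve_fit(expo, z, q, p0=(50.0, 0.1))
Q_lf = q0 * zq                                  # vertically-integrated LF flux (g/m/s)

# Step 2: calibration factor from HF counts concurrent with the LF interval
hf_counts = 1.8e4                               # counter total at z = 0.10 m
cal = expo(0.10, q0, zq) / hf_counts            # flux per count at the sensor height

# Steps 3-4: calibrate an HF count series and aggregate to total flux
hf_series = np.array([12.0, 30.0, 7.0, 55.0])   # counts per 0.04 s bin (25 Hz)
q_hf = cal * hf_series                          # HF height-specific flux at 0.10 m
Q_hf = q_hf * zq * np.exp(0.10 / zq)            # total flux via the fitted profile shape
print(Q_lf, Q_hf)
```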

  2. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

Full Text Available We present a new iterative method for the calculation of the average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model outcome with examples and simulation results obtained using the NS2 simulator.
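
    The paper's exact iteration is not reproduced here; one plausible reading of average WFQ bandwidth assignment is a weighted max-min allocation, where a flow never receives more than its input rate and leftover capacity is re-shared by weight, as sketched below under that assumption.

```python
def wfq_average_bandwidth(link_rate, weights, arrival_rates):
    """Iterative weighted fair allocation: a flow never gets more than it
    offers; leftover capacity is re-shared among still-bottlenecked flows
    in proportion to their WFQ weights. (A sketch, not the paper's model.)"""
    alloc = [0.0] * len(weights)
    active = set(range(len(weights)))
    capacity = link_rate
    while active:
        w_sum = sum(weights[i] for i in active)
        share = {i: capacity * weights[i] / w_sum for i in active}
        # Flows offering less than their share are satisfied and removed
        done = {i for i in active if arrival_rates[i] <= share[i]}
        if not done:                      # all remaining flows are bottlenecked
            for i in active:
                alloc[i] = share[i]
            break
        for i in done:
            alloc[i] = arrival_rates[i]
            capacity -= arrival_rates[i]
        active -= done
    return alloc

# 100 Mbit/s link, three flows with weights 1:2:3
print(wfq_average_bandwidth(100.0, [1, 2, 3], [10.0, 60.0, 80.0]))  # [10.0, 36.0, 54.0]
```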

  3. Methodologies for measuring travelers' risk perception of infectious diseases: A systematic review.

    Science.gov (United States)

    Sridhar, Shruti; Régner, Isabelle; Brouqui, Philippe; Gautret, Philippe

    2016-01-01

Numerous studies in the past have stressed the importance of travelers' psychology and perception in the implementation of preventive measures. The aim of this systematic review was to identify the methodologies used in studies reporting on travelers' risk perception of infectious diseases. A systematic search for relevant literature was conducted according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. There were 39 studies identified. In 35 of 39 studies, the methodology used was that of a knowledge, attitude and practice (KAP) survey based on questionnaires. One study used a combination of questionnaires and a visual psychometric measuring instrument called the 'pictorial representation of illness and self-measurement' (PRISM). One study used a self-representation model (SRM) method. Two studies measured psychosocial factors. Valuable information was obtained from KAP surveys showing an overall lack of knowledge among travelers about the most frequent travel-associated infections and associated preventive measures. This methodological approach, however, is mainly descriptive, addressing knowledge, attitudes, and practices separately and lacking an examination of the interrelationships between these three components. Another limitation of the KAP method is underestimating psychosocial variables that have proved influential in health-related behaviors, including perceived benefits and costs of preventive measures, perceived social pressure, perceived personal control, unrealistic optimism and risk propensity. Future risk perception studies in travel medicine should consider psychosocial variables with inferential and multivariate statistical analyses. The use of implicit measurements of attitudes could also provide new insights in the field of travelers' risk perception of travel-associated infectious diseases. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Personal dosimetry service of TECNATOM: measurement system and methodology of calibration

    International Nuclear Information System (INIS)

    Marchena, Paloma; Bravo, Borja

    2008-01-01

Full text: The implementation of a new integrated and practical working tool called ALEDIN within the Personal Dosimetry Service (PDS) of TECNATOM has harmonized the methodology for counting acquisition, detector calibration and data analysis using a friendly Windows (registered mark) environment. Knowledge of this methodology, given that it is the final product of an R and D project, will help the users and the Regulatory Body to better understand the measurement of internal activity in individuals, allowing more precise error identification and correction, and improving the whole process of internal dosimetry. The development and implementation of a new calibration system for the whole-body counters using NaI(Tl) detectors, and the utilization of a new humanoid anthropometric phantom, BOMAB type, with uniform radioactive source distributions, allow a better energy and activity calibration for different counting geometries, covering a wide range of gamma spectra from low energies (less than 100 keV) up to high-energy spectra of about 2000 keV. This new calibration methodology implied the development of an improved system for the determination of the isotopic activity. This new system has been integrated in a Windows (registered mark) environment, applicable for counting acquisition and data analysis in the whole-body counters (WBC) in cross connection with the INDAC software, which allows the interpretation of the measured activity as committed effective dose following all the new ICRP recommendations and dosimetric models for internal dose and bioassay measurements. (author)

  5. Methodology for the assessment of measuring uncertainties of articulated arm coordinate measuring machines

    International Nuclear Information System (INIS)

    Romdhani, Fekria; Hennebelle, François; Ge, Min; Juillion, Patrick; Fontaine, Jean François; Coquet, Richard

    2014-01-01

Articulated Arm Coordinate Measuring Machines (AACMMs) have gradually evolved and are increasingly used in the mechanical industry. At present, measurement uncertainties relating to the use of these devices are not yet well quantified. The work carried out consists of determining the measurement uncertainties of a mechanical part measured by an AACMM. The studies aiming to develop a model of measurement uncertainty are based on the Monte Carlo method developed in Supplement 1 of the Guide to the Expression of Uncertainty in Measurement [1], as well as on identifying and characterizing the main sources of uncertainty. A multi-level Monte Carlo approach has been developed which allows for characterizing the possible evolution of the AACMM during the measurement and quantifying, at a second level, the uncertainty of the considered measurand. The first Monte Carlo level is the most complex and is thus divided into three sub-levels, namely characterization of the positioning error of a point, estimation of calibration errors, and evaluation of fluctuations of the 'localization point'. The global method is presented and the results of the first sub-level are developed in particular. The main sources of uncertainty, including AACMM deformations, are exposed. (paper)
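
    As a generic illustration of the GUM Supplement 1 approach the paper builds on, the sketch below propagates a point-positioning error through a simple distance measurand by Monte Carlo sampling. The measurand, error magnitude and trial count are assumptions made for this sketch, not the paper's multi-level model.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 200_000                                  # Monte Carlo trials (GUM-S1 style)

# Measurand: distance between two probed points; each probed point is
# perturbed by the arm's point-positioning error (values illustrative).
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([300.0, 0.0, 0.0])   # mm
sigma_point = 0.020                          # std of positioning error per axis (mm)

e1 = rng.normal(0.0, sigma_point, (M, 3))    # random point errors, trial by trial
e2 = rng.normal(0.0, sigma_point, (M, 3))
d = np.linalg.norm((p2 + e2) - (p1 + e1), axis=1)

lo, hi = np.percentile(d, [2.5, 97.5])       # 95 % coverage interval
print(f"d = {d.mean():.4f} mm, u = {d.std():.4f} mm, 95% CI = [{lo:.4f}, {hi:.4f}]")
```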

  6. Atmospheric aerosol in an urban area: Comparison of measurement instruments and methodologies and pulmonary deposition assessment

    International Nuclear Information System (INIS)

    Berico, M.; Luciani, A.; Formignani, M.

    1996-07-01

In March 1995 a measurement campaign of atmospheric aerosol in the Bologna urban area (Italy) was carried out. A transportable laboratory, set up by the ENEA (Italian National Agency for New Technologies, Energy and the Environment) Environmental Department (Bologna), was utilized with instruments for the measurement of atmospheric aerosol and meteorological parameters. The aim of this campaign was twofold: to characterize the aerosol in an urban area and to compare different instruments and measurement methodologies. Mass concentration measurements, evaluated over a 23-hour period with a total filter, a PM10 dichotomous sampler and a low-pressure impactor (LPI Berner), provided information about total suspended particles, the respirable fraction and the granulometric parameters of the aerosol, respectively. Eight meteorological parameters, the number concentration of the submicrometric fraction of the aerosol and the mass concentration of the micrometric fraction were continually measured. Then, over a daytime period, several number granulometries of atmospheric aerosol were also estimated by means of a diffusion battery system. Results related to the different measurement methodologies and the granulometric characteristics of the aerosol are presented here. The pulmonary deposition of atmospheric aerosol is finally calculated, using the granulometries provided by the LPI Berner and the ICRP 66 human respiratory tract model.

  7. A novel methodology for interpreting air quality measurements from urban streets using CFD modelling

    Science.gov (United States)

    Solazzo, Efisio; Vardoulakis, Sotiris; Cai, Xiaoming

    2011-09-01

    In this study, a novel computational fluid dynamics (CFD) based methodology has been developed to interpret long-term averaged measurements of pollutant concentrations collected at roadside locations. The methodology is applied to the analysis of pollutant dispersion in Stratford Road (SR), a busy street canyon in Birmingham (UK), where a one-year sampling campaign was carried out between August 2005 and July 2006. Firstly, a number of dispersion scenarios are defined by combining sets of synoptic wind velocity and direction. Assuming neutral atmospheric stability, CFD simulations are conducted for all the scenarios, by applying the standard k-ɛ turbulence model, with the aim of creating a database of normalised pollutant concentrations at specific locations within the street. Modelled concentration for all wind scenarios were compared with hourly observed NO x data. In order to compare with long-term averaged measurements, a weighted average of the CFD-calculated concentration fields was derived, with the weighting coefficients being proportional to the frequency of each scenario observed during the examined period (either monthly or annually). In summary the methodology consists of (i) identifying the main dispersion scenarios for the street based on wind speed and directions data, (ii) creating a database of CFD-calculated concentration fields for the identified dispersion scenarios, and (iii) combining the CFD results based on the frequency of occurrence of each dispersion scenario during the examined period. The methodology has been applied to calculate monthly and annually averaged benzene concentration at several locations within the street canyon so that a direct comparison with observations could be made. The results of this study indicate that, within the simplifying assumption of non-buoyant flow, CFD modelling can aid understanding of long-term air quality measurements, and help assessing the representativeness of monitoring locations for population
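
    Step (iii) amounts to a frequency-weighted average of the scenario concentration fields: if scenario s occurs with frequency f_s and yields a normalised field C_s, the long-term average is the sum of f_s·C_s. A toy sketch, with made-up fields and frequencies:

```python
import numpy as np

# Normalised concentration fields from CFD, one per wind scenario
# (tiny 2x2 grids stand in for the real receptor locations).
c_fields = {
    "N_low":  np.array([[1.8, 0.6], [1.2, 0.4]]),
    "N_high": np.array([[0.9, 0.3], [0.6, 0.2]]),
    "S_low":  np.array([[0.5, 1.6], [0.3, 1.1]]),
}
# Frequency of each scenario in the examined period (must sum to 1)
freq = {"N_low": 0.45, "N_high": 0.30, "S_low": 0.25}

# Long-term average = frequency-weighted sum of the scenario fields,
# scaled by a period-average emission rate Q (illustrative value)
Q = 2.0
c_avg = Q * sum(freq[s] * c_fields[s] for s in c_fields)
print(c_avg)
```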

  8. Performance evaluation of CT measurements made on step gauges using statistical methodologies

    DEFF Research Database (Denmark)

    Angel, J.; De Chiffre, L.; Kruth, J.P.

    2015-01-01

In this paper, a study is presented in which statistical methodologies were applied to evaluate the measurement of step gauges on an X-ray computed tomography (CT) system. In particular, the effects of step gauge material density and orientation were investigated. The step gauges consist of uni- and bidirectional lengths. By confirming the repeatability of measurements made on the test system, the number of required scans in the design of experiment (DOE) was reduced. The statistical model was checked using model adequacy principles; model adequacy checking is an important step in validating the fitted model.

  9. A Methodology to Measure Synergy Among Energy-Efficiency Programs at the Program Participant Level

    Energy Technology Data Exchange (ETDEWEB)

    Tonn, B.E.

    2003-11-14

    This paper presents a methodology designed to measure synergy among energy-efficiency programs at the program participant level (e.g., households, firms). Three different definitions of synergy are provided: strong, moderate, and weak. Data to measure synergy can be collected through simple survey questions. Straightforward mathematical techniques can be used to estimate the three types of synergy and explore relative synergistic impacts of different subsets of programs. Empirical research is needed to test the concepts and methods and to establish quantitative expectations about synergistic relationships among programs. The market for new energy-efficient motors is the context used to illustrate all the concepts and methods in this paper.

  10. Measure a carbon impact methodology in line with a 2 degree scenario

    International Nuclear Information System (INIS)

    Coeslier, Manuel; Finidori, Esther; Smia, Ladislas

    2015-11-01

Today, high expectations surround the measurement of carbon impact. Voluntary initiatives and, little by little, legislation push institutional investors to consider the impact that financial portfolios have on the climate and energy transition. However, current methods of carbon footprint measurement are not adequate to determine an investment portfolio's contribution to these issues. Current approaches, which do not take a life-cycle view of carbon footprinting, have the particular flaw of not accounting for emissions related to companies' products and services. The impact of these products and services on the climate is, however, crucial in many sectors, whether positively in the case of renewable energy and energy efficiency solutions, or negatively in the case of fossil fuels. Following this observation, Mirova and Carbone 4 decided to create a partnership dedicated to developing a new methodology capable of providing a carbon measurement that is aligned with the issues of energy transition: Carbon Impact Analytics (CIA). The CIA methodology focuses primarily on three indicators: - a measure of emissions 'induced' by a company's activity from a life-cycle approach, taking into account direct emissions as well as emissions from product suppliers; - a measure of the emissions which are 'avoided' due to efficiency efforts or deployment of 'low-carbon' solutions; - an overall evaluation that takes into account, in addition to carbon measurement, further information on the company's evolution and the type of capital or R and D expenditures. For these evaluations, the methodology employs a bottom-up approach in which each company is examined individually according to an evaluation framework adapted to each sector. Particular scrutiny is devoted to companies with a significant climate impact: energy producers, carbon-intensive sectors (industry, construction, transport), and providers of low-carbon equipment and solutions. Evaluations are then aggregated at…

  11. High bandwidth beam current monitor

    International Nuclear Information System (INIS)

    Baltrusaitis, R.M.; Ekdahl, C.A.; Cooper, R.G.; Peterson, E.; Warn, C.E.

    1993-01-01

    A stripline directional coupler beam current monitor capable of measuring the time structure of a 30-ps electron beam bunch has been developed. The time response performance of the monitor compares very well with Cherenkov light produced in quartz by the electron beam. The four-pickup monitor is now used on a routine basis for measuring the beam duration, tuning for optimized beam bunching, and centering the bunch in the beam pipe

  12. Characteristic Rain Events: A Methodology for Improving the Amenity Value of Stormwater Control Measures

    DEFF Research Database (Denmark)

    Smit Andersen, Jonas; Lerer, Sara Maria; Backhaus, Antje

    2017-01-01

Local management of rainwater using stormwater control measures (SCMs) is gaining increased attention as a sustainable alternative and supplement to traditional sewer systems. Besides offering added utility values, many SCMs also offer a great potential for added amenity values. One way of achieving amenity value is to stage the rainwater and thus bring it to the attention of the public. We present here a methodology for creating a selection of rain events that can help bridge between engineering and landscape architecture when dealing with staging of rainwater. The methodology uses quantitative and statistical methods to select Characteristic Rain Events (CREs); it is flexible and can be adjusted to any climatic settings; here we show its use for Danish conditions. We illustrate with a case study how CREs can be used in combination with a simple hydrological model to visualize where, how deep and for how long water is visible in a landscape designed to manage rainwater.

  13. How Much Can Non-industry Standard Measurement Methodologies Benefit Methane Reduction Programs?

    Science.gov (United States)

    Risk, D. A.; O'Connell, L.; Atherton, E.

    2017-12-01

In recent years, energy sector methane emissions have been recorded in large part by applying modern non-industry-standard techniques. Industry may lack the regulatory flexibility to use such techniques, or in some cases may not understand the possible associated economic advantage. As progressive jurisdictions move from estimation towards routine measurement, the research community should provide guidance to help regulators and companies measure more effectively, and economically if possible. In this study, we outline a modelling experiment in which we explore the integration of non-industry-standard measurement techniques as part of a generalized compliance measurement program. The study was not intended to be exhaustive, or to recommend particular combinations, but instead to explore the inter-relationships between methodologies, development type, and compliance practice. We first defined the role, applicable scale, detection limits, working distances, and approximate deployment cost of several measurement methodologies. We then considered a variety of development types differing mainly in footprint, density, and emissions "profile". Using a Monte Carlo approach, we evaluated the effect of these various factors on the cost and confidence of the compliance measurement program. We found that when added individually, some of the research techniques were indeed able to deliver an improvement in cost and/or confidence when used alongside industry-standard Optical Gas Imaging. When applied in combination, the ideal fraction of each measurement technique depended on development type, emission profile, and whether confidence or cost was more important. Results suggest that measurement cost and confidence could be improved if energy companies exploited a wider range of measurement techniques, and in a manner tailored to each development. In the short-term, combining clear scientific guidance with economic information could benefit immediate mitigation efforts over…

  14. Presentation of a methodology for measuring social acceptance of three hydrogen storage technologies and preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Noirot, I.; Bigay, C. N.

    2005-07-01

Technologies (MASIT). This methodology takes into account the following points of view: technical, economic, environmental, social, and industrial/technological risks. MASIT is the methodology chosen to assess the hydrogen storage technologies developed during the StorHy project. With respect to the methodology, each point of view is defined by several criteria selected with car manufacturers and experts of each domain. Then, each criterion is quantified with the contribution of all partners involved in the project. While the technical, economic and environmental criteria are quite objective (easy to define and measure), the social dimension is subjective and also has a large variability, as it depends on perception and measurement at the individual human level. So the methodological work consists of improving the MASIT methodology from the social point of view. This methodology is applicable for comparison of any other technologies, and it has been implemented here to compare the storage technologies developed in the StorHy project for each application selected in the study (light vehicles, fleet vehicles, buses). (Author)

  15. Methodology of heat transfer and flow resistance measurement for matrices of rotating regenerative heat exchangers

    Directory of Open Access Journals (Sweden)

    Butrymowicz Dariusz

    2016-09-01

Full Text Available The theoretical basis for the indirect measurement of the mean heat transfer coefficient for a packed bed, based on the modified single-blow technique, is presented and discussed in the paper. The measurement methodology dedicated to the matrix of a rotating regenerative gas heater is discussed in detail. The testing stand, consisting of a dedicated experimental tunnel with auxiliary equipment and a measurement system, is presented. Selected experimental results are presented and discussed for selected types of matrices of regenerative air preheaters over a wide range of gas Reynolds numbers. The agreement between the theoretically predicted and measured temperature profiles is demonstrated. Exemplary dimensionless relationships between the Colburn heat transfer factor, the Darcy flow resistance factor and the Reynolds number are presented for the investigated matrices of the regenerative gas heater.

  16. Radioactivity measurement of the liquid effluents of two university hospitals: methodology, problems arising

    International Nuclear Information System (INIS)

    Basse-Cathalinat, B.; Barthe, N.; Chatti, K.; Ducassou, D.

    2005-01-01

The authors present the methodology used to measure the radioactivity of the effluents at the output of two nuclear medicine departments located in two hospital complexes in the Bordeaux area. These measurements are intended to respond to the requests of circular DGS/DHOS no 2001/323 of the Ministry for Employment and Solidarity. The selected method is particularly powerful, since it is based on a set of very low background spectrometry systems. These measurement devices make it possible to take into account all the isotopes coming from a nuclear medicine department. The authors are aware that such measurements cannot be considered in all nuclear medicine departments. Other technical articles will specify simpler methods allowing satisfactory management of radioactive wastes. (author)

  17. Measurements of integrated components' parameters versus irradiation dose: gamma radiation (⁶⁰Co) dosimetry, methodology, tests

    International Nuclear Information System (INIS)

    Fuan, J.

    1991-01-01

This paper describes the methodology used for the irradiation of integrated components and the measurement of their parameters, using quality assurance of dosimetry: - measurement of the integrated dose using the competences of the Laboratoire Central des Industries Electriques (LCIE); - measurement of the irradiation dose versus source/component distance, using calibrated equipment; - use of ALANINE dosimeters placed on the support of the irradiated components; - assembly and polarization of the components during the irradiations, and selection of the irradiator. Measurement of the irradiated components' parameters used the competences of the following companies: - GenRad: GR130 test equipment located at DEIN/SIR, CEN Saclay; - Laboratoire Central des Industries Electriques (LCIE): GR125 test equipment and its associated test programmes. [fr]

  18. A Methodology for Measuring Microplastic Transport in Large or Medium Rivers

    Directory of Open Access Journals (Sweden)

    Marcel Liedermann

    2018-04-01

    Full Text Available Plastic waste as a persistent contaminant of our environment is a matter of increasing concern due to the largely unknown long-term effects on biota. Although freshwater systems are known to be the transport paths of plastic debris to the ocean, most research has been focused on marine environments. In recent years, freshwater studies have advanced rapidly, but they rarely address the spatial distribution of plastic debris in the water column. A methodology for measuring microplastic transport at various depths that is applicable to medium and large rivers is needed. We present a new methodology offering the possibility of measuring microplastic transport at different depths of verticals that are distributed within a profile. The net-based device is robust and can be applied at high flow velocities and discharges. Nets with different sizes (41 µm, 250 µm, and 500 µm are exposed in three different depths of the water column. The methodology was tested in the Austrian Danube River, showing a high heterogeneity of microplastic concentrations within one cross section. Due to turbulent mixing, the different densities of the polymers, aggregation, and the growth of biofilms, plastic transport cannot be limited to the surface layer of a river, and must be examined within the whole water column as for suspended sediments. These results imply that multipoint measurements are required for obtaining the spatial distribution of plastic concentration and are therefore a prerequisite for calculating the passing transport. The analysis of filtration efficiency and side-by-side measurements with different mesh sizes showed that 500 µm nets led to optimal results.

  19. Design and fabrication of bandwidth tunable HTS transmit filter using π-shaped waveguides

    Energy Technology Data Exchange (ETDEWEB)

    Sekiya, N., E-mail: nsekiya@yamanashi.ac.j [Department of Electrical Engineering, Yamanashi University, Nakagawa-Sekiya Laboratory, 4-3-11 Takeda, Kofu 400-8511 (Japan); Harada, H.; Nakagawa, Y. [Department of Electrical Engineering, Yamanashi University, Nakagawa-Sekiya Laboratory, 4-3-11 Takeda, Kofu 400-8511 (Japan); Ono, S.; Ohshima, S. [Yamagata University, 4-3-16 Johnan, Yonezawa 992-8510 (Japan)

    2010-11-01

We have developed a method for tuning the bandwidth of a high-temperature superconducting (HTS) microstrip filter. Several π-shaped waveguides are placed between the resonators, and the bandwidth is tuned in discrete steps by changing the switch states of the waveguides, which changes the coupling coefficient between the resonators. The filter contains 3-pole half-wavelength straight-line resonators and two π-shaped waveguides for bandwidth tuning. It also has several electrical pads distributed around the feed lines for trimming after tuning. The filter was fabricated by depositing YBa₂Cu₃O₇ thin film on an MgO substrate and has a measured center frequency of 5.17 GHz and bandwidth of 220 MHz. Use of the π-shaped waveguides to adjust the coupling coefficients and the electrical pads to adjust the external quality factors resulted in 80-MHz bandwidth tuning without increased insertion loss.

  20. Methodology, Measurement and Analysis of Flow Table Update Characteristics in Hardware OpenFlow Switches

    KAUST Repository

Kuźniar, Maciej; Perešíni, Peter; Kostić, Dejan; Canini, Marco

    2018-01-01

and performance characteristics is essential for ensuring successful and safe deployments. We propose a systematic methodology for SDN switch performance analysis and devise a series of experiments based on this methodology. The methodology relies on sending a stream of rule updates, while relying on both observing the control plane view as reported by the switch and probing the data plane state to determine switch characteristics by comparing these views.

  1. Bandwidth Management in Wireless Home Networks for IPTV Solutions

    Directory of Open Access Journals (Sweden)

    Tamás Jursonovics

    2013-01-01

Full Text Available The optimal allocation of the retransmission bandwidth is essential for IPTV service providers to ensure maximal service quality. This paper highlights the relevance of wireless transport in today's IPTV solutions and discusses how this new medium affects the existing broadcast technologies. A new Markovian channel model is developed to address the optimization issues of the retransmission throughput, and a new method is presented and evaluated by empirical measurements followed by mathematical analysis.
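
    The paper's Markovian channel model is not specified in the abstract; a common two-state (Gilbert-Elliott) loss model, often used to capture bursty wireless losses when sizing retransmission bandwidth, is sketched below under that assumption.

```python
import random

def gilbert_elliott(n, p_gb=0.02, p_bg=0.25, loss_good=0.001, loss_bad=0.30):
    """Two-state Markov (Gilbert-Elliott) packet-loss model: a common way to
    capture bursty wireless losses when sizing retransmission bandwidth."""
    random.seed(7)
    good, losses = True, 0
    for _ in range(n):
        if random.random() < (loss_good if good else loss_bad):
            losses += 1                       # lost packet -> needs retransmission
        # state transition before the next packet
        good = (random.random() >= p_gb) if good else (random.random() < p_bg)
    return losses / n

loss_rate = gilbert_elliott(1_000_000)
print(f"loss rate ~ {loss_rate:.4f} -> retransmission overhead ~ {loss_rate:.1%}")
```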

  2. Bandwidth Study of the Microwave Reflectors with Rectangular Corrugations

    Science.gov (United States)

    Zhang, Liang; He, Wenlong; Donaldson, Craig R.; Cross, Adrian W.

    2016-09-01

    The mode-selective microwave reflector with periodic rectangular corrugations in the inner surface of a circular metallic waveguide is studied in this paper. The relations between the bandwidth and reflection coefficient for different numbers of corrugation sections were studied through a global optimization method. Two types of reflectors were investigated. One does not consider the phase response and the other does. Both types of broadband reflectors operating at W-band were machined and measured to verify the numerical simulations.

  3. Characteristic Rain Events: A Methodology for Improving the Amenity Value of Stormwater Control Measures

    Directory of Open Access Journals (Sweden)

    Jonas Smit Andersen

    2017-10-01

    Full Text Available Local management of rainwater using stormwater control measures (SCMs is gaining increased attention as a sustainable alternative and supplement to traditional sewer systems. Besides offering added utility values, many SCMs also offer a great potential for added amenity values. One way of achieving amenity value is to stage the rainwater and thus bring it to the attention of the public. We present here a methodology for creating a selection of rain events that can help bridge between engineering and landscape architecture when dealing with staging of rainwater. The methodology uses quantitative and statistical methods to select Characteristic Rain Events (CREs for a range of frequent return periods: weekly, bi-weekly, monthly, bi-monthly, and a single rarer event occurring only every 1–10 years. The methodology for selecting CREs is flexible and can be adjusted to any climatic settings; here we show its use for Danish conditions. We illustrate with a case study how CREs can be used in combination with a simple hydrological model to visualize where, how deep and for how long water is visible in a landscape designed to manage rainwater.

  4. Methodology, Measurement and Analysis of Flow Table Update Characteristics in Hardware OpenFlow Switches

    KAUST Repository

    Kuźniar, Maciej

    2018-02-15

Software-Defined Networking (SDN) and OpenFlow are actively being standardized and deployed. These deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful and safe deployments. We propose a systematic methodology for SDN switch performance analysis and devise a series of experiments based on this methodology. The methodology relies on sending a stream of rule updates, while relying on both observing the control plane view as reported by the switch and probing the data plane state to determine switch characteristics by comparing these views. We measure, report and explain the performance characteristics of flow table updates in six hardware OpenFlow switches. Our results describing rule update rates can help SDN designers make their controllers efficient. Further, we also highlight differences between the OpenFlow specification and its implementations that, if ignored, pose a serious threat to network security and correctness.
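
    A conceptual sketch of the two-view measurement described above: record when the switch reports the updates complete (control plane) versus when traffic actually starts matching the new rules (data plane). All helper callables here are placeholders for a real controller and traffic-generator framework, not any specific OpenFlow library API.

```python
import time

def measure_update_latency(switch, rules, send_flow_mod, barrier_done, probe_data_plane):
    """Sketch of the two-view measurement: install rules, record when the
    switch *reports* them done (control plane) vs. when traffic actually
    matches them (data plane). The callables send_flow_mod, barrier_done and
    probe_data_plane are hypothetical placeholders for a real framework."""
    t0 = time.monotonic()
    for r in rules:
        send_flow_mod(switch, r)             # stream of rule updates
    barrier_done(switch)                     # switch claims the updates are applied
    t_control = time.monotonic() - t0

    while not all(probe_data_plane(switch, r) for r in rules):
        time.sleep(0.001)                    # probe until packets match the new rules
    t_data = time.monotonic() - t0
    return t_control, t_data                 # a gap here signals unsafe reporting
```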

  5. A general centroid determination methodology, with application to multilayer dielectric structures and thermally stimulated current measurements

    International Nuclear Information System (INIS)

    Miller, S.L.; Fleetwood, D.M.; McWhorter, P.J.; Reber, R.A. Jr.; Murray, J.R.

    1993-01-01

    A general methodology is developed to experimentally characterize the spatial distribution of occupied traps in dielectric films on a semiconductor. The effects of parasitics such as leakage, charge transport through more than one interface, and interface trap charge are quantitatively addressed. Charge transport with contributions from multiple charge species is rigorously treated. The methodology is independent of the charge transport mechanism(s), and is directly applicable to multilayer dielectric structures. The centroid capacitance, rather than the centroid itself, is introduced as the fundamental quantity that permits the generic analysis of multilayer structures. In particular, the form of many equations describing stacked dielectric structures becomes independent of the number of layers comprising the stack if they are expressed in terms of the centroid capacitance and/or the flatband voltage. The experimental methodology is illustrated with an application using thermally stimulated current (TSC) measurements. The centroid of changes (via thermal emission) in the amount of trapped charge was determined for two different samples of a triple-layer dielectric structure. A direct consequence of the TSC analyses is the rigorous proof that changes in interface trap charge can contribute, though typically not significantly, to thermally stimulated current

  6. Bandwidth extension of speech using perceptual criteria

    CERN Document Server

    Berisha, Visar; Liss, Julie

    2013-01-01

    Bandwidth extension of speech is used in the International Telecommunication Union G.729.1 standard in which the narrowband bitstream is combined with quantized high-band parameters. Although this system produces high-quality wideband speech, the additional bits used to represent the high band can be further reduced. In addition to the algorithm used in the G.729.1 standard, bandwidth extension methods based on spectrum prediction have also been proposed. Although these algorithms do not require additional bits, they perform poorly when the correlation between the low and the high band is weak. In this book, two wideband speech coding algorithms that rely on bandwidth extension are developed. The algorithms operate as wrappers around existing narrowband compression schemes. More specifically, in these algorithms, the low band is encoded using an existing toll-quality narrowband system, whereas the high band is generated using the proposed extension techniques. The first method relies only on transmitted high-...

  7. Bandwidth Assessment for MultiRotor UAVs

    Directory of Open Access Journals (Sweden)

    Ferrarese Gastone

    2017-06-01

    Full Text Available This paper is a technical note about the theoretical evaluation of the bandwidth of multirotor helicopters. Starting from a linear mathematical model of the dynamics of a multirotor aircraft, the transfer functions of the state variables that most strongly affect the stability characteristics of the aircraft are obtained. From these transfer functions, the frequency response analysis of the system is performed. After this analysis, the bandwidth of the system is determined. This result is immediately utilized for the design of discrete PID controllers for hovering flight stabilization. Numerical simulations demonstrate that knowledge of the bandwidth is a valid aid in the design of flight control systems for these machines.
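
    Given such a transfer function, the −3 dB bandwidth can be read off the frequency response numerically. A minimal sketch with SciPy, assuming a hypothetical second-order attitude response in place of the paper's actual model:

    ```python
    import numpy as np
    from scipy import signal

    # Hypothetical second-order pitch response of a multirotor; the actual
    # transfer functions in the paper come from a linearized dynamic model.
    wn, zeta = 12.0, 0.6                         # natural freq (rad/s), damping
    sys = signal.TransferFunction([wn**2], [1.0, 2*zeta*wn, wn**2])

    w = np.logspace(-1, 3, 2000)                 # rad/s
    w, mag_db, _ = signal.bode(sys, w)

    # Bandwidth: first frequency where the response drops 3 dB below DC gain.
    dc_db = mag_db[0]
    idx = np.argmax(mag_db < dc_db - 3.0)
    print(f"-3 dB bandwidth ~ {w[idx]:.1f} rad/s")
    ```

    A discrete PID controller would then be tuned so that its crossover frequency stays comfortably inside this bandwidth.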

  8. Teleoperation over low bandwidth communication links

    International Nuclear Information System (INIS)

    Fryer, R.J.; Mair, G.M.; Clark, N.; Heng, J.

    1996-01-01

    Teleoperation is well established for many areas of hazardous environment working. Where such environments are well structured and contained, such as within a working plant, communications bandwidth need not be a constraining factor. However, where the worksite is remote, large, poorly structured or damaged, communications rapidly become a critical factor in the efficient deployment and use of teleoperation equipment. The paper justifies and describes means which we are exploring to reduce the required communications bandwidth for teleoperation whilst retaining full functionality. Techniques involved include incorporation of local intelligence at the worksite, with bandwidth devoted to high-level up-link control signals and down-link feedback, and the use of highly compressed video feeding 'virtual reality type' HMDs to provide maximum system transparency for the operator. The work draws on previous experience with an 'anthropomorphic robot head' for telepresence work, and proprietary algorithms capable of compressing full colour video to standard telephone modem data rates. (Author)

  9. Bandwidth Reservations in Home Networks

    DEFF Research Database (Denmark)

    Nelis, Jelle; Verslype, Dieter; Develder, Chris

    2010-01-01

    In order for service providers to provide their users with high-quality services in the home network, Quality of Service (QoS) provisioning is needed to protect premium services. In this paper, we describe how a Universal Plug-and-Play (UPnP) based home network architecture solves this problem...... in a heterogeneous home network. We outline how it both relieves the end user from troublesome configuration and still offers control to the service provider. We particularly present performance assessment results for UPnP-QoS v3, based on a fully operational experimental implementation. The quantitative measurement......

  10. A new transmission methodology for quality assurance in radiotherapy based on radiochromic film measurements

    Science.gov (United States)

    do Amaral, Leonardo L.; Pavoni, Juliana F.; Sampaio, Francisco; Netto, Thomaz Ghilardi

    2015-01-01

    Despite individual quality assurance (QA) being recommended for complex techniques in radiotherapy (RT) treatment, the possibility of errors in dose delivery during therapeutic application has been verified. Therefore, it is fundamentally important to conduct in vivo QA during treatment. This work presents an in vivo transmission quality control methodology, using radiochromic film (RCF) coupled to the linear accelerator (linac) accessory holder. This QA methodology compares the dose distribution measured by the film in the linac accessory holder with the dose distribution expected by the treatment planning software. The calculated dose distribution is obtained in the coronal and central plane of a phantom with the same dimensions as the acrylic support used for positioning the film, but at a source-to-detector distance (SDD) of 100 cm, as a result of transferring the IMRT plan in question with all the fields positioned with the gantry vertical, that is, perpendicular to the phantom. To validate this procedure, a Monte Carlo simulation using the PENELOPE code was first performed to evaluate the differences between the dose distributions measured by the film at SDDs of 56.8 cm and 100 cm. After that, several simple dose distribution tests were evaluated using the proposed methodology, and finally a study using IMRT treatments was carried out. In the Monte Carlo simulation, the mean percentage of points approved in the gamma function comparison of the dose distributions acquired at the two SDDs was 99.92% ± 0.14%. In the simple dose distribution tests, the mean percentage of points approved in the gamma function was 99.85% ± 0.26%, and the mean percentage difference in the normalization point doses was −1.41%. The transmission methodology was approved in 24 of 25 IMRT test irradiations. Based on these results, it can be concluded that the proposed methodology using RCFs can be applied for in vivo QA in RT treatments. PACS numbers: 87.55.Qr, 87.55.km, 87.55.N-
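
    The pass rates quoted above come from the standard gamma-index comparison. A simplified one-dimensional sketch of the criterion is shown below (assumed 3%/3 mm tolerances and toy profiles, not the paper's implementation):

    ```python
    import numpy as np

    def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dist_tol_mm=3.0):
        """Simplified 1-D gamma index: for each reference point, search all
        evaluated points for the best combined dose/distance agreement."""
        d_norm = dose_tol * d_ref.max()              # global dose criterion
        gamma = np.empty_like(d_ref)
        for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
            dist2 = ((x_eval - xr) / dist_tol_mm) ** 2
            dose2 = ((d_eval - dr) / d_norm) ** 2
            gamma[i] = np.sqrt((dist2 + dose2).min())
        return gamma                                  # pass where gamma <= 1

    # Toy profiles: evaluated dose slightly shifted against the reference.
    x = np.linspace(0, 100, 201)                      # mm
    ref = np.exp(-((x - 50) / 15) ** 2)
    ev  = np.exp(-((x - 51) / 15) ** 2)
    g = gamma_1d(x, ref, x, ev)
    print(f"points passing: {100 * np.mean(g <= 1):.2f}%")
    ```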

  11. A methodology for performing virtual measurements in a nuclear reactor system

    International Nuclear Information System (INIS)

    Ikonomopoulos, A.; Uhrig, R.E.; Tsoukalas, L.H.

    1992-01-01

    A novel methodology is presented for monitoring nonphysically measurable variables in an experimental nuclear reactor. It is based on the employment of artificial neural networks to generate fuzzy values. Neural networks map spatiotemporal information (in the form of time series) to algebraically defined membership functions. The entire process can be thought of as a virtual measurement. Through such virtual measurements the values of nondirectly monitored parameters with operational significance, e.g., transient-type, valve-position, or performance, can be determined. Generating membership functions is a crucial step in the development and practical utilization of fuzzy reasoning, a computational approach that offers the advantage of describing the state of the system in a condensed, linguistic form, convenient for monitoring, diagnostics, and control algorithms

  12. Covariance methodology applied to 35S disintegration rate measurements by the CIEMAT/NIST method

    International Nuclear Information System (INIS)

    Koskinas, M.F.; Nascimento, T.S.; Yamazaki, I.M.; Dias, M.S.

    2014-01-01

    The Nuclear Metrology Laboratory (LMN) at IPEN is carrying out measurements in an LSC (Liquid Scintillation Counting) system, applying the CIEMAT/NIST method. In this context 35S is an important radionuclide for medical applications, and it is difficult to standardize by other primary methods due to its low beta-ray energy. CIEMAT/NIST is a standard technique used by most metrology laboratories to improve accuracy and speed up beta-emitter standardization. The focus of the present work was to apply the covariance methodology for determining the overall uncertainty in the 35S disintegration rate. All partial uncertainties involved in the measurements were considered, taking into account all possible correlations between each pair of them. - Highlights: ► 35S disintegration rate measured in a liquid scintillation system using the CIEMAT/NIST method. ► Covariance methodology applied to the overall uncertainty in the 35S disintegration rate. ► Monte Carlo simulation was applied to determine 35S activity in the 4πβ(PC)-γ coincidence system.
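
    The covariance methodology referred to is standard GUM-style uncertainty propagation: for a disintegration rate N0 = f(x1, ..., xn) with correlated inputs, the combined variance is

    ```latex
    % GUM-style propagation with correlated inputs x_i
    u^2(N_0) = \sum_{i=1}^{n}\sum_{j=1}^{n}
               \frac{\partial f}{\partial x_i}\,\frac{\partial f}{\partial x_j}\,
               \mathrm{cov}(x_i, x_j)
             = \sum_{i=1}^{n}\left(\frac{\partial f}{\partial x_i}\right)^{2} u^2(x_i)
             + 2\sum_{i<j}\frac{\partial f}{\partial x_i}\,
               \frac{\partial f}{\partial x_j}\,\mathrm{cov}(x_i, x_j)
    ```

    Retaining every cov(x_i, x_j) term, rather than assuming independent inputs, is what the abstract means by taking into account "all possible correlations between each pair" of partial uncertainties.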

  13. Dielectric Barrier Discharge (DBD) Plasma Actuators Thrust-Measurement Methodology Incorporating New Anti-Thrust Hypothesis

    Science.gov (United States)

    Ashpis, David E.; Laun, Matthew C.

    2014-01-01

    We discuss thrust measurements of Dielectric Barrier Discharge (DBD) plasma actuator devices used for aerodynamic active flow control. After a review of our experience with conventional thrust measurement and the significant non-repeatability of its results, we devised a suspended actuator test setup, and now present a methodology of thrust measurements with decreased uncertainty. The methodology consists of frequency scans at constant voltages. The procedure consists of increasing the frequency in a step-wise fashion from several Hz to the maximum frequency of several kHz, followed by a frequency decrease back down to the start frequency of several Hz. This sequence is performed first at the highest voltage of interest, then repeated at lower voltages. The data in the descending frequency direction are more consistent and are selected for reporting. Sample results show a strong dependence of thrust on humidity, which also affects the consistency and fluctuations of the measurements. We also observed negative values of thrust, or "anti-thrust", at low frequencies between 4 Hz and 64 Hz. The anti-thrust is proportional to the mean-squared voltage and is frequency independent. Departures from the parabolic anti-thrust curve are correlated with the appearance of visible plasma discharges. We propose the anti-thrust hypothesis: the measured thrust is a sum of plasma thrust and anti-thrust, with the anti-thrust assumed to exist at all frequencies and voltages. The anti-thrust depends on actuator geometry and materials and on the test installation. It enables the separation of the plasma thrust from the measured total thrust. This approach enables more meaningful comparisons between actuators at different installations and laboratories. The dependence on test installation was validated by surrounding the actuator with a large diameter, grounded, metal sleeve.
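
    In our notation (with k an installation- and geometry-dependent constant, not a symbol from the paper), the hypothesis can be written as

    ```latex
    % Measured thrust decomposed per the anti-thrust hypothesis:
    % anti-thrust is frequency independent and scales with mean-squared voltage.
    T_{\mathrm{meas}}(f, V) = T_{\mathrm{plasma}}(f, V) + T_{\mathrm{anti}}(V),
    \qquad T_{\mathrm{anti}}(V) = -k\,\overline{V^{2}}
    ```

    so the plasma thrust is recovered by subtracting the parabolic anti-thrust baseline from the measured curve.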

  14. Composite GPS Patch Antenna for the AR Bandwidth Enhancement

    Directory of Open Access Journals (Sweden)

    Minkil Park

    2016-01-01

    Full Text Available A composite Global Positioning System (GPS) patch antenna with a quadrature 3 dB hybrid coupler was designed and implemented to operate in right-hand circular polarization (RHCP) with a broadband axial ratio (AR) bandwidth. We designed two patches, an FR-4 patch and a thin, 1.5 mm-thick ceramic patch, each with a quadrature 3 dB hybrid coupler. A CP radiation pattern was achieved, and the AR bandwidth improved by incorporating a quadrature 3 dB hybrid coupler feed structure in a micro-strip patch antenna. The quadrature 3 dB hybrid coupler was implemented with surface-mount (SMD) chip elements. For the composite FR-4 and ceramic patch antennas, the VSWR measurement showed a 2 : 1 ratio over the entire design band, and the 3 dB AR bandwidth was 295 and 580 MHz for the FR-4 patch and ceramic patch antennas, respectively. The antenna gains for the composite FR-4 and ceramic patch antennas were measured as 1.36–2.75 and 1.47–2.71 dBi with 15.11–25.3% and 19.25–28.45% efficiency, respectively.

  15. Causality analysis in business performance measurement system using system dynamics methodology

    Science.gov (United States)

    Yusof, Zainuridah; Yusoff, Wan Fadzilah Wan; Maarof, Faridah

    2014-07-01

    One of the main components of the Balanced Scorecard (BSC) that differentiates it from any other performance measurement system (PMS) is the Strategy Map with its unidirectional causality feature. Despite its apparent popularity, criticisms of the causality have been rigorously discussed by earlier researchers. In seeking empirical evidence of causality, propositions based on the service profit chain theory were developed and tested using econometric analysis, namely the Granger causality test, on 45 data points. However, the insufficiency of well-established causality models was found, as only 40% of the causal linkages were supported by the data. Expert knowledge was suggested for use in situations of insufficient historical data. The Delphi method was selected and conducted to obtain consensus on the existence of causality among 15 selected experts, utilizing 3 rounds of questionnaires. The study revealed that only 20% of the propositions were not supported. The existence of bidirectional causality, which demonstrates significant dynamic environmental complexity through interaction among measures, was obtained from both methods. With that, a computer model and simulation using the System Dynamics (SD) methodology was developed as an experimental platform to identify how policies impact business performance in such environments. Reproduction, sensitivity and extreme condition tests were conducted on the developed SD model to ensure its capability to mimic reality, as well as its robustness and validity as a causality analysis platform. This study applied a theoretical service management model within the BSC domain to a practical situation using the SD methodology, where very limited work has been done.
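
    The Granger step can be reproduced with standard tooling. A minimal sketch using statsmodels on synthetic data (hypothetical stand-ins for two BSC measures, not the study's dataset):

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for two BSC measures: x (e.g. employee satisfaction)
    # leads y (e.g. customer satisfaction) by one period.
    n = 45                                   # same order as the study's 45 points
    x = rng.normal(size=n).cumsum()
    y = np.empty(n)
    y[0] = rng.normal()
    y[1:] = 0.6 * x[:-1] + rng.normal(scale=0.5, size=n - 1)

    # statsmodels convention: tests whether the 2nd column Granger-causes the 1st.
    data = np.column_stack([np.diff(y), np.diff(x)])   # difference to de-trend
    grangercausalitytests(data, maxlag=2)              # prints F-tests per lag
    ```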

  16. Measurement and verification of low income energy efficiency programs in Brazil: Methodological challenges

    Energy Technology Data Exchange (ETDEWEB)

    Martino Jannuzzi, Gilberto De; Rodrigues da Silva, Ana Lucia; Melo, Conrado Augustus de; Paccola, Jose Angelo; Dourado Maia Gomes, Rodolfo (State Univ. of Campinas, International Energy Initiative (Brazil))

    2009-07-01

    Electric utilities in Brazil are investing about 80 million dollars annually in low-income energy efficiency programs, about half of their total compulsory investments in end-use efficiency programs under current regulation. Since 2007 the regulator has enforced the need to provide evaluation plans for the programs delivered. This paper presents the measurement and verification (M&V) methodology that has been developed to accommodate the characteristics of the lighting and refrigerator programs that have been introduced in Brazilian urban and peri-urban slums. A combination of household surveys, end-use measurements and metering at the transformer and grid levels was performed before and after program implementation. The methodology has to accommodate the dynamics, housing, electrical wiring and connections of the population, as well as their ability to pay for electricity and to participate in the program. Results obtained in slums in Rio de Janeiro are presented. Impacts of the programs were evaluated in energy terms for households and utilities. Feedback from the evaluations also permitted improvements in the design of new programs for low-income households.

  17. National and Regional Surveys of Radon Concentration in Dwellings. Review of Methodology and Measurement Techniques

    International Nuclear Information System (INIS)

    2013-01-01

    Reliable, comparable and 'fit for purpose' results are essential requirements for any decision based on analytical measurements. For the analyst, the availability of tested and validated sampling and analytical procedures is an extremely important tool for carrying out such measurements. For maximum utility, such procedures should be comprehensive, clearly formulated and readily available to both the analyst and the customer for reference. In the specific case of radon surveys, it is very important to design a survey in such a way as to obtain results that can reasonably be considered representative of a population. Since 2004, the Environment Programme of the IAEA has included activities aimed at the development of a set of procedures for the measurement of radionuclides in terrestrial environmental samples. The development of radon measurement procedures for national and regional surveys started with the collection and review of more than 160 relevant scientific papers. On the basis of this review, this publication summarizes the methodology and the measurement techniques suitable for a population representative national or regional survey on radon concentration in the indoor air of dwellings. The main elements of the survey design are described and discussed, such as the sampling scheme, the protocols, the questionnaire and the data analysis, with particular attention to the potential biases that can affect the representativeness of the results. Moreover, the main measurement techniques suitable for national surveys on indoor radon are reviewed, with particular attention to the elements that can affect the precision and accuracy of the results

  18. National and Regional Surveys of Radon Concentration in Dwellings. Review of Methodology and Measurement Techniques

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-12-15

    Reliable, comparable and 'fit for purpose' results are essential requirements for any decision based on analytical measurements. For the analyst, the availability of tested and validated sampling and analytical procedures is an extremely important tool for carrying out such measurements. For maximum utility, such procedures should be comprehensive, clearly formulated and readily available to both the analyst and the customer for reference. In the specific case of radon surveys, it is very important to design a survey in such a way as to obtain results that can reasonably be considered representative of a population. Since 2004, the Environment Programme of the IAEA has included activities aimed at the development of a set of procedures for the measurement of radionuclides in terrestrial environmental samples. The development of radon measurement procedures for national and regional surveys started with the collection and review of more than 160 relevant scientific papers. On the basis of this review, this publication summarizes the methodology and the measurement techniques suitable for a population representative national or regional survey on radon concentration in the indoor air of dwellings. The main elements of the survey design are described and discussed, such as the sampling scheme, the protocols, the questionnaire and the data analysis, with particular attention to the potential biases that can affect the representativeness of the results. Moreover, the main measurement techniques suitable for national surveys on indoor radon are reviewed, with particular attention to the elements that can affect the precision and accuracy of the results.

  19. Digital demodulator for wide bandwidth SAR

    DEFF Research Database (Denmark)

    Jørgensen, Jørn Hjelm

    2000-01-01

    A novel approach to the design of efficient digital quadrature demodulators for wide bandwidth SAR systems is described. Efficiency is obtained by setting the intermediate frequency to 1/4 the ADC sampling frequency. One channel is made filter-free by synchronizing the local oscillator...
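
    The fs/4 choice the abstract mentions is what makes a filter-free channel possible: at an intermediate frequency of exactly one quarter of the sampling rate, the local-oscillator samples degenerate to 0 and ±1, so quadrature mixing reduces to sign flips and zeroing. A generic sketch of the idea (not the paper's SAR-specific design):

    ```python
    import numpy as np

    fs = 1000.0                       # ADC sampling rate (arbitrary units)
    f_if = fs / 4                     # intermediate frequency = fs/4
    n = np.arange(256)

    # Example real IF signal: a tone slightly offset from the IF carrier.
    x = np.cos(2 * np.pi * (f_if + 10.0) / fs * n + 0.3)

    # LO samples at fs/4: cos -> [1, 0, -1, 0, ...], sin -> [0, 1, 0, -1, ...]
    lo_i = np.array([1, 0, -1, 0])[n % 4]
    lo_q = np.array([0, 1, 0, -1])[n % 4]

    i_mix = x * lo_i                  # multiplications are just +/-x or 0
    q_mix = x * lo_q

    # Half the samples in each channel are exactly zero, so the subsequent
    # image-reject filtering/decimation halves its work; in the paper's
    # scheme one channel is arranged so that it needs no filter at all.
    baseband = i_mix - 1j * q_mix
    print(baseband[:8])
    ```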

  20. Improved space bandwidth product in image upconversion

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin; Pedersen, Christian; Tidemand-Lichtenberg, Peter

    2012-01-01

    We present a technique increasing the space bandwidth product of a nonlinear image upconversion process used for spectral imaging. The technique exploits the strong dependency of the phase-matching condition in sum frequency generation (SFG) on the angle of propagation of the interacting fields...

  1. An ultrasonic methodology for muscle cross section measurement of support space flight

    Science.gov (United States)

    Hatfield, Thomas R.; Klaus, David M.; Simske, Steven J.

    2004-09-01

    The number one priority for any manned space mission is the health and safety of its crew. The study of the short and long term physiological effects on humans is paramount to ensuring crew health and mission success. One of the challenges associated in studying the physiological effects of space flight on humans, such as loss of bone and muscle mass, has been that of readily attaining the data needed to characterize the changes. The small sampling size of astronauts, together with the fact that most physiological data collection tends to be rather tedious, continues to hinder elucidation of the underlying mechanisms responsible for the observed changes that occur in space. Better characterization of the muscle loss experienced by astronauts requires that new technologies be implemented. To this end, we have begun to validate a 360° ultrasonic scanning methodology for muscle measurements and have performed empirical sampling of a limb surrogate for comparison. Ultrasonic wave propagation was simulated using 144 stations of rotated arm and calf MRI images. These simulations were intended to provide a preliminary check of the scanning methodology and data analysis before its implementation with hardware. Pulse-echo waveforms were processed for each rotation station to characterize fat, muscle, bone, and limb boundary interfaces. The percentage error between MRI reference values and calculated muscle areas, as determined from reflection points for calf and arm cross sections, was -2.179% and +2.129%, respectively. These successful simulations suggest that ultrasound pulse scanning can be used to effectively determine limb cross-sectional areas. Cross-sectional images of a limb surrogate were then used to simulate signal measurements at several rotation angles, with ultrasonic pulse-echo sampling performed experimentally at the same stations on the actual limb surrogate to corroborate the results. The objective of the surrogate sampling was to compare the signal

  2. Ultra-high bandwidth quantum secured data transmission

    Science.gov (United States)

    Dynes, James F.; Tam, Winci W.-S.; Plews, Alan; Fröhlich, Bernd; Sharpe, Andrew W.; Lucamarini, Marco; Yuan, Zhiliang; Radig, Christian; Straw, Andrew; Edwards, Tim; Shields, Andrew J.

    2016-10-01

    Quantum key distribution (QKD) provides an attractive means for securing communications in optical fibre networks. However, deployment of the technology has been hampered by the frequent need for dedicated dark fibres to segregate the very weak quantum signals from conventional traffic. Until now, the coexistence of QKD with data has been limited to bandwidths that are orders of magnitude below those commonly employed in fibre optic communication networks. Using an optimised wavelength division multiplexing scheme, we transport QKD and the prevalent 100 Gb/s data format in the forward direction over the same fibre for the first time. We show a full quantum encryption system operating with a bandwidth of 200 Gb/s over a 100 km fibre. Exploring the ultimate limits of the technology by experimental measurements of the Raman noise, we demonstrate it is feasible to combine QKD with 10 Tb/s of data over a 50 km link. These results suggest it will be possible to integrate QKD and other quantum photonic technologies into high bandwidth data communication infrastructures, thereby allowing their widespread deployment.

  3. Prediction of work metabolism from heart rate measurements in forest work: some practical methodological issues.

    Science.gov (United States)

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Auger, Isabelle; Leone, Mario

    2015-01-01

    Individual heart rate (HR) to workload relationships were determined using 93 submaximal step-tests administered to 26 healthy participants attending physical activities in a university training centre (laboratory study) and 41 experienced forest workers (field study). Predicted maximum aerobic capacity (MAC) was compared to measured MAC from a maximal treadmill test (laboratory study) to test the effect of two age-predicted maximum HR equations (220 − age and 207 − 0.7 × age) and two clothing insulation levels (0.4 and 0.91 clo) during the step-test. Work metabolism (WM) estimated from forest work HR was compared against concurrent work V̇O2 measurements while taking into account the HR thermal component. Results show that MAC and WM can be accurately predicted from work HR measurements and the simple regression models developed in this study (1% group mean prediction bias and up to 25% expected prediction bias for a single individual). Clothing insulation had no impact on predicted MAC, nor did the choice of age-predicted maximum HR equation. Practitioner summary: This study sheds light on four practical methodological issues faced by practitioners regarding the use of the HR methodology to assess WM in actual work environments. More specifically, the effects of wearing work clothes and of using two different maximum HR prediction equations on the ability of a submaximal step-test to assess MAC are examined, as well as the accuracy of using an individual's step-test HR-to-workload relationship to predict WM from HR data collected during actual work in the presence of thermal stress.
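
    The individual calibration at the core of the method is a simple linear fit of step-test HR against measured oxygen uptake, later inverted on field HR. A sketch with made-up numbers (purely illustrative, not the study's data):

    ```python
    import numpy as np

    # Hypothetical step-test calibration points for one worker: oxygen uptake
    # (L/min) measured at several submaximal workloads, with the matching HR.
    hr_step  = np.array([85, 100, 115, 130, 145])      # bpm
    vo2_step = np.array([0.9, 1.3, 1.7, 2.1, 2.5])     # L O2/min

    slope, intercept = np.polyfit(hr_step, vo2_step, 1)   # individual HR->VO2 line

    # Field HR recorded during forest work; the thermal component of HR would
    # first be subtracted, as the paper stresses, before applying the line.
    hr_work = np.array([118, 126, 134])
    vo2_work = slope * hr_work + intercept
    print(np.round(vo2_work, 2))       # estimated work metabolism, L O2/min
    ```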

  4. Improving inferior vena cava filter retrieval rates with the define, measure, analyze, improve, control methodology.

    Science.gov (United States)

    Sutphin, Patrick D; Reis, Stephen P; McKune, Angie; Ravanzo, Maria; Kalva, Sanjeeva P; Pillai, Anil K

    2015-04-01

    To design a sustainable process to improve optional inferior vena cava (IVC) filter retrieval rates based on the Define, Measure, Analyze, Improve, Control (DMAIC) methodology of the Six Sigma process improvement paradigm. The DMAIC framework was employed to design and implement a quality improvement project to increase IVC filter retrieval rates at a tertiary academic hospital. Retrievable IVC filters were placed in 139 patients over a 2-year period. The baseline IVC filter retrieval rate (n = 51) was reviewed through a retrospective analysis, and two strategies were devised to improve the filter retrieval rate: (a) mailing of letters to clinicians and patients for patients who had filters placed within 8 months of implementation of the project (n = 43) and (b) prospective automated scheduling of a clinic visit at 4 weeks after filter placement for all new patients (n = 45). The effectiveness of these strategies was assessed by measuring the filter retrieval rates and the estimated increase in revenue to interventional radiology. IVC filter retrieval rates increased from a baseline of 8% to 40% with the mailing of letters and to 52% with the automated scheduling of a clinic visit 4 weeks after IVC filter placement. The estimated revenue per 100 IVC filters placed increased from $2,249 to $10,518 with the mailing of letters and to $17,022 with the automated scheduling of a clinic visit. Using the DMAIC methodology, a simple and sustainable quality improvement intervention was devised that markedly improved IVC filter retrieval rates in eligible patients. Copyright © 2015 SIR. Published by Elsevier Inc. All rights reserved.

  5. Methodology for sample preparation and size measurement of commercial ZnO nanoparticles

    Directory of Open Access Journals (Sweden)

    Pei-Jia Lu

    2018-04-01

    Full Text Available This study discusses strategies for sample preparation to acquire images of sufficient quality for size characterization by scanning electron microscope (SEM), using two commercial ZnO nanoparticles of different surface properties as a demonstration. The central idea is that micrometre-sized aggregates of ZnO in powdered form first need to be broken down to nanosized particles through an appropriate process to generate a nanoparticle dispersion before being deposited on a flat surface for SEM observation. Analytical tools such as contact angle, dynamic light scattering and zeta potential measurements were utilized to optimize the sample preparation procedure and to check the quality of the results. Meanwhile, measurements of zeta potential values on flat surfaces also provide critical information, and save considerable time and effort, in selecting a suitable substrate on which particles of different properties can be attracted and kept without further aggregation. This simple, low-cost methodology can be generally applied to size characterization of commercial ZnO nanoparticles with limited information from vendors. Keywords: Zinc oxide, Nanoparticles, Methodology

  6. Measuring domestic water use: a systematic review of methodologies that measure unmetered water use in low-income settings.

    Science.gov (United States)

    Tamason, Charlotte C; Bessias, Sophia; Villada, Adriana; Tulsiani, Suhella M; Ensink, Jeroen H J; Gurley, Emily S; Mackie Jensen, Peter Kjaer

    2016-11-01

    To present a systematic review of methods for measuring domestic water use in settings where water meters cannot be used. We systematically searched the EMBASE, PubMed, Water Intelligence Online, Water Engineering and Development Center, IEEExplore, Scielo, and Science Direct databases for articles that reported methodologies for measuring water use at the household level where water metering infrastructure was absent or incomplete. A narrative review explored similarities and differences between the included studies and provided recommendations for future research on water use. A total of 21 studies were included in the review. Methods ranged from single-day to 14-consecutive-day visits, and water use recall ranged from 12 h to 7 days. Data were collected using questionnaires, observations or both. Many studies only collected information on water that was carried into the household, and some failed to mention whether water was used outside the home. Water use in the selected studies was found to range from 2 to 113 l per capita per day. No standardised methods for measuring unmetered water use were found, which brings into question the validity and comparability of studies that have measured unmetered water use. In future studies, it will be essential to define all components that make up water use and determine how they will be measured. A pre-study that involves observations and direct measurements during water collection periods (these will have to be determined through questioning) should be used to determine optimal methods for obtaining water use information in a survey. Day-to-day and seasonal variation should be included. A study that investigates water use recall is warranted to further develop standardised methods for measuring water use; in the meantime, water use recall should be limited to 24 h or fewer. © 2016 The Authors. Tropical Medicine & International Health Published by John Wiley & Sons Ltd.

  7. A methodological frame for assessing benzene induced leukemia risk mitigation due to policy measures

    International Nuclear Information System (INIS)

    Karakitsios, Spyros P.; Sarigiannis, Dimosthenis A.; Gotti, Alberto; Kassomenos, Pavlos A.; Pilidis, Georgios A.

    2013-01-01

    The study relies on the development of a methodology for assessing the determinants that comprise the overall leukemia risk due to benzene exposure and how these are affected by outdoor and indoor air quality regulation. An integrated modeling environment was constructed comprising traffic emissions, dispersion models, human exposure models and a coupled internal dose/biology-based dose–response risk assessment model, in order to assess the benzene-imposed leukemia risk, as well as the impact of traffic fleet renewal and smoking bans on these levels. Regarding traffic fleet renewal, several “what if” scenarios were tested. The detailed full-chain methodology was applied in a South-Eastern European urban setting in Greece, and a limited version of the methodology in Helsinki. The non-smoking population runs an average risk equal to 4.1 · 10⁻⁵, compared to 23.4 · 10⁻⁵ for smokers. The estimated lifetime risk for the examined occupational groups was higher than the one estimated for the general public by 10–20%. Active smoking constitutes a dominant parameter for benzene-attributable leukemia risk, much stronger than any related activity, occupational or not. From the assessment of mitigation policies it was found that the associated leukemia risk in the optimum traffic fleet scenario could be reduced by up to 85% for non-smokers and up to 8% for smokers. In contrast, smoking bans provided smaller gains (7% for non-smokers, 1% for smokers), while for Helsinki, smoking policies were found to be more efficient than traffic fleet renewal. The methodology proposed above provides a general framework for assessing aggregated exposure and the consequent leukemia risk from benzene (incorporating mechanistic data), capturing exposure and internal dosimetry dynamics, and translating changes in exposure determinants to actual changes in population risk, providing a valuable tool for risk management evaluation and consequently for policy support.

  8. A methodological frame for assessing benzene induced leukemia risk mitigation due to policy measures

    Energy Technology Data Exchange (ETDEWEB)

    Karakitsios, Spyros P. [Aristotle University of Thessaloniki, Department of Chemical Engineering, 54124 Thessaloniki (Greece); Sarigiannis, Dimosthenis A., E-mail: denis@eng.auth.gr [Aristotle University of Thessaloniki, Department of Chemical Engineering, 54124 Thessaloniki (Greece); Centre for Research and Technology Hellas (CE.R.T.H.), 57001, Thessaloniki (Greece); Gotti, Alberto [Centre for Research and Technology Hellas (CE.R.T.H.), 57001, Thessaloniki (Greece); Kassomenos, Pavlos A. [University of Ioannina, Department of Physics, Laboratory of Meteorology, GR-45110 Ioannina (Greece); Pilidis, Georgios A. [University of Ioannina, Department of Biological Appl. and Technologies, GR-45110 Ioannina (Greece)

    2013-01-15

    The study relies on the development of a methodology for assessing the determinants that comprise the overall leukemia risk due to benzene exposure and how these are affected by outdoor and indoor air quality regulation. An integrated modeling environment was constructed comprising traffic emissions, dispersion models, human exposure models and a coupled internal dose/biology-based dose–response risk assessment model, in order to assess the benzene-imposed leukemia risk, as well as the impact of traffic fleet renewal and smoking bans on these levels. Regarding traffic fleet renewal, several “what if” scenarios were tested. The detailed full-chain methodology was applied in a South-Eastern European urban setting in Greece, and a limited version of the methodology in Helsinki. The non-smoking population runs an average risk equal to 4.1 · 10⁻⁵, compared to 23.4 · 10⁻⁵ for smokers. The estimated lifetime risk for the examined occupational groups was higher than the one estimated for the general public by 10–20%. Active smoking constitutes a dominant parameter for benzene-attributable leukemia risk, much stronger than any related activity, occupational or not. From the assessment of mitigation policies it was found that the associated leukemia risk in the optimum traffic fleet scenario could be reduced by up to 85% for non-smokers and up to 8% for smokers. In contrast, smoking bans provided smaller gains (7% for non-smokers, 1% for smokers), while for Helsinki, smoking policies were found to be more efficient than traffic fleet renewal. The methodology proposed above provides a general framework for assessing aggregated exposure and the consequent leukemia risk from benzene (incorporating mechanistic data), capturing exposure and internal dosimetry dynamics, and translating changes in exposure determinants to actual changes in population risk, providing a valuable tool for risk management evaluation and consequently for policy support.

  9. Improving microwave antenna gain and bandwidth with phase compensation metasurface

    Directory of Open Access Journals (Sweden)

    Ke Chen

    2015-06-01

    Full Text Available Metasurfaces, planar versions of artificial metamaterials, provide an effective way to manipulate electromagnetic wave propagation. Here, we present a transparent metasurface for compensating the out-of-phase radiation from a microstrip patch antenna to improve its radiation gain and bandwidth. Based on the equivalence principle of Huygens' surface, we propose a metasurface composed of both inductive and capacitive resonant elements that produces high transmission with variable phase characteristics. Such a metasurface mounted on a patch antenna can transform the spherical-like phase profile generated by the patch into an in-phase planar one. A prototype antenna has been fabricated, validating the squeezed radiation pattern with suppressed sidelobes as well as the enhanced impedance bandwidth due to strong near-field coupling. Operating at around 5.7 GHz, the proposed antenna may find application in wireless communication systems, especially for point-to-point data transmission. It is believed that the design methodology could also be scaled to other frequency bands such as millimetre or terahertz waves.

  10. A methodology for the measure of secondary homes tourist flows at municipal level

    Directory of Open Access Journals (Sweden)

    Andrea Guizzardi

    2007-10-01

    Full Text Available The present public statistical system does not provide information concerning second-home tourist flows at the sub-regional level. This lack limits local administrations' ability to take decisions about environmental, territorial and productive development, and hinders regional governments in the fair allocation of public financing. In this work, the information gap is overcome by proposing an indirect estimation methodology. Municipal electric power consumption is proposed as an indicator of stays in secondary homes. The indicator is linked to tourism flows while accounting both for measurement errors and for factors modifying local power demand. The application to the Emilia-Romagna regional case verifies the coherence of the results with official statistics, as well as assessing municipalities' tourist vocation.

  11. Providing hierarchical approach for measuring supply chain performance using AHP and DEMATEL methodologies

    Directory of Open Access Journals (Sweden)

    Ali Najmi

    2010-06-01

    Full Text Available Measuring the performance of a supply chain is normally a function of various parameters. Such a problem often involves a multiple criteria decision making (MCDM) formulation in which different criteria need to be defined and calculated properly. During the past two decades, the analytic hierarchy process (AHP) and DEMATEL have been among the most popular MCDM approaches for prioritizing various attributes. This paper uses a new methodology, a combination of AHP and DEMATEL, to rank the various parameters affecting the performance of the supply chain. DEMATEL is used for understanding the relationships between comparison metrics, and AHP is used to integrate them into a value for the overall performance.
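
    The AHP integration step can be illustrated in a few lines: priority weights are the normalized principal eigenvector of a pairwise comparison matrix. A sketch with a hypothetical 3-criterion matrix (illustrative values on Saaty's 1–9 scale, not the paper's data):

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix: cost vs quality vs delivery
    # reliability; A[i, j] = relative importance of criterion i over j.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    # AHP priority weights: principal right eigenvector, normalized to sum to 1.
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()

    # Saaty consistency ratio CR = CI / RI, with random index RI = 0.58 for n = 3.
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)
    print("weights:", np.round(w, 3), " CR:", round(ci / 0.58, 3))
    ```

    A CR below about 0.1 is the usual threshold for accepting the judgments as consistent; DEMATEL would supply the influence structure among the metrics before this weighting is applied.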

  12. Code coverage measurement methodology for MMI software of safety-class I and C system

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Hyung; Jung, Beom Young; Choi, Seok Joo [Suresofttech, Seoul (Korea, Republic of)

    2016-10-15

    MMI (Man-Machine Interface) software in the safety instrumentation and control systems used in nuclear power plants carries out important functions, such as displaying safety-related information, transmitting commands to other systems, and changing setpoints. As the reliability of MMI software has come to be recognized as playing an important role in the safe operation of nuclear power plants, regulatory standards have been strengthened accordingly. This strengthening has extended to software testing, and current regulations require the measurement of code coverage against legal standards. In this paper, we identify the problems of the conventional method used for measuring code coverage and present a new coverage measurement method that solves them. We examined the limitations and low efficiency of the existing test coverage measurement method on the MMI software used in nuclear power instrumentation and control systems, and we propose a new test coverage measurement method as a solution. Applying the new Top-Down approach mitigates the problems of the existing test coverage measurement methods and makes the desired coverage objectives achievable. Of course, more cases still need to be secured, and the methodology should be systematized on the basis of those cases. If efficiency and reliability are later ensured through application in many cases, the method may be used to ensure code coverage of software not only in nuclear instrumentation and control but in the many other areas where GUIs are utilized.

  13. Large-bandwidth planar photonic crystal waveguides

    DEFF Research Database (Denmark)

    Søndergaard, Thomas; Lavrinenko, Andrei

    2002-01-01

    A general design principle is presented for making finite-height photonic crystal waveguides that support leakage-free guidance of light over large frequency intervals. The large-bandwidth waveguides are designed by introducing line defects in photonic crystal slabs, where the material in the line defect has appropriate dispersion properties relative to the photonic crystal slab material surrounding the line defect. A three-dimensional theoretical analysis is given for large-bandwidth waveguide designs based on a silicon-air photonic crystal slab suspended in air. In one example, leakage-free single-mode guidance is found for a large frequency interval covering 60% of the photonic band gap.

  14. High bandwidth concurrent processing on commodity platforms

    CERN Document Server

    Boosten, M; Van der Stok, P D V

    1999-01-01

    The I/O bandwidth and real-time processing power required for high-energy physics experiments is increasing rapidly over time. The current requirements can only be met by using large-scale concurrent processing. We are investigating the use of a large PC cluster interconnected by Fast and Gigabit Ethernet to meet the performance requirements of the ATLAS second level trigger. This architecture is attractive because of its performance and competitive pricing. A major problem is obtaining frequent high-bandwidth I/O without sacrificing the CPU's processing power. We present a tight integration of a user-level scheduler and a zero-copy communication layer. This system closely approaches the performance of the underlying hardware in terms of both CPU power and I/O capacity.

  15. Time-optimal control with finite bandwidth

    Science.gov (United States)

    Hirose, M.; Cappellaro, P.

    2018-04-01

    Time-optimal control theory provides recipes to achieve quantum operations with high fidelity and speed, as required in quantum technologies such as quantum sensing and computation. While technical advances have achieved the ultrastrong driving regime in many physical systems, these capabilities have yet to be fully exploited for the precise control of quantum systems, as other limitations, such as the generation of higher harmonics or the finite response time of the control apparatus, prevent the implementation of theoretical time-optimal control. Here we present a method to achieve time-optimal control of qubit systems that can take advantage of fast driving beyond the rotating wave approximation. We exploit results from time-optimal control theory to design driving protocols that can be implemented with realistic, finite-bandwidth control fields, and we find a relationship between bandwidth limitations and achievable control fidelity.

  16. Achieving increased bandwidth for 4 degree of freedom self-tuning energy harvester

    Science.gov (United States)

    Staaf, L. G. H.; Smith, A. D.; Köhler, E.; Lundgren, P.; Folkow, P. D.; Enoksson, P.

    2018-04-01

    The frequency response of a self-tuning energy harvester composed of two piezoelectric cantilevers connected by a middle beam with a sliding mass is investigated. Measurements show that incorporation of a free-sliding mass increases the bandwidth. Using an analytical model, the system is explained through close investigation of the resonance modes. Resonance mode behavior further suggests that, by breaking the symmetry of the system, even broader bandwidths are achievable.

  17. Development and Attestation of Gamma-Ray Measurement Methodologies for use by Rostekhnadzor Inspectors in the Russian Federation

    International Nuclear Information System (INIS)

    Jeff Sanders

    2006-01-01

    Development and attestation of gamma-ray non-destructive assay measurement methodologies for use by inspectors of the Russian Federal Service for Environmental, Technological, and Nuclear Oversight (Rostekhnadzor, formerly Gosatomnadzor or GAN), as well as for use by Russian nuclear facilities, has been completed. Specifically, a methodology utilizing the gamma-ray multi-group analysis (MGA) method for determining plutonium isotopic composition has been developed, while existing methodologies for determining uranium enrichment and isotopic composition have been revised to make them more appropriate to the material types and conditions present in nuclear facilities in the Russian Federation. This paper discusses the development and revision of these methodologies, the metrological characteristics of the final methodologies, and the limitations and concerns specific to the utilization of these analysis methods in the Russian Federation.

  18. Probabilistic Bandwidth Assignment in Wireless Sensor Networks

    OpenAIRE

    Khan , Dawood; Nefzi , Bilel; Santinelli , Luca; Song , Ye-Qiong

    2012-01-01

    International audience; With this paper we offer an insight into designing and analyzing wireless sensor networks in a versatile manner. Our framework applies probabilistic and component-based design principles for wireless sensor network modeling and, consequently, analysis, while maintaining flexibility and accuracy. In particular, we address the problem of allocating and reconfiguring the available bandwidth. The framework has been successfully implemented in IEEE 802.15.4 using an Admissi...

  19. Polybinary modulation for bandwidth limited optical links

    DEFF Research Database (Denmark)

    Vegas Olmos, Juan José; Jurado-Navas, Antonio

    2015-01-01

    Polybinary modulation, a form of partial response modulation, employs simple codification and filtering at the transmitter to drastically increase the spectral efficiency. At the receiver side, polybinary modulation requires low-complexity direct detection and very little digital signal processing. This talk will review the recent results on polybinary modulation, comprising both binary and multilevel signals as seed signals. The results will show how polybinary modulation effectively reduces the bandwidth requirements on optical links while providing high spectral efficiency....

  20. Comparison of fungal spores concentrations measured with wideband integrated bioaerosol sensor and Hirst methodology

    Science.gov (United States)

    Fernández-Rodríguez, S.; Tormo-Molina, R.; Lemonis, N.; Clot, B.; O'Connor, D. J.; Sodeau, John R.

    2018-02-01

    The aim of this work was to provide both a comparison of traditional and novel methodologies for airborne spore detection (i.e. the Hirst Burkard trap and the WIBS-4) and the first quantitative study of airborne fungal concentrations in Payerne (Western Switzerland), as well as their relation to meteorological parameters. From the traditional method (Hirst trap and microscope analysis), sixty-three propagule types (spores, sporangia and hyphae) were identified, and the average spore concentration measured over the full period amounted to 4145 ± 263.0 spores/m3. Maximum values were reached on July 19th and on August 6th. Twenty-six spore types reached average levels above 10 spores/m3. Airborne fungal propagules in Payerne showed a clear seasonal pattern, increasing from low values in early spring to maxima in summer. Daily average concentrations above 5000 spores/m3 were almost constant in summer from mid-June onwards. Weather parameters played a relevant role in determining the observed spore concentrations. Coniferous forest, dominant in the surroundings, may be a relevant source of airborne fungal propagules, as their distribution and the predominant wind directions are consistent with this origin. The comparison between the two methodologies used in this campaign showed remarkably consistent patterns throughout the campaign. A correlation coefficient of 0.9 (CI 0.76-0.96) was seen between the two instruments (Hirst trap and WIBS-4) over the time period at daily resolution. This apparent co-linearity fell away once increased resolution was employed. However, at higher resolutions, upon removal of Cladosporium species from the total fungal concentrations (Hirst trap), an increased correlation coefficient was again noted between the two instruments (R = 0.81, with confidence intervals of 0.74 and 0.86).

  1. Site-conditions map for Portugal based on VS measurements: methodology and final model

    Science.gov (United States)

    Vilanova, Susana; Narciso, João; Carvalho, João; Lopes, Isabel; Quinta Ferreira, Mario; Moura, Rui; Borges, José; Nemser, Eliza; Pinto, carlos

    2017-04-01

    In this paper we present a statistically significant site-condition model for Portugal based on shear-wave velocity (VS) data and surface geology. We also evaluate the performance of commonly used Vs30 proxies based on exogenous data and analyze the implications of using those proxies for calculating site amplification in seismic hazard assessment. The dataset contains 161 Vs profiles acquired in Portugal in the context of research projects, technical reports, academic theses and academic papers. The methodologies involved in characterizing the Vs structure at the sites in the database include seismic refraction, multichannel analysis of surface waves and refraction microtremor. Invasive measurements were performed in selected locations in order to compare the Vs profiles obtained from both invasive and non-invasive techniques. In general there was good agreement in the subsurface structure of Vs30 obtained from the different methodologies. The database flat-file includes information on Vs30, surface geology at 1:50.000 and 1:500.000 scales, and elevation and topographic slope derived from the SRTM30 topographic dataset. The procedure used to develop the site-conditions map is based on a three-step process that includes defining a preliminary set of geological units based on the literature, performing statistical tests to assess whether or not the differences in the distributions of Vs30 are statistically significant, and merging the geological units accordingly. The dataset was, to some extent, affected by clustering and/or preferential sampling, and therefore a declustering algorithm was applied. The final model includes three geological units: 1) igneous, metamorphic and old (Paleogene and Mesozoic) sedimentary rocks; 2) Neogene and Pleistocene formations; and 3) Holocene formations. The evaluation of proxies indicates that although geological analogues and topographic slope are in general unbiased, the latter shows significant bias for particular geological units and

  2. A novel methodology for online measurement of thoron using Lucas scintillation cell

    International Nuclear Information System (INIS)

    Eappen, K.P.; Sapra, B.K.; Mayya, Y.S.

    2007-01-01

    The use of the Lucas scintillation cell (LSC) technique for thoron estimation requires a modified methodology as opposed to radon estimation. While in the latter the α counting is performed after a delay period varying between a few hours and a few days, in the case of thoron estimation the α counting has to be carried out immediately after sampling owing to the short half-life of thoron (55 s). This is best achieved by having an on-line LSC sampling and counting system. However, the half-life of the thoron decay product 212Pb being 10.6 h, background accumulates in the LSC during online measurements, and hence subsequent use of the LSC is erroneous unless the normal background level is re-established in the cell. This problem can be circumvented by correcting for the average background counts accumulated during the counting period, which may be estimated theoretically. In this study, a methodology has been developed to estimate the true counts due to thoron. A linear regression between the counts obtained experimentally and the fractional decay in regular intervals of time is used to obtain the actual thoron concentration. The novelty of this approach is that the background of the cell is automatically estimated as the intercept of the regression graph. The results obtained by this technique compare well with the two-filter method and with the thoron concentration produced by a standard thoron source. However, the LSC as such cannot be used for environmental samples because the minimum detection level is comparable with the thoron concentrations prevailing in the normal atmosphere.
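
    The regression can be sketched numerically: if counting interval i sees a known fractional decay f_i of the initially sampled thoron, then counts_i ≈ N0·f_i + B, so a straight-line fit yields the thoron contribution N0 as the slope and the accumulated 212Pb background B as the intercept. Illustrative numbers only:

    ```python
    import numpy as np

    # Fractional thoron decay in successive equal counting intervals (known
    # from the 55 s half-life) and the corresponding measured gross counts.
    f = np.array([0.53, 0.25, 0.12, 0.055, 0.026])     # fractional decay
    counts = np.array([1090, 520, 265, 138, 80])        # illustrative data

    slope, intercept = np.polyfit(f, counts, 1)
    print(f"thoron counts (N0) ~ {slope:.0f}, background (B) ~ {intercept:.0f}")
    ```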

  3. Does Methodological Guidance Produce Consistency? A Review of Methodological Consistency in Breast Cancer Utility Value Measurement in NICE Single Technology Appraisals.

    Science.gov (United States)

    Rose, Micah; Rice, Stephen; Craig, Dawn

    2017-07-05

    Since 2004, the National Institute for Health and Care Excellence (NICE) methodological guidance for technology appraisals has emphasised a strong preference for using the validated EuroQol 5-Dimensions (EQ-5D) quality-of-life instrument, measuring patient health status from patients or carers, and using the general public's preference-based valuation of different health states when assessing health benefits in economic evaluations. The aim of this study was to review all NICE single technology appraisals (STAs) for breast cancer treatments to explore consistency in the use of utility scores in light of NICE methodological guidance. A review of all published breast cancer STAs was undertaken using all publicly available STA documents for each included assessment. Utility scores were assessed for consistency with NICE-preferred methods and original data sources. Furthermore, academic assessment group work undertaken during the STA process was examined to evaluate the emphasis placed on NICE-preferred quality-of-life measurement methods. Twelve breast cancer STAs were identified, and many STAs used evidence that did not follow NICE's preferred utility score measurement methods. Recent STA submissions show companies using EQ-5D and mapping. Academic assessment groups rarely emphasized NICE-preferred methods, and queries about preferred methods were rare. While there appears to be a trend in recent STA submissions towards following NICE methodological guidance, historically STA guidance in breast cancer has generally not used NICE's preferred methods. Future STAs in breast cancer and reviews of older guidance should ensure that utility measurement methods are consistent with the NICE reference case to help produce consistent, equitable decision making.

  4. THz-bandwidth photonic Hilbert transformers based on fiber Bragg gratings in transmission.

    Science.gov (United States)

    Fernández-Ruiz, María R; Wang, Lixian; Carballar, Alejandro; Burla, Maurizio; Azaña, José; LaRochelle, Sophie

    2015-01-01

    THz-bandwidth photonic Hilbert transformers (PHTs) are implemented for the first time, to the best of our knowledge, based on fiber Bragg grating (FBG) technology. To overcome the practical bandwidth limitation of FBGs (typically <200 GHz), a superstructure based on two superimposed linearly chirped FBGs operating in transmission has been employed. The use of a transmission FBG involves first converting the non-minimum-phase response of the PHT into a minimum-phase response by adding an anticipated instantaneous component to the desired system temporal impulse response. Using this methodology, a 3-THz-bandwidth integer PHT and a fractional (order 0.81) PHT are designed, fabricated, and successfully characterized.
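
    For reference, the ideal frequency response being approximated is the standard fractional Hilbert transformer of order ρ (ρ = 1 recovers the classical integer case quoted above):

    ```latex
    % Ideal fractional Hilbert transformer of order rho (rho = 1: classical HT)
    H_{\rho}(\omega) =
    \begin{cases}
    e^{-j\rho\pi/2}, & \omega > 0\\[2pt]
    e^{+j\rho\pi/2}, & \omega < 0
    \end{cases}
    ```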

  5. Measuring Effectiveness in Digital Game-Based Learning: A Methodological Review.

    Directory of Open Access Journals (Sweden)

    Anissa All

    2014-06-01

    Full Text Available In recent years, a growing number of studies are being conducted into the effectiveness of digital game-based learning (DGBL. Despite this growing interest, there is a lack of sound empirical evidence on the effectiveness of DGBL due to different outcome measures for assessing effectiveness, varying methods of data collection and inconclusive or difficult to interpret results. This has resulted in a need for an overarching methodology for assessing the effectiveness of DGBL. The present study took a first step in this direction by mapping current methods used for assessing the effectiveness of DGBL. Results showed that currently, comparison of results across studies and thus looking at effectiveness of DGBL on a more general level is problematic due to diversity in and suboptimal study designs. Variety in study design relates to three issues, namely different activities that are implemented in the control groups, different measures for assessing the effectiveness of DGBL and the use of different statistical techniques for analyzing learning outcomes. Suboptimal study designs are the result of variables confounding study results. Possible confounds that were brought forward in this review are elements that are added to the game as part of the educational intervention (e.g., required reading, debriefing session, instructor influences and practice effects when using the same test pre- and post-intervention. Lastly, incomplete information on the study design impedes replication of studies and thus falsification of study results.

  6. Characterization of gloss properties of differently treated polymer coating surfaces by surface clarity measurement methodology.

    Science.gov (United States)

    Gruber, Dieter P; Buder-Stroisznigg, Michael; Wallner, Gernot; Strauß, Bernhard; Jandel, Lothar; Lang, Reinhold W

    2012-07-10

    With one measurement configuration, existing gloss measurement methodologies are generally restricted to specific gloss levels. A newly developed image-analytical gloss parameter called "clarity" makes it possible to describe the perceptual result of a broad range of gloss levels with one setup. In order to analyze, and ultimately monitor, the perceived gloss of products, a fast and flexible method that also suits automated inspection is in high demand. The clarity parameter is very fast to calculate and therefore usable for fast in-line surface inspection. Coated metal specimens were deformed to varying degrees and polished afterwards in order to study the clarity parameter with regard to the quantification of varying surface gloss types and levels. In order to analyze the correlation with human gloss perception, a study was carried out in which experts were asked to assess gloss properties of a series of surface samples under standardized conditions. The study confirmed that clarity exhibits considerably better correlation with human perception than alternative gloss parameters.

  7. Enabling Mobile Communications for the Needy: Affordability Methodology, and Approaches to Requalify Universal Service Measures

    Directory of Open Access Journals (Sweden)

    Louis-Francois PAU

    2009-01-01

    Full Text Available This paper links communications and media usage to social and household economics boundaries. It highlights that in present-day society communications and media are a necessity, but not always affordable, and that they furthermore open the door to addictive behaviors which raise additional financial and social risks. A simple and efficient methodology compatible with state-of-the-art social and communications business statistics is developed, which produces the residual communications and media affordability budget and ultimately the value-at-risk in terms of usage and tariffs. Sensitivity analysis provides valuable information on bottom-up communications and media adoption on the basis of affordability. This approach differs from the regulated but often ineffective Universal service obligation, which instead of catering for individual needs mostly addresses macro-measures helping geographical access coverage (e.g., in rural areas). It is proposed to requalify the Universal service obligations on operators into concrete measures, allowing, with unchanged funding, the needy to adopt mobile services based on their affordability constraints by bridging the gap to a standard tariff. Case data are surveyed from various countries. ICT policy recommendations are made to support widespread and socially responsible communications access.

  8. Efficient Bandwidth Management for Ethernet Passive Optical Networks

    KAUST Repository

    Elrasad, Amr

    2016-01-01

    The increasing bandwidth demand in access networks motivates network operators, networking device manufacturers, and standardization institutions to search for new approaches to access networks. These approaches should support higher bandwidth

  9. Estimating individual listeners’ auditory-filter bandwidth in simultaneous and non-simultaneous masking

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Caminade, Sabine; Strelcyk, Olaf

    2010-01-01

    Frequency selectivity in the human auditory system is often measured using simultaneous masking of tones presented in notched noise. Based on such masking data, the equivalent rectangular bandwidth (ERB) of the auditory filters can be derived by applying the power spectrum model of masking....... Considering bandwidth estimates from previous studies based on forward masking, only average data across a number of subjects have been considered. The present study is concerned with bandwidth estimates in simultaneous and forward masking in individual normal-hearing subjects. In order to investigate...... the reliability of the individual estimates, a statistical resampling method is applied. It is demonstrated that a rather large set of experimental data is required to reliably estimate auditory filter bandwidth, particularly in the case of simultaneous masking. The poor overall reliability of the filter...
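
    The abstract's resampling point can be illustrated with a generic nonparametric bootstrap: resample an individual listener's repeated filter-bandwidth estimates to gauge their reliability. The numbers and the simple mean statistic below are illustrative assumptions, not the study's actual procedure.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical ERB estimates [Hz] from repeated notched-noise fits for
        # one listener (illustrative numbers only).
        erb_fits = np.array([310., 295., 340., 360., 288., 305., 330., 315.])

        # Bootstrap: resample the fits with replacement and examine the spread
        # of the mean ERB across resamples.
        boot = np.array([rng.choice(erb_fits, erb_fits.size, replace=True).mean()
                         for _ in range(10_000)])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"mean ERB ~ {erb_fits.mean():.0f} Hz, 95% CI ~ ({lo:.0f}, {hi:.0f}) Hz")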

  10. Low Bandwidth Vocoding using EM Sensor and Acoustic Signal Processing

    International Nuclear Information System (INIS)

    Ng, L C; Holzrichter, J F; Larson, P E

    2001-01-01

    Low-power EM radar-like sensors have made it possible to measure properties of the human speech production system in real-time, without acoustic interference [1]. By combining these data with the corresponding acoustic signal, we've demonstrated an almost 10-fold bandwidth reduction in speech compression, compared to a standard 2.4 kbps LPC10 protocol used in the STU-III (Secure Terminal Unit, third generation) telephone. This paper describes a potential EM sensor/acoustic based vocoder implementation

  11. Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist

    NARCIS (Netherlands)

    Terwee, C.B.; Mokkink, L.B.; Knol, D.L.; Ostelo, R.W.J.G.; Bouter, L.M.; de Vet, H.C.W.

    2012-01-01

    Background: The COSMIN checklist is a standardized tool for assessing the methodological quality of studies on measurement properties. It contains 9 boxes, each dealing with one measurement property, with 5-18 items per box about design aspects and statistical methods. Our aim was to develop a

  12. Fluid limits for Bandwidth-Sharing Networks with Impatience.

    NARCIS (Netherlands)

    Remerova, M.; Reed, J.; Zwart, A.P.

    2014-01-01

    Bandwidth-sharing networks as introduced by Roberts and Massoulié [Roberts JW, Massoulié L (1998) Bandwidth sharing and admission control for elastic traffic. Proc. ITC Specialist Seminar, Yokohama, Japan], Massoulié and Roberts [Massoulié L, Roberts JW (1999) Bandwidth sharing: Objectives and

  13. Low and Expensive Bandwidth Remains Key Bottleneck for ...

    African Journals Online (AJOL)

    These bottlenecks have dwarfed the expectations of citizens to fully participate in the new world economic order galvanized by e-commerce and world trade. It is estimated that M.I.T. in Boston, USA, has a bandwidth allocation that surpasses all the bandwidth allocated to Nigeria put together. Low bandwidth has been found ...

  14. Current psychometric and methodological issues in the measurement of overgeneral autobiographical memory.

    Science.gov (United States)

    Griffith, James W; Sumner, Jennifer A; Raes, Filip; Barnhofer, Thorsten; Debeer, Elise; Hermans, Dirk

    2012-12-01

    Autobiographical memory is a multifaceted construct that is related to psychopathology and other difficulties in functioning. Across many studies, a variety of methods have been used to study autobiographical memory. The relationship between overgeneral autobiographical memory (OGM) and psychopathology has been of particular interest, and many studies of this cognitive phenomenon rely on the Autobiographical Memory Test (AMT) to assess it. In this paper, we examine several methodological approaches to studying autobiographical memory, and focus primarily on methodological and psychometric considerations in OGM research. We pay particular attention to what is known about the reliability, validity, and methodological variations of the AMT. The AMT has adequate psychometric properties, but there is great variability in methodology across studies that use it. Methodological recommendations and suggestions for future studies are presented.

  15. A wide-bandwidth and high-sensitivity robust microgyroscope

    International Nuclear Information System (INIS)

    Sahin, Korhan; Sahin, Emre; Akin, Tayfun; Alper, Said Emre

    2009-01-01

    This paper reports a microgyroscope design concept that uses a 2 degrees of freedom (DoF) sense mode to achieve a wide bandwidth without sacrificing mechanical and electronic sensitivity and to obtain robust operation against variations in ambient conditions. The design concept is demonstrated with a tuning-fork microgyroscope fabricated with an in-house silicon-on-glass micromachining process. When the fabricated gyroscope is operated with a relatively wide bandwidth of 1 kHz, measurements show a relatively high raw mechanical sensitivity of 131 µV/(°/s). The variation in the amplified mechanical sensitivity (scale factor) of the gyroscope is measured to be less than 0.38% for large ambient pressure variations such as from 40 to 500 mTorr. The bias instability and angle random walk of the gyroscope are measured to be 131 °/h and 1.15 °/√h, respectively.

  16. Innovative Methodologies for thermal Energy Release Measurement: case of La Solfatara volcano (Italy)

    Science.gov (United States)

    Marfe`, Barbara; Avino, Rosario; Belviso, Pasquale; Caliro, Stefano; Carandente, Antonio; Marotta, Enrica; Peluso, Rosario

    2015-04-01

    This work is devoted to improving knowledge of the parameters that control the heat flux anomalies associated with the diffuse degassing processes of volcanic and hydrothermal areas. The methodologies currently used to measure heat flux (i.e., CO2 flux or temperature gradient) are either poorly efficient or poorly effective, and are unable to detect short- to medium-term (days to months) variation trends in the heat flux. A new method, based on the use of thermal imaging cameras, has been applied to estimate the heat flux and its time variations. This approach allows faster heat flux measurement than the already accredited methods, improving the definition of the activity state of a volcano and allowing a better assessment of the related hazard and risk mitigation. The idea is to extrapolate the heat flux from the ground surface temperature which, in a purely conductive regime, is directly correlated to the shallow temperature gradient. We use thermal imaging cameras, at short distances (meters to hundreds of meters), to quickly obtain a map of areas with thermal anomalies and a measure of their temperature. Preliminary studies have been carried out throughout the whole of the La Solfatara crater in order to investigate a possible correlation between the surface temperature and the shallow thermal gradient. We used a FLIR SC640 thermal camera and K-type thermocouples to make the two measurements at the same time. Results suggest a good correlation between the shallow temperature gradient ΔTs and the background-corrected surface temperature Ts, and although the campaigns took place over a period of a few years, this correlation appears stable over time. This is an encouraging result for the further development of a measurement method based only on the use of a short-range thermal imaging camera. Surveys with thermal cameras may be manually done using a tripod to take thermal images of small contiguous areas and then joining
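
    In a purely conductive regime the link between the shallow gradient and heat flux is just Fourier's law; the sketch below shows the arithmetic with invented values (the conductivity, temperatures, and depth are assumptions for illustration, not measurements from La Solfatara).

        # Fourier's law: upward conductive heat flux from the shallow gradient.
        k = 0.4             # soil thermal conductivity [W m^-1 K^-1] (assumed)
        T_surface = 60.0    # background-corrected surface temperature [deg C]
        T_depth = 95.0      # temperature at the shallow sensor depth [deg C]
        dz = 0.3            # sensor depth [m]

        q_up = k * (T_depth - T_surface) / dz
        print(f"conductive heat flux ~ {q_up:.0f} W/m^2")    # ~47 W/m^2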

  17. Measuring resource inequalities. The concepts and methodology for an area-based Gini coefficient

    International Nuclear Information System (INIS)

    Druckman, A.; Jackson, T.

    2008-01-01

    Although inequalities in income and expenditure are relatively well researched, comparatively little attention has been paid, to date, to inequalities in resource use. This is clearly a shortcoming when it comes to developing informed policies for sustainable consumption and social justice. This paper describes an indicator of inequality in resource use called the AR-Gini. The AR-Gini is an area-based measure of resource inequality that estimates inequalities between neighbourhoods with regard to the consumption of specific consumer goods. It is also capable of estimating inequalities in the emissions resulting from resource use, such as carbon dioxide emissions from energy use, and solid waste arisings from material resource use. The indicator is designed to be used as a basis for broadening the discussion concerning 'food deserts' to inequalities in other types of resource use. By estimating the AR-Gini for a wide range of goods and services we aim to enhance our understanding of resource inequalities and their drivers, identify which resources have highest inequalities, and to explore trends in inequalities. The paper describes the concepts underlying the construction of the AR-Gini and its methodology. Its use is illustrated by pilot applications (specifically, men's and boys' clothing, carpets, refrigerators/freezers and clothes washer/driers). The results illustrate that different levels of inequality are associated with different commodities. The paper concludes with a brief discussion of some possible policy implications of the AR-Gini. (author)
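
    For illustration, a Gini coefficient over neighbourhood-level consumption can be computed as below; the function uses the standard ordered-values formula, and the consumption figures are invented (the AR-Gini itself adds further steps, such as estimating neighbourhood consumption of specific goods).

        import numpy as np

        def gini(x):
            # Standard Gini formula on the sorted values.
            x = np.sort(np.asarray(x, dtype=float))
            n = x.size
            return (2.0 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

        # Hypothetical per-neighbourhood consumption of one commodity
        consumption = [120., 80., 300., 45., 200., 150.]
        print(f"area-based Gini ~ {gini(consumption):.2f}")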

  18. Thermal-Diffusivity Measurements of Mexican Citrus Essential Oils Using Photoacoustic Methodology in the Transmission Configuration

    Science.gov (United States)

    Muñoz, G. A. López; González, R. F. López; López, J. A. Balderas; Martínez-Pérez, L.

    2011-05-01

    Photoacoustic methodology in the transmission configuration (PMTC) was used to study the thermophysical properties of Mexican citrus essential oils and their relation to composition, demonstrating the viability of using photothermal techniques for quality control and for authentication of oils and detection of their adulteration. Linear relations for the amplitude (on a semi-log scale) and phase, as functions of the sample's thickness, were obtained by fitting a theoretical model to the experimental data for thermal-diffusivity measurements in Mexican orange, pink grapefruit, mandarin, and lime type A centrifuged essential oils, and in Mexican distilled lime essential oil. Gas chromatography for distilled lime essential oil and centrifuged lime essential oil type A is reported to complement the study. Experimental results showed close thermal-diffusivity values among the Mexican citrus essential oils obtained by centrifugation, but a significant difference of this physical property for distilled lime oil relative to the corresponding value obtained by centrifugation, which is due to the different chemical compositions resulting from the extraction processes.
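
    The slope-based extraction of thermal diffusivity can be sketched as follows: in the thermally thick regime the log-amplitude falls linearly with thickness with slope -(πf/α)^(1/2), so the fitted slope inverts to α. All numbers below are illustrative assumptions, not the paper's data.

        import numpy as np

        f = 10.0                                            # modulation frequency [Hz]
        L = np.array([40., 60., 80., 100., 120.]) * 1e-6    # sample thicknesses [m]
        lnA = np.array([-0.71, -1.07, -1.42, -1.78, -2.13]) # log amplitude (illustrative)

        # ln|A| = const - L*sqrt(pi*f/alpha): invert the fitted slope for alpha.
        slope, _ = np.polyfit(L, lnA, 1)
        alpha = np.pi * f / slope**2
        print(f"thermal diffusivity ~ {alpha:.2e} m^2/s")   # ~1e-7 m^2/s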

  19. Numerical simulation and analysis of fuzzy PID and PSD control methodologies as dynamic energy efficiency measures

    International Nuclear Information System (INIS)

    Ardehali, M.M.; Saboori, M.; Teshnelab, M.

    2004-01-01

    Energy efficiency enhancement is achieved by utilizing control algorithms that reduce overshoots and undershoots, as well as unnecessary fluctuations, in the amount of energy input to energy consuming systems during transient operation periods. It is hypothesized that application of control methodologies with characteristics that change with time and according to the system dynamics, identified as dynamic energy efficiency measures (DEEM), achieves the desired enhancement. The objective of this study is to simulate and analyze the effects of fuzzy logic based tuning of proportional integral derivative (F-PID) and proportional sum derivative (F-PSD) controllers for a heating and cooling energy system while accounting for the dynamics of the major system components. The procedure to achieve the objective includes utilization of fuzzy logic rules to determine the PID and PSD controllers' gain coefficients, so that the control laws for regulating the heat exchanger's heating or cooling energy inputs are determined in each time step of the operation period. The performances of the F-PID and F-PSD controllers are measured by means of two cost functions that are based on quadratic forms of the energy input and deviation from a set point temperature. It is found that application of the F-PID control algorithm, as a DEEM, results in lower costs for energy input and deviation from a set point temperature by 24% and 17% as compared to a PID, and by 13% and 8% as compared to a PSD, respectively. It is also shown that the F-PSD performance is better than that of the F-PID controller.
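
    A minimal sketch of the gain-scheduling idea follows: a toy first-order plant under a PID law whose gains are reselected each time step by coarse fuzzy-style rules on the error magnitude. The rule table, plant model, and all constants are invented for illustration and are not from the paper.

        # Toy fuzzy-scheduled PID loop (all values illustrative).
        def fuzzy_gains(error):
            e = abs(error)
            if e > 5.0:  return 2.0, 0.02, 0.3   # "large error": aggressive P
            if e > 1.0:  return 1.2, 0.05, 0.2   # "medium error"
            return 0.6, 0.10, 0.1                # "small error": gentler, more I

        setpoint, temp, dt = 22.0, 10.0, 1.0
        integral, prev_err = 0.0, setpoint - temp
        for _ in range(120):
            err = setpoint - temp
            kp, ki, kd = fuzzy_gains(err)        # gains change with system state
            integral += err * dt
            u = kp * err + ki * integral + kd * (err - prev_err) / dt
            prev_err = err
            temp += 0.1 * u * dt - 0.02 * (temp - 10.0) * dt   # toy heat-exchanger plant
        print(f"final temperature ~ {temp:.2f}")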

  20. A methodology for supporting decisions on the establishment of protective measures after severe nuclear accidents

    International Nuclear Information System (INIS)

    Papazoglou, I.A.; Kollas, J.G.

    1994-06-01

    The objective of this report is to demonstrate the use of a methodology supporting decisions on protective measures following severe nuclear accidents. A multicriteria decision analysis approach is recommended, where value tradeoffs are postponed until the very last stage of the decision process. Efficient frontiers are used to exclude all technically inferior solutions and present the decision maker with all nondominated solutions. A choice among these solutions implies a value trade-off among the multiple criteria. An interactive computer package has been developed in which the decision maker can choose a point on the efficient frontier in the consequence space and immediately see the alternative in the decision space resulting in the chosen consequences. The methodology is demonstrated through an application to the choice among possible protective measures in contaminated areas of the former USSR after the Chernobyl accident. Two distinct cases are considered: first, a decision is to be made only on the basis of the level of soil contamination with Cs-137 and the total cost of the chosen protective policy; next, the decision is based on the geographic dimension of the contamination and the total cost. Three alternative countermeasure actions are considered for population segments living on soil contaminated at a certain level or in a specific geographic region: (a) relocation of the population; (b) improvement of the living conditions; and (c) no countermeasures at all. This is the final deliverable of the CEC-CIS Joint Study Project 2, Task 5: Decision-Aiding-System for Establishing Intervention Levels, performed under Contracts COSU-CT91-0007 and COSU-CT92-0021 with the Commission of European Communities through CEPN
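
    The nondominated-filtering step behind an efficient frontier can be sketched in a few lines: keep an alternative only if no other alternative is at least as good on every (minimised) criterion. The alternatives and their (cost, residual dose) values below are invented for illustration.

        # Filter technically inferior (dominated) options; values are illustrative.
        alternatives = {
            "relocate":           (900.0, 1.0),   # (total cost, residual dose)
            "improve conditions": (400.0, 3.0),
            "no countermeasures": (0.0,   9.0),
            "inferior option":    (500.0, 4.0),   # dominated by "improve conditions"
        }

        def nondominated(options):
            keep = {}
            for name, a in options.items():
                dominated = any(b != a and all(b[i] <= a[i] for i in range(len(a)))
                                for b in options.values())
                if not dominated:
                    keep[name] = a
            return keep

        print(nondominated(alternatives))   # the efficient frontier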

  1. Measuring the impact of methodological research: a framework and methods to identify evidence of impact.

    Science.gov (United States)

    Brueton, Valerie C; Vale, Claire L; Choodari-Oskooei, Babak; Jinks, Rachel; Tierney, Jayne F

    2014-11-27

    Providing evidence of impact highlights the benefits of medical research to society. Such evidence is increasingly requested by research funders and commonly relies on citation analysis. However, other indicators may be more informative. Although frameworks to demonstrate the impact of clinical research have been reported, no complementary framework exists for methodological research. Therefore, we assessed the impact of methodological research projects conducted or completed between 2009 and 2012 at the UK Medical Research Council Clinical Trials Unit Hub for Trials Methodology Research, with a view to developing an appropriate framework. Various approaches to the collection of data on research impact were employed. Citation rates were obtained using Web of Science (http://www.webofknowledge.com/) and analyzed descriptively. Semistructured interviews were conducted to obtain information on the rates of different types of research output that indicated impact for each project. Results were then pooled across all projects. Finally, email queries pertaining to methodology projects were collected retrospectively and their content analyzed. Simple citation analysis established the citation rates per year since publication for 74 methodological publications; however, further detailed analysis revealed more about the potential influence of these citations. Interviews that spanned 20 individual research projects demonstrated a variety of types of impact not otherwise collated, for example, applications and further developments of the research; release of software and provision of guidance materials to facilitate uptake; formation of new collaborations and broad dissemination. Finally, 194 email queries relating to 6 methodological projects were received from 170 individuals across 23 countries. They provided further evidence that the methodologies were impacting on research and research practice, both nationally and internationally. We have used the information

  2. Quantification of Material Fluorescence and Light Scattering Cross Sections Using Ratiometric Bandwidth-Varied Polarized Resonance Synchronous Spectroscopy.

    Science.gov (United States)

    Xu, Joanna Xiuzhu; Hu, Juan; Zhang, Dongmao

    2018-05-25

    Presented herein is the ratiometric bandwidth-varied polarized resonance synchronous spectroscopy (BVPRS2) method for quantification of material optical activity spectra. These include the sample light absorption and scattering cross-section spectrum, the scattering depolarization spectrum, and the fluorescence emission cross-section and depolarization spectrum in the wavelength region where the sample both absorbs and emits. This ratiometric BVPRS2 spectroscopic method is a self-contained technique capable of quantitatively decoupling material fluorescence and light scattering signal contribution to its ratiometric BVPRS2 spectra through the linear curve-fitting of the ratiometric BVPRS2 signal as a function of the wavelength bandwidth used in the PRS2 measurements. Example applications of this new spectroscopic method are demonstrated with materials that can be approximated as pure scatterers, simultaneous photon absorbers/emitters, simultaneous photon absorbers/scatterers, and finally simultaneous photon absorbers/scatterers/emitters. Because the only instruments needed for this ratiometric BVPRS2 technique are the conventional UV-vis spectrophotometer and spectrofluorometer, this work should open doors for routine decomposition of material UV-vis extinction spectrum into its absorption and scattering component spectra. The methodology and insights provided in this work should be of broad significance to all chemical research that involves photon/matter interactions.

  3. Net ecosystem carbon dioxide exchange in tropical rainforests - sensitivity to environmental drivers and flux measurement methodology

    Science.gov (United States)

    Fu, Z.; Stoy, P. C.

    2017-12-01

    Tropical rainforests play a central role in the Earth system services of carbon metabolism, climate regulation, biodiversity maintenance, and more. They are under threat by direct anthropogenic effects including deforestation and indirect anthropogenic effects including climate change. A synthesis of the factors that determine the net ecosystem exchange of carbon dioxide (NEE) across multiple time scales in different tropical rainforests has not been undertaken to date. Here, we study NEE and its components, gross primary productivity (GPP) and ecosystem respiration (RE), across thirteen tropical rainforest research sites with 63 total site-years of eddy covariance data. Results reveal that the five ecosystems with the greatest carbon uptake (GPP magnitudes greater than 3000 g C m⁻² y⁻¹) sequester less carbon, or even lose it, on an annual basis at the ecosystem scale. This counterintuitive result arises because high GPP is compensated by similar magnitudes of RE. Sites that provided subcanopy CO2 storage observations had higher average magnitudes of GPP and RE and consequently lower NEE, highlighting the importance of measurement methodology for understanding carbon dynamics in tropical rainforests. Vapor pressure deficit (VPD) constrained GPP at all sites, but to differing degrees. Many environmental variables are significantly related to NEE at time scales greater than one year, and NEE at a rainforest in Malaysia is significantly related to soil moisture variability at seasonal time scales. Climate projections from 13 general circulation models (CMIP5) under representative concentration pathway (RCP) 8.5 suggest that many current tropical rainforest sites on the cooler end of the current temperature range are likely to reach a climate space similar to present-day warmer sites by the year 2050, and warmer sites will reach a climate space not currently experienced. Results demonstrate the need to quantify if mature tropical trees acclimate to heat and

  4. Application of validity theory and methodology to patient-reported outcome measures (PROMs): building an argument for validity.

    Science.gov (United States)

    Hawkins, Melanie; Elsworth, Gerald R; Osborne, Richard H

    2018-07-01

    Data from subjective patient-reported outcome measures (PROMs) are now being used in the health sector to make or support decisions about individuals, groups and populations. Contemporary validity theorists define validity not as a statistical property of the test but as the extent to which empirical evidence supports the interpretation of test scores for an intended use. However, validity testing theory and methodology are rarely evident in the PROM validation literature. Application of this theory and methodology would provide structure for comprehensive validation planning to support improved PROM development and sound arguments for the validity of PROM score interpretation and use in each new context. This paper proposes the application of contemporary validity theory and methodology to PROM validity testing. The validity testing principles will be applied to a hypothetical case study with a focus on the interpretation and use of scores from a translated PROM that measures health literacy (the Health Literacy Questionnaire or HLQ). Although robust psychometric properties of a PROM are a pre-condition to its use, a PROM's validity lies in the sound argument that a network of empirical evidence supports the intended interpretation and use of PROM scores for decision making in a particular context. The health sector is yet to apply contemporary theory and methodology to PROM development and validation. The theoretical and methodological processes in this paper are offered as an advancement of the theory and practice of PROM validity testing in the health sector.

  5. Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist.

    Science.gov (United States)

    Terwee, Caroline B; Mokkink, Lidwine B; Knol, Dirk L; Ostelo, Raymond W J G; Bouter, Lex M; de Vet, Henrica C W

    2012-05-01

    The COSMIN checklist is a standardized tool for assessing the methodological quality of studies on measurement properties. It contains 9 boxes, each dealing with one measurement property, with 5-18 items per box about design aspects and statistical methods. Our aim was to develop a scoring system for the COSMIN checklist to calculate quality scores per measurement property when using the checklist in systematic reviews of measurement properties. The scoring system was developed based on discussions among experts and testing of the scoring system on 46 articles from a systematic review. Four response options were defined for each COSMIN item (excellent, good, fair, and poor). A quality score per measurement property is obtained by taking the lowest rating of any item in a box ("worst score counts"). Specific criteria for excellent, good, fair, and poor quality for each COSMIN item are described. In defining the criteria, the "worst score counts" algorithm was taken into consideration. This means that only fatal flaws were defined as poor quality. The scores of the 46 articles show how the scoring system can be used to provide an overview of the methodological quality of studies included in a systematic review of measurement properties. Based on experience in testing this scoring system on 46 articles, the COSMIN checklist with the proposed scoring system seems to be a useful tool for assessing the methodological quality of studies included in systematic reviews of measurement properties.
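
    The "worst score counts" rule reduces to taking the minimum item rating per box; a short sketch (with invented ratings) is below.

        # "Worst score counts": a box's quality score is its lowest item rating.
        RANK = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}

        def box_score(item_ratings):
            return min(item_ratings, key=RANK.get)

        print(box_score(["excellent", "good", "fair", "good"]))   # -> "fair"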

  6. Optimal Bandwidth Selection for Kernel Density Functionals Estimation

    Directory of Open Access Journals (Sweden)

    Su Chen

    2015-01-01

    Full Text Available The choice of bandwidth is crucial to kernel density estimation (KDE) and kernel-based regression. Various bandwidth selection methods for KDE and local least squares regression have been developed in the past decade. It is known that scale and location parameters are proportional to density functionals ∫γ(x)f²(x)dx with an appropriate choice of γ(x), and furthermore that equality-of-scale and equality-of-location tests can be transformed into comparisons of the density functionals among populations. ∫γ(x)f²(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, optimal bandwidth selection for the KDFE of ∫γ(x)f²(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE. The idea underlying this method is to search for the optimal bandwidth by minimizing the mean square error (MSE) of the KDFE. Two main practical bandwidth selection techniques for the KDFE of ∫γ(x)f²(x)dx are provided: normal-scale bandwidth selection (namely, the "Rule of Thumb") and direct plug-in bandwidth selection. Simulation studies show that our proposed bandwidth selection methods are superior to existing density-estimation bandwidth selection methods in estimating density functionals.
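
    For orientation, the classical normal-scale ("rule of thumb") bandwidth for plain KDE is h = 1.06·σ̂·n^(−1/5); the sketch below computes it. This is the standard KDE rule, shown for context only; the paper's KDFE-specific selectors differ in their constants and exponents.

        import numpy as np

        def silverman_bandwidth(x):
            # Normal-scale rule of thumb for KDE: h = 1.06 * sigma * n^(-1/5).
            x = np.asarray(x, dtype=float)
            return 1.06 * x.std(ddof=1) * x.size ** (-0.2)

        rng = np.random.default_rng(1)
        sample = rng.normal(loc=0.0, scale=2.0, size=500)
        print(f"rule-of-thumb bandwidth ~ {silverman_bandwidth(sample):.3f}")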

  7. Bandwidth Analysis of Smart Meter Network Infrastructure

    DEFF Research Database (Denmark)

    Balachandran, Kardi; Olsen, Rasmus Løvenstein; Pedersen, Jens Myrup

    2014-01-01

    Advanced Metering Infrastructure (AMI) is a network infrastructure in the Smart Grid which links electricity customers to the utility company. This network enables smart services by making it possible for the utility company to get an overview of their customers' power consumption and also to control devices in their customers' households, e.g. heat pumps. With these smart services, utility companies can do load balancing on the grid by shifting load using resources the customers have. The problem investigated in this paper is what bandwidth requirements can be expected when implementing such a network to utilize smart meters, and which existing broadband network technologies can facilitate this smart meter service. Initially, scenarios for smart meter infrastructure are identified. The paper defines abstraction models which cover the AMI scenarios. When the scenario has been identified a general overview

  8. Development of high frequency and wide bandwidth Johnson noise thermometry

    International Nuclear Information System (INIS)

    Crossno, Jesse; Liu, Xiaomeng; Kim, Philip; Ohki, Thomas A.; Fong, Kin Chung

    2015-01-01

    We develop a high-frequency, wide-bandwidth radiometer operating at room temperature, which augments the traditional technique of Johnson noise thermometry for nanoscale thermal transport studies. Employing low-noise amplifiers and an analog multiplier operating at 2 GHz, auto- and cross-correlated Johnson noise measurements are performed in the temperature range of 3 to 300 K, achieving a sensitivity of 5.5 mK (110 ppm) in 1 s of integration time. This setup allows us to measure the thermal conductance of a boron nitride encapsulated monolayer graphene device over a wide temperature range. Our data show a high power-law (exponent ∼4) deviation from the Wiedemann-Franz law above T ∼ 100 K
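
    The quoted sensitivity follows the usual radiometer scaling, ΔT ≈ T_sys/√(Bτ): resolution improves with the square root of bandwidth times integration time. The sketch below evaluates that expression with illustrative values (the system noise temperature is an assumption, not the paper's noise budget).

        import numpy as np

        T_sys = 50.0   # effective system noise temperature [K] (assumed)
        B = 2e9        # correlation bandwidth [Hz]
        tau = 1.0      # integration time [s]

        dT = T_sys / np.sqrt(B * tau)   # ideal radiometer equation
        print(f"temperature resolution ~ {dT * 1e3:.2f} mK")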

  9. Validity and reliability of using photography for measuring knee range of motion: a methodological study

    Directory of Open Access Journals (Sweden)

    Adie Sam

    2011-04-01

    Full Text Available Background The clinimetric properties of knee goniometry are essential to appreciate in light of its extensive use in the orthopaedic and rehabilitative communities. Intra-observer reliability is thought to be satisfactory, but the validity and inter-rater reliability of knee goniometry often demonstrate unacceptable levels of variation. This study tests the validity and reliability of measuring knee range of motion using goniometry and photographic records. Methods Design: methodology study assessing the validity and reliability of one method (the 'Marker Method', which uses a skin marker over the greater trochanter) and another method (the 'Line of Femur Method', which requires estimation of the line of femur). Setting: radiology and orthopaedic departments of two teaching hospitals. Participants: 31 volunteers (13 arthritic and 18 healthy subjects). Knee range of motion was measured radiographically and photographically using a goniometer. Three assessors were assessed for reliability and validity. Main outcomes: agreement between methods and within raters was assessed using concordance correlation coefficients (CCCs). Agreement between raters was assessed using intra-class correlation coefficients (ICCs). 95% limits of agreement for the mean difference for all paired comparisons were computed. Results Validity (referenced to radiographs): each method for all 3 raters yielded very high CCCs for flexion (0.975 to 0.988), and moderate to substantial CCCs for extension angles (0.478 to 0.678). The mean differences and 95% limits of agreement were narrower for flexion than they were for extension. Intra-rater reliability: for flexion and extension, very high CCCs were attained for all 3 raters for both methods, with slightly greater CCCs seen for flexion (CCCs varied from 0.981 to 0.998). Inter-rater reliability: for both methods, very high ICCs (min to max: 0.891 to 0.995) were obtained for flexion and extension. Slightly higher coefficients were obtained

  10. Evaluation of methodological aspects of digestibility measurements in ponies fed different haylage to concentrate ratios.

    Science.gov (United States)

    Schaafstra, F J W C; van Doorn, D A; Schonewille, J T; van Riet, M M J; Visser, P; Blok, M C; Hendriks, W H

    2017-11-01

    Methodological aspects of digestibility measurements were studied in four Welsh pony geldings consuming haylage-based diets with increasing proportions of a pelleted concentrate according to a 4×4 Latin square design experiment. Ponies were fed four experimental, iso-energetic (net energy (NE) basis) diets (i.e. 22 MJ NE/day) with increasing proportions of a pelleted concentrate (C) in relation to haylage (H). The absolute amounts of diet dry matter fed per day were 4.48 kg of H (100H), 3.36 and 0.73 kg of H and C (75H25C), 2.24 and 1.45 kg of H and C (50H50C) and 1.12 and 2.17 kg of H and C (25H75C). Diets were supplemented with minerals, vitamins and TiO2 (3.7 g Ti/day). Voluntary voided faeces were quantitatively collected daily during 10 consecutive days and analysed for moisture, ash, ADL, acid-insoluble ash (AIA) and Ti. A minimum faeces collection period of 6 consecutive days, along with a 14-day period to adapt the animals to the diets and become accustomed to the collection procedure, is recommended to obtain accurate estimations on dry matter digestibility and organic matter digestibility (OMD) in equids fed haylage-based diets supplemented with concentrate. In addition, the recovery of AIA, ADL and Ti was determined and evaluated. Mean faecal recovery over 10 consecutive days across diets for AIA, ADL and Ti was 124.9% (SEM 2.9), 108.7% (SEM 2.0) and 97.5% (SEM 0.9), respectively. Cumulative faecal recovery of AIA significantly differed between treatments, indicating that AIA is inadequate to estimate the OMD in equines. In addition, evaluation of the CV of mean cumulative faecal recoveries obtained by AIA, ADL and Ti showed greater variations in faecal excretion of AIA (9.1) and ADL (7.4) than Ti (3.7). The accuracy of prediction of OMD was higher with the use of Ti than ADL. The use of Ti is preferred as a marker in digestibility trials in equines fed haylage-based diets supplemented with increasing amounts of pelleted concentrate.
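
    The marker logic rests on the standard indicator-method formula: apparent digestibility = 1 − (marker in diet / marker in faeces) × (nutrient in faeces / nutrient in diet), on a dry-matter basis. A sketch with invented concentrations:

        # Indicator-method digestibility from an indigestible marker (e.g., Ti).
        # Concentrations are per kg dry matter; numbers are illustrative only.
        marker_diet, marker_feces = 0.8, 2.4     # g marker per kg DM
        om_diet, om_feces = 920.0, 850.0         # g organic matter per kg DM

        omd = 1.0 - (marker_diet / marker_feces) * (om_feces / om_diet)
        print(f"organic matter digestibility ~ {omd:.1%}")   # ~69%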

  11. Optical Sideband Generation: a Longitudinal Electron Beam Diagnostic Beyond the Laser Bandwidth Resolution Limit

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence Berkeley National Laboratory; Tilborg, J. van; Matlis, N. H.; Plateau, G. R.; Leemans, W. P.

    2010-06-01

    Electro-optic sampling (EOS) is widely used as a technique to measure THz-domain electric field pulses such as the self-fields of femtosecond electron beams. We present an EOS-based approach for single-shot spectral measurement that excels in simplicity (compatible with fiber integration) and bandwidth coverage (it overcomes the laser bandwidth limitation), allowing few-fs electron beams or single-cycle THz pulses to be characterized with conventional picosecond probes. It is shown that the EOS-induced optical sidebands on the narrow-bandwidth optical probe are spectrally shifted replicas of the THz pulse. An experimental demonstration on a 0-3 THz source is presented.

  12. Investigation of Diagonal Antenna-Chassis Mode in Mobile Terminal LTE MIMO Antennas for Bandwidth Enhancement

    DEFF Research Database (Denmark)

    Zhang, Shuai; Zhao, Kun; Ying, Zhinong

    2015-01-01

    A diagonal antenna-chassis mode is investigated in long-term evolution multiple-input-multiple-output (LTE MIMO) antennas. The MIMO bandwidth is defined in this paper as the overlap range of the low-envelope-correlation-coefficient, high-total-efficiency, and -6-dB impedance matching bandwidths. The mechanism of the mismatch of these three bandwidth ranges is also explained. Furthermore, the diagonal antenna-chassis mode is also studied for MIMO elements in the adjacent and diagonal corner locations. As a practical example, a wideband collocated LTE MIMO antenna is proposed and measured. It covers the bands of 740-960 and 1700-2700 MHz, where the total efficiencies are better than -3.4 and -1.8 dB, with envelope correlation coefficients lower than 0.5 and 0.1, respectively. The measurements agree well with the simulations. Since the proposed method only needs to modify the excitation locations of the MIMO elements on the chassis

  13. PIC Simulation of Laser Plasma Interactions with Temporal Bandwidths

    Science.gov (United States)

    Tsung, Frank; Weaver, J.; Lehmberg, R.

    2015-11-01

    We are performing particle-in-cell simulations using the code OSIRIS to study the effects of laser plasma interactions in the presence of temporal bandwidths under conditions relevant to current and future shock ignition experiments on the NIKE laser. Our simulations show that, for sufficiently large bandwidth, the saturation level and the distribution of hot electrons can be affected by the addition of temporal bandwidth (which can be accomplished in experiments using smoothing techniques such as SSD or ISI). We will show that temporal bandwidth alone plays an important role in the control of LPIs in these lasers and discuss future directions. This work is conducted under the auspices of NRL.

  14. Improving process methodology for measuring plutonium burden in human urine using fission track analysis

    International Nuclear Information System (INIS)

    Krahenbuhl, M.P.; Slaughter, D.M.

    1998-01-01

    The aim of this paper is to clearly define the chemical and nuclear principles governing fission track analysis (FTA) to determine environmental levels of 239Pu in urine. The paper also addresses deficiencies in FTA methodology and introduces improvements to make FTA a more reliable research tool. Our refined methodology, described herein, includes a chemically induced precipitation phase, followed by anion exchange chromatography, and employs a chemical tracer, 236Pu. We have been able to establish an inverse correlation between Pu recovery and sample volume, and our data confirm that increases in sample volume do not result in higher accuracy or lower detection limits. We conclude that in subsequent studies samples should be limited to approximately two liters. The Pu detection limit for a sample of this volume is 2.8 μBq/l. (author)

  15. Gust factor based on research aircraft measurements: A new methodology applied to the Arctic marine boundary layer

    DEFF Research Database (Denmark)

    Suomi, Irene; Lüpkes, Christof; Hartmann, Jörg

    2016-01-01

    There is as yet no standard methodology for measuring wind gusts from a moving platform. To address this, we have developed a method to derive gusts from research aircraft data. First we evaluated four different approaches, including Taylor's hypothesis of frozen turbulence, to derive the gust... in unstable conditions (R² = 0.52). The mean errors for all methods were low, from -0.02 to 0.05, indicating that wind gust factors can indeed be measured from research aircraft. Moreover, we showed that aircraft can provide gust measurements within the whole boundary layer, if horizontal legs are flown...

  16. Ultrawide Bandwidth Receiver Based on a Multivariate Generalized Gaussian Distribution

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2015-04-01

    Multivariate generalized Gaussian density (MGGD) is used to approximate the multiple access interference (MAI) and additive white Gaussian noise in a pulse-based ultrawide bandwidth (UWB) system. The MGGD probability density function (pdf) is shown to be a better approximation of a UWB system than the multivariate Gaussian, multivariate Laplacian, and multivariate Gaussian-Laplacian mixture (GLM) densities. The similarity between the simulated and the approximated pdf is measured with the help of a modified Kullback-Leibler distance (KLD). It is also shown that the MGGD has the smallest KLD compared with the Gaussian, Laplacian, and GLM densities. A receiver based on the principle of minimum bit error rate is designed for the MGGD pdf. As the requirement is stringent, an adaptive implementation of the receiver is also carried out in this paper. The training sequence of the desired user is the only requirement when implementing the detector adaptively.
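
    As orientation, a plain discrete Kullback-Leibler distance between an empirical histogram and a candidate model density can be computed as below; the paper uses a modified KLD, so this standard form and all sample data are illustrative assumptions.

        import numpy as np

        def kld(p, q, eps=1e-12):
            # Discrete KL distance between two binned densities.
            p = np.asarray(p, dtype=float) + eps
            q = np.asarray(q, dtype=float) + eps
            p, q = p / p.sum(), q / q.sum()
            return float(np.sum(p * np.log(p / q)))

        rng = np.random.default_rng(2)
        samples = rng.laplace(size=100_000)             # stand-in for simulated MAI
        bins = np.linspace(-6.0, 6.0, 121)
        hist, _ = np.histogram(samples, bins=bins, density=True)
        centers = 0.5 * (bins[:-1] + bins[1:])
        gauss = np.exp(-centers**2 / 2.0) / np.sqrt(2.0 * np.pi)
        print(f"KLD(empirical || Gaussian) ~ {kld(hist, gauss):.3f}")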

  17. Wide modulation bandwidth terahertz detection in 130 nm CMOS technology

    Science.gov (United States)

    Nahar, Shamsun; Shafee, Marwah; Blin, Stéphane; Pénarier, Annick; Nouvel, Philippe; Coquillat, Dominique; Safwa, Amr M. E.; Knap, Wojciech; Hella, Mona M.

    2016-11-01

    Design, manufacturing, and measurement results for silicon plasma-wave-transistor-based wireless communication wideband receivers operating at a 300 GHz carrier frequency are presented. We show the possibility of Si-CMOS based integrated circuits in which, by (i) a specific physics-based plasma wave transistor design allowing impedance matching to the antenna and the amplifier, (ii) engineering the shape of the patch antenna through a stacked resonator approach, and (iii) applying bandwidth enhancement strategies to the design of the integrated broadband amplifier, we achieve an integrated 300 GHz carrier frequency receiver for wireless wideband operation up to and over 10 GHz. This is, to the best of our knowledge, the first demonstration of a low-cost 130 nm Si-CMOS technology, plasma-wave-transistor-based fast/wideband integrated receiver operating in the 300 GHz atmospheric window. These results pave the way towards future large-scale (cost-effective) silicon-technology-based terahertz wireless communication receivers.

  18. Design of ultrathin dual-resonant reflective polarization converter with customized bandwidths

    Science.gov (United States)

    Kundu, Debidas; Mohan, Akhilesh; Chakrabarty, Ajay

    2017-10-01

    In this paper, an ultrathin dual-resonant reflective polarization converter is proposed to obtain customized bandwidths using a precise space-filling technique applied to its top geometry. The unit cell of the dual-resonant prototype consists of a conductive square ring with two diagonally arranged slits, supported by a metal-backed thin dielectric layer. It offers two narrow bands with fractional bandwidths of 3.98 and 6.65% and polarization conversion ratios (PCRs) of 97.16 and 98.87% at 4.52 and 6.97 GHz, respectively. The resonances are brought into proximity with each other by changing the length of the surface current paths of the two resonances. By virtue of this mechanism, two polarization converters with two different types of bandwidths are obtained. One polarization converter produces a full-width at half-maximum PCR bandwidth of 34%, whereas the other produces a 90% PCR bandwidth of 19%. All the proposed polarization converters are insensitive to wide variations of incident angle for both TE- and TM-polarized incident waves. Measured results show good agreement with the numerically simulated results.

  19. A study of calculation methodology and experimental measurements of the kinetic parameters for source driven subcritical systems

    International Nuclear Information System (INIS)

    Lee, Seung Min

    2009-01-01

    This work presents a theoretical study of reactor kinetics focusing on the methodology of calculation and the experimental measurement of the so-called kinetic parameters. A comparison between the methodology based on Dulla's formalism and the classical method is made. The objective is to exhibit the dependence of the parameters on the subcriticality level and the perturbation. Two different slab-type systems were considered: a thermal one and a fast one, both with homogeneous media. A one-group diffusion model was used for the fast system and a two-group diffusion model for the thermal system, considering in both cases only one family of precursors. The solutions were obtained using the expansion method. Descriptions of the main experimental methods for measuring the kinetic parameters are also presented, in order to raise the question of the compatibility of these methods in the subcritical region. (author)

  20. Reduced bandwidth video for remote vehicle operations

    Energy Technology Data Exchange (ETDEWEB)

    Noell, T.E.; DePiero, F.W.

    1993-08-01

    Oak Ridge National Laboratory staff have developed a video compression system for low-bandwidth remote operations. The objective is to provide real-time video at data rates comparable to available tactical radio links, typically 16 to 64 thousand bits per second (kbps), while maintaining sufficient quality to achieve mission objectives. The system supports both continuous lossy transmission of black and white (gray scale) video for remote driving and progressive lossless transmission of black and white images for remote automatic target acquisition. The average data rate of the resulting bit stream is 64 kbps. This system has been demonstrated to provide video of sufficient quality to allow remote driving of a High-Mobility Multipurpose Wheeled Vehicle at speeds up to 15 mph (24.1 kph) on a moguled dirt track. The nominal driving configuration provides a frame rate of 4 Hz, a compression per frame of 125:1, and a resulting latency of approximately 1 s. This paper reviews the system approach and implementation, and further describes some of our experiences when using the system to support remote driving.

  1. Sources of particulate matter components in the Athabasca oil sands region: investigation through a comparison of trace element measurement methodologies

    Science.gov (United States)

    Phillips-Smith, Catherine; Jeong, Cheol-Heon; Healy, Robert M.; Dabek-Zlotorzynska, Ewa; Celo, Valbona; Brook, Jeffrey R.; Evans, Greg

    2017-08-01

    The province of Alberta, Canada, is home to three oil sands regions which, combined, contain the third largest deposit of oil in the world. Of these, the Athabasca oil sands region is the largest. As part of Environment and Climate Change Canada's program in support of the Joint Canada-Alberta Implementation Plan for Oil Sands Monitoring, concentrations of trace elements in PM2.5 (particulate matter smaller than 2.5 µm in diameter) were measured through two campaigns that involved different methodologies: a long-term filter campaign and a short-term intensive campaign. In the long-term campaign, 24 h filter samples were collected once every 6 days over a 2-year period (December 2010-November 2012) at three air monitoring stations in the regional municipality of Wood Buffalo. For the intensive campaign (August 2013), hourly measurements were made with an online instrument at one air monitoring station; daily filter samples were also collected. The hourly and 24 h filter data were analyzed individually using positive matrix factorization. Seven emission sources of PM2.5 trace elements were thereby identified: two types of upgrader emissions, soil, haul road dust, biomass burning, and two sources of mixed origin. The upgrader emissions, soil, and haul road dust sources were identified by both methodologies, and both methodologies identified a mixed source, but these exhibited more differences than similarities. The second upgrader emissions and biomass burning sources were only resolved by the hourly and filter methodologies, respectively. The similarity of the receptor modeling results from the two methodologies provided reassurance as to the identity of the sources. Overall, much of the PM2.5-related trace elements were found to be anthropogenic, or at least to be aerosolized through anthropogenic activities. These emissions may in part explain the previously reported higher levels of trace elements in snow, water, and biota samples collected
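
    A receptor-modelling sketch of the factorization step is given below, using scikit-learn's NMF as a stand-in: it factors an (hours × elements) concentration matrix into source contributions and source profiles. True PMF additionally weights each entry by its measurement uncertainty, so this unweighted version and the synthetic data are illustrative assumptions only.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(3)
        true_profiles = rng.random((3, 10))        # 3 sources, 10 trace elements
        contributions = rng.random((200, 3))       # 200 hourly samples
        X = contributions @ true_profiles + 0.01 * rng.random((200, 10))

        model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
        G = model.fit_transform(X)    # source contributions per sample
        F = model.components_         # source profiles (element signatures)
        print(G.shape, F.shape)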

  2. High Dielectric Low Loss Transparent Glass Material Based Dielectric Resonator Antenna with Wide Bandwidth Operation

    Science.gov (United States)

    Mehmood, Arshad; Zheng, Yuliang; Braun, Hubertus; Hovhannisyan, Martun; Letz, Martin; Jakoby, Rolf

    2015-01-01

    This paper presents the application of a new high-permittivity, low-loss transparent glass material for antennas. A very simple rectangular dielectric resonator antenna is designed first with a simple microstrip feed line. In order to widen the bandwidth, the feed of the design is modified by forming a T-shaped feeding structure. This new design enhances the bandwidth to cover the WLAN 5 GHz band completely. The dielectric resonator antenna, cut to precise dimensions, is placed on the modified microstrip feed line. The design is simple and easy to manufacture, and also very compact, measuring only 36 × 28 mm. A -10 dB impedance bandwidth of 18% has been achieved, which covers the frequency range from 5.15 GHz to 5.95 GHz. Simulated and measured return loss and radiation patterns are presented and discussed.

  3. High efficiency and broad bandwidth grating coupler between nanophotonic waveguide and fibre

    International Nuclear Information System (INIS)

    Yu, Zhu; Xue-Jun, Xu; Zhi-Yong, Li; Liang, Zhou; Yu-De, Yu; Jin-Zhong, Yu; Wei-Hua, Han; Zhong-Chao, Fan

    2010-01-01

    A high efficiency and broad bandwidth grating coupler between a silicon-on-insulator (SOI) nanophotonic waveguide and fibre is designed and fabricated. Coupling efficiencies of 46% and 25% at a wavelength of 1.55 μm are achieved by simulation and experiment, respectively. An optical 3 dB bandwidth of 45 nm from 1530 nm to 1575 nm is also obtained in experiment. Numerical calculation shows that a tolerance to fabrication error of 10 nm in etch depth is achievable. The measurement results indicate that the alignment error of ±2 μm results in less than 1 dB additional coupling loss.

  4. Bandwidth selection in smoothing functions | Kibua | East African ...

    African Journals Online (AJOL)

    ... inexpensive and, hence, worth adopting. We argue that the bandwidth parameter is determined by two factors: the kernel function and the length of the smoothing region. We give an illustrative example of its application using real data. Keywords: Kernel, Smoothing functions, Bandwidth

  5. Variable Bandwidth Analog Channel Filters for Software Defined Radio

    NARCIS (Netherlands)

    Arkesteijn, V.J.; Klumperink, Eric A.M.; Nauta, Bram

    2001-01-01

    An important aspect of Software Defined Radio is the ability to define the bandwidth of the filter that selects the desired channel. This paper first explains the importance of channel filtering. Then the advantage of analog channel filtering with a variable bandwidth in a Software Defined Radio is

  6. Fluid Limits for Bandwidth-Sharing Networks in Overload.

    NARCIS (Netherlands)

    Borst, S.; Egorova, R.; Zwart, A.P.

    2014-01-01

    Bandwidth-sharing networks as considered by Roberts and Massoulié [28] (Roberts JW, Massoulié L (1998) Bandwidth sharing and admission control for elastic traffic. Proc. ITC Specialist Seminar, Yokohama, Japan) provide a natural modeling framework for describing the dynamic flow-level interaction

  7. Tactical Decision Aids High Bandwidth Links Using Autonomous Vehicles

    Science.gov (United States)

    2004-01-01

    Tactical Decision Aids (High Bandwidth Links Using Autonomous Vehicles). A. J. Healey, D. P. Horner, Center for Autonomous Underwater Vehicle...

  8. Fluid limits for bandwidth-sharing networks in overload

    NARCIS (Netherlands)

    Borst, S.C.; Egorova, R.R.; Zwart, B.

    2014-01-01

    Bandwidth-sharing networks as considered by Roberts and Massoulié [28] (Roberts JW, Massoulié L (1998) Bandwidth sharing and admission control for elastic traffic. Proc. ITC Specialist Seminar, Yokohama, Japan) provide a natural modeling framework for describing the dynamic flow-level interaction

  9. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    Science.gov (United States)

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  10. Methodology of the Auditing Measures to Civil Airport Security and Protection

    Directory of Open Access Journals (Sweden)

    Ján Kolesár

    2016-10-01

    Full Text Available Airports, like other companies, are certified in compliance with the International Organization for Standardization (ISO) standards for products and services (the ISO 9000 series regarding quality management), which coordinate the technical side of standardization and normalization at an international scale. In order for airports to meet the norms and the certification requirements set by the ISO, they are liable to undergo strict quality audits, as a rule conducted by an independent auditing organization. The audits focus primarily on airport operation economics and security. The article is an analysis of the methodology of the airport security audit processes and activities. Within the framework of planning, the sequence of steps is described in line with the principles and procedures of the Security Management System (SMS) and standards established by the International Organization for Standardization (ISO). The methodology of conducting an airport security audit is developed in compliance with the national programme and international legislation standards (Annex 17) applicable to the protection of civil aviation against acts of unlawful interference.

  11. Coherence bandwidth characterization in an urban microcell at 62.4 GHz

    DEFF Research Database (Denmark)

    Sánchez, M. G.; Hammoudeh, A. M.; Grindrod, E.

    2000-01-01

    Results of experiments made at 62.4 GHz in an urban mobile radio environment to characterize the coherence bandwidth are presented. The correlation coefficients between signal envelopes separated in frequency are measured and expressed as functions of distance from the base station. Due to the hi...

  12. Low-Bandwidth Channel Quality Indication for OFDMA Frequency Domain Packet Scheduling

    DEFF Research Database (Denmark)

    Kolding, Troels E.; Frederiksen, Frank; Pokhariyal, Akhilesh

    2007-01-01

    -relevant information in the CQI. We find that a 60-70% CQI bandwidth reduction is possible with less than 5-10% impact on scheduling performance. Further, we consider the impact of lowering the CQI reporting rate on both mobility performance and increased measuring accuracy due to longer averaging interval. We find...

  13. Methodological challenges surrounding direct-to-consumer advertising research--the measurement conundrum.

    Science.gov (United States)

    Hansen, Richard A; Droege, Marcus

    2005-06-01

    Numerous studies have focused on the impact of direct-to-consumer (DTC) prescription drug advertising on consumer behavior and health outcomes. These studies have used various approaches to assess exposure to prescription drug advertising and to measure the subsequent effects of such advertisements. The objectives of this article are to (1) discuss measurement challenges involved in DTC advertising research, (2) summarize measurement approaches commonly identified in the literature, and (3) discuss contamination, time to action, and endogeneity as specific problems in measurement design and application. We conducted a review of the professional literature to identify illustrative approaches to advertising measurement. Specifically, our review of the literature focused on measurement of DTC advertising exposure and effect. We used the hierarchy-of-effects model to guide our discussion of processing and communication effects. Other effects were characterized as target audience action, sales, market share, and profit. Overall, existing studies have used a variety of approaches to measure advertising exposure and effect, yet the ability of measures to produce a valid and reliable understanding of the effects of DTC advertising can be improved. Our review provides a framework for conceptualizing DTC measurement, and can be used to identify gaps in the literature not sufficiently addressed by existing measures. Researchers should continue to explore correlations between exposure and effect of DTC advertising, but are obliged to improve and validate measurement in this area.

  14. Methodological considerations for researchers and practitioners using pedometers to measure physical (ambulatory) activity.

    Science.gov (United States)

    Tudor-Locke, C E; Myers, A M

    2001-03-01

    Researchers and practitioners require guidelines for using electronic pedometers to objectively quantify physical activity (specifically ambulatory activity) for research and surveillance as well as clinical and program applications. Methodological considerations include choice of metric and length of monitoring frame as well as different data recording and collection procedures. A systematic review of 32 empirical studies suggests we can expect 12,000-16,000 steps/day for 8-10-year-old children (lower for girls than boys); 7,000-13,000 steps/day for relatively healthy, younger adults (lower for women than men); 6,000-8,500 steps/day for healthy older adults; and 3,500-5,500 steps/day for individuals living with disabilities and chronic illnesses. These preliminary recommendations should be modified and refined, as evidence and experience using pedometers accumulates.

  15. A Novel Methodology for Measurements of an LED's Heat Dissipation Factor

    Science.gov (United States)

    Jou, R.-Y.; Haung, J.-H.

    2015-12-01

    Heat generation is an inevitable byproduct of high-power light-emitting diode (LED) lighting. The increase in junction temperature that accompanies the heat generation sharply degrades the optical output of the LED and has a significant negative influence on its reliability and durability. For these reasons, the heat dissipation factor, Kh, is an important factor in the modeling and thermal design of LED installations. In this study, a methodology is proposed and experiments are conducted to determine LED heat dissipation factors. Experiments are conducted for two different brands of LED. The average heat dissipation factor is 0.69 for the Edixeon LED and 0.60 for the OSRAM LED. By using the developed test method and comparing the results to luminous fluxes calculated from theoretical equations, the interdependence of optical, electrical, and thermal powers can be predicted with reasonable accuracy. The difference between the theoretical and experimental values is less than 9%.
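
    As a worked illustration of the factor described above, here is a minimal sketch assuming, as the abstract implies, that Kh is the fraction of electrical input power dissipated as heat; the wattage and function name are illustrative, not from the study.

    ```python
    # Energy balance implied by a heat dissipation factor Kh, assuming Kh is the
    # fraction of electrical input power lost as heat (illustrative values only).

    def led_power_split(p_electrical_w: float, k_h: float):
        """Split LED electrical input power into heat and radiant (optical) power."""
        p_heat = k_h * p_electrical_w               # dissipated as heat
        p_optical = (1.0 - k_h) * p_electrical_w    # emitted as light
        return p_heat, p_optical

    # Example: a 1 W LED with the reported average Kh = 0.69 (Edixeon)
    heat, optical = led_power_split(1.0, 0.69)
    print(f"heat: {heat:.2f} W, optical: {optical:.2f} W")  # heat: 0.69 W, optical: 0.31 W
    ```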

  16. A Case Study of Six Sigma Define-Measure-Analyze-Improve-Control (DMAIC) Methodology in Garment Sector

    Directory of Open Access Journals (Sweden)

    Abdur Rahman

    2017-12-01

    This paper demonstrates the empirical application of Six Sigma and the Define-Measure-Analyze-Improve-Control (DMAIC) methodology to reduce product defects within a garment manufacturing organization in Bangladesh, which followed the DMAIC methodology to investigate defects and root causes and to provide a solution to eliminate these defects. The analysis from employing Six Sigma and DMAIC indicated that broken stitches and open seams influenced the number of defective products. Design of experiments (DOE) and analysis of variance (ANOVA) techniques were combined to statistically determine the correlation of broken stitches and open seams with defects, as well as to define the optimum values needed to eliminate the defects. Thus, a reduction of about 35% in garment defects was achieved, which helped the organization studied to reduce its defects and improve its sigma level from 1.7 to 3.4.
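
    The quoted jump in sigma level can be related to defect rates through the conventional 1.5-sigma shift. A minimal sketch of that arithmetic, assuming the convention; this is not the study's own computation.

    ```python
    # Map between defects per million opportunities (DPMO) and short-term sigma
    # level, using the conventional 1.5-sigma long-term shift.
    from statistics import NormalDist

    def sigma_from_dpmo(dpmo: float) -> float:
        return NormalDist().inv_cdf(1.0 - dpmo / 1_000_000) + 1.5

    def dpmo_from_sigma(sigma: float) -> float:
        return (1.0 - NormalDist().cdf(sigma - 1.5)) * 1_000_000

    print(f"{dpmo_from_sigma(1.7):,.0f} DPMO")   # ~420,740 at sigma 1.7
    print(f"{dpmo_from_sigma(3.4):,.0f} DPMO")   # ~28,717 at sigma 3.4
    print(f"{sigma_from_dpmo(28_717):.1f}")      # ~3.4, round trip
    ```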

  17. Assertion based verification methodology for HDL designs of primary sodium pump speed and eddy current flow measurement systems of PFBR

    International Nuclear Information System (INIS)

    Misra, M.K.; Menon, Saritha P.; Thirugnana Murthy, D.

    2013-01-01

    With the growing complexity and size of digital designs, functional verification has become a huge challenge. The validation and testing process accounts for a significant percentage of the overall development effort and cost for electronic systems. Many studies have shown that up to 70% of the design development time and resources are spent on functional verification. Functional errors manifest themselves very early in the design flow, and unless they are detected upfront, they can result in severe consequences, both financially and from a safety viewpoint. This paper covers the various types of verification methodologies and focuses on Assertion Based Verification Methodology for HDL designs, taking as case studies the Primary Sodium Pump Speed and Eddy Current Flow Measurement Systems of PFBR. (author)

  18. A Methodology to Integrate Magnetic Resonance and Acoustic Measurements for Reservoir Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Parra, Jorge O.; Hackert, Chris L.; Collier, Hughbert A.; Bennett, Michael

    2002-01-29

    The objective of this project was to develop an advanced imaging method, including pore-scale imaging, to integrate NMR techniques and acoustic measurements to improve predictability of the pay zone in hydrocarbon reservoirs. This is accomplished by extracting the fluid property parameters from NMR laboratory measurements and the elastic parameters of the rock matrix from acoustic measurements to create poroelastic models of different parts of the reservoir. Laboratory measurement techniques and core imaging are being linked with a balanced petrographic analysis of the core and with theoretical models.

  19. Measuring subjective meaning structures by the laddering method: Theoretical considerations and methodological problems

    DEFF Research Database (Denmark)

    Grunert, Klaus G.; Grunert, Suzanne C.

    1995-01-01

    Starting from a general model of measuring cognitive structures for predicting consumer behaviour, we discuss laddering as a possible method to obtain estimates of consumption-relevant cognitive structures which will have predictive validity. Four criteria for valid measurement are derived and ap...

  20. Translating patient reported outcome measures: methodological issues explored using cognitive interviewing with three rheumatoid arthritis measures in six European languages

    NARCIS (Netherlands)

    Hewlett, Sarah E.; Nicklin, Joanna; Bode, Christina; Carmona, Loretto; Dures, Emma; Engelbrecht, Matthias; Hagel, Sofia; Kirwan, John R.; Molto, Anna; Redondo, Marta; Gossec, Laure

    2016-01-01

    Objective. Cross-cultural translation of patient-reported outcome measures (PROMs) is a lengthy process, often performed professionally. Cognitive interviewing assesses patient comprehension of PROMs. The objective was to evaluate the usefulness of cognitive interviewing to assess translations and

  1. Use of Balanced Scorecard Methodology for Performance Measurement of the Health Extension Program in Ethiopia.

    Science.gov (United States)

    Teklehaimanot, Hailay D; Teklehaimanot, Awash; Tedella, Aregawi A; Abdella, Mustofa

    2016-05-04

    In 2004, Ethiopia introduced a community-based Health Extension Program to deliver basic and essential health services. We developed a comprehensive performance scoring methodology to assess the performance of the program. A balanced scorecard with six domains and 32 indicators was developed. Data collected from 1,014 service providers, 433 health facilities, and 10,068 community members sampled from 298 villages were used to generate weighted national, regional, and agroecological zone scores for each indicator. The national median indicator scores ranged from 37% to 98% with poor performance in commodity availability, workforce motivation, referral linkage, infection prevention, and quality of care. Indicator scores showed significant difference by region (P < 0.001). Regional performance varied across indicators suggesting that each region had specific areas of strength and deficiency, with Tigray and the Southern Nations, Nationalities and Peoples Region being the best performers while the mainly pastoral regions of Gambela, Afar, and Benishangul-Gumuz were the worst. The findings of this study suggest the need for strategies aimed at improving specific elements of the program and its performance in specific regions to achieve quality and equitable health services. © The American Society of Tropical Medicine and Hygiene.

  2. Methodological challenges in measurements of functional ability in gerontological research. A review

    DEFF Research Database (Denmark)

    Avlund, Kirsten

    1997-01-01

    This article addresses two important challenges in the measurement of functional ability in gerontological research: the first challenge is to connect measurements to a theoretical frame of reference which enhances our understanding and interpretation of the collected data; the second relates...... procedure, validity, discriminatory power, and responsiveness. In measures of functional ability it is recommended: 1) always to consider the theoretical frame of reference as part of the validation process (e.g., the theory of "The Disablement Process"; 2) always to assess whether the included activities...

  3. From theory to 'measurement' in complex interventions: methodological lessons from the development of an e-health normalisation instrument.

    Science.gov (United States)

    Finch, Tracy L; Mair, Frances S; O'Donnell, Catherine; Murray, Elizabeth; May, Carl R

    2012-05-17

    Although empirical and theoretical understanding of processes of implementation in health care is advancing, translation of theory into structured measures that capture the complex interplay between interventions, individuals and context remains limited. This paper aimed to (1) describe the process and outcome of a project to develop a theory-based instrument for measuring implementation processes relating to e-health interventions; and (2) identify key issues and methodological challenges for advancing work in this field. A 30-item instrument (Technology Adoption Readiness Scale (TARS)) for measuring normalisation processes in the context of e-health service interventions was developed on the basis of Normalization Process Theory (NPT). NPT focuses on how new practices become routinely embedded within social contexts. The instrument was pre-tested in two health care settings in which e-health (electronic facilitation of healthcare decision-making and practice) was used by health care professionals. The developed instrument was pre-tested in two professional samples (N=46; N=231). Ratings of items representing normalisation 'processes' were significantly related to staff members' perceptions of whether or not e-health had become 'routine'. Key methodological challenges are discussed in relation to: translating multi-component theoretical constructs into simple questions; developing and choosing appropriate outcome measures; conducting multiple-stakeholder assessments; instrument and question framing; and more general issues for instrument development in practice contexts. To develop theory-derived measures of implementation process for progressing research in this field, four key recommendations are made relating to (1) greater attention to underlying theoretical assumptions and extent of translation work required; (2) the need for appropriate but flexible approaches to outcomes measurement; (3) representation of multiple perspectives and collaborative nature of

  4. From theory to 'measurement' in complex interventions: Methodological lessons from the development of an e-health normalisation instrument

    Directory of Open Access Journals (Sweden)

    Finch Tracy L

    2012-05-01

    Background: Although empirical and theoretical understanding of processes of implementation in health care is advancing, translation of theory into structured measures that capture the complex interplay between interventions, individuals and context remains limited. This paper aimed to (1) describe the process and outcome of a project to develop a theory-based instrument for measuring implementation processes relating to e-health interventions; and (2) identify key issues and methodological challenges for advancing work in this field. Methods: A 30-item instrument (Technology Adoption Readiness Scale (TARS)) for measuring normalisation processes in the context of e-health service interventions was developed on the basis of Normalization Process Theory (NPT). NPT focuses on how new practices become routinely embedded within social contexts. The instrument was pre-tested in two health care settings in which e-health (electronic facilitation of healthcare decision-making and practice) was used by health care professionals. Results: The developed instrument was pre-tested in two professional samples (N = 46; N = 231). Ratings of items representing normalisation 'processes' were significantly related to staff members' perceptions of whether or not e-health had become 'routine'. Key methodological challenges are discussed in relation to: translating multi-component theoretical constructs into simple questions; developing and choosing appropriate outcome measures; conducting multiple-stakeholder assessments; instrument and question framing; and more general issues for instrument development in practice contexts. Conclusions: To develop theory-derived measures of implementation process for progressing research in this field, four key recommendations are made relating to (1) greater attention to underlying theoretical assumptions and extent of translation work required; (2) the need for appropriate but flexible approaches to outcomes

  5. The professional methodological teaching performance of the professor of Physical education. Set of parameters for its measurement

    Directory of Open Access Journals (Sweden)

    Orlando Pedro Suárez Pérez

    2017-07-01

    This work arose from the need to address the difficulties observed among the Physical Education teachers of the municipality of San Juan y Martínez during the teaching-learning process of basketball, difficulties which threaten the quality of the classes, sports results and the preparation of students for life. The objective is to propose parameters for measuring the professional methodological teaching performance of these teachers. The personalized approach of the research made possible the diagnosis of the 26 teachers taken as a sample, expressing the traits that distinguish their efficiency and determining their potentialities and deficiencies. Theoretical, empirical and statistical methods were used during the research process; they corroborated the real existence of the problem and allowed its impact to be evaluated, revealing a positive transformation in pedagogical practice. The results provide a concrete and viable answer for improving the evaluation of the teaching-methodological component of the Physical Education teacher, and they constitute important guidance material for methodologists and managers concerned with cognitive, procedural and attitudinal performance, so as to build new knowledge on preceding knowledge and to lead a formative process with a contemporary vision, offering methodological resources to control the quality of Physical Education lessons.

  6. Bandwidth auction for SVC streaming in dynamic multi-overlay

    Science.gov (United States)

    Xiong, Yanting; Zou, Junni; Xiong, Hongkai

    2010-07-01

    In this paper, we study the optimal bandwidth allocation for scalable video coding (SVC) streaming in multiple overlays. We model the whole bandwidth request and distribution process as a set of decentralized auction games between the competing peers. For the upstream peer, a bandwidth allocation mechanism is introduced to maximize the aggregate revenue. For the downstream peer, a dynamic bidding strategy is proposed. It achieves maximum utility and efficient resource usage by collaborating with a content-aware layer dropping/adding strategy. Also, the convergence of the proposed auction games is theoretically proved. Experimental results show that the auction strategies can adapt to the dynamic arrival of competing peers and video layers.
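
    As a rough illustration of how an upstream peer might maximize revenue when granting bandwidth to bidders, here is a minimal greedy sealed-bid sketch; the bid format and the greedy rule are simplifying assumptions, not the paper's auction game.

    ```python
    # Greedy revenue-maximizing allocation of an upstream peer's upload capacity:
    # bids are (unit price, kbps requested), highest unit prices served first.
    # A generic sealed-bid sketch, not the decentralized game analyzed above.

    def allocate_bandwidth(capacity_kbps: float, bids: dict[str, tuple[float, float]]):
        """Return {peer: allocated_kbps}, serving higher unit prices first."""
        allocation, remaining = {}, capacity_kbps
        for peer, (price, requested) in sorted(bids.items(), key=lambda kv: -kv[1][0]):
            granted = min(requested, remaining)
            if granted > 0:
                allocation[peer] = granted
                remaining -= granted
        return allocation

    bids = {"peerA": (3.0, 400.0), "peerB": (5.0, 300.0), "peerC": (1.0, 500.0)}
    print(allocate_bandwidth(1000.0, bids))  # peerB gets 300, peerA 400, peerC 300
    ```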

  7. 360° digital phase detector with 100-kHz bandwidth

    International Nuclear Information System (INIS)

    Reid, D.W.; Riggin, D.; Fazio, M.V.; Biddle, R.S.; Patton, R.D.; Jackson, H.A.

    1981-01-01

    The general availability of digital circuit components with propagation delay times of a few nanoseconds makes a digital phase detector with good bandwidth feasible. Such a circuit has a distinct advantage over its analog counterpart because of its linearity over a wide range of phase shift. A phase detector that is being built at Los Alamos National Laboratory for the Fusion Materials Irradiation Test (FMIT) project is described. The specifications are 100-kHz bandwidth, linearity of ±1° over ±180° of phase shift, and 0.66° resolution. To date, the circuit has achieved the bandwidth and resolution. The linearity is approximately ±3° over ±180° of phase shift
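
    For intuition about the 0.66° figure, here is a back-of-envelope sketch assuming a counter-based detector that counts clock pulses between corresponding zero crossings of the two signals; the architecture and frequencies are our illustrative assumptions, not the FMIT design.

    ```python
    # If clock pulses are counted between zero crossings of two equal-frequency
    # signals, the phase resolution is 360 deg * f_signal / f_clock.
    # Illustrative numbers only, not the actual FMIT circuit parameters.

    def phase_resolution_deg(f_signal_hz: float, f_clock_hz: float) -> float:
        return 360.0 * f_signal_hz / f_clock_hz

    print(360.0 / 0.66)                        # ~545 counts/period for 0.66 deg
    print(phase_resolution_deg(1e6, 545.5e6))  # ~0.66 deg for a 1 MHz signal
    ```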

  8. Large bandwidth RGC transimpedance preamplifier design in SCA

    International Nuclear Information System (INIS)

    Wang Ke; Wang Zheng; Liu Zhen'an; Wei Wei; Lu Weiguo; Gary Varner

    2009-01-01

    A large-bandwidth RGC transimpedance preamplifier is designed for amplifying the high-fidelity timing signal in a switched-capacitor array chip application. The amplifier features low input impedance, large bandwidth and high transimpedance. It is implemented in TSMC 0.25 μm CMOS technology with a single 2.5 V supply. Simulation results indicate a transimpedance of 5000 Ω and a −3 dB bandwidth of 953 MHz; within a certain range, the detector output capacitance has little effect on the bandwidth. (authors)
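
    The benefit of low input impedance can be made concrete with the input-pole relation f = 1/(2π·R_in·C_det): the regulated cascode's local feedback lowers R_in, pushing the pole set by the detector capacitance to higher frequency. A minimal sketch with assumed, not simulated, values:

    ```python
    # Input pole set by input resistance and detector capacitance; the RGC stage
    # lowers R_in by roughly its local loop gain. Component values are assumed
    # for illustration and are not the simulated values of this design.
    import math

    def input_pole_hz(r_in_ohm: float, c_det_f: float) -> float:
        return 1.0 / (2.0 * math.pi * r_in_ohm * c_det_f)

    c_det = 2e-12  # assumed 2 pF detector output capacitance
    print(f"{input_pole_hz(1000.0, c_det) / 1e6:.0f} MHz")  # ~80 MHz at R_in = 1 kOhm
    print(f"{input_pole_hz(100.0, c_det) / 1e6:.0f} MHz")   # ~796 MHz at R_in = 100 Ohm
    ```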

  9. Energy efficiency in elastic-bandwidth optical networks

    DEFF Research Database (Denmark)

    Vizcaino, Jorge Lopez; Ye, Yabin; Tafur Monroy, Idelfonso

    2011-01-01

    The forecasted growth in Internet traffic has made operators and industry concerned about the power consumption of networks, and interested in alternatives for planning and operating networks in a more energy-efficient manner. The introduction of OFDM, and its property of elastic bandwidth allocation, opens new horizons in the operation of optical networks. In this paper, we compare the network planning problem in an elastic-bandwidth CO-OFDM-based network and a fixed-grid WDM network. We highlight the benefits that bandwidth elasticity and the selection of different...

  10. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine; Freestate, David; Riley, Cameron; Hobbs, William

    2016-11-01

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.

  11. Feasibility, strategy, methodology, and analysis of probe measurements in plasma under high gas pressure

    Science.gov (United States)

    Demidov, V. I.; Koepke, M. E.; Kurlyandskaya, I. P.; Malkov, M. A.

    2018-02-01

    This paper reviews existing theories for interpreting probe measurements of electron distribution functions (EDF) at high gas pressure, when collisions of electrons with atoms and/or molecules near the probe are pervasive. It explains whether or not the measurements are realizable and reliable, enumerates the most common sources of measurement error, and outlines probe-experiment design elements that inherently limit or avoid such error. Additionally, we describe recently expanded plasma-condition compatibility for EDF measurement, including applications of large wall-probe plasma diagnostics. This summary of the authors' experience, gained over decades of practicing and developing probe diagnostics, is intended to inform, guide, suggest, and detail the advantages and disadvantages of probe application in plasma research.

  12. Accuracy requirements on operational measurements in nuclear power plants with regard to balance methodology

    International Nuclear Information System (INIS)

    Holecek, C.

    1986-01-01

    Accurate in-service measurement is necessary for the power balancing of nuclear power plants, i.e., the determination of fuel consumption, electric power generation, heat delivery and the degree of fuel power utilization. The only possible method of determining the total energy input from the fuel is a balance of the primary coolant circuit, because for power-balancing purposes the power generated from nuclear fuel cannot be measured directly. Relations are presented for the calculation of the basic indices of the power balance. For power balancing and analyses, the precision of the measuring instruments at the input and output of the balancing circuits is of primary importance, followed by the precision of the instruments inside the balancing circuits and the meters of auxiliary parameters. (Z.M.). 7 refs., 1 tab

  13. High-bandwidth piezoresistive force probes with integrated thermal actuation

    International Nuclear Information System (INIS)

    Doll, Joseph C; Pruitt, Beth L

    2012-01-01

    We present high-speed force probes with on-chip actuation and sensing for the measurement of pN-scale forces at the microsecond timescale. We achieve a high resonant frequency in water (1–100 kHz) with requisite low spring constants (0.3–40 pN nm⁻¹) and low integrated force noise (1–100 pN) by targeting probe dimensions on the order of 300 nm thick, 1–2 μm wide and 30–200 μm long. Forces are measured using silicon piezoresistors, while the probes are actuated thermally with an aluminum unimorph and silicon heater. The piezoresistive sensors are designed using open-source numerical optimization code that incorporates constraints on operating temperature. Parylene passivation enables operation in ionic media and we demonstrate simultaneous actuation and sensing. The improved design and fabrication techniques that we describe enable a 10–20-fold improvement in force resolution or measurement bandwidth over prior piezoresistive cantilevers of comparable thickness. (paper)
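
    To connect the quoted dimensions to the quoted stiffness and resonance ranges, here is a minimal sketch using standard rectangular-cantilever formulas with assumed silicon constants; this is not the authors' optimization code, and the vacuum resonance is an upper bound (immersion in water lowers it substantially).

    ```python
    # Standard end-loaded rectangular-cantilever estimates. Young's modulus and
    # density of silicon are assumed (E = 169 GPa, rho = 2330 kg/m^3).
    import math

    E, RHO = 169e9, 2330.0  # assumed material constants for silicon

    def spring_constant(w: float, t: float, length: float) -> float:
        """k = E*w*t^3 / (4*L^3), in N/m."""
        return E * w * t**3 / (4.0 * length**3)

    def resonant_freq_vacuum(t: float, length: float) -> float:
        """First flexural mode, f = 0.1615*(t/L^2)*sqrt(E/rho), in Hz."""
        return 0.1615 * t / length**2 * math.sqrt(E / RHO)

    w, t, L = 1.5e-6, 300e-9, 100e-6   # one geometry from the quoted ranges
    print(f"k  = {spring_constant(w, t, L) * 1e3:.1f} pN/nm")            # ~1.7
    print(f"f0 = {resonant_freq_vacuum(t, L) / 1e3:.0f} kHz in vacuum")  # ~41
    ```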

  14. High bandwidth piezoresistive force probes with integrated thermal actuation

    Science.gov (United States)

    Doll, Joseph C.; Pruitt, Beth L.

    2012-01-01

    We present high-speed force probes with on-chip actuation and sensing for the measurement of pN-scale forces at the microsecond time scale. We achieve a high resonant frequency in water (1–100 kHz) with requisite low spring constants (0.3–40 pN/nm) and low integrated force noise (1–100 pN) by targeting probe dimensions on the order of 300 nm thick, 1–2 μm wide and 30–200 μm long. Forces are measured using silicon piezoresistors while the probes are actuated thermally with an aluminum unimorph and silicon heater. The piezoresistive sensors are designed using open source numerical optimization code that incorporates constraints on operating temperature. Parylene passivation enables operation in ionic media and we demonstrate simultaneous actuation and sensing. The improved design and fabrication techniques that we describe enable a 10–20 fold improvement in force resolution or measurement bandwidth over prior piezoresistive cantilevers of comparable thickness. PMID:23175616

  15. Contact Thermocouple Methodology and Evaluation for Temperature Measurement in the Laboratory

    Science.gov (United States)

    Brewer, Ethan J.; Pawlik, Ralph J.; Krause, David L.

    2013-01-01

    Laboratory testing of advanced aerospace components very often requires highly accurate temperature measurement and control devices, as well as methods to precisely analyze and predict the performance of such components. Analysis of test articles depends on accurate measurements of temperature across the specimen. Where possible, this task is accomplished using many thermocouples welded directly to the test specimen, which can produce results with great precision. However, it is known that thermocouple spot welds can initiate deleterious cracks in some materials, prohibiting the use of welded thermocouples. Such is the case for the nickel-based superalloy MarM-247, which is used in the high temperature, high pressure heater heads for the Advanced Stirling Converter component of the Advanced Stirling Radioisotope Generator space power system. To overcome this limitation, a method was developed that uses small diameter contact thermocouples to measure the temperature of heater head test articles with the same level of accuracy as welded thermocouples. This paper includes a brief introduction and a background describing the circumstances that compelled the development of the contact thermocouple measurement method. Next, the paper describes studies performed on contact thermocouple readings to determine the accuracy of results. It continues on to describe in detail the developed measurement method and the evaluation of results produced. A further study that evaluates the performance of different measurement output devices is also described. Finally, a brief conclusion and summary of results is provided.

  16. Comparison of efficiency of distance measurement methodologies in mango (Mangifera indica) progenies based on physicochemical descriptors.

    Science.gov (United States)

    Alves, E O S; Cerqueira-Silva, C B M; Souza, A M; Santos, C A F; Lima Neto, F P; Corrêa, R X

    2012-03-14

    We investigated seven distance measures in a set of observations of physicochemical variables of mango (Mangifera indica) submitted to multivariate analyses (distance, projection and grouping). To estimate the distance measures, five mango progenies (25 genotypes in total) were analyzed using six fruit physicochemical descriptors (fruit weight, equatorial diameter, longitudinal diameter, total soluble solids in °Brix, total titratable acidity, and pH). The distance measures were compared by the Spearman correlation test, projection in two-dimensional space and grouping efficiency. Except for Mahalanobis' generalized distance (0.41 ≤ rs ≤ 0.63), the Spearman correlation coefficients between the seven distance measures were high and significant (rs ≥ 0.91; P < 0.001). Regardless of the origin of the distance matrix, the unweighted pair-group method with arithmetic mean (UPGMA) proved to be the most adequate grouping method. The various distance measures and grouping methods gave different values for distortion (-116.5 ≤ D ≤ 74.5), cophenetic correlation (0.26 ≤ rc ≤ 0.76) and stress (-1.9 ≤ S ≤ 58.9). The choice of distance measure and analysis method influences the results.
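
    A minimal sketch of this kind of comparison pipeline (distance matrix, UPGMA grouping, cophenetic correlation) on synthetic data; SciPy's "average" linkage is UPGMA, and the metrics and data here are illustrative, not the study's.

    ```python
    # Compare two candidate distance measures over genotype x trait data, cluster
    # with UPGMA, and score each dendrogram by its cophenetic correlation.
    import numpy as np
    from scipy.cluster.hierarchy import cophenet, linkage
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    traits = rng.normal(size=(25, 6))   # stand-in for 25 genotypes x 6 descriptors

    d_euclid = pdist(traits, metric="euclidean")
    d_manhat = pdist(traits, metric="cityblock")

    rho, _ = spearmanr(d_euclid, d_manhat)  # agreement between distance measures

    for name, d in [("euclidean", d_euclid), ("manhattan", d_manhat)]:
        tree = linkage(d, method="average")   # "average" linkage = UPGMA
        r_c, _ = cophenet(tree, d)            # grouping-efficiency proxy
        print(f"{name}: cophenetic r = {r_c:.2f}, spearman vs other = {rho:.2f}")
    ```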

  17. Methodology and measurement of radiation interception by quantum sensor of the oil palm plantation

    Directory of Open Access Journals (Sweden)

    Johari Endan

    2005-09-01

    Interception of light by a canopy is a fundamental requirement for crop growth and is important for biomass production and plant-growth modeling. Solar radiation is an important parameter for photosynthesis and evapotranspiration. These two phenomena depend not only on the intensity of radiation but also on the distribution of intercepted radiation within the canopy. In this study, two operational methods for estimating the amount of photosynthetically active radiation (PAR) intercepted by an oil palm canopy are presented. LICOR radiation sensors, models LI-190SA and LI-191SA, were used for PAR measurement above and below the canopy. We developed two methods, namely a "triangular" method and a "circular" method, for PAR measurement. Results show that both methods are suitable for oil palm PAR measurement. The triangular method is recommended for PAR measurements with respect to the whole plantation, and the circular method for specific purposes such as growth analysis or growth modeling of the oil palm. However, practical considerations such as equipment availability, purpose of the measurement, age of the palm, and the number of measuring points to be sampled should be taken into account when selecting a suitable method for a particular study. The results indicate that the interception of radiation was affected by spatial variation and that radiation transmission decreased towards the frond tips.

  18. Sources of particulate matter components in the Athabasca oil sands region: investigation through a comparison of trace element measurement methodologies

    Directory of Open Access Journals (Sweden)

    C. Phillips-Smith

    2017-08-01

    The province of Alberta, Canada, is home to three oil sands regions which, combined, contain the third largest deposit of oil in the world. Of these, the Athabasca oil sands region is the largest. As part of Environment and Climate Change Canada's program in support of the Joint Canada-Alberta Implementation Plan for Oil Sands Monitoring, concentrations of trace elements in PM2.5 (particulate matter smaller than 2.5 µm in diameter) were measured through two campaigns that involved different methodologies: a long-term filter campaign and a short-term intensive campaign. In the long-term campaign, 24 h filter samples were collected once every 6 days over a 2-year period (December 2010-November 2012) at three air monitoring stations in the regional municipality of Wood Buffalo. For the intensive campaign (August 2013), hourly measurements were made with an online instrument at one air monitoring station; daily filter samples were also collected. The hourly and 24 h filter data were analyzed individually using positive matrix factorization. Seven emission sources of PM2.5 trace elements were thereby identified: two types of upgrader emissions, soil, haul road dust, biomass burning, and two sources of mixed origin. The upgrader emissions, soil, and haul road dust sources were identified by both methodologies, and both identified a mixed source, but these exhibited more differences than similarities. The second upgrader emissions source and the biomass burning source were resolved only by the hourly and filter methodologies, respectively. The similarity of the receptor modeling results from the two methodologies provided reassurance as to the identity of the sources. Overall, much of the PM2.5-related trace elements were found to be anthropogenic, or at least to be aerosolized through anthropogenic activities. These emissions may in part explain the previously reported higher levels of trace elements in snow.

  19. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements

    KAUST Repository

    Zambri, Brian

    2015-11-05

    Our aim is to propose a numerical strategy for retrieving accurately and efficiently the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology. © 2015 IEEE.
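
    A generic linear predict/correct skeleton of the kind referred to above; this is a plain Kalman step for illustration only, not the TNM-CKF of [1] or the nonlinear hemodynamic (balloon) model itself, which is not reproduced here.

    ```python
    # One prediction/correction cycle of a linear Kalman filter.
    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """Update state x and covariance P with measurement z."""
        x_pred = F @ x                      # prediction
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R            # correction against the measurement
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy usage: scalar hidden state observed with noise
    x, P = np.array([0.0]), np.eye(1)
    F = H = Q = np.eye(1)
    R = 0.5 * np.eye(1)
    for z in (0.9, 1.1, 1.0):
        x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
    print(x)  # ~[1.0]
    ```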

  20. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements

    KAUST Repository

    Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2015-01-01

    Our aim is to propose a numerical strategy for retrieving accurately and efficiently the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology. © 2015 IEEE.

  1. Metabolic tumour volumes measured at staging in lymphoma: methodological evaluation on phantom experiments and patients

    International Nuclear Information System (INIS)

    Meignan, Michel; Sasanelli, Myriam; Itti, Emmanuel; Casasnovas, Rene Olivier; Luminari, Stefano; Fioroni, Federica; Coriani, Chiara; Masset, Helene; Gobbi, Paolo G.; Merli, Francesco; Versari, Annibale

    2014-01-01

    The presence of a bulky tumour at staging on CT is an independent prognostic factor in malignant lymphomas. However, its prognostic value is limited in diffuse disease. Total metabolic tumour volume (TMTV) determined on ¹⁸F-FDG PET/CT could give a better evaluation of the total tumour burden and may help patient stratification. Different methods of TMTV measurement established in phantoms simulating lymphoma tumours were investigated and validated in 40 patients with Hodgkin lymphoma and diffuse large B-cell lymphoma. Data were processed by two nuclear medicine physicians in Reggio Emilia and Creteil. Nineteen phantoms filled with ¹⁸F-saline were scanned; these comprised spherical or irregular volumes from 0.5 to 650 cm³ with tumour-to-background ratios from 1.65 to 40. Volumes were measured with different SUVmax thresholds. In patients, TMTV was measured on PET at staging by two methods: volumes of individual lesions were measured using a fixed 41% SUVmax threshold (TMTV41) and a variable, visually adjusted SUVmax threshold (TMTVvar). In phantoms, the 41% threshold gave the best concordance between measured and actual volumes. Interobserver agreement was almost perfect. In patients, the agreement between the reviewers for TMTV41 measurement was substantial (ρc = 0.986, CI 0.97-0.99) and the difference between the means was not significant (212 ± 218 cm³ for Creteil vs. 206 ± 219 cm³ for Reggio Emilia, P = 0.65). By contrast, the agreement was poor for TMTVvar. There was a significant direct correlation between TMTV41 and normalized LDH (r = 0.652, CI 0.42-0.8, P < 0.001), but high TMTV41 could be found in patients with stage 1/2 or nonbulky tumour. Measurement of baseline TMTV in lymphoma using a fixed 41% SUVmax threshold is reproducible and correlates with the other parameters for tumour mass evaluation. It should be evaluated in prospective studies. (orig.)
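
    A minimal sketch of the fixed-threshold measurement described above, assuming per-lesion SUV arrays have already been segmented; the voxel size and toy lesion values are illustrative, not the authors' software.

    ```python
    # Fixed-threshold metabolic tumour volume: keep voxels whose SUV exceeds 41%
    # of the lesion's SUVmax, then multiply the voxel count by the voxel volume.
    import numpy as np

    def lesion_mtv(suv, voxel_volume_cm3, frac=0.41):
        """Metabolic volume of one segmented lesion at a fixed SUVmax threshold."""
        mask = suv >= frac * suv.max()      # voxels above 41% of SUVmax
        return float(mask.sum()) * voxel_volume_cm3

    def tmtv41(lesions, voxel_volume_cm3):
        """TMTV41: sum of per-lesion volumes at the 41% threshold."""
        return sum(lesion_mtv(s, voxel_volume_cm3) for s in lesions)

    lesion = np.array([[1.0, 3.2, 4.9],
                       [2.1, 5.0, 2.0],
                       [0.5, 1.9, 2.2]])              # toy SUV values
    print(round(lesion_mtv(lesion, 0.064), 3))        # 5 voxels x 0.064 = 0.32
    print(round(tmtv41([lesion, lesion], 0.064), 3))  # 0.64 for two such lesions
    ```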

  2. A methodology for interpretation of overcoring stress measurements in anisotropic rock

    International Nuclear Information System (INIS)

    Hakala, M.; Sjoeberg, J.

    2006-11-01

    The in situ state of stress is an important parameter for the design of a repository for final disposal of spent nuclear fuel. This report presents work conducted to improve the quality of overcoring stress measurements, focused on the interpretation of overcoring rock stress measurements when accounting for possible anisotropic behavior of the rock. The work comprised: (i) development/upgrading of a computer code for calculating stresses from overcoring strains for anisotropic materials and for a general overcoring probe configuration (up to six strain rosettes with six gauges each), (ii) development of a computer code for determining elastic constants for transversely isotropic rocks from biaxial testing, and (iii) analysis of case studies of selected overcoring measurements in both isotropic and anisotropic rocks from the Posiva and SKB sites in Finland and Sweden, respectively. The work was principally limited to transversely isotropic materials, although the stress calculation code is applicable also to orthotropic materials. The developed computer codes have been geared to work primarily with the Borre and CSIRO HI three-dimensional overcoring measurement probes. Application of the codes to selected case studies, showed that the developed tools were practical and useful for interpreting overcoring stress measurements conducted in anisotropic rock. A quantitative assessment of the effects of anisotropy may thus be obtained, which provides increased reliability in the stress data. Potential gaps in existing data and/or understanding can also be identified. (orig.)

  3. Measurements of air kerma index in computed tomography: a comparison among methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, T. C.; Mourao, A. P.; Da Silva, T. A., E-mail: alonso@cdtn.br [Universidade Federal de Minas Gerais, Programa de Ciencia y Tecnicas Nucleares, Av. Pres. Antonio Carlos 6627, Pampulha, 31270-901 Belo Horizonte, Minas Gerais (Brazil)

    2016-10-15

    Computed tomography (CT) has become the most important and widely used technique for diagnostic purposes. As CT exams impart high doses to patients in comparison with other radiology techniques, reliable dosimetry is required. Dosimetry in CT is done in terms of the air kerma index, in air or in a phantom, measured by a pencil ionization chamber during a single X-ray tube rotation. In this work, CT dosimetric quantities measured by an UNFORS pencil ionization chamber, MTS-N RADOS thermoluminescent dosimeters and GAFCHROMIC XR-CT radiochromic film were compared. The three dosimetric systems were properly calibrated in X-ray reference radiations in a calibration laboratory. CT dosimetric quantities were measured in a GE Medical Systems BrightSpeed CT scanner using a PMMA trunk phantom, and the three dosimetric techniques were compared. (Author)

  4. A new methodology for measuring time correlations and excite states of atoms and nuclei

    International Nuclear Information System (INIS)

    Cavalcante, M.A.

    1989-01-01

    A system for measuring the time correlation of physical events in the range of 10⁻⁷ to 10⁵ s is proposed and its results are presented. The system is based on a sequential time scale controlled by a precision quartz oscillator; the zero time of observation is set by means of a JK flip-flop, operated by a negative pulse transition in coincidence with the pulse from a detector that marks the time zero of the event (precedent pulse). This electronic system (named a digital chronoanalyzer) was used in the measurement of excited states of nuclei as well as in the determination of time fluctuations in physical phenomena, such as the time lag in a halogen Geiger counter and the measurement of the 60 keV excited state of ²³⁷Np. (author)

  5. Methodology of measurement of thermal neutron time decay constant in Canberra 35+ MCA system

    Energy Technology Data Exchange (ETDEWEB)

    Drozdowicz, K.; Gabanska, B.; Igielski, A.; Krynicka, E.; Woznicka, U. [Institute of Nuclear Physics, Cracow (Poland)

    1993-12-31

    A method for measuring the thermal neutron time decay constant in small bounded media is presented. A 14 MeV pulsed neutron generator is the neutron source. The system for recording the die-away curve of thermal neutrons consists of a ³He detector and a multichannel time analyzer based on the Canberra 35+ analyzer with the MCS 7880 multiscaler module (microsecond range). Optimum parameters for the measuring system are considered. Experimental verification of the dead time of the instrumentation system is made and a count-loss correction is incorporated into the data treatment. Attention is paid to evaluating, with high accuracy, the fundamental-mode decay constant of the recorded die-away curve. A new procedure for determining the decay constant by multiple recording of the die-away curve is presented and results of test measurements are shown. (author). 11 refs, 12 figs, 4 tabs.
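
    A minimal sketch of extracting the fundamental-mode decay constant from a die-away curve by fitting a single exponential plus background; the synthetic counts and starting values are illustrative, and the dead-time correction discussed above is not shown.

    ```python
    # Fit A*exp(-lambda*t) + B to a recorded die-away histogram; B absorbs the
    # background. Synthetic counts stand in for the multichannel analyzer data.
    import numpy as np
    from scipy.optimize import curve_fit

    def die_away(t, amplitude, decay_constant, background):
        return amplitude * np.exp(-decay_constant * t) + background

    t = np.linspace(0.0, 2000e-6, 200)            # channel centre times, s
    rng = np.random.default_rng(1)
    counts = die_away(t, 1.0e4, 4000.0, 50.0) + rng.normal(0.0, 20.0, t.size)

    popt, pcov = curve_fit(die_away, t, counts, p0=(5e3, 3000.0, 10.0))
    print(f"lambda = {popt[1]:.0f} 1/s +/- {np.sqrt(pcov[1, 1]):.0f}")  # ~4000
    ```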

  6. Measurements of air kerma index in computed tomography: a comparison among methodologies

    International Nuclear Information System (INIS)

    Alonso, T. C.; Mourao, A. P.; Da Silva, T. A.

    2016-10-01

    Computed tomography (CT) has become the most important and widely used technique for diagnostic purposes. As CT exams impart high doses to patients in comparison with other radiology techniques, reliable dosimetry is required. Dosimetry in CT is done in terms of the air kerma index, in air or in a phantom, measured by a pencil ionization chamber during a single X-ray tube rotation. In this work, CT dosimetric quantities measured by an UNFORS pencil ionization chamber, MTS-N RADOS thermoluminescent dosimeters and GAFCHROMIC XR-CT radiochromic film were compared. The three dosimetric systems were properly calibrated in X-ray reference radiations in a calibration laboratory. CT dosimetric quantities were measured in a GE Medical Systems BrightSpeed CT scanner using a PMMA trunk phantom, and the three dosimetric techniques were compared. (Author)

  7. Methodology of measurement of thermal neutron time decay constant in Canberra 35+ MCA system

    International Nuclear Information System (INIS)

    Drozdowicz, K.; Gabanska, B.; Igielski, A.; Krynicka, E.; Woznicka, U.

    1993-01-01

    A method for measuring the thermal neutron time decay constant in small bounded media is presented. A 14 MeV pulsed neutron generator is the neutron source. The system for recording the die-away curve of thermal neutrons consists of a ³He detector and a multichannel time analyzer based on the Canberra 35+ analyzer with the MCS 7880 multiscaler module (microsecond range). Optimum parameters for the measuring system are considered. Experimental verification of the dead time of the instrumentation system is made and a count-loss correction is incorporated into the data treatment. Attention is paid to evaluating, with high accuracy, the fundamental-mode decay constant of the recorded die-away curve. A new procedure for determining the decay constant by multiple recording of the die-away curve is presented and results of test measurements are shown. (author). 11 refs, 12 figs, 4 tabs

  8. Methodology of measurement of thermal neutron time decay constant in Canberra 35+ MCA system

    Energy Technology Data Exchange (ETDEWEB)

    Drozdowicz, K; Gabanska, B; Igielski, A; Krynicka, E; Woznicka, U [Institute of Nuclear Physics, Cracow (Poland)

    1994-12-31

    A method for measuring the thermal neutron time decay constant in small bounded media is presented. A 14 MeV pulsed neutron generator is the neutron source. The system for recording the die-away curve of thermal neutrons consists of a ³He detector and a multichannel time analyzer based on the Canberra 35+ analyzer with the MCS 7880 multiscaler module (microsecond range). Optimum parameters for the measuring system are considered. Experimental verification of the dead time of the instrumentation system is made and a count-loss correction is incorporated into the data treatment. Attention is paid to evaluating, with high accuracy, the fundamental-mode decay constant of the recorded die-away curve. A new procedure for determining the decay constant by multiple recording of the die-away curve is presented and results of test measurements are shown. (author). 11 refs, 12 figs, 4 tabs.

  9. Methodologies for the measurement of bone density and their precision and accuracy

    International Nuclear Information System (INIS)

    Goodwin, P.N.

    1987-01-01

    Radiographic methods of determining bone density have been available for many years, but recently most of the efforts in this field have focused on the development of instruments which would accurately and automatically measure bone density by absorption, or by the use of x-ray computed tomography (CT). Single energy absorptiometers using I-125 have been available for some years, and have been used primarily for measurements on the radius, although recently equipment for measuring the os calcis has become available. Accuracy of single energy measurements is about 3% to 5%; precision, which has been poor because of the difficulty of exact repositioning, has recently been improved by automatic methods so that it now approaches 1% or better. Dual energy sources offer the advantages of greater accuracy and the ability to measure the spine and other large bones. A number of dual energy scanners are now on the market, mostly using gadolinium-153 as a source. Dual energy scanning is capable of an accuracy of a few percent, but the precision when scanning patients can vary widely, due to the difficulty of comparing exactly the same areas; 2 to 4% would appear to be typical. Quantitative computed tomography (QCT) can be used to directly measure the trabecular bone within the vertebral body. The accuracy of single-energy QCT is affected by the amount of marrow fat present, which can lead to underestimations of 10% or more. An increase in marrow fat would cause an apparent decrease in bone mineral. However, the precision can be quite good, 1% or 2% on phantoms, and nearly as good on patients when four vertebrae are averaged. Dual energy scanning can correct for the presence of fat, but is less precise, and not available on all CT units. 52 references

  10. Methodological aspects to be considered in evaluating the economics of service measures

    International Nuclear Information System (INIS)

    Bald, M.

    1987-01-01

    For the purposes of this report, service measures is used as a term denoting all those steps which exceed the framework of normal in-service maintenance and repair and serve to improve economics beyond the normal case. Positive impacts are to be achieved on parameters such as availability, efficiency, and service life. One of the aspects investigated is the effect, if any, of such measures on the residual service life of plants that have already been in operation for a long period. Residual service life here means the remaining span of effective technical and economic operation which, in these model calculations, also includes part of the depreciation period. (orig.) [de]

  11. High frequency measurement of P- and S-wave velocities on crystalline rock massif surface - methodology of measurement

    Science.gov (United States)

    Vilhelm, Jan; Slavík, Lubomír

    2014-05-01

    For the purpose of non-destructive monitoring of rock properties in an underground excavation, it is possible to perform repeated high-accuracy P- and S-wave velocity measurements. This contribution deals with preliminary results gained during the preparation of a micro-seismic long-term monitoring system. The field velocity measurements were made by the pulse-transmission technique directly on a rock outcrop (granite) in the Bedrichov gallery (northern Bohemia). The gallery at the experimental site was excavated using a TBM (tunnel boring machine) and is used for drinking water supply, which is conveyed in a pipe. The stable measuring system and its automatic operation led to the use of piezoceramic transducers both as seismic source and as receiver. The length of the measuring base on the gallery wall was from 0.5 to 3 meters. Different transducer coupling possibilities were tested, namely with regard to the repeatability of velocity determination. The arrangement of the measuring system on the surface of the rock massif gives the S-transducers better sensitivity for P-wave measurement than the P-transducers; similarly, P-transducers were found more suitable for S-wave velocity determination than S-transducers. The frequency-dependent attenuation of the fresh rock massif limits the frequency content of the registered seismic signals. It was found that at source-receiver distances of 0.5 m and more, frequency components above 40 kHz are significantly attenuated; therefore, 100 kHz transducers are most suitable for exciting the seismic wave. The limited frequency range should also be taken into account in the shape of the electric impulse used to excite the piezoceramic transducer. A spike pulse generates a broad-band seismic signal, short in the time domain; however, its energy after low-pass filtration in the rock is significantly lower than the energy of the seismic signal generated by a square-wave pulse. Acknowledgments: This work was partially
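
    A minimal sketch of a pulse-transmission velocity estimate, picking the travel time from the peak of the cross-correlation between the source and received signals; the waveforms, sampling rate and base length are synthetic stand-ins, not the gallery data.

    ```python
    # Velocity = base length / travel time, with the travel time picked as the
    # lag that maximizes the source/receiver cross-correlation.
    import numpy as np

    fs = 1.0e6                                  # sampling rate, Hz
    t = np.arange(0.0, 2e-3, 1.0 / fs)
    source = np.exp(-((t - 1e-4) ** 2) / (2 * (2e-5) ** 2))   # smooth pulse
    delay_true = 3.4e-4                                       # s, unknown in practice
    received = np.interp(t - delay_true, t, source, left=0.0) # delayed copy

    corr = np.correlate(received, source, mode="full")
    lag = int(corr.argmax()) - (len(source) - 1)              # delay in samples
    travel_time = lag / fs

    base_length = 1.5                                         # m, measuring base
    print(f"v = {base_length / travel_time:.0f} m/s")         # ~4412 m/s
    ```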

  12. Three-dimensional sensing methodology combining stereo vision and phase-measuring profilometry based on dynamic programming

    Science.gov (United States)

    Lee, Hyunki; Kim, Min Young; Moon, Jeon Il

    2017-12-01

    Phase measuring profilometry (PMP) and moiré methods have been widely applied to the three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation known as the correspondence, or 2π-ambiguity, problem. Although a sensing method combining well-known stereo vision with the PMP technique has been developed to overcome this problem, it still requires definite improvement in sensing speed and measurement accuracy. We propose a dynamic-programming-based stereo PMP method to acquire more reliable depth information in a relatively short time. The proposed method efficiently fuses phase and intensity information from the two stereo sensors simultaneously, based on a newly defined cost function for dynamic programming. In addition, the important parameters are analyzed from the viewpoint of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of important hardware and software parameters on measurement performance and to verify the method's efficiency, accuracy, and sensing speed, a series of experimental tests was performed with various objects and sensor configurations.
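
    A toy scanline version of dynamic-programming correspondence with a cost that mixes phase and intensity differences, in the spirit described above; the weights, occlusion penalty and cost form are our assumptions, not the paper's cost function.

    ```python
    # Classic match/occlude DP along one scanline: D[i, j] is the best cost of
    # aligning the first i left pixels with the first j right pixels.
    import numpy as np

    def dp_scanline(phase_l, phase_r, inten_l, inten_r, w=0.7, occ=0.5):
        """Cost of the best left/right scanline alignment (match or occlude)."""
        n, m = len(phase_l), len(phase_r)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, :] = occ * np.arange(m + 1)           # leading unmatched right pixels
        D[:, 0] = occ * np.arange(n + 1)           # leading unmatched left pixels
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                match = (w * abs(phase_l[i - 1] - phase_r[j - 1])
                         + (1 - w) * abs(inten_l[i - 1] - inten_r[j - 1]))
                D[i, j] = min(D[i - 1, j - 1] + match,  # correspond the two pixels
                              D[i - 1, j] + occ,        # occluded in right view
                              D[i, j - 1] + occ)        # occluded in left view
        return D[n, m]

    pl, pr = np.array([0.1, 0.5, 0.9]), np.array([0.12, 0.52, 0.88])
    il, ir = np.array([10.0, 20.0, 30.0]) / 30, np.array([11.0, 19.0, 31.0]) / 30
    print(dp_scanline(pl, pr, il, ir))  # ~0.072, all three pixels matched
    ```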

  13. METHODOLOGICAL APPROACH FOR MEASURING PRIORITY DBPS IN REVERSE OSMOSIS CONCENTRATED DRINKING WATER

    Science.gov (United States)

    Many disinfection by-products (DBPs) are formed when drinking water is chlorinated, but only a few are routinely measured or regulated. Various studies have revealed a plethora of DBPs for which sensitive and quantitative analytical methods have always been a major limiting facto...

  14. A new integrative methodology for desertification studies based on magnetic and short-lived radioisotope measurements

    International Nuclear Information System (INIS)

    Oldfield, F.; Higgitt, S.R.; Maher, B.A.; Appleby, P.G.; Scoullos, M.

    1986-01-01

    The use of mineral magnetic measurements and short-lived radioisotope studies with ²¹⁰Pb and ¹³⁷Cs is discussed within the ecosystem watershed conceptual framework. Used in conjunction with geomorphological, sedimentological, palaeoecological and geochemical techniques, these methods can form the core of an integrated multidisciplinary study of desertification and erosion processes on all relevant temporal and spatial scales. 30 refs.; 4 figs

  15. Cerebral blood measurements in cerebral vascular disease: methodological and clinical aspects

    International Nuclear Information System (INIS)

    Fieschi, C.; Lenzi, G.L.

    1982-01-01

    This paper is devoted mainly to studies performed on acute cerebral vascular disease with the invasive techniques for the measurement of regional cerebral blood flow (rCBF). The principles of the rCBF method are outlined and the following techniques are described in detail: xenon-133 inhalation method, xenon-133 intravenous method and emission tomography methods. (C.F.)

  16. Complete methodology on generating realistic wind speed profiles based on measurements

    DEFF Research Database (Denmark)

    Gavriluta, Catalin; Spataru, Sergiu; Mosincat, Ioan

    2012-01-01

    , wind modelling for medium and large time scales is poorly treated in the present literature. This paper presents methods for generating realistic wind speed profiles based on real measurements. The wind speed profile is divided into a low-frequency component (describing long-term variations...
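
    A minimal sketch of the two-component construction: a low-frequency profile interpolated from assumed hourly mean measurements, plus smoothed Gaussian noise as the high-frequency term. All sample values and noise parameters are illustrative, not the paper's models.

    ```python
    # Synthesize a wind speed profile as low-frequency trend + high-frequency term.
    import numpy as np

    rng = np.random.default_rng(42)

    hourly_means = np.array([6.0, 6.5, 7.8, 9.1, 8.4, 7.2])  # assumed means, m/s
    t_hours = np.arange(len(hourly_means))                   # 0..5 h
    t = np.linspace(0.0, t_hours[-1], 301)                   # 1-minute grid

    low_freq = np.interp(t, t_hours, hourly_means)           # long-term component

    noise = rng.normal(0.0, 0.8, t.size)                     # raw turbulence term
    high_freq = np.convolve(noise, np.ones(9) / 9.0, mode="same")  # crude smoothing

    wind_speed = np.clip(low_freq + high_freq, 0.0, None)    # no negative speeds
    print(wind_speed[:5].round(2))
    ```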

  17. A methodology to measure the effectiveness of academic recruitment and turnover

    DEFF Research Database (Denmark)

    Abramo, Giovanni; D’Angelo, Ciriaco Andrea; Rosati, Francesco

    2016-01-01

    We propose a method to measure the effectiveness of the recruitment and turnover of professors, in terms of their research performance. The method presented is applied to the case of Italian universities over the period 2008-2012. The work then analyses the correlation between the indicators of eff...

  18. Automated landmark extraction for orthodontic measurement of faces using the 3-camera photogrammetry methodology.

    Science.gov (United States)

    Deli, Roberto; Di Gioia, Eliana; Galantucci, Luigi Maria; Percoco, Gianluca

    2010-01-01

    To set up a three-dimensional photogrammetric scanning system for precise landmark measurement, without any physical contact, using a low-cost, noninvasive digital photogrammetric solution to support several needs in clinical orthodontics and/or surgical diagnosis. Thirty coded targets were applied directly onto the soft-tissue landmarks of the subject's face, and 3 simultaneous photos were then acquired photogrammetrically under room-light conditions. For comparison, a dummy head was digitized both with the photogrammetric technique and with the Minolta Vivid 910i laser scanner (Konica Minolta, Tokyo, Japan). The precision of the landmark measurements ranged between 0.017 and 0.029 mm. The system automatically measures the spatial position of face landmarks, from which distances and angles can be obtained. The facial measurements were compared with those obtained using laser scanning and a manual caliper. The adopted method gives higher precision than the others (0.022-mm mean value on points and 0.038-mm mean value on linear distances on a dummy head), is simple, and can be used easily as a standard routine. The study demonstrated the validity of photogrammetry for accurate digitization of human face landmarks. This research points out the potential of this low-cost photogrammetric approach for medical digitization.
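
    A minimal sketch of the multi-view triangulation that underlies coded-target photogrammetry: each camera's 3x4 projection matrix contributes two linear constraints from the target's image coordinates, and the 3-D point is recovered by SVD. The matrices below are toy values, not calibration results from the system described.

    ```python
    # Direct linear transform (DLT) triangulation from multiple calibrated views.
    import numpy as np

    def triangulate(projections, pixels):
        """DLT triangulation from >= 2 views: 3x4 matrices P and (u, v) pixels."""
        rows = []
        for P, (u, v) in zip(projections, pixels):
            rows.append(u * P[2] - P[0])   # two linear constraints per view
            rows.append(v * P[2] - P[1])
        _, _, vt = np.linalg.svd(np.asarray(rows))
        X = vt[-1]                         # null vector of the stacked system
        return X[:3] / X[3]                # dehomogenize

    # Three toy cameras (identity rotation, shifted centers) observing one point
    X_true = np.array([0.1, 0.2, 2.0, 1.0])
    Ps = [np.hstack([np.eye(3), [[dx], [0.0], [0.0]]]) for dx in (0.0, -0.2, 0.2)]
    uv = []
    for P in Ps:
        x = P @ X_true
        uv.append((x[0] / x[2], x[1] / x[2]))
    print(triangulate(Ps, uv).round(3))    # [0.1 0.2 2. ]
    ```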

  19. Difference in blood pressure measurements between arms: methodological and clinical implications.

    Science.gov (United States)

    Clark, Christopher E

    2015-01-01

    Differences in blood pressure measurements between arms are commonly encountered in clinical practice. If such differences are not excluded they can delay the diagnosis of hypertension and can lead to poorer control of blood pressure levels. Differences in blood pressure measurements between arms are associated cross sectionally with other signs of vascular disease such as peripheral arterial disease or cerebrovascular disease. Differences are also associated prospectively with increased cardiovascular mortality and morbidity and all cause mortality. Numbers of publications on inter-arm difference are rising year on year, indicating a growing interest in the phenomenon. The prevalence of an inter-arm difference varies widely between reports, and is correlated with the underlying cardiovascular risk of the population studied. Prevalence is also sensitive to the method of measurement used. This review discusses the prevalence of an inter-arm difference in different populations and addresses current best practice for the detection and the measurement of a difference. The evidence for clinical and for vascular associations of an inter-arm difference is presented in considering the emerging role of an inter-arm blood pressure difference as a novel risk factor for increased cardiovascular morbidity and mortality. Competing aetiological explanations for an inter-arm difference are explored, and gaps in our current understanding of this sign, along with areas in need of further research, are considered.

  20. Minimizing measurement uncertainties of coniferous needle-leaf optical properties, part I: methodological review

    NARCIS (Netherlands)

    Yanez Rausell, L.; Schaepman, M.E.; Clevers, J.G.P.W.; Malenovsky, Z.

    2014-01-01

    Optical properties (OPs) of non-flat narrow plant leaves, i.e., coniferous needles, are extensively used by the remote sensing community, in particular for calibration and validation of radiative transfer models at leaf and canopy level. Optical measurements of such small living elements are,

  1. Metabolic tumour volumes measured at staging in lymphoma: methodological evaluation on phantom experiments and patients

    Energy Technology Data Exchange (ETDEWEB)

    Meignan, Michel [Hopital Henri Mondor and Paris-Est University, Department of Nuclear Medicine, Creteil (France); Paris-Est University, Service de Medecine Nucleaire, EAC CNRS 7054, Hopital Henri Mondor AP-HP, Creteil (France); Sasanelli, Myriam; Itti, Emmanuel [Hopital Henri Mondor and Paris-Est University, Department of Nuclear Medicine, Creteil (France); Casasnovas, Rene Olivier [CHU Le Bocage, Department of Hematology, Dijon (France); Luminari, Stefano [University of Modena and Reggio Emilia, Department of Diagnostic, Clinic and Public Health Medicine, Modena (Italy); Fioroni, Federica [Santa Maria Nuova Hospital-IRCCS, Department of Medical Physics, Reggio Emilia (Italy); Coriani, Chiara [Santa Maria Nuova Hospital-IRCCS, Department of Radiology, Reggio Emilia (Italy); Masset, Helene [Henri Mondor Hospital, Department of Radiophysics, Creteil (France); Gobbi, Paolo G. [University of Pavia, Department of Internal Medicine and Gastroenterology, Fondazione IRCCS Policlinico San Matteo, Pavia (Italy); Merli, Francesco [Santa Maria Nuova Hospital-IRCCS, Department of Hematology, Reggio Emilia (Italy); Versari, Annibale [Santa Maria Nuova Hospital-IRCCS, Department of Nuclear Medicine, Reggio Emilia (Italy)

    2014-06-15

    The presence of a bulky tumour at staging on CT is an independent prognostic factor in malignant lymphomas. However, its prognostic value is limited in diffuse disease. Total metabolic tumour volume (TMTV) determined on ¹⁸F-FDG PET/CT could give a better evaluation of the total tumour burden and may help patient stratification. Different methods of TMTV measurement established in phantoms simulating lymphoma tumours were investigated and validated in 40 patients with Hodgkin lymphoma and diffuse large B-cell lymphoma. Data were processed by two nuclear medicine physicians in Reggio Emilia and Creteil. Nineteen phantoms filled with ¹⁸F-saline were scanned; these comprised spherical or irregular volumes from 0.5 to 650 cm³ with tumour-to-background ratios from 1.65 to 40. Volumes were measured with different SUVmax thresholds. In patients, TMTV was measured on PET at staging by two methods: volumes of individual lesions were measured using a fixed 41 % SUVmax threshold (TMTV41) and a variable visually adjusted SUVmax threshold (TMTVvar). In phantoms, the 41 % threshold gave the best concordance between measured and actual volumes. Interobserver agreement was almost perfect. In patients, the agreement between the reviewers for TMTV41 measurement was substantial (ρc = 0.986, CI 0.97 - 0.99) and the difference between the means was not significant (212 ± 218 cm³ for Creteil vs. 206 ± 219 cm³ for Reggio Emilia, P = 0.65). By contrast the agreement was poor for TMTVvar. There was a significant direct correlation between TMTV41 and normalized LDH (r = 0.652, CI 0.42 - 0.8, P < 0.001). Higher disease stages and bulky tumour were associated with higher TMTV41, but high TMTV41 could be found in patients with stage 1/2 or nonbulky tumour. Measurement of baseline TMTV in lymphoma using a fixed 41 % SUVmax threshold is reproducible and correlates with the other parameters for tumour mass evaluation.
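
    A minimal sketch of the fixed-threshold volume computation described above, for a single lesion volume of interest; the array shape, voxel size, and SUV values are hypothetical, and summing such volumes over all lesions would give the TMTV:

```python
import numpy as np

def tmtv_fixed_threshold(suv, voxel_volume_cm3, threshold_fraction=0.41):
    """Metabolic volume of one lesion VOI by a fixed SUVmax threshold.

    suv: 3-D array of SUV values covering a single lesion VOI.
    Returns the volume (cm^3) of voxels at or above
    threshold_fraction * SUVmax, in the spirit of the TMTV41 approach.
    """
    cutoff = threshold_fraction * suv.max()
    return np.count_nonzero(suv >= cutoff) * voxel_volume_cm3

# Toy example: one synthetic 'lesion' in a 20x20x20 VOI of 4x4x4 mm voxels.
rng = np.random.default_rng(0)
suv = rng.uniform(0.5, 1.5, (20, 20, 20))   # background activity
suv[8:12, 8:12, 8:12] = 12.0                # hot lesion voxels
print(tmtv_fixed_threshold(suv, voxel_volume_cm3=0.064))  # 64 voxels -> ~4.1 cm^3
```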

  2. Methodological Considerations and Comparisons of Measurement Results for Extracellular Proteolytic Enzyme Activities in Seawater

    Directory of Open Access Journals (Sweden)

    Yumiko Obayashi

    2017-10-01

    Full Text Available Microbial extracellular hydrolytic enzymes that degrade organic matter in aquatic ecosystems play key roles in the biogeochemical carbon cycle. To provide linkages between hydrolytic enzyme activities and genomic or metabolomic studies in aquatic environments, reliable measurements are required for many samples at one time. Extracellular proteases are one of the most important classes of enzymes in aquatic microbial ecosystems, and protease activities in seawater are commonly measured using fluorogenic model substrates. Here, we examined several concerns for measurements of extracellular protease activities (aminopeptidases, trypsin-type, and chymotrypsin-type activities) in seawater. Using a fluorometric microplate reader with low-protein-binding 96-well microplates produced reliable enzymatic activity readings, while use of regular polystyrene microplates produced readings that showed significant underestimation, especially for trypsin-type proteases. From the results of kinetic experiments, this underestimation was thought to be attributable to the adsorption of both enzymes and substrates onto the microplate. We also examined solvent type and concentration in the working solution of oligopeptide-analog fluorogenic substrates using dimethyl sulfoxide (DMSO) and 2-methoxyethanol (MTXE). The results showed that both 2 % DMSO and 2 % MTXE (final concentrations of solvent in the mixture of seawater sample and substrate working solution) provide similarly reliable data for most of the tested substrates, except for some substrates which did not dissolve completely under these assay conditions. Sample containers are also important for maintaining the level of enzyme activity in natural seawater samples. In small polypropylene containers (e.g., a standard 50-mL centrifuge tube), protease activities in seawater samples decreased rapidly, causing underestimation of natural activities, especially for trypsin-type and chymotrypsin-type proteases. In

  3. Bandwidth Estimation in Wireless Lans for Multimedia Streaming Services

    Directory of Open Access Journals (Sweden)

    Heung Ki Lee

    2007-01-01

    Full Text Available The popularity of multimedia streaming services via wireless networks presents major challenges in the management of network bandwidth. One challenge is to quickly and precisely estimate the available bandwidth for the decision of streaming rates of layered and scalable multimedia services. Previous studies based on wired networks are too burdensome to be applied to multimedia applications in wireless networks. In this paper, a new method, IdleGap, is suggested to estimate the available bandwidth of a wireless LAN based on the information from a low layer in the protocol stack. We use a network simulation tool, NS-2, to evaluate our new method with various ranges of cross-traffic and observation times. Our simulation results show that IdleGap accurately estimates the available bandwidth for all ranges of cross-traffic (100 Kbps ∼ 1 Mbps) with a very short observation time of 10 seconds.
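
    The abstract does not detail IdleGap's internals beyond its use of low-layer idle information, so the following is only a schematic of idle-ratio-based available-bandwidth estimation under assumed numbers, not the published algorithm:

```python
def available_bandwidth(link_capacity_bps, busy_time_s, observation_time_s):
    """Estimate available bandwidth from the fraction of time the medium is idle.

    A simplified reading of idle-gap style estimation: the medium's idle
    ratio over an observation window scales the nominal link capacity.
    """
    idle_ratio = 1.0 - busy_time_s / observation_time_s
    return link_capacity_bps * idle_ratio

# 10 s observation with the medium sensed busy for 4.2 s on an 11 Mb/s WLAN.
print(available_bandwidth(11e6, busy_time_s=4.2, observation_time_s=10.0) / 1e6, "Mb/s")
```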

  4. Composeable Chat over Low-Bandwidth Intermittent Communication Links

    National Research Council Canada - National Science Library

    Wilcox, D. R

    2007-01-01

    Intermittent low-bandwidth communication environments, such as those encountered in U.S. Navy tactical radio and satellite links, have special requirements that do not pertain to commercial applications...

  5. Bandwidth allocation and pricing problem for a duopoly market

    Directory of Open Access Journals (Sweden)

    You Peng-Sheng

    2011-01-01

    Full Text Available This research discusses the Internet service provider (ISP) bandwidth allocation and pricing problems for a duopoly bandwidth market with two competitive ISPs. According to the contracts between Internet subscribers and ISPs, Internet subscribers can enjoy their services up to their contracted bandwidth limits. However, in reality, many subscribers may find that their on-line requests are denied or their connection speeds are far below their contracted speed limits. One of the reasons is that ISPs accept too many subscribers. To avoid this problem, ISPs can set limits for their subscribers to enhance their service qualities. This paper develops a constrained nonlinear programming model to deal with this problem for two competitive ISPs. The condition for reaching the equilibrium between the two competitive firms is derived. The market equilibrium price and bandwidth resource allocations are derived as closed-form solutions.

  6. modeling the effect of bandwidth allocation on network performance

    African Journals Online (AJOL)

    Using MATLAB, simulations were then .... of the network resource. Network bandwidth design, simulation, and management ... encoder processes longer signal data blocks, which entails longer ... c is the modulated signal carrier. However, the ...

  7. Bandwidth-dependent transformation of noise data f

    OpenAIRE

    P. Bormann

    1998-01-01

    Additional keywords: bandwidth dependence of amplitudes, dynamic range, frequency band, relative bandwidth, spectral power density of seismic noise, representations of seismic noise spectra in different kinematic units, transformation of kinematic units, transformation of spectra into ground-motion amplitudes, dependence of amplitudes on bandwidth

  8. Current status and methodological aspects on the measurement of glomerular filtration rate

    International Nuclear Information System (INIS)

    Froissart, M.; Hignette, C.; Kolar, P.; Prigent, A.; Paillard, M.

    1995-01-01

    Determination of the glomerular filtration rate (GFR) contributes to our understanding of kidney physiology and pathophysiology. Moreover, determination of GFR is of clinical importance in assessing the diagnosis and progression of renal disease. The purpose of this article is to review the technical performance and results of GFR measurements, including the classical inulin clearance technique and more recent alternative clearance techniques using radioisotope-labelled filtration markers, bolus infusion and spontaneous bladder emptying. Some simplified techniques avoiding urine collection are also described. We conclude that estimation of GFR from renal, and in some cases plasma, clearances is accurate and more convenient than the classical inulin clearance technique. Such measurements of GFR should be included both in clinical practice and in clinical research. (authors). 80 refs., 5 figs., 1 tab

  9. Critical experiments, measurements, and analyses to establish a crack arrest methodology for nuclear pressure vessel steels

    International Nuclear Information System (INIS)

    Hahn, G.T.

    1977-01-01

    Substantial progress was made in three important areas: crack propagation and arrest theory, two-dimensional dynamic crack propagation analyses, and a laboratory test method for the material property data base. The major findings were as follows: Measurements of run-arrest events lent support to the dynamic, energy conservation theory of crack arrest. A two-dimensional, dynamic, finite-difference analysis, including inertia forces and thermal gradients, was developed. The analysis was successfully applied to run-arrest events in DCB (double-cantilever-beam) and SEN (single-edge-notched) test pieces. A simplified procedure for measuring KD and KIm values with ordinary and duplex DCB specimens was demonstrated. The procedure employs a dynamic analysis of the crack length at arrest and requires no special instrumentation. The new method was applied to 'duplex' specimens to measure the large KD values displayed by A533B steel above the nil-ductility temperature. KD crack velocity curves and KIm values of two heats of A533B steel and the corresponding values for the plane strain fracture toughness associated with static initiation (KIc), dynamic initiation (KId), and the static stress intensity at crack arrest (KIa) were measured. Possible relations among these toughness indices are identified. During the past year the principal investigators of the participating groups reached agreement on a crack arrest theory appropriate for the pressure vessel problem. 7 figures

  10. A Performance Measurement and Implementation Methodology in a Department of Defense CIM (Computer Integrated Manufacturing) Environment

    Science.gov (United States)

    1988-01-24

    vanes. The new facility is currently being called the Engine Blade/Vane Facility (EB/VF). There are three primary goals in automating this proc... e... earlier, the search led primarily into the areas of CIM Justification, Automation Strategies, Performance Measurement, and Integration issues. Of... of living, has been steadily eroding. One dangerous trend that has developed in keenly competitive world markets, says Rohan [33], has been for U.S

  11. Bone mineral content measurement in small infants by single-photon absorptiometry: current methodologic issues

    International Nuclear Information System (INIS)

    Steichen, J.J.; Asch, P.A.; Tsang, R.C.

    1988-01-01

    Single-photon absorptiometry (SPA), developed in 1963 and adapted for infants by Steichen et al. in 1976, is an important tool to quantitate bone mineralization in infants. Studies of infants in which SPA was used include studies of fetal bone mineralization and postnatal bone mineralization in very low birth weight infants. The SPA technique has also been used as a research tool to investigate longitudinal bone mineralization and to study the effect of nutrition and disease processes such as rickets or osteopenia of prematurity. At present, it has little direct clinical application for diagnosing bone disease in single patients. The bones most often used to measure bone mineral content (BMC) are the radius, the ulna, and, less often, the humerus. The radius appears to be preferred as a suitable bone to measure BMC in infants. It is easily accessible; anatomic reference points are easily palpated and have a constant relationship to the radial mid-shaft site; soft tissue does not affect either palpation of anatomic reference points or BMC quantitation in vivo. The peripheral location of the radius minimizes body radiation exposure. Trabecular and cortical bone can be measured separately. Extensive background studies exist on radial BMC in small infants. Most important, the radius has a relatively long zone of constant BMC. Finally, SPA for BMC in the radius has a high degree of precision and accuracy. 61 references

  12. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    Directory of Open Access Journals (Sweden)

    Hamid Reza Marateb

    2014-01-01

    Full Text Available Background: Selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are discussed using several medical examples. We present two ordinal-variable clustering examples, ordinal variables being more challenging to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: The sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: By using a clustering algorithm appropriate to the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables.

  13. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    Science.gov (United States)

    Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

    2014-01-01

    Background: Selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are discussed using several medical examples. We present two ordinal-variable clustering examples, ordinal variables being more challenging to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: The sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: By using a clustering algorithm appropriate to the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565
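
    As an illustration of why the measurement scale drives the method choice in the two records above, here is a sketch that clusters synthetic 10-level ordinal data with a rank-respecting (city-block) distance; the data and the particular clustering algorithm are assumptions, not those of the study:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Toy stand-in for WBCD-style data: 100 cases x 9 ordinal variables on 1..10.
rng = np.random.default_rng(1)
benign = rng.integers(1, 5, (60, 9))      # low ordinal levels
malignant = rng.integers(6, 11, (40, 9))  # high ordinal levels
X = np.vstack([benign, malignant])

# Treat ordinal levels as ranks: city-block distance respects their ordering
# without assuming the intervals carry full metric meaning.
dist = pdist(X, metric="cityblock")
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(np.bincount(labels)[1:])  # cluster sizes, ideally ~60 and ~40
```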

  14. Statistical inference with quantum measurements: methodologies for nitrogen vacancy centers in diamond

    Science.gov (United States)

    Hincks, Ian; Granade, Christopher; Cory, David G.

    2018-01-01

    The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
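
    A toy version of the inference problem described above, reduced to two known photon rates (bright and dark states) rather than the paper's three Poisson rates; all rates are hypothetical:

```python
import numpy as np

def estimate_p(counts, rate_bright, rate_dark):
    """Maximum-likelihood estimate of the 'coin bias' p from photon counts.

    Simplified model: each shot's count is Poisson with mean
    p * rate_bright + (1 - p) * rate_dark.  Since the MLE of a Poisson
    mean is the sample mean, p follows by inverting the linear relation;
    clipping keeps the estimate physical.
    """
    mean_counts = np.mean(counts)
    p = (mean_counts - rate_dark) / (rate_bright - rate_dark)
    return float(np.clip(p, 0.0, 1.0))

rng = np.random.default_rng(2)
p_true, bright, dark = 0.7, 40.0, 25.0   # photons per shot (hypothetical)
counts = rng.poisson(p_true * bright + (1 - p_true) * dark, size=10_000)
print(estimate_p(counts, bright, dark))  # close to 0.7
```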

  15. Dual photon absorptiometry measurement of the lumbar bone mineral content. Methodology - Reproducibility - Normal values

    International Nuclear Information System (INIS)

    Braillon, P.; Duboeuf, F.; Delmas, P.D.; Meunier, P.J.

    1987-01-01

    Measurements were made with a DPA apparatus (Novo Lab 22a) on different phantoms and on volunteers in an attempt to evaluate the system's precision. The reproducibility was found to be in the range of 0.98 to 4.10 % for in vitro measurements, depending on the geometry of the phantoms used, and in the range of 1.6 to 2.94 % for volunteers after repositioning. Secondly, the BMD in the lumbar spine of normal women and normal men was estimated. In control females, the BMD is well fitted as a function of age by a cubic regression. The maximum value of the BMD is found in this case at the age of 31.5, and the maximum rate of bone loss takes place at 57. Total bone loss between age 31.5 and old age is about 32 %. In control males, results are more scattered and are represented by a simple linear regression. The average mineral loss between 30 and 80 years is 11.5 % in this area of measurement (in French)

  16. Measurement of leukocyte rheology in vascular disease: clinical rationale and methodology. International Society of Clinical Hemorheology.

    Science.gov (United States)

    Wautier, J L; Schmid-Schönbein, G W; Nash, G B

    1999-01-01

    The measurement of leukocyte rheology in vascular disease is a recent development with a wide range of new opportunities. The International Society of Clinical Hemorheology has asked an expert panel to propose guidelines for the investigation of leukocyte rheology in clinical situations. This article first discusses the mechanical, adhesive and related functional properties of leukocytes (especially neutrophils) which influence their circulation, and establishes the rationale for clinically-related measurements of parameters which describe them. It is concluded that quantitation of leukocyte adhesion molecules and of their endothelial receptors may assist understanding of leukocyte behaviour in vascular disease, along with measurements of flow resistance of leukocytes, free radical production, degranulation and gene expression. For instance, vascular cell adhesion molecule (VCAM-1) is abnormally present on endothelial cells in atherosclerosis, diabetes mellitus and inflammatory conditions. Soluble forms of intercellular adhesion molecule (ICAM-1) or VCAM can be found elevated in the blood of patients with rheumatoid arthritis or infectious disease. In the second part of the article, possible technical approaches are presented and possible avenues for leukocyte rheological investigations are discussed.

  17. Methodological issues in systematic reviews of headache trials: adapting historical diagnostic classifications and outcome measures to present-day standards.

    Science.gov (United States)

    McCrory, Douglas C; Gray, Rebecca N; Tfelt-Hansen, Peer; Steiner, Timothy J; Taylor, Frederick R

    2005-05-01

    Recent efforts to make headache diagnostic classification and clinical trial methodology more consistent provide valuable advice to trialists generating new evidence on effectiveness of treatments for headache; however, interpreting older trials that do not conform to new standards remains problematic. Systematic reviewers seeking to utilize historical data can adapt currently recommended diagnostic classification and clinical trial methodological approaches to interpret all available data relative to current standards. In evaluating study populations, systematic reviewers can: (i) use available data to attempt to map study populations to diagnoses in the new International Classification of Headache Disorders; and (ii) stratify analyses based on the extent to which study populations are precisely specified. In evaluating outcome measures, systematic reviewers can: (i) summarize prevention studies using headache frequency, incorporating headache index in a stratified analysis if headache frequency is not available; (ii) summarize acute treatment studies using pain-free response as reported in directly measured headache improvement or headache severity outcomes; and (iii) avoid analysis of recurrence or relapse data not conforming to the sustained pain-free response definition.

  18. Methodology for measurement of diesel particle size distributions from a city bus working in real traffic conditions

    International Nuclear Information System (INIS)

    Armas, O; Gómez, A; Mata, C

    2011-01-01

    The study of particulate matter (PM) and nitrogen oxides emissions of diesel engines is nowadays a necessary step towards pollutant emission reduction. For a complete evaluation of PM emissions and its size characterization, one of the most challenging goals is to adapt the available techniques and the data acquisition procedures to the measurement and to propose a methodology for the interpretation of instantaneous particle size distributions (PSD) of combustion-derived particles produced by a vehicle during real driving conditions. In this work, PSD from the exhaust gas of a city bus operated in real driving conditions with passengers have been measured. For the study, the bus was equipped with a rotating disk diluter coupled to an air supply thermal conditioner (with an evaporating tube), the latter being connected to a TSI Engine Exhaust Particle Sizer spectrometer. The main objective of this work has been to propose an alternative procedure for evaluating the influence of several transient sequences on PSD emitted by a city bus used in real driving conditions with passengers. The transitions studied were those derived from the combination of four possible sequences or categories during real driving conditions: idle, acceleration, deceleration with fuel consumption and deceleration without fuel consumption. The analysis methodology used in this work proved to be a useful tool for a better understanding of the phenomena related to the determination of PSD emitted by a city bus during real driving conditions with passengers

  19. Comparing Classic and Interval Analytical Hierarchy Process Methodologies for Measuring Area-Level Deprivation to Analyze Health Inequalities.

    Science.gov (United States)

    Cabrera-Barona, Pablo; Ghorbanzadeh, Omid

    2018-01-16

    Deprivation indices are useful measures to study health inequalities. Different techniques are commonly applied to construct deprivation indices, including multi-criteria decision methods such as the analytical hierarchy process (AHP). The multi-criteria deprivation index for the city of Quito is an index in which indicators are weighted by applying the AHP. In this research, a variation of this index is introduced that is calculated using interval AHP methodology. Both indices are compared by applying logistic generalized linear models and multilevel models, considering self-reported health as the dependent variable and deprivation and self-reported quality of life as the independent variables. The obtained results show that the multi-criteria deprivation index for the city of Quito is a meaningful measure to assess neighborhood effects on self-reported health and that the alternative deprivation index using the interval AHP methodology more thoroughly represents the local knowledge of experts and stakeholders. These differences could support decision makers in improving health planning and in tackling health inequalities in more deprived areas.
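
    For reference, the classic AHP weighting step such index constructions rely on, reduced to its core: the criteria weights are the normalised principal eigenvector of a reciprocal pairwise-comparison matrix (the matrix below is hypothetical, not Quito's):

```python
import numpy as np

def ahp_weights(pairwise):
    """Criteria weights from an AHP pairwise-comparison matrix.

    The weights are the principal right eigenvector of the (reciprocal)
    comparison matrix, normalised to sum to one.
    """
    values, vectors = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vectors[:, np.argmax(np.real(values))])
    return principal / principal.sum()   # sign and scale normalisation

# Hypothetical 3-criterion comparison (criterion 1 is 3x as important as 2,
# 5x as important as 3; criterion 2 is 2x as important as 3).
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
print(ahp_weights(A).round(3))  # roughly [0.65, 0.23, 0.12]
```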

  20. Comparison of noise power spectrum methodologies in measurements by using various electronic portal imaging devices in radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Son, Soon Yong [Dept. of Radiological Technology, Wonkwang Health Science University, Iksan (Korea, Republic of); Choi, Kwan Woo [Dept. of Radiology, Asan Medical Center, Seoul (Korea, Republic of); Jeong, Hoi Woun [Dept. of Radiological Technology, Baekseok Culture University College, Cheonan (Korea, Republic of); Kwon, Kyung Tae [Dep. of Radiological Technology, Dongnam Health University, Suwon (Korea, Republic of); Kim, Ki Won [Dept. of Radiology, Kyung Hee University Hospital at Gang-dong, Seoul (Korea, Republic of); Lee, Young Ah; Son, Jin Hyun; Min, Jung Whan [Shingu University College, Sungnam (Korea, Republic of)

    2016-03-15

    The noise power spectrum (NPS) is one of the most general methods for measuring the noise amplitude and the quality of an image acquired from a uniform radiation field. The purpose of this study was to compare different NPS methodologies by using megavoltage X-ray energies. The NPS evaluation methods used in diagnostic radiology were applied to therapy using the International Electrotechnical Commission standard (IEC 62220-1). Various radiation therapy (RT) devices such as TrueBeam (Varian), BEAMVIEW PLUS (Siemens), iViewGT (Elekta) and Clinac iX (Varian) were used. In order to measure the region of interest (ROI) of the NPS, we used the following four factors: the overlapping impact, the non-overlapping impact, the flatness and the penumbra. As for the NPS results, iViewGT (Elekta) had the highest noise amplitude, compared to BEAMVIEW PLUS (Siemens), TrueBeam (Varian) with flattening filter, Clinac iX aS1000 (Varian) and TrueBeam (Varian) flattening filter free. The present study revealed that various factors could be employed to produce the NPS of megavoltage imaging (MVI) and to serve as a baseline standard for the control of NPS methodologies in MVI.
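
    A bare-bones sketch of the ensemble NPS estimate underlying such comparisons, with simple mean subtraction as a stand-in for the standard's detrending; the ROI geometry, pixel size, and noise level are assumptions:

```python
import numpy as np

def nps_2d(rois, pixel_mm):
    """2-D noise power spectrum from flat-field ROIs (IEC 62220-1 style).

    rois: array (M, N, N) of M square ROIs from a uniform exposure.
    Each ROI is mean-subtracted, Fourier transformed, and the squared
    moduli are ensemble averaged and scaled by pixel area / N^2.
    """
    rois = np.asarray(rois, dtype=float)
    M, N, _ = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
    spectra = np.abs(np.fft.fft2(detrended)) ** 2
    return spectra.mean(axis=0) * pixel_mm**2 / (N * N)

# Sanity check: white-noise ROIs give a flat NPS at sigma^2 * pixel_area.
rng = np.random.default_rng(3)
rois = rng.normal(0.0, 10.0, (64, 128, 128))
nps = nps_2d(rois, pixel_mm=0.4)
print(nps.mean(), 10.0**2 * 0.4**2)  # both ~16
```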

  1. Fiber-Optic Temperature and Pressure Sensors Applied to Radiofrequency Thermal Ablation in Liver Phantom: Methodology and Experimental Measurements

    Directory of Open Access Journals (Sweden)

    Daniele Tosi

    2015-01-01

    Full Text Available Radiofrequency thermal ablation (RFA) is a procedure aimed at interventional cancer care and is applied to the treatment of small- and midsize tumors in lung, kidney, liver, and other tissues. RFA generates a selective high-temperature field in the tissue; temperature values and their persistency are directly related to the mortality rate of tumor cells. Temperature measurement in up to 3–5 points, using electrical thermocouples, belongs to the present clinical practice of RFA and is the foundation of a physical model of the ablation process. Fiber-optic sensors allow extending the detection of biophysical parameters to a vast plurality of sensing points, using miniature and noninvasive technologies that do not alter the RFA pattern. This work addresses the methodology for optical measurement of temperature distribution and pressure using four different fiber-optic technologies: fiber Bragg gratings (FBGs), linearly chirped FBGs (LCFBGs), Rayleigh scattering-based distributed temperature sensing (DTS), and extrinsic Fabry-Perot interferometry (EFPI). For each instrument, the methodology for ex vivo sensing, as well as experimental results, is reported, leading to the application of fiber-optic technologies in vivo. The possibility of using a fiber-optic sensor network, in conjunction with a suitable ablation device, can enable smart ablation procedures in which ablation parameters are dynamically adjusted.
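
    As an example of how one of the four technologies maps raw readings to temperature: an FBG's Bragg wavelength shifts nearly linearly with temperature, so a calibrated sensitivity converts the measured shift to a temperature change. The 10 pm/°C figure below is a typical assumed value for 1550 nm gratings, not a number from the study:

```python
def fbg_temperature_shift(wavelength_shift_nm, sensitivity_pm_per_C=10.0):
    """Temperature change from a fibre Bragg grating wavelength shift.

    The Bragg wavelength lambda_B = 2 * n_eff * Lambda shifts nearly
    linearly with temperature; around 1550 nm a sensitivity on the order
    of 10 pm/degC is typical (assumed here, calibrated in practice).
    """
    return wavelength_shift_nm * 1e3 / sensitivity_pm_per_C

# A 0.45 nm shift during an ablation run maps to roughly +45 degC.
print(fbg_temperature_shift(0.45))
```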

  2. Methodology for measurement of diesel particle size distributions from a city bus working in real traffic conditions

    Science.gov (United States)

    Armas, O.; Gómez, A.; Mata, C.

    2011-10-01

    The study of particulate matter (PM) and nitrogen oxides emissions of diesel engines is nowadays a necessary step towards pollutant emission reduction. For a complete evaluation of PM emissions and its size characterization, one of the most challenging goals is to adapt the available techniques and the data acquisition procedures to the measurement and to propose a methodology for the interpretation of instantaneous particle size distributions (PSD) of combustion-derived particles produced by a vehicle during real driving conditions. In this work, PSD from the exhaust gas of a city bus operated in real driving conditions with passengers have been measured. For the study, the bus was equipped with a rotating disk diluter coupled to an air supply thermal conditioner (with an evaporating tube), the latter being connected to a TSI Engine Exhaust Particle Sizer spectrometer. The main objective of this work has been to propose an alternative procedure for evaluating the influence of several transient sequences on PSD emitted by a city bus used in real driving conditions with passengers. The transitions studied were those derived from the combination of four possible sequences or categories during real driving conditions: idle, acceleration, deceleration with fuel consumption and deceleration without fuel consumption. The analysis methodology used in this work proved to be a useful tool for a better understanding of the phenomena related to the determination of PSD emitted by a city bus during real driving conditions with passengers.

  3. TPROXY dan FILTERING SEBAGAI METODE OPTIMASI PEMAKAIAN BANDWIDTH INTERNET

    Directory of Open Access Journals (Sweden)

    Sutiyo Sutiyo

    2015-04-01

    Full Text Available Until now, the use of the Internet in many institutions has been suboptimal, from initial planning through operation and maintenance, and even in long-term planning. The main factor lies in the available human resources, especially IT personnel and policy makers who do not understand, or do not know at all, how to use the Internet well, optimally and efficiently, particularly with regard to the consumption of Internet bandwidth. Internet bandwidth capacity greatly affects the speed of web access and other Internet applications. A plan and maintenance regime are therefore needed to obtain good bandwidth consumption efficiency and guaranteed QoS, for example by employing the Tproxy and filtering methods. Tproxy is a development of Squid that has been patched, a proxy able to pass traffic without NAT (Network Address Translation). Filtering is a firewall mechanism used to filter out unwanted data packets, and ultimately to minimize traffic and bandwidth usage. Tproxy and filtering run on the Linux platform. The Linux distributions often used for Tproxy are Debian variants and CentOS, while MikroTik is used for filtering. In the end, each request or query from the client, and each response from the proxy server, faces no meaningful constraints; the bandwidth between the client and the proxy server is not limited and can run close to Ethernet capacity, including 10 Mbps, 100 Mbps, and even 1 Gbps (full speed).

  4. Translation and linguistic validation of the Pediatric Patient-Reported Outcomes Measurement Information System measures into simplified Chinese using cognitive interviewing methodology.

    Science.gov (United States)

    Liu, Yanyan; Hinds, Pamela S; Wang, Jichuan; Correia, Helena; Du, Shizheng; Ding, Jian; Gao, Wen Jun; Yuan, Changrong

    2013-01-01

    The Pediatric Patient-Reported Outcomes Measurement Information System (PROMIS) measures were developed using modern measurement theory and tested in a variety of settings to assess the quality of life, function, and symptoms of children and adolescents experiencing a chronic illness and its treatment. Developed in English, this set of measures had not been translated into Chinese. The objective of this study was to develop the Chinese version of the Pediatric PROMIS measures (C-Ped-PROMIS), specifically 8 short forms, and to pretest the translated measures in children and adolescents through cognitive interviewing methodology. The C-Ped-PROMIS was developed following the standard Functional Assessment of Chronic Illness Therapy Translation Methodology. Bilingual teams from the United States and China reviewed the translation to develop a provisional version, which was then pretested with cognitive interview by probing 10 native Chinese-speaking children aged 8 to 17 years in China. The translation was finalized by the bilingual teams. Most items, response options, and instructions were well understood by the children, and some revisions were made to address patient's comments during the cognitive interview. The results indicated that the C-Ped-PROMIS items were semantically and conceptually equivalent to the original. Children aged 8 to 17 years in China were able to comprehend these measures and express their experience and feelings about illness or their life. The C-Ped-PROMIS is available for psychometric validation. Future work will be directed at translating the rest of the item banks, calibrating them and creating a Chinese final version of the short forms.

  5. An EPR methodology for measuring the London penetration depth for the ceramic superconductors

    Science.gov (United States)

    Rakvin, B.; Mahl, T. A.; Dalal, N. S.

    1990-01-01

    The use of electron paramagnetic resonance (EPR) as a quick and easily accessible method for measuring the London penetration depth, λ, of the high-Tc superconductors is discussed. The method utilizes the broadening of the EPR signal of a free radical adsorbed on the surface of the sample, due to the emergence of the magnetic flux lattice. The second moment of the EPR signal below Tc is fitted to the Brandt equation for a simple triangular lattice. The precision of this method compares quite favorably with those of the more standard methods such as μ⁺SR, neutron scattering, and magnetic susceptibility.
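
    A sketch of the inversion implied above, using Brandt's second-moment result for an ideal triangular vortex lattice; the numerical example is illustrative only:

```python
def penetration_depth_m(second_moment_T2):
    """London penetration depth from the field distribution's second moment.

    Brandt's result for an ideal triangular vortex lattice:
        <dB^2> = 0.00371 * Phi0^2 / lambda^4,
    inverted for lambda.  second_moment_T2 is <dB^2> in tesla^2, obtained
    here from the broadening (second moment) of the probe's EPR line.
    """
    PHI0 = 2.067833848e-15  # magnetic flux quantum, Wb
    return (0.00371 * PHI0**2 / second_moment_T2) ** 0.25

# A second moment of 1e-5 T^2 (about 10^3 G^2) gives lambda ~ 200 nm.
print(penetration_depth_m(1e-5))
```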

  6. Combining tracer flux ratio methodology with low-flying aircraft measurements to estimate dairy farm CH4 emissions

    Science.gov (United States)

    Daube, C.; Conley, S.; Faloona, I. C.; Yacovitch, T. I.; Roscioli, J. R.; Morris, M.; Curry, J.; Arndt, C.; Herndon, S. C.

    2017-12-01

    Livestock activity, enteric fermentation of feed and anaerobic digestion of waste, contributes significantly to the methane budget of the United States (EPA, 2016). Studies question the reported magnitude of these methane sources (Miller et al., 2013), calling for more detailed research of agricultural animals (Hristov, 2014). Tracer flux ratio is an attractive experimental method to bring to this problem because it does not rely on estimates of atmospheric dispersion. Data collection occurred during one week at two dairy farms in central California (June 2016). The farms varied in size, layout, head count, and general operation. The tracer flux ratio method involves releasing ethane on-site with a known flow rate to serve as a tracer gas. Downwind enhancements in ethane (from the tracer) and methane (from the dairy) were measured, and their ratio used to infer the unknown methane emission rate from the farm. An instrumented van drove transects downwind of each farm on public roads while tracer gases were released on-site, employing the tracer flux ratio methodology to assess simultaneous methane and tracer gas plumes. Flying circles around each farm, a small instrumented aircraft made measurements to perform a mass balance evaluation of methane gas. In the course of these two different methane quantification techniques, we were able to validate yet a third method: tracer flux ratio measured via aircraft. Ground-based tracer release rates were applied to the aircraft-observed methane-to-ethane ratios, yielding whole-site methane emission rates. Never before had the tracer flux ratio method been executed with aircraft measurements. Estimates from this new application closely resemble results from the standard ground-based technique to within their respective uncertainties. Incorporating this new dimension into the tracer flux ratio methodology provides additional context for local plume dynamics and validation of both ground- and flight-based data.
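
    The core arithmetic of the tracer flux ratio method is a single proportionality: the ratio of downwind enhancements equals the ratio of molar emission rates. A sketch with hypothetical release and enhancement values:

```python
def methane_emission_kg_h(ethane_release_kg_h, dCH4_ppb, dC2H6_ppb):
    """Tracer flux ratio: infer a site's CH4 emission from an ethane tracer.

    The ratio of downwind mixing-ratio enhancements (above background)
    equals the ratio of molar emission rates, so the known tracer release
    rate scales directly; molar masses convert back to mass units.
    """
    M_CH4, M_C2H6 = 16.04, 30.07  # g/mol
    molar_ratio = dCH4_ppb / dC2H6_ppb
    return ethane_release_kg_h * molar_ratio * (M_CH4 / M_C2H6)

# 1.5 kg/h ethane released; plume-integrated enhancements of 120 ppb CH4
# against 8 ppb C2H6 imply roughly 12 kg/h of methane.
print(methane_emission_kg_h(1.5, dCH4_ppb=120.0, dC2H6_ppb=8.0))
```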

  7. In situ measurement of heavy metals in water using portable EDXRF and APDC pre-concentration methodology

    International Nuclear Information System (INIS)

    Melquiades, Fabio L.; Parreira, Paulo S.; Appoloni, Carlos R.; Silva, Wislley D.; Lopes, Fabio

    2007-01-01

    With the objective of identifying and quantifying metals in water and obtaining results at the sampling place, an Energy Dispersive X-Ray Fluorescence (EDXRF) methodology with portable equipment was employed. In this work, metal concentration results are presented for water samples from two points of Londrina city. The analyses were performed in situ, measuring in natura water and samples pre-concentrated on membranes. The work consisted of the use of a portable X-ray tube to excite the samples and a Si-PIN detector with the standard data acquisition electronics to register the spectra. The samples were filtered through membranes for suspended particulate matter retention. After this, the APDC precipitation methodology was applied for sample pre-concentration, with subsequent filtering through membranes. For in natura samples, total iron concentrations of 254 ± 30 mg L⁻¹ were found in the Capivara River and 63 ± 9 mg L⁻¹ at Igapo Lake. For membrane measurements, the results for suspended particulate matter in the Capivara River were, in mg L⁻¹: 31.0 ± 2.5 (Fe), 0.17 ± 0.03 (Cu) and 0.93 ± 0.08 (Pb), and the dissolved iron was 0.038 ± 0.004. For Igapo Lake just Fe was quantified: 1.66 ± 0.19 mg L⁻¹ for particulate suspended iron and 0.79 ± 0.11 mg L⁻¹ for dissolved iron. In 4 h of work in the field it was possible to filter 14 membranes and measure around 16 samples. The performance of the equipment was very good and the results are satisfactory for in situ measurements employing a portable instrument. (author)

  8. Tolerance limits and methodologies for IMRT measurement-based verification QA: Recommendations of AAPM Task Group No. 218.

    Science.gov (United States)

    Miften, Moyed; Olch, Arthur; Mihailidis, Dimitris; Moran, Jean; Pawlicki, Todd; Molineu, Andrea; Li, Harold; Wijesooriya, Krishni; Shi, Jie; Xia, Ping; Papanikolaou, Nikos; Low, Daniel A

    2018-04-01

    Patient-specific IMRT QA measurements are important components of processes designed to identify discrepancies between calculated and delivered radiation doses. Discrepancy tolerance limits are neither well defined nor consistently applied across centers. The AAPM TG-218 report provides a comprehensive review aimed at improving the understanding and consistency of these processes as well as recommendations for methodologies and tolerance limits in patient-specific IMRT QA. The performance of the dose difference/distance-to-agreement (DTA) and γ dose distribution comparison metrics is investigated. Measurement methods are reviewed and followed by a discussion of the pros and cons of each. Methodologies for absolute dose verification are discussed and new IMRT QA verification tools are presented. Literature on the expected or achievable agreement between measurements and calculations for different types of planning and delivery systems is reviewed and analyzed. Tests of vendor implementations of the γ verification algorithm employing benchmark cases are presented. Operational shortcomings that can reduce the γ tool's accuracy and subsequent effectiveness for IMRT QA are described. Practical considerations including spatial resolution, normalization, dose threshold, and data interpretation are discussed. Published data on IMRT QA and the clinical experience of the group members are used to develop guidelines and recommendations on tolerance and action limits for IMRT QA. Steps to check failed IMRT QA plans are outlined. Recommendations on delivery methods, data interpretation, dose normalization, the use of γ analysis routines and the choice of tolerance limits for IMRT QA are made with a focus on detecting differences between calculated and measured doses via the use of robust analysis methods and an in-depth understanding of IMRT verification metrics. The recommendations are intended to improve the IMRT QA process and establish consistent and comparable IMRT QA
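
    For concreteness, a 1-D global γ computation in the spirit of the metric reviewed by TG-218; the profiles, grid, and tolerances below are invented, and clinical tools operate on 2-D/3-D dose grids with interpolation:

```python
import numpy as np

def gamma_1d(x, dose_eval, dose_ref, dta_mm=3.0, dd_percent=3.0):
    """1-D global gamma index (Low et al.) for each reference point.

    For every reference point, search all evaluated points for the minimum
    combined distance in the (position / DTA, dose difference / tolerance)
    metric; gamma <= 1 counts as passing.
    """
    dd_tol = dd_percent / 100.0 * dose_ref.max()   # global normalisation
    dist2 = ((x[:, None] - x[None, :]) / dta_mm) ** 2
    dose2 = ((dose_eval[None, :] - dose_ref[:, None]) / dd_tol) ** 2
    return np.sqrt((dist2 + dose2).min(axis=1))

x = np.linspace(0.0, 50.0, 251)                    # mm grid, 0.2 mm spacing
ref = np.exp(-((x - 25.0) / 8.0) ** 2)             # reference profile
ev = 1.02 * np.exp(-((x - 25.4) / 8.0) ** 2)       # 2% hotter, 0.4 mm shift
gamma = gamma_1d(x, ev, ref)
print(f"pass rate: {100.0 * np.mean(gamma <= 1.0):.1f}%")
```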

  9. Methodologically controlled variations in laboratory and field pH measurements in waterlogged soils

    DEFF Research Database (Denmark)

    Elberling, Bo; Matthiesen, Henning

    2007-01-01

    …artefacts is critical, but the study includes agricultural and forest soils for comparison. At a waterlogged site, laboratory results were compared with three different field methods: calomel pH probes inserted in the soil from pits, pH measurements of soil solution extracted from the soil, and pH profiles using a solid-state pH electrode pushed into the soil from the surface. Comparisons between in situ and laboratory methods revealed differences of more than 1 pH unit. The content of dissolved ions in soil solution and field observations of O2 and CO2 concentrations were used in the speciation model PHREEQE in order to predict gas exchange processes. Changes in pH in soil solution following equilibrium in the laboratory could be explained mainly by CO2 degassing. Only soil pH measured in situ, using either calomel or solid-state probes inserted directly into the soil, was not affected by gas exchange…

  10. The organizational stress measure: an integrated methodology for assessing job-stress and targeting organizational interventions.

    Science.gov (United States)

    Spurgeon, Peter; Mazelan, Patti; Barwell, Fred

    2012-02-01

    This paper briefly describes the OSM (Organizational Stress Measure) which was developed over a decade ago and has evolved to become a well-established practical method not only for assessing wellbeing at work but also as a cost-effective strategy to tackle workplace stress. The OSM measures perceived organizational pressures and felt individual strains within the same instrument, and provides a rich and subtle picture of both the organizational culture and the personal perspectives of the constituent staff groups. There are many types of organizational pressure that may impact upon the wellbeing and potential effectiveness of staff including skill shortages, ineffective strategic planning and poor leadership, and these frequently result in reduced performance, absenteeism, high turnover and poor staff morale. These pressures may increase the probability of some staff reacting negatively and research with the OSM has shown that increased levels of strain for small clusters of staff may be a leading indicator of future organizational problems. One of the main benefits of using the OSM is the ability to identify 'hot-spots', where organizational pressures are triggering high levels of personal strain in susceptible clusters of staff. In this way, the OSM may act as an 'early warning alarm' for potential organizational problems.

  11. Practical appraisal of sustainable development-Methodologies for sustainability measurement at settlement level

    International Nuclear Information System (INIS)

    Moles, Richard; Foley, Walter; Morrissey, John; O'Regan, Bernadette

    2008-01-01

    This paper investigates the relationships between settlement size, functionality, geographic location and sustainable development. Analysis was carried out on a sample of 79 Irish settlements, located in three regional clusters. Two methods were selected to model the level of sustainability achieved in settlements, namely, Metabolism Accounting and Modelling of Material and Energy Flows (MA) and Sustainable Development Index Modelling. MA is a systematic assessment of the flows and stocks of material within a system defined in space and time. The metabolism of most settlements is essentially linear, with resources flowing through the urban system. The objective of this research on material and energy flows was to provide information that might aid in the development of a more circular pattern of urban metabolism, vital to sustainable development. In addition to MA, a set of forty indicators were identified and developed. These target important aspects of sustainable development: transport, environmental quality, equity and quality of life issues. Sustainability indices were derived through aggregation of indicators to measure dimensions of sustainable development. Similar relationships between settlement attributes and sustainability were found following both methods, and these were subsequently integrated to provide a single measure. Analysis identified those attributes of settlements preventing, impeding or promoting progress towards sustainability

  12. Impact of crystal orientation on the modulation bandwidth of InGaN/GaN light-emitting diodes

    Science.gov (United States)

    Monavarian, M.; Rashidi, A.; Aragon, A. A.; Oh, S. H.; Rishinaramangalam, A. K.; DenBaars, S. P.; Feezell, D.

    2018-01-01

    High-speed InGaN/GaN blue light-emitting diodes (LEDs) are needed for future gigabit-per-second visible-light communication systems. Large LED modulation bandwidths are typically achieved at high current densities, with reports close to 1 GHz bandwidth at current densities ranging from 5 to 10 kA/cm2. However, the internal quantum efficiency (IQE) of InGaN/GaN LEDs is quite low at high current densities due to the well-known efficiency droop phenomenon. Here, we show experimentally that nonpolar and semipolar orientations of GaN enable higher modulation bandwidths at low current densities where the IQE is expected to be higher and power dissipation is lower. We experimentally compare the modulation bandwidth vs. current density for LEDs on nonpolar (10-10), semipolar (20-2-1), and polar (0001) orientations. In agreement with wavefunction overlap considerations, the experimental results indicate a higher modulation bandwidth for the nonpolar and semipolar LEDs, especially at relatively low current densities. At 500 A/cm2, the nonpolar LED has a 3 dB bandwidth of ~1 GHz, while the semipolar and polar LEDs exhibit bandwidths of 260 MHz and 75 MHz, respectively. A lower carrier density for a given current density is extracted from the RF measurements for the nonpolar and semipolar LEDs, consistent with the higher wavefunction overlaps in these orientations. At large current densities, the bandwidth of the polar LED approaches that of the nonpolar and semipolar LEDs due to Coulomb screening of the polarization field. The results support using nonpolar and semipolar orientations to achieve high-speed LEDs at low current densities.

  13. Theoretical and methodological approaches to the problem of students' health in algorithms of recreation measures.

    Directory of Open Access Journals (Sweden)

    Zaytzev V.P.

    2011-01-01

    Full Text Available The article discusses health and its basic constituents: physical, psychical and social. The physical development of man and his physical preparedness, physical form and fitness, physical activity and functional readiness are described. The opinions and views of scientists, teachers and doctors on the determination of the health of man, including the student, are presented. All of these features are considered from the standpoint of recreation measures. The concepts of recreation, physical recreation and other recreation systems are defined. Historical information is given both on the determination of health and recreation and on the participation of the higher educational establishments of physical culture of Ukraine, Russia and Poland working on this problem in the determination of health and recreation.

  14. Methodological considerations for global analysis of cellular FLIM/FRET measurements

    Science.gov (United States)

    Adbul Rahim, Nur Aida; Pelet, Serge; Kamm, Roger D.; So, Peter T. C.

    2012-02-01

    Global algorithms can improve the analysis of fluorescence resonance energy transfer (FRET) measurements based on fluorescence lifetime microscopy. However, global analysis of FRET data is also susceptible to experimental artifacts. This work examines several common artifacts and suggests remedial experimental protocols. Specifically, we examined the accuracy of different methods for instrument response extraction and propose an adaptive method based on the mean lifetime of fluorescent proteins. We further examined the effects of image segmentation and a priori constraints on the accuracy of lifetime extraction. Methods to test the applicability of global analysis to cellular data are proposed and demonstrated. The accuracy of global fitting degrades with lower photon count. By systematically tracking the effect of the minimum photon count on lifetime and FRET prefactors when carrying out global analysis, we demonstrate a correction procedure to recover the correct FRET parameters, allowing us to obtain protein interaction information even in dim cellular regions with photon counts as low as 100 per decay curve.
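
    A compact sketch of what "global analysis" means here: lifetimes shared across all pixels, with amplitudes solved per pixel (variable projection). The decay model and all parameters are synthetic, not the paper's data:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic decays: two lifetimes shared by all pixels (global), with
# pixel-wise amplitudes; this is the structure global FLIM/FRET fits exploit.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 64)                      # ns
tau_true = (0.8, 2.5)                               # FRET / donor lifetimes
A = rng.uniform(50.0, 500.0, (30, 2))               # 30 pixels x 2 amplitudes
decays = A[:, :1] * np.exp(-t / tau_true[0]) + A[:, 1:] * np.exp(-t / tau_true[1])
decays = rng.poisson(decays).astype(float)          # photon-counting noise

def residuals(taus):
    # For fixed lifetimes the amplitudes are a linear least-squares problem,
    # solved per pixel (variable projection); only the taus are fit globally.
    basis = np.exp(-t[:, None] / taus)              # shape (time, 2)
    amps, *_ = np.linalg.lstsq(basis, decays.T, rcond=None)
    return (basis @ amps - decays.T).ravel()

fit = least_squares(residuals, x0=[0.5, 3.5], bounds=(0.01, 20.0))
print(fit.x)  # close to (0.8, 2.5), possibly in either order
```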

  15. Bandwidth Limitations in Characterization of High Intensity Focused Ultrasound Fields in the Presence of Shocks

    Science.gov (United States)

    Khokhlova, V. A.; Bessonova, O. V.; Soneson, J. E.; Canney, M. S.; Bailey, M. R.; Crum, L. A.

    2010-03-01

    Nonlinear propagation effects result in the formation of weak shocks in high intensity focused ultrasound (HIFU) fields. When shocks are present, the wave spectrum consists of hundreds of harmonics. In practice, shock waves are modeled using a finite number of harmonics and measured with hydrophones that have limited bandwidths. The goal of this work was to determine how many harmonics are necessary to model or measure peak pressures, intensity, and heat deposition rates of the HIFU fields. Numerical solutions of the Khokhlov-Zabolotskaya-Kuznetzov-type (KZK) nonlinear parabolic equation were obtained using two independent algorithms, compared, and analyzed for nonlinear propagation in water, in gel phantom, and in tissue. Measurements were performed in the focus of the HIFU field in the same media using fiber optic probe hydrophones of various bandwidths. Experimental data were compared to the simulation results.
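
    The question of "how many harmonics" can be made concrete with plane-wave estimates of intensity and heating from a truncated harmonic series; the 1/n amplitude fall-off and tissue-like absorption parameters below are assumptions for illustration, not the paper's KZK results:

```python
import numpy as np

def intensity_and_heating(p_harmonics_Pa, alpha0_Np_m=0.025, eta=1.1,
                          rho=1000.0, c=1500.0):
    """Plane-wave intensity and heat deposition from harmonic amplitudes.

    For harmonic pressure amplitudes p_n at frequencies n*f0:
        I = sum(p_n^2 / (2 rho c)),   H = sum(2 * alpha_n * I_n),
    with power-law absorption alpha_n = alpha0 * n**eta (alpha0 taken at
    the fundamental; tissue-like parameters assumed here).
    """
    p = np.asarray(p_harmonics_Pa, dtype=float)
    n = np.arange(1, p.size + 1)
    I_n = p**2 / (2.0 * rho * c)
    H_n = 2.0 * alpha0_Np_m * n**eta * I_n
    return I_n.sum(), H_n.sum()

# With a shock-like 1/n amplitude decay, the intensity sum converges quickly
# but the heating sum does not: truncating the spectrum biases H downward.
p_n = 3e6 / np.arange(1, 501)
for keep in (10, 50, 500):
    I, H = intensity_and_heating(p_n[:keep])
    print(f"{keep:3d} harmonics: I = {I/1e4:7.1f} W/cm^2, H = {H/1e6:.2f} MW/m^3")
```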

  16. Development of a cognitive bias methodology for measuring low mood in chimpanzees

    Directory of Open Access Journals (Sweden)

    Melissa Bateson

    2015-06-01

    Full Text Available There is an ethical and scientific need for objective, well-validated measures of low mood in captive chimpanzees. We describe the development of a novel cognitive task designed to measure ‘pessimistic’ bias in judgments of expectation of reward, a cognitive marker of low mood previously validated in a wide range of species, and report training and test data from three common chimpanzees (Pan troglodytes). The chimpanzees were trained on an arbitrary visual discrimination in which lifting a pale grey paper cone was associated with reinforcement with a peanut, whereas lifting a dark grey cone was associated with no reward. The discrimination was trained by sequentially presenting the two cone types until significant differences in latency to touch the cone types emerged, and was confirmed by simultaneously presenting both cone types in choice trials. Subjects were subsequently tested on their latency to touch unrewarded cones of three intermediate shades of grey not previously seen. Pessimism was indicated by the similarity between the latency to touch intermediate cones and the latency to touch the trained, unreinforced, dark grey cones. Three subjects completed training and testing, two adult males and one adult female. All subjects learnt the discrimination (107–240 trials) and retained it during five sessions of testing. There was no evidence that latencies to lift intermediate cones increased over testing, as would have occurred if subjects learnt that these were never rewarded, suggesting that the task could be used for repeated testing of individual animals. There was a significant difference between subjects in their relative latencies to touch intermediate cones (pessimism index) that emerged following the second test session, and was not changed by the addition of further data. The most dominant male subject was least pessimistic, and the female most pessimistic. We argue that the task has the potential to be used to assess

  17. Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation

    Science.gov (United States)

    Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.

    2015-01-01

    Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice-accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional (3-D) features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-in. chord, two-dimensional (2-D) straight wing with NACA 23012 airfoil section. For six ice-accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 × 10⁶ and a Mach number of 0.18 with an 18-in. chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For five of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting and laser-scan based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3 percent with corresponding differences in stall angle of approximately 1 deg or less. The aerodynamic simulation accuracy reported in this paper has demonstrated the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil. For several

  18. Measuring sporadic gastrointestinal illness associated with drinking water - an overview of methodologies.

    Science.gov (United States)

    Bylund, John; Toljander, Jonas; Lysén, Maria; Rasti, Niloofar; Engqvist, Jannes; Simonsson, Magnus

    2017-06-01

    There is an increasing awareness that drinking water contributes to sporadic gastrointestinal illness (GI) in high income countries of the northern hemisphere. A literature search was conducted in order to review: (1) methods used for investigating the effects of public drinking water on GI; (2) evidence of possible dose-response relationship between sporadic GI and drinking water consumption; and (3) association between sporadic GI and factors affecting drinking water quality. Seventy-four articles were selected; key findings and information gaps were identified. In-home intervention studies have only been conducted in areas using surface water sources, and intervention studies in communities supplied by ground water are therefore needed. Community-wide intervention studies may constitute a cost-effective alternative to in-home intervention studies. Proxy data that correlate with GI in the community can be used for detecting changes in the incidence of GI. However, proxy data cannot be used to measure the prevalence of illness. Local conditions affecting water safety may vary greatly, making direct comparisons between studies difficult unless sufficient knowledge about these conditions is acquired. Drinking water in high-income countries contributes to endemic levels of GI, and there are public health benefits to further improvements of drinking water safety.

  19. Methodological review: measured and reported congruence between preferred and actual place of death.

    Science.gov (United States)

    Bell, C L; Somogyi-Zalud, E; Masaki, K H

    2009-09-01

    Congruence between preferred and actual place of death is an important palliative care outcome reported in the literature. We examined methods of measuring and reporting congruence to highlight variations impairing cross-study comparisons. Medline, PsychInfo, CINAHL, and Web of Science were systematically searched for clinical research studies examining patient preference and congruence as an outcome. Data were extracted into a matrix, including purpose, reported congruence, and method for eliciting preference. Studies were graded for quality. Using tables of preferred versus actual places of death, an overall congruence (total met preferences out of total preferences) and a kappa statistic of agreement were determined for each study. Twelve studies were identified. Percentage of congruence was reported using four different definitions. Ten studies provided a table or partial table of preferred versus actual deaths for each place. Three studies provided kappa statistics. No study achieved better than moderate agreement when analysed using kappa statistics. A study which elicited ideal preference reported the lowest agreement, while longitudinal studies reporting final preferred place of death yielded the highest agreement (moderate agreement). Two other studies of select populations also yielded moderate agreement. There is marked variation in methods of eliciting and reporting congruence, even among studies focused on congruence as an outcome. Cross-study comparison would be enhanced by the use of similar questions to elicit preference, tables of preferred versus actual places of death, and kappa statistics of agreement.
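
    The two statistics the review recommends reporting are easy to reproduce from a preferred-versus-actual cross-table; the sketch below uses invented counts.

```python
import numpy as np

# Sketch: overall congruence and Cohen's kappa from a preferred-vs-actual
# place-of-death cross-table (rows = preferred, columns = actual).
# Counts are invented; order: home, hospital, hospice.
table = np.array([[30, 10, 5],
                  [ 8, 20, 2],
                  [ 4,  6, 15]])
n = table.sum()
po = np.trace(table) / n                           # met preferences / total
pe = table.sum(axis=1) @ table.sum(axis=0) / n**2  # expected chance agreement
kappa = (po - pe) / (1 - pe)
print(f"overall congruence = {po:.2f}, kappa = {kappa:.2f}")
```

    With these invented counts the overall congruence is 0.65 while kappa is only 0.46, illustrating the review's point that percentage congruence alone can overstate agreement.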

  20. Measuring P availability in soils fertilized with water-soluble P fertilizers using 32P methodologies

    International Nuclear Information System (INIS)

    McLaughlin, M.J.

    2002-01-01

    Isotope exchange kinetics was used in conjunction with standard procedures for assessing soil P status in soils fertilized with soluble phosphatic fertilizers. Soil samples were collected before fertilizer application in year 1 from 23 of the 30 sites of the National Reactive Phosphate Rock project. Soil phosphorus test values were plotted against indices of pasture response to applied fertilizer, to assess the effectiveness of the various soil tests to predict site responsiveness to applied fertilizer. Isotopically exchangeable P was only weakly related to other measures of available P, with resin P having the best relationship with E values. In some samples, very large values for isotopically exchangeable P (E values) were determined in relation to P extractable by all reagents. Examination of the data, however, revealed that all the samples with large E values in relation to extractable P had very low equilibrium concentrations of solution P and high buffering capacities. The best soil test, Bray 1, could account for only 50% of the variation in plant responsiveness to applied fertilizer, with the Olsen and Resin tests slightly worse at 41% and the isotopic procedure at 39%. (author)

  1. A methodology to measure cervical vertebral bone maturation in a sample from low-income children.

    Science.gov (United States)

    Aguiar, Luciana Barreto Vieira; Caldas, Maria de Paula; Haiter Neto, Francisco; Ambrosano, Glaucia Maria Bovi

    2013-01-01

    This study evaluated the applicability of the regression method for determining vertebral age developed by Caldas et al. (2007) by testing this method in children from low-income families of the rural zone. The sample comprised cephalometric and hand-wrist radiographs of 76 boys and 64 girls aged 7.0 to 14.9 years living in a medium-sized city in the desert region of northeastern Brazil, with an HDI of 0.678. C3 and C4 vertebrae were traced and measured on cephalometric radiographs to estimate the bone age. The average age, average hand-wrist age and average error estimated for girls and boys were, respectively, 10.62 and 10.44 years, 11.28 and 10.57 years, and 1.42 and 1.18 years. Based on these results, the formula proposed by Caldas et al. (2007) was not applicable to the studied population, and new multiple regression models were developed to obtain the children's vertebral bone age accurately.

  2. Review of single transient oscillographic recorders with gigahertz bandwidth

    International Nuclear Information System (INIS)

    Campbell, D.E.

    1982-01-01

    In laser-driven inertial confinement fusion research at Livermore, we are diagnosing many phenomena that occur in time frames exceeding the capabilities of even the most advanced present-day oscillographic recording instruments. Many of the by-products of the interaction between the laser beam and the fuel pellet are monitored to determine the specifics of the fusion process. By the use of appropriate detectors, we convert the information contained in the radiated by-products to electrical signals, which are recorded on high-bandwidth oscillographic recorders. Our present range of recording capabilities for one x-ray diagnostic measurement in use at Livermore is shown. A commonly used configuration consists of an XRD-31 x-ray detector connected to a direct-access Tektronix R7912 transient digitizer using 1/2 in. diameter air-dielectric coaxial cable. This configuration gives a system fwhm of approximately 335 ps. Our premier configuration, on the other hand, consists of an improved-response detector and a French Thomson-CSF TSN-660 oscilloscope with a shorter length of coaxial cable (typically 20 feet). The system fwhm in this case is less than 120 ps, which is our fastest oscillographic recording system at the present time.

  3. The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: an international Delphi study.

    Science.gov (United States)

    Mokkink, Lidwine B; Terwee, Caroline B; Patrick, Donald L; Alonso, Jordi; Stratford, Paul W; Knol, Dirk L; Bouter, Lex M; de Vet, Henrica C W

    2010-05-01

    Aim of the COSMIN study (COnsensus-based Standards for the selection of health status Measurement INstruments) was to develop a consensus-based checklist to evaluate the methodological quality of studies on measurement properties. We present the COSMIN checklist and the agreement of the panel on the items of the checklist. A four-round Delphi study was performed with international experts (psychologists, epidemiologists, statisticians and clinicians). Of the 91 invited experts, 57 agreed to participate (63%). Panel members were asked to rate their (dis)agreement with each proposal on a five-point scale. Consensus was considered to be reached when at least 67% of the panel members indicated 'agree' or 'strongly agree'. Consensus was reached on the inclusion of the following measurement properties: internal consistency, reliability, measurement error, content validity (including face validity), construct validity (including structural validity, hypotheses testing and cross-cultural validity), criterion validity, responsiveness, and interpretability. The latter was not considered a measurement property. The panel also reached consensus on how these properties should be assessed. The resulting COSMIN checklist could be useful when selecting a measurement instrument, peer-reviewing a manuscript, designing or reporting a study on measurement properties, or for educational purposes.

  4. The Ocean Colour Climate Change Initiative: I. A Methodology for Assessing Atmospheric Correction Processors Based on In-Situ Measurements

    Science.gov (United States)

    Muller, Dagmar; Krasemann, Hajo; Brewin, Robert J. W.; Deschamps, Pierre-Yves; Doerffer, Roland; Fomferra, Norman; Franz, Bryan A.; Grant, Mike G.; Groom, Steve B.; Melin, Frederic

    2015-01-01

    The Ocean Colour Climate Change Initiative intends to provide a long-term time series of ocean colour data and investigate the detectable climate impact. A reliable and stable atmospheric correction procedure is the basis for ocean colour products of the necessary high quality. In order to guarantee an objective selection from a set of four atmospheric correction processors, the common validation strategy of comparing in-situ and satellite-derived water-leaving reflectance spectra is extended by a ranking system. In principle, the statistical parameters such as root mean square error, bias, etc. and measures of goodness of fit are transformed into relative scores, which evaluate the relative quality of the algorithms under study. The sensitivity of these scores to the selected database has been assessed by a bootstrapping exercise, which allows identification of the uncertainty in the scoring results. Although the presented methodology is intended to be used in an algorithm selection process, this paper focusses on the scope of the methodology rather than the properties of the individual processors.
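
    As a rough sketch of the scoring-plus-bootstrap idea (the processors, error model, and score transform below are all invented; the paper combines several statistical measures, not RMSE alone):

```python
import numpy as np

# Sketch under stated assumptions: convert each processor's RMSE against
# in-situ matchups into a relative score (1.0 = best) and bootstrap the
# matchup set to gauge the stability of the ranking.
rng = np.random.default_rng(1)
insitu = rng.lognormal(mean=-3.0, sigma=0.5, size=200)        # synthetic matchups
processors = {f"proc{i}": insitu * rng.normal(1.0, 0.05 + 0.03 * i, 200)
              for i in range(4)}                               # synthetic retrievals

def scores(idx):
    rmse = {k: np.sqrt(np.mean((v[idx] - insitu[idx]) ** 2))
            for k, v in processors.items()}
    best = min(rmse.values())
    return {k: best / r for k, r in rmse.items()}

boot = [scores(rng.integers(0, 200, 200)) for _ in range(500)]
for k in processors:
    s = np.array([b[k] for b in boot])
    print(f"{k}: score {s.mean():.2f} ± {s.std():.2f}")
```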

  5. Investigation of Radiation Protection Methodologies for Radiation Therapy Shielding Using Monte Carlo Simulation and Measurement

    Science.gov (United States)

    Tanny, Sean

    The advent of high-energy linear accelerators for dedicated medical use in the 1950s by Henry Kaplan and the Stanford University physics department began a revolution in radiation oncology. Today, linear accelerators are the standard of care for modern radiation therapy and can generate high-energy beams that can produce tens of Gy per minute at isocenter. This creates a need for a large amount of shielding material to properly protect members of the public and hospital staff. Standardized vault designs and guidance on shielding properties of various materials are provided by the National Council on Radiation Protection (NCRP) Report 151. However, physicists are seeking ways to minimize the footprint and volume of shielding material needed, which leads to the use of non-standard vault configurations and less-studied materials, such as high-density concrete. The University of Toledo Dana Cancer Center has utilized both of these methods to minimize the cost and spatial footprint of the requisite radiation shielding. To ensure a safe work environment, computer simulations were performed to verify the attenuation properties and shielding workloads produced by a variety of situations where standard recommendations and guidance documents were insufficient. This project studies two areas of concern that are not addressed by NCRP 151: the radiation shielding workload for a vault door with a non-standard design, and the attenuation properties of high-density concrete for both photon and neutron radiation. Simulations have been performed using a Monte-Carlo code produced by the Los Alamos National Lab (LANL), Monte Carlo N-Particle 5 (MCNP5). Measurements have been performed using a shielding test port designed into the maze of the Varian Edge treatment vault.
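
    For the standard situations that NCRP 151 does cover, the broad-beam calculation is compact. The sketch below uses textbook-style illustrative numbers, not this vault's parameters; the tenth-value layers are approximate published values for ordinary concrete at 6 MV.

```python
import math

# Broad-beam primary-barrier sketch in the NCRP Report 151 formalism
# (all numbers illustrative): required transmission B = P d^2 / (W U T),
# then barrier thickness t = TVL1 + (n - 1) * TVLe.
P = 0.02e-3            # shielding design goal, Sv/week (uncontrolled area)
d = 6.0                # target-to-point distance, m
W = 500.0              # workload, Gy/week at 1 m
U = 0.25               # use factor
T = 1.0                # occupancy factor

B = P * d**2 / (W * U * T)
n = math.log10(1.0 / B)                  # number of tenth-value layers
TVL1, TVLe = 0.37, 0.33                  # m, ordinary concrete at 6 MV (approx.)
t = TVL1 + (n - 1) * TVLe
print(f"B = {B:.2e}, n = {n:.1f} TVLs, thickness ≈ {t:.2f} m")
```

    The study's point is precisely that this formalism gives no TVLs for high-density concrete or non-standard door geometries, which is where the MCNP5 simulations come in.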

  6. Biological Nitrogen Fixation Efficiency in Brazilian Common Bean Genotypes as Measured by {sup 15}N Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Franzini, V. I.; Mendes, F. L. [Brazilian Agricultural Research Corporation, EMBRAPA-Amazonia Oriental, Belem, PA (Brazil); Muraoka, T.; Trevisam, A. R. [Center for Nuclear Energy in Agriculture, University of Sao Paulo, Piracicaba, SP (Brazil); Adu-Gyamfi, J. J. [Soil and Water Management and Crop Nutrition Laboratory, International Atomic Energy Agency, Seibersdorf (Austria)

    2013-11-15

    Common bean (Phaseolus vulgaris L.) represents the main source of protein for the Brazilian and other Latin-American populations. Unlike soybean, which is very efficient in fixing atmospheric N{sub 2} symbiotically, common bean does not dispense with the need for N fertilizer application, as the biologically fixed N (BNF) seems incapable of supplying the total N required by the crop. An experiment under controlled conditions was conducted in Piracicaba, Brazil, to assess N{sub 2} fixation of 25 genotypes of common bean (Phaseolus vulgaris L.). BNF was measured by {sup 15}N isotope dilution using a non-N{sub 2} fixing bean genotype as a reference crop. The common bean genotypes were grown in low (2.2 mg N kg{sup -1} soil) or high N content soil (200 mg N kg{sup -1} soil), through N fertilizer application, as urea-{sup 15}N (31.20 and 1.4 atom % {sup 15}N, respectively). The bean seeds were inoculated with Rhizobium tropici CIAT 899 strain and the plants were harvested at grain maturity stage. The contribution of BNF was on average 75% of total plant N content, and there were differences in N fixing capacity among the bean genotypes. The most efficient genotypes were Horizonte, Roxo 90, Grafite, Apore and Vereda, when grown in high N soil. None of the genotypes grown in low N soil was efficient in producing grains compared to those grown in high N soil, and therefore the BNF was not able to supply the total N demand of the bean crop. (author)
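
    The 15N isotope-dilution estimate referred to here follows the standard formula, comparing the 15N atom % excess of the fixing crop against the non-fixing reference; the numbers below are illustrative.

```python
# Standard 15N isotope-dilution estimate of the fraction of plant N
# derived from atmospheric fixation (%Ndfa), using a non-fixing
# reference crop grown in the same labelled soil.
def ndfa(ae_fixing, ae_reference):
    """ae_* are 15N atom % excess of the fixing and reference plants."""
    return 100.0 * (1.0 - ae_fixing / ae_reference)

print(ndfa(ae_fixing=0.25, ae_reference=1.0))  # 75 %Ndfa (illustrative values)
```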

  7. Symbiotic dinitrogen fixation measurement in vetch-barley mixed swards using 15 N methodology

    International Nuclear Information System (INIS)

    Kurdali, F.; Sharabi, N.E.

    1995-01-01

    Field experiments on vetch and barley grown in monoculture and in mixed culture (3:1) under rain-fed conditions were conducted in the 1991-1992 and 1992-1993 growing seasons. Three harvests were taken from one treatment throughout the growing season, while other plots were harvested once at the physiological maturity stage. Our results showed the importance of the mixed cropping system of vetch and barley grown under rain-fed conditions in terms of dry matter production, total nitrogen content and land use efficiency expressed as the land equivalent ratio (LER). This advantage is clearest in the plants harvested once at the end of the season. It is therefore worthwhile to grow legumes and cereals under rain-fed conditions, leave them until a late stage of growth, and graze them directly. On the other hand, no more than two harvests should be taken in the season, because additional harvests may weaken plant growth and yield poor production if adequate rainfall does not follow the second harvest (April). Nitrogen fixation efficiency in vetch, measured by the 15N isotope dilution method, varied with the number of harvests and the cultivation procedure. Comparing %Ndfa of vetch between monoculture and mixed culture showed that the values were in most cases higher in mixed culture. Competition in the mixed stand for soil N-uptake made the barley supplement its N requirements from the soil; the poor competitiveness of vetch for soil N-uptake drove it to fix more nitrogen. Residual N after harvest was also higher in the mixed treatment. A positive and high final nitrogen balance was observed when vetch was included in the mixture. Under the current experimental conditions, we excluded the possibility of N-transfer from vetch to barley, owing to the insignificant differences in 15N atom excess for

  8. Symbiotic dinitrogen fixation measurement in vetch-barley mixed swards using {sup 15} N methodology

    Energy Technology Data Exchange (ETDEWEB)

    Kurdali, F; Sharabi, N E [Atomic Energy Commission, Damascus (Syrian Arab Republic). Dept. of Radiation Agriculture

    1995-01-01

    Field experiments on vetch and barley grown in monoculture and in mixed culture (3:1) under rain-fed conditions were conducted in the 1991-1992 and 1992-1993 growing seasons. Three harvests were taken from one treatment throughout the growing season. Our results showed the importance of the mixed cropping system of vetch and barley grown under rain-fed conditions in terms of dry matter production, total nitrogen content and land use efficiency expressed as the land equivalent ratio (LER). This advantage is clearest in the plants harvested once at the end of the season. It is therefore worthwhile to grow legumes and cereals under rain-fed conditions, leave them until a late stage of growth, and graze them directly. On the other hand, no more than two harvests should be taken in the season, because additional harvests may weaken plant growth and yield poor production if adequate rainfall does not follow the second harvest (April). Nitrogen fixation efficiency in vetch, measured by the {sup 15}N isotope dilution method, varied with the number of harvests and the cultivation procedure. Comparing %Ndfa of vetch between monoculture and mixed culture showed that the values were in most cases higher in mixed culture. Competition in the mixed stand for soil N-uptake made the barley supplement its N requirements from the soil; the poor competitiveness of vetch for soil N-uptake drove it to fix more nitrogen. Residual N after harvest was also higher in the mixed treatment. A positive and high final nitrogen balance was observed when vetch was included in the mixture. Under the current experimental conditions, we excluded the possibility of N-transfer from vetch to barley, owing to the insignificant differences in {sup 15}N atom excess for barley between the two types of farming. 35 refs., 2 figs., 15 tabs.

  9. Exhaled nitric oxide measurements in the first 2 years of life: methodological issues, clinical and epidemiological applications

    Directory of Open Access Journals (Sweden)

    de Benedictis Fernando M

    2009-07-01

    Full Text Available Fractional exhaled nitric oxide (FeNO) is a useful tool to diagnose and monitor eosinophilic bronchial inflammation in asthmatic children and adults. In children younger than 2 years of age FeNO has been successfully measured both with the tidal breathing and with the single breath techniques. However, there are a number of methodological issues that need to be addressed in order to increase the reproducibility of the FeNO measurements within and between infants. Indeed, a standardized method to measure FeNO in the first 2 years of life would be extremely useful in order to meaningfully interpret FeNO values in this age group. Several factors related to the measurement conditions have been found to influence FeNO, such as expiratory flow, ambient NO and nasal contamination. Furthermore, the exposure to pre- and postnatal risk factors for respiratory morbidity has been shown to influence FeNO values. Therefore, these factors should always be assessed and their association with FeNO values in the specific study population should be evaluated and, eventually, controlled for. There is evidence consistently suggesting that FeNO is increased in infants with family history of atopy/atopic diseases and in infants with recurrent wheezing. These findings could support the hypothesis that eosinophilic bronchial inflammation is present at an early stage in those infants at increased risk of developing persistent respiratory symptoms and asthma. Furthermore, it has been shown that FeNO measurements could represent a useful tool to assess bronchial inflammation in other airways diseases, such as primary ciliary dyskinesia, bronchopulmonary dysplasia and cystic fibrosis. Further studies are needed in order to improve the reproducibility of the measurements, and large prospective studies are warranted in order to evaluate whether FeNO values measured in the first years of life can predict the future development of asthma or other respiratory diseases.

  10. Discriminant content validity: a quantitative methodology for assessing content of theory-based measures, with illustrative applications.

    Science.gov (United States)

    Johnston, Marie; Dixon, Diane; Hart, Jo; Glidewell, Liz; Schröder, Carin; Pollard, Beth

    2014-05-01

    In studies involving theoretical constructs, it is important that measures have good content validity and that there is not contamination of measures by content from other constructs. While reliability and construct validity are routinely reported, to date, there has not been a satisfactory, transparent, and systematic method of assessing and reporting content validity. In this paper, we describe a methodology of discriminant content validity (DCV) and illustrate its application in three studies. Discriminant content validity involves six steps: construct definition, item selection, judge identification, judgement format, single-sample test of content validity, and assessment of discriminant items. In three studies, these steps were applied to a measure of illness perceptions (IPQ-R) and control cognitions. The IPQ-R performed well with most items being purely related to their target construct, although timeline and consequences had small problems. By contrast, the study of control cognitions identified problems in measuring constructs independently. In the final study, direct estimation response formats for theory of planned behaviour constructs were found to have as good DCV as Likert format. The DCV method allowed quantitative assessment of each item and can therefore inform the content validity of the measures assessed. The methods can be applied to assess content validity before or after collecting data to select the appropriate items to measure theoretical constructs. Further, the data reported for each item in Appendix S1 can be used in item or measure selection. Statement of contribution: What is already known on this subject? There are agreed methods of assessing and reporting construct validity of measures of theoretical constructs, but not their content validity. Content validity is rarely reported in a systematic and transparent manner. What does this study add? The paper proposes discriminant content validity (DCV), a systematic and transparent method

  11. A methodology for supporting decisions on the establishment of protective measures after severe nuclear accidents. Final report

    International Nuclear Information System (INIS)

    Papazoglou, I.A.; Kollas, J.G.

    1994-06-01

    Full text: The objective of this report is to demonstrate the use of a methodology supporting decisions on protective measures following severe nuclear accidents. A multicriteria decision analysis approach is recommended where value tradeoffs are postponed until the very last stage of the decision process. Use of efficient frontiers is made to exclude all technically inferior solutions and present the decision maker with all non-dominated solutions. A choice among these solutions implies a value trade-off among the multiple criteria. An interactive computer package has been developed where the decision maker can choose a point on the efficient frontier in the consequence space and immediately see the alternative in the decision space resulting in the chosen consequences. The methodology is demonstrated through an application on the choice among possible protective measures in contaminated areas of the former USSR after the Chernobyl accident. Two distinct cases are considered: first, a decision is to be made only on the basis of the level of soil contamination with Cs-137 and the total cost of the chosen protective policy; next, the decision is based on the geographic dimension of the contamination and the total cost. Three alternative countermeasure actions are considered for population segments living on soil contaminated at a certain level or in a specific geographic region: (a) relocation of the population; (b) improvement of the living conditions; and, (c) no countermeasures at all. This is the final deliverable of the CEC-CIS Joint Study Project 2, Task 5: Decision-Aiding-System for Establishing Intervention Levels, performed under Contracts COSU-CT91-0007 and COSU-CT92-0021 with the Commission of European Communities through CEPN. (author)

  12. Methodologic considerations in the measurement of glycemic index: glycemic response to rye bread, oatmeal porridge, and mashed potato.

    Science.gov (United States)

    Hätönen, Katja A; Similä, Minna E; Virtamo, Jarmo R; Eriksson, Johan G; Hannila, Marja-Leena; Sinkko, Harri K; Sundvall, Jouko E; Mykkänen, Hannu M; Valsta, Liisa M

    2006-11-01

    Methodologic choices affect measures of the glycemic index (GI). The effects on GI values of blood sampling site, reference food type, and the number of repeat tests have been insufficiently determined. The objective was to study the effect of methodologic choices on GI values. Comparisons were made between venous and capillary blood sampling and between glucose and white bread as the reference food. The number of tests needed for the reference food was assessed. Rye bread, oatmeal porridge, and instant mashed potato were used as the test foods. Twelve healthy volunteers were served each test food once and both reference foods 3 times at 1-wk intervals in a random order after they had fasted overnight. Capillary and venous blood samples were drawn at intervals for 3 h after each study meal. GIs and their CVs based on capillary samples were lower than those based on venous samples. Two tests of glucose solution as the reference provided stable capillary GIs for the test foods. The capillary GIs did not differ significantly when white bread was used as the reference 1, 2, or 3 times, but the variation was lower when tests were performed 2 and 3 times. Capillary GIs with white bread as the reference were 1.3 times as high as those with glucose as the reference. The capillary GIs of rye bread, oatmeal porridge, and mashed potato were 77, 74, and 80, respectively, with glucose as the reference. Capillary blood sampling should be used in the measurement of GI, and reference tests with glucose or white bread should be performed at least twice.
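
    The GI computation itself is mechanical once the sampling protocol is fixed. The sketch below implements the usual incremental-AUC definition with invented glucose curves; clipping below-baseline increments to zero is a simplification of the standard segment-wise rule.

```python
import numpy as np

# Sketch of the standard GI computation: incremental area under the
# glucose curve (iAUC, trapezoids, area below the fasting baseline
# ignored), test food referenced to the mean of repeated glucose tests.
def iauc(t, glucose):
    inc = np.maximum(glucose - glucose[0], 0.0)   # increments above baseline
    return np.trapz(inc, t)

t = np.array([0, 15, 30, 45, 60, 90, 120, 180])   # minutes
ref1 = np.array([5.0, 7.5, 8.6, 8.0, 7.2, 6.0, 5.4, 5.0])  # mmol/L, glucose test 1
ref2 = np.array([5.1, 7.2, 8.9, 8.2, 7.0, 6.1, 5.3, 5.1])  # glucose test 2
test = np.array([5.0, 6.5, 7.6, 7.4, 6.9, 6.0, 5.5, 5.0])  # invented test food

gi = 100 * iauc(t, test) / np.mean([iauc(t, ref1), iauc(t, ref2)])
print(f"GI ≈ {gi:.0f}")
```

    Averaging at least two reference tests, as in the last line, is exactly the study's recommendation for stabilizing capillary GI values.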

  13. Single-photon Coulomb explosion of methanol using broad bandwidth ultrafast EUV pulses.

    Science.gov (United States)

    Luzon, Itamar; Jagtap, Krishna; Livshits, Ester; Lioubashevski, Oleg; Baer, Roi; Strasser, Daniel

    2017-05-31

    Single-photon Coulomb explosion of methanol is instigated using the broad bandwidth pulse achieved through high-order harmonics generation. Using 3D coincidence fragment imaging of one molecule at a time, the kinetic energy release (KER) and angular distributions of the products are measured in different Coulomb explosion (CE) channels. Two-body CE channels breaking either the C-O or the C-H bonds are described as well as a proton migration channel forming H2O+, which is shown to exhibit higher KER. The results are compared to intense-field Coulomb explosion measurements in the literature. The interpretation of broad bandwidth single-photon CE data is discussed and supported by ab initio calculations of the predominant C-O bond breaking CE channel. We discuss the importance of these findings for achieving time resolved imaging of ultrafast dynamics.

  14. Open-Loop Wide-Bandwidth Phase Modulation Techniques

    Directory of Open Access Journals (Sweden)

    Nitin Nidhi

    2011-01-01

    Full Text Available The ever-increasing growth in the bandwidth of wireless communication channels requires the transmitter to be wide-bandwidth and power-efficient. Polar and outphasing transmitter topologies are two promising candidates for such applications, in future. Both these architectures require a wide-bandwidth phase modulator. Open-loop phase modulation presents a viable solution for achieving wide-bandwidth operation. An overview of prior art and recent approaches for phase modulation is presented in this paper. Phase quantization noise cancellation was recently introduced to lower the out-of-band noise in a digital phase modulator. A detailed analysis on the impact of timing and quantization of the cancellation signal is presented. Noise generated by the transmitter in the receive band frequency poses another challenge for wide-bandwidth transmitter design. Addition of a noise transfer function notch, in a digital phase modulator, to reduce the noise in the receive band during phase modulation is described in this paper.

  15. Quantification of Greenhouse Gas Emission Rates from strong Point Sources by Airborne IPDA-Lidar Measurements: Methodology and Experimental Results

    Science.gov (United States)

    Ehret, G.; Amediek, A.; Wirth, M.; Fix, A.; Kiemle, C.; Quatrevalet, M.

    2016-12-01

    We report on a new method and on the first demonstration to quantify emission rates from strong greenhouse gas (GHG) point sources using airborne Integrated Path Differential Absorption (IPDA) Lidar measurements. In order to build trust in the self-reported emission rates by countries, verification against independent monitoring systems is a prerequisite to check the reported budget. A significant fraction of the total anthropogenic emission of CO2 and CH4 originates from localized strong point sources of large energy production sites or landfills. Neither is monitored with sufficient accuracy by the current observation system. There is a debate whether airborne remote sensing could fill in the gap to infer those emission rates from budgeting or from Gaussian plume inversion approaches, whereby measurements of the GHG column abundance beneath the aircraft can be used to constrain inverse models. In contrast to passive sensors, the use of an active instrument like CHARM-F for such emission verification measurements is new. CHARM-F is a new airborne IPDA-Lidar devised for the German research aircraft HALO for the simultaneous measurement of the column-integrated dry-air mixing ratio of CO2 and CH4, commonly denoted as XCO2 and XCH4, respectively. It has successfully been tested in a series of flights over Central Europe to assess its performance under various reflectivity conditions and in a strongly varying topography like the Alps. The analysis of a methane plume measured in crosswind direction of a coal mine ventilation shaft revealed an instantaneous emission rate of 9.9 ± 1.7 kt CH4 yr-1. We discuss the methodology of our point source estimation approach and give an outlook on the CoMet field experiment scheduled in 2017 for the measurement of anthropogenic and natural GHG emissions by a combination of active and passive remote sensing instruments on research aircraft.
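
    A crosswind mass-balance estimate of the kind used for the ventilation-shaft plume can be sketched as follows; the plume shape, wind speed, and numbers are invented, and the actual CHARM-F retrieval and inversion are more involved.

```python
import numpy as np

# Mass-balance sketch (assumed geometry, not CHARM-F processing):
# emission rate Q = mean wind speed times the crosswind-integrated
# column enhancement measured on a downwind transect through the plume.
y = np.linspace(-2000, 2000, 401)                 # crosswind coordinate, m
dX = 1e-4 * np.exp(-0.5 * (y / 300.0) ** 2)       # column enhancement, kg CH4 / m^2
u = 5.0                                           # mean wind speed, m/s

Q = u * np.trapz(dX, y)                           # kg/s
print(f"Q ≈ {Q * 3.15e7 / 1e6:.1f} kt CH4/yr")
```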

  16. Evaluation of validity and reliability of a methodology for measuring human postural attitude and its relation to temporomandibular joint disorders

    Science.gov (United States)

    Fernández, Ramón Fuentes; Carter, Pablo; Muñoz, Sergio; Silva, Héctor; Venegas, Gonzalo Hernán Oporto; Cantin, Mario; Ottone, Nicolás Ernesto

    2016-01-01

    INTRODUCTION Temporomandibular joint disorders (TMJDs) are caused by several factors such as anatomical, neuromuscular and psychological alterations. A relationship has been established between TMJDs and postural alterations, a type of anatomical alteration. An anterior position of the head requires hyperactivity of the posterior neck region and shoulder muscles to prevent the head from falling forward. This compensatory muscular function may cause fatigue, discomfort and trigger point activation. To our knowledge, a method for assessing human postural attitude in more than one plane has not been reported. Thus, the aim of this study was to design a methodology to measure the external human postural attitude in frontal and sagittal planes, with proper validity and reliability analyses. METHODS The variable postures of 78 subjects (36 men, 42 women; age 18–24 years) were evaluated. The postural attitudes of the subjects were measured in the frontal and sagittal planes, using an acromiopelvimeter, grid panel and Fox plane. RESULTS The method we designed for measuring postural attitudes had adequate reliability and validity, both qualitatively and quantitatively, based on Cohen’s Kappa coefficient (> 0.87) and Pearson’s correlation coefficient (r = 0.824, > 80%). CONCLUSION This method exhibits adequate metrical properties and can therefore be used in further research on the association of human body posture with skeletal types and TMJDs. PMID:26768173

  17. Measurement of environmental impacts of telework adoption amidst change in complex organizations. AT and T survey methodology and results

    Energy Technology Data Exchange (ETDEWEB)

    Atkyns, Robert; Blazek, Michele; Roitz, Joseph [AT and T, 179 Bothin Road, 94930 Fairfax, CA (United States)

    2002-10-01

    Telecommuting practices and their environmental and organizational performance impacts have stimulated research across academic disciplines. Although telecommuting trends and impact projections are reported, few true longitudinal studies involving large organizations have been conducted. Published studies typically lack the research design elements to control a major confounding variable: rapid and widespread organizational change. Yet social science 'Best Practices' and market research industry quality control procedures exist that can help manage organizational change effects and other common sources of measurement error. In 1992, AT and T established a formal, corporate-wide telecommuting policy. A research and statistical modeling initiative was implemented to measure how flexible work arrangements reduce automotive emissions. Annual employee surveys were begun in 1994. As telecommuting benefits have been increasingly recognized within AT and T, the essential construct has been redefined as 'telework.' The survey's scope has expanded to address broader organization issues and provide guidance to multiple internal constituencies. This paper focuses upon the procedures used to reliably measure the adoption of telework practices and model their environmental impact, and contrasts those procedures with other, less reliable methodologies.

  18. High Bandwidth, Fine Resolution Deformable Mirror Design.

    Science.gov (United States)

    1980-03-01

    [Abstract not available: the record contains only table-of-contents and figure-list residue from the report, covering multilayer actuator dilatation and influence functions, honeycomb and bimorph devices, and a capacitance measurement appendix.]

  19. Moiré volume Bragg grating filter with tunable bandwidth.

    Science.gov (United States)

    Mokhov, Sergiy; Ott, Daniel; Divliansky, Ivan; Zeldovich, Boris; Glebov, Leonid

    2014-08-25

    We propose a monolithic large-aperture narrowband optical filter based on a moiré volume Bragg grating formed by two sequentially recorded gratings with slightly different resonant wavelengths. Such recording creates a spatial modulation of refractive index with a slowly varying sinusoidal envelope. By cutting a specimen at a small angle to a thickness of one period of this envelope, the longitudinal envelope profile shifts from a sine profile to a cosine profile across the face of the device. The transmission peak of the filter has a tunable bandwidth while remaining at a fixed resonant wavelength by a transversal shift of incidence position. Analytical expressions for the tunable bandwidth of such a filter are calculated, and experimental data from a filter operating at 1064 nm with a bandwidth range of 30-90 pm are demonstrated.

  20. Development of a bandwidth limiting neutron chopper for CSNS

    Science.gov (United States)

    Wang, P.; Yang, B.; Cai, W. L.

    2015-08-01

    Bandwidth limiting neutron choppers are indispensable components of the time-of-flight neutron scattering spectrometers of the China Spallation Neutron Source (CSNS). Their main principle is to chop the neutron beam so as to limit the neutron wavelength bandwidth at the neutron detector. We have successfully developed a bandwidth limiting neutron chopper for CSNS in the CSNS advance research project II. The transmission rate of the neutron absorbing coating is less than 1×10⁻⁴ (for 1 angstrom neutrons). The phase control accuracy is ±0.084° (±9.4 μs at 25 Hz). The dynamic balance grade is G1.0. All experimental technical features have met the design requirements, and the chopper runs stably and reliably during long-term tests.
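
    The quoted phase figures are mutually consistent; converting phase accuracy to time at the 25 Hz rotation frequency gives approximately the stated ±9.4 μs:

```latex
\Delta t \;=\; \frac{\Delta\phi}{360^{\circ}} \cdot \frac{1}{f}
        \;=\; \frac{0.084^{\circ}}{360^{\circ}} \cdot \frac{1}{25~\mathrm{Hz}}
        \;\approx\; 9.3~\mu\mathrm{s}
```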

  1. Development of a bandwidth limiting neutron chopper for CSNS

    International Nuclear Information System (INIS)

    Wang, P.; Yang, B.; Cai, W.L.

    2015-01-01

    Bandwidth limiting neutron choppers are indispensable components of the time-of-flight neutron scattering spectrometers of the China Spallation Neutron Source (CSNS). Their main principle is to chop the neutron beam so as to limit the neutron wavelength bandwidth at the neutron detector. We have successfully developed a bandwidth limiting neutron chopper for CSNS in the CSNS advance research project II. The transmission rate of the neutron absorbing coating is less than 1×10⁻⁴ (for 1 angstrom neutrons). The phase control accuracy is ±0.084° (±9.4 μs at 25 Hz). The dynamic balance grade is G1.0. All experimental technical features have met the design requirements, and the chopper runs stably and reliably during long-term tests.

  2. Path connectivity based spectral defragmentation in flexible bandwidth networks.

    Science.gov (United States)

    Wang, Ying; Zhang, Jie; Zhao, Yongli; Zhang, Jiawei; Zhao, Jie; Wang, Xinbo; Gu, Wanyi

    2013-01-28

    Optical networks with flexible bandwidth provisioning have become a very promising networking architecture. It enables efficient resource utilization and supports heterogeneous bandwidth demands. In this paper, two novel spectrum defragmentation approaches, i.e. the Maximum Path Connectivity (MPC) algorithm and the Path Connectivity Triggering (PCT) algorithm, are proposed based on the notion of Path Connectivity, which is defined to represent the maximum variation of node switching ability along the path in flexible bandwidth networks. A cost-performance-ratio based profitability model is given to capture the pros and cons of spectrum defragmentation. We compare these two proposed algorithms with a non-defragmentation algorithm in terms of blocking probability. Then we analyze the differences in defragmentation profitability between the MPC and PCT algorithms.

  3. Correlation and image compression for limited-bandwidth CCD.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Douglas G.

    2005-07-01

    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.
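
    CCD performance in such studies is typically quantified with the sample coherence between two co-registered complex images; a minimal version of that standard estimator (details of the compression study aside) is:

```python
import numpy as np

# Standard sample coherence estimator used in coherent change detection:
# gamma over a local window of two co-registered complex SAR images f, g.
def coherence(f, g):
    num = np.abs(np.sum(f * np.conj(g)))
    den = np.sqrt(np.sum(np.abs(f) ** 2) * np.sum(np.abs(g) ** 2))
    return num / den

rng = np.random.default_rng(3)
f = rng.standard_normal(64) + 1j * rng.standard_normal(64)
g = f + 0.3 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))  # small change
print(f"coherence ≈ {coherence(f, g):.2f}")
```

    Because this statistic depends on both magnitude and phase, lossy compression of either component degrades the estimated coherence, which is what the study quantifies across the 44 compression types.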

  4. EMG-Torque Dynamics Change With Contraction Bandwidth.

    Science.gov (United States)

    Golkar, Mahsa A; Jalaleddini, Kian; Kearney, Robert E

    2018-04-01

    An accurate model for ElectroMyoGram (EMG)-torque dynamics has many uses. One application that has gained considerable attention is the estimation of muscle contraction level for the efficient control of prostheses. In this paper, the dynamic relationship between the surface EMG and torque during isometric contractions at the human ankle was studied using system identification techniques. Subjects voluntarily modulated their ankle torque in the dorsiflexion direction, by activating their tibialis anterior muscle, while tracking a pseudo-random binary sequence in a torque matching task. The effects of contraction bandwidth, described by the torque spectrum, on EMG-torque dynamics were evaluated by varying the visual command switching time. Nonparametric impulse response functions (IRF) were estimated between the processed surface EMG and torque. It was demonstrated that: 1) at low contraction bandwidths, the identified IRFs had unphysiological anticipatory (i.e., non-causal) components, whose amplitude decreased as the contraction bandwidth increased. We hypothesized that this non-causal behavior arose because the EMG input contained a component due to feedback from the output torque, i.e., it was recorded from within a closed loop. Vision was not the feedback source, since the non-causal behavior persisted when visual feedback was removed. Repeating the identification using a nonparametric closed-loop identification algorithm yielded causal IRFs at all bandwidths, supporting this hypothesis. 2) EMG-torque dynamics became faster and the bandwidth of the system increased as the contraction modulation rate increased. Thus, accurate prediction of torque from EMG signals must take into account the contraction bandwidth sensitivity of this system.
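
    A nonparametric IRF between EMG and torque can be estimated by ordinary least squares on lagged inputs. The toy sketch below illustrates the open-loop version only, with invented first-order dynamics; the closed-loop algorithm the authors needed at low contraction bandwidths is more involved.

```python
import numpy as np

# Sketch of nonparametric IRF estimation by linear least squares:
# regress torque on lagged EMG samples (a finite impulse response).
rng = np.random.default_rng(2)
fs, nlags = 100, 40                         # Hz, IRF length in samples
emg = rng.standard_normal(5000)             # processed (rectified/filtered) EMG
h_true = np.exp(-np.arange(nlags) / 8.0)    # toy "muscle" dynamics
torque = np.convolve(emg, h_true)[:emg.size] + 0.1 * rng.standard_normal(emg.size)

# Design matrix of lagged inputs: row k holds emg[k], emg[k-1], ...
X = np.column_stack([np.roll(emg, lag) for lag in range(nlags)])
X[:nlags] = 0.0                             # discard wrapped-around samples
h_hat, *_ = np.linalg.lstsq(X, torque, rcond=None)
print("first IRF samples:", np.round(h_hat[:5], 2))
```

    In this open-loop toy the estimated IRF is causal by construction; the paper's observation is that real EMG recorded inside a feedback loop violates the open-loop assumption behind this regression.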

  5. On the Ground or in the Air? A Methodological Experiment on Crop Residue Cover Measurement in Ethiopia.

    Science.gov (United States)

    Kosmowski, Frédéric; Stevenson, James; Campbell, Jeff; Ambel, Alemayehu; Haile Tsegay, Asmelash

    2017-10-01

    Maintaining permanent coverage of the soil using crop residues is an important and commonly recommended practice in conservation agriculture. Measuring this practice is an essential step in improving knowledge about the adoption and impact of conservation agriculture. Different data collection methods can be implemented to capture the field level crop residue coverage for a given plot, each with its own implication on survey budget, implementation speed and respondent and interviewer burden. In this paper, six alternative methods of crop residue coverage measurement are tested among the same sample of rural households in Ethiopia. The relative accuracy of these methods are compared against a benchmark, the line-transect method. The alternative methods compared against the benchmark include: (i) interviewee (respondent) estimation; (ii) enumerator estimation visiting the field; (iii) interviewee with visual-aid without visiting the field; (iv) enumerator with visual-aid visiting the field; (v) field picture collected with a drone and analyzed with image-processing methods and (vi) satellite picture of the field analyzed with remote sensing methods. Results of the methodological experiment show that survey-based methods tend to underestimate field residue cover. When quantitative data on cover are needed, the best estimates are provided by visual-aid protocols. For categorical analysis (i.e., >30% cover or not), visual-aid protocols and remote sensing methods perform equally well. Among survey-based methods, the strongest correlates of measurement errors are total farm size, field size, distance, and slope. Results deliver a ranking of measurement options that can inform survey practitioners and researchers.
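
    The line-transect benchmark reduces to counting point intersections along a tape stretched across the field; a minimal sketch with invented marks:

```python
# Line-transect sketch: at fixed intervals along a tape stretched across
# the field, record whether the point under each mark lies on residue;
# percent cover is the fraction of "hits". Marks below are invented.
marks = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
cover = 100 * sum(marks) / len(marks)
print(f"residue cover ≈ {cover:.0f}%")
```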

  7. High-fidelity polarization storage in a gigahertz bandwidth quantum memory

    International Nuclear Information System (INIS)

    England, D G; Michelberger, P S; Champion, T F M; Reim, K F; Lee, K C; Sprague, M R; Jin, X-M; Langford, N K; Kolthammer, W S; Nunn, J; Walmsley, I A

    2012-01-01

    We demonstrate a dual-rail optical Raman memory inside a polarization interferometer; this enables us to store polarization-encoded information at GHz bandwidths in a room-temperature atomic ensemble. By performing full process tomography on the system, we measure up to 97 ± 1% process fidelity for the storage and retrieval process. At longer storage times, the process fidelity remains high, despite a loss of efficiency. The fidelity is 86 ± 4% for 1.5 μs storage time, which is 5000 times the pulse duration. Hence, high fidelity is combined with a large time-bandwidth product. This high performance, with an experimentally simple setup, demonstrates the suitability of the Raman memory for integration into large-scale quantum networks. (paper)
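
    The quoted time-bandwidth figure follows directly from the reported numbers: a storage time 5000 times the pulse duration implies roughly 300 ps pulses, consistent with the GHz-scale bandwidths stored.

```latex
\tau_{\mathrm{pulse}} \;=\; \frac{1.5~\mu\mathrm{s}}{5000} \;=\; 300~\mathrm{ps},
\qquad
\frac{1}{\tau_{\mathrm{pulse}}} \;\approx\; 3.3~\mathrm{GHz}
```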

  8. GHz-bandwidth upconversion detector using a unidirectional ring cavity to reduce multilongitudinal mode pump effects

    DEFF Research Database (Denmark)

    Meng, Lichun; Høgstedt, Lasse; Tidemand-Lichtenberg, Peter

    2017-01-01

    We demonstrate efficient upconversion of modulated infrared (IR) signals over a wide bandwidth (up to frequencies in excess of 1 GHz) via cavity-enhanced sum-frequency generation (SFG) in a periodically poled LiNbO3. The intensity-modulated IR signal is produced by combining beams from two 1547 nm narrow-linewidth lasers in a fiber coupler while tuning their wavelength difference down to 10 pm or less. The SFG crystal is placed inside an Nd:YVO4 ring cavity that provides 1064 nm circulating pump powers of up to 150 W in unidirectional operation. The measured Fabry-Perot spectrum at 1064 nm confirms the enhanced spectral stability from multiple to single longitudinal mode pumping conditions. We describe analytically and demonstrate experimentally the deleterious effects of a multimode pump on the high-bandwidth RF spectrum of the 630 nm SFG output. Offering enhanced sensitivity without the need...

  9. Multi-directional plasmonic surface-wave splitters with full bandwidth isolation

    International Nuclear Information System (INIS)

    Gao, Zhen; Gao, Fei; Zhang, Baile

    2016-01-01

    We experimentally demonstrate a multidirectional plasmonic surface-wave splitter with full bandwidth isolation, based on coupled defect surface modes in a surface-wave photonic crystal. In contrast to conventional plasmonic surface-wave frequency splitters with polaritonic dispersion relations that overlap at low frequencies, this multidirectional plasmonic surface-wave splitter based on coupled defect surface modes can split different frequency bands into different waveguide branches without bandwidth overlap. Transmission spectra and near-field imaging measurements were performed at microwave frequencies to verify the performance of the multidirectional plasmonic surface-wave splitter. This surface-wave structure can be used as a plasmonic wavelength-division multiplexer that may find applications in surface-wave integrated circuits from microwave to terahertz frequencies.

  10. Pickup design for high bandwidth bunch arrival-time monitors in free-electron lasers

    Energy Technology Data Exchange (ETDEWEB)

    Angelovski, Aleksandar; Penirschke, Andreas; Jakoby, Rolf [TU Darmstadt (Germany). Institut fuer Mikrowellentechnik und Photonik; Kuhl, Alexander; Schnepp, Sascha [TU Darmstadt (Germany). Graduate School of Computational Engineering; Bock, Marie Kristin; Bousonville, Michael; Schlarb, Holger [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Weiland, Thomas [TU Darmstadt (Germany). Institut fuer Theorie Elektromagnetischer Felder

    2012-07-01

    The increased demands for low bunch charge operation mode in the free-electron lasers (FELs) require an upgrade of the existing synchronization equipment. As a part of the laser-based synchronization system, the bunch arrival-time monitors (BAMs) should have a sub-10 femtosecond precision for high and low bunch charge operation. In order to fulfill the resolution demands for both modes of operation, the bandwidth of such a BAM should be increased up to a cutoff frequency of 40 GHz. In this talk, we present the design and the realization of high bandwidth cone-shaped pickup electrodes as a part of the BAM for the FEL in Hamburg (FLASH) and the European X-ray free-electron laser (European XFEL). The proposed pickup was simulated with CST STUDIO SUITE, and a non-hermetic model was built up for radio frequency (rf) measurements.

  11. Utilising UDT to push the bandwidth envelope

    Science.gov (United States)

    Garrett, B.; Davies, B.

    eScience applications, in particular High Energy Physics, often involve large amounts of data and/or computing and often require secure resource sharing across organizational boundaries, and are thus not easily handled by today's networking infrastructures. By utilising the switched lightpath connections provided by the UKLight network it has been possible to research the use of alternate protocols for data transport. While the HEP projects make use of a number of middleware solutions for data storage and transport, they all rely on GridFTP for WAN transport. The GridFTP protocol runs over TCP as its transport protocol by default; however, with the latest release of the Globus toolkit it is possible to utilise alternative transport protocols. One of the alternatives is a reliable version of UDP called UDT. This report presents the results of tests measuring the performance of single-threaded file transfers using GridFTP running over both TCP and the UDT protocol.

  12. Organic and total mercury determination in sediments by cold vapor atomic absorption spectrometry: methodology validation and uncertainty measurements

    Directory of Open Access Journals (Sweden)

    Robson L. Franklin

    2012-01-01

    Full Text Available The purpose of the present study was to validate a method for organic Hg determination in sediment. The procedure for organic Hg was adapted from the literature, where the organomercurial compounds were extracted with dichloromethane in acid medium, followed by destruction of the organic compounds by bromine chloride. Total Hg determination was performed according to USEPA methodology 3051A. Mercury quantification for both methodologies was then performed by CVAAS. The methodologies were validated by analyzing certified reference materials for total Hg and methylmercury. The uncertainties for both methodologies were calculated. A quantification limit of 3.3 µg kg-1 was found for organic Hg by CVAAS.

  13. SiGe HBT cryogenic preamplification for higher bandwidth donor spin read-out

    Science.gov (United States)

    Curry, Matthew; Carr, Stephen; Ten-Eyck, Greg; Wendt, Joel; Pluym, Tammy; Lilly, Michael; Carroll, Malcolm

    2014-03-01

    Single-shot read-out of a donor spin can be performed using the response of a single-electron-transistor (SET). This technique can produce relatively large changes in current, on the order of 1 (nA), to distinguish between the spin states. Despite the relatively large signal, the read-out time resolution has been limited to approximately 100 (kHz) of bandwidth because of noise. Cryogenic pre-amplification has been shown to extend the response of certain detection circuits to shorter time resolution and thus higher bandwidth. We examine a SiGe HBT circuit configuration for cryogenic preamplification, which has potential advantages over commonly used HEMT configurations. Here we present 4 (K) measurements of a circuit consisting of a Silicon-SET inline with a Heterojunction-Bipolar-Transistor (HBT). We compare the measured bandwidth with and without the HBT inline and find that at higher frequencies the signal-to-noise-ratio (SNR) with the HBT inline exceeds the SNR without the HBT inline. This work was performed, in part, at the Center for Integrated Nanotechnologies, a U.S. DOE, Office of Basic Energy Sciences user facility. The work was supported by the Sandia National Laboratories Directed Research and Development Program. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a Lockheed-Martin Company, for the U. S. Department of Energy under Contract No. DE-AC04-94AL85000.

  14. Using a model of the performance measures in Soft Systems Methodology (SSM) to take action: a case study in health care

    NARCIS (Netherlands)

    Kotiadis, K.; Tako, A.; Rouwette, E.A.J.A.; Vasilakis, C.; Brennan, J.; Gandhi, P.; Wegstapel, H.; Sagias, F.; Webb, P.

    2013-01-01

    This paper uses a case study of a multidisciplinary colorectal cancer team in health care to explain how a model of performance measures can lead to debate and action in Soft System Methodology (SSM). This study gives a greater emphasis and role to the performance measures than currently given in

  15. Iterative Available Bandwidth Estimation for Mobile Transport Networks

    DEFF Research Database (Denmark)

    Ubeda Castellanos, Carlos; López Villa, Dimas; Teyeb, Oumer Mohammed

    2007-01-01

    Available bandwidth estimation has lately been proposed to be used for end-to-end resource management in existing and emerging mobile communication systems, whose transport networks could end up being the bottleneck rather than the air interface. Algorithms for admission control, handover...
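
    The record does not reproduce the proposed algorithms, but the self-induced-congestion style of available-bandwidth estimation that this line of work builds on can be sketched in a few lines of Python: probe at a trial rate, test whether the path shows congestion, and bisect on the answer. The congestion test below is a simulated stand-in for a real delay-trend measurement, and all rates are illustrative.

    ```python
    # Minimal sketch of an iterative available-bandwidth estimator:
    # bisect on whether a probe stream at a given rate congests the path.
    def congested(rate_mbps, true_avail=37.0):
        # Stand-in for a one-way-delay-trend test; true_avail is unknown
        # to a real estimator and exists here only to drive the simulation.
        return rate_mbps > true_avail

    def estimate_available_bw(lo=0.0, hi=100.0, iters=20):
        for _ in range(iters):
            mid = (lo + hi) / 2
            if congested(mid):
                hi = mid   # probe rate exceeds available bandwidth
            else:
                lo = mid   # path absorbed the probe; raise the rate
        return (lo + hi) / 2

    print(f"Estimated available bandwidth: {estimate_available_bw():.2f} Mbit/s")
    ```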

  16. Come together: African universities collaborate to improve bandwidth

    International Development Research Centre (IDRC) Digital Library (Canada)

    2011-02-02

  17. Sensitivity-Bandwidth Limit in a Multimode Optoelectromechanical Transducer

    Science.gov (United States)

    Moaddel Haghighi, I.; Malossi, N.; Natali, R.; Di Giuseppe, G.; Vitali, D.

    2018-03-01

    An optoelectromechanical system formed by a nanomembrane capacitively coupled to an LC resonator and to an optical interferometer has recently been employed for the highly sensitive optical readout of rf signals [T. Bagci et al., Nature (London) 507, 81 (2013), 10.1038/nature13029]. We propose and experimentally demonstrate how the bandwidth of such a transducer can be increased by controlling the interference between two electromechanical interaction pathways of a two-mode mechanical system. With a proof-of-principle device operating at room temperature, we achieve a sensitivity of 300 nV/√Hz over a bandwidth of 15 kHz in the presence of radio-frequency noise, and an optimal shot-noise-limited sensitivity of 10 nV/√Hz over a bandwidth of 5 kHz. We discuss strategies for improving the performance of the device, showing that, for the same given sensitivity, a mechanical multimode transducer can achieve a bandwidth significantly larger than that for a single-mode one.

  18. Fluid limits for bandwidth-sharing networks with rate constraints

    NARCIS (Netherlands)

    M. Frolkova (Masha); J. Reed (Josh); A.P. Zwart (Bert)

    2013-01-01

    Bandwidth-sharing networks as introduced by Massoulié & Roberts (1998) model the dynamic interaction among an evolving population of elastic flows competing for several links. With policies based on optimization procedures, such models are of interest both from a Queueing Theory and

  19. Estimating auditory filter bandwidth using distortion product otoacoustic emissions

    DEFF Research Database (Denmark)

    Hauen, Sigurd van; Rukjær, Andreas Harbo; Ordoñez Pizarro, Rodrigo Eduardo

    2017-01-01

    The basic frequency selectivity in the listener's hearing is often characterized by auditory filters. These filters are determined through listening tests that measure the masking threshold as a function of the frequency of the tone and the bandwidth of the masking sound. The auditory filters hav...

  20. Frequency Selective Surfaces for extended Bandwidth backing reflector functions

    NARCIS (Netherlands)

    Pasian, M.; Neto, A.; Monni, S.; Ettorre, M.; Gerini, G.

    2008-01-01

    This paper deals with the use of Frequency Selective Surfaces (FSS) to increase the Efficiency × Bandwidth product in Ultra-Wide Band (UWB) antenna arrays whose efficiency is limited by the front-to-back ratio. If the backing reflector is realized as a single metal plane, its location will be

  1. A Practical Approach For Excess Bandwidth Distribution for EPONs

    KAUST Repository

    Elrasad, Amr

    2014-03-09

    This paper introduces a novel approach called Delayed Excess Scheduling (DES), which practically reuses the excess bandwidth in EPON systems. DES is suitable for industrial deployment as it requires no timing constraint and achieves better performance compared to previously reported schemes.

  2. Come together: African universities collaborate to improve bandwidth

    International Development Research Centre (IDRC) Digital Library (Canada)

    2011-02-02

    However, a stumbling block to realizing this vision arose: the cost of access. As Bob Hawkins, a senior education specialist at the WBI, points out, “the average African university pays 50 times more than the amount a North American university pays for Internet access.” Moreover, the bandwidth available to ...

  3. BMCloud: Minimizing Repair Bandwidth and Maintenance Cost in Cloud Storage

    Directory of Open Access Journals (Sweden)

    Chao Yin

    2013-01-01

    Full Text Available To protect data in cloud storage, fault tolerance and efficient recovery become very important. Recent studies have developed numerous solutions based on erasure code techniques to solve this problem using functional repairs. However, there are two limitations to address. The first one is consistency, since the Encoding Matrix (EM) differs among clouds. The other one is repair bandwidth, which is a major practical concern. We addressed these two problems from both theoretical and practical perspectives. We developed BMCloud, a new low-repair-bandwidth, low-maintenance-cost cloud storage system, which aims to reduce repair bandwidth and maintenance cost. The system employs both functional repair and exact repair and inherits advantages from both. We propose the JUDGE_STYLE algorithm, which can judge whether the system should adopt exact repair or functional repair. We implemented a networked storage system prototype and demonstrated our findings. Compared with existing solutions, BMCloud can be used in engineering to save repair bandwidth and significantly reduce maintenance cost.

  4. Efficient Bandwidth Management for Ethernet Passive Optical Networks

    KAUST Repository

    Elrasad, Amr Elsayed M.

    2016-05-15

    The increasing bandwidth demands in access networks motivate network operators, networking device manufacturers, and standardization institutions to search for new approaches for access networks. These approaches should support higher bandwidth, longer distances between end user and network operator, and lower energy consumption. Ethernet Passive Optical Network (EPON) is a favorable choice for broadband access networks. EPONs support transmission rates up to 10 Gbps and distances between end users and the central office of up to 20 km. Moreover, optical networks have the lowest energy consumption among all types of networks. In this dissertation, we focus on reducing delay and saving energy in EPONs. Reducing delay is essential for delay-sensitive traffic, while minimizing energy consumption is an environmental necessity and also reduces network operating costs. We identify five challenges, namely excess bandwidth allocation, frame delineation, congestion resolution, large round-trip-time delay in long-reach EPONs (LR-EPONs), and energy saving. We provide a Dynamic Bandwidth Allocation (DBA) approach for each challenge. We also propose a novel scheme that combines the features of the proposed approaches in one highly performing scheme. Our approach is to design novel DBA protocols that can further reduce the delay while being simultaneously simple and fair. We also present a dynamic bandwidth allocation scheme for green EPONs that maximizes energy saving under target delay constraints. Regarding excess bandwidth allocation, we develop an effective DBA scheme called Delayed Excess Scheduling (DES), which achieves significant delay and jitter reduction and is more suitable for industrial deployment due to its simplicity. Utilizing DES in hybrid TDM/WDM EPONs (TWDM-EPONs) is also investigated. We also study eliminating the bandwidth wasted on frame delineation. We develop an interactive DBA scheme, Efficient Grant Sizing Interleaved
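
    To make the excess-bandwidth idea concrete, the sketch below shows a generic redistribution step of the kind such DBA schemes build on: lightly loaded ONUs donate the unused part of their guaranteed share, and the pooled excess is divided among overloaded ONUs in proportion to their unmet demand. This is a minimal generic sketch under assumed per-cycle demands, not the DES algorithm itself; all numbers are illustrative.

    ```python
    # Generic excess-bandwidth distribution for one EPON DBA cycle.
    def allocate(demands, b_min):
        """Grant sizes given per-ONU demands and a guaranteed share b_min."""
        base = [min(d, b_min) for d in demands]
        excess = sum(b_min - b for b in base)             # pooled unused share
        deficit = [d - b for d, b in zip(demands, base)]  # unmet demand
        total_deficit = sum(deficit)
        if total_deficit == 0:
            return base
        return [b + excess * dfc / total_deficit
                for b, dfc in zip(base, deficit)]

    # Four ONUs, each guaranteed 25 units; two are overloaded this cycle.
    print(allocate([10, 15, 40, 55], b_min=25))
    # -> [10, 15, 33.33..., 41.66...]; the grants still sum to 4 * 25.
    ```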

  5. Characterization of the emissions impacts of hybrid excavators with a portable emissions measurement system (PEMS)-based methodology.

    Science.gov (United States)

    Cao, Tanfeng; Russell, Robert L; Durbin, Thomas D; Cocker, David R; Burnette, Andrew; Calavita, Joseph; Maldonado, Hector; Johnson, Kent C

    2018-04-13

    Hybrid engine technology is a potentially important strategy for reducing tailpipe greenhouse gas (GHG) emissions and other pollutants that is now being implemented for off-road construction equipment. The goal of this study was to evaluate the emissions and fuel consumption impacts of electric-hybrid excavators using a Portable Emissions Measurement System (PEMS)-based methodology. In this study, three hybrid and four conventional excavators were studied for both real-world activity patterns and tailpipe emissions. Activity data were obtained using engine control module (ECM) and global positioning system (GPS) logged data, coupled with interviews, historical records, and video. These activity data were used to develop a test cycle with seven modes representing different types of excavator work. Emissions data were collected over this test cycle using a PEMS. The results indicated the HB215 hybrid excavator provided a significant reduction in tailpipe carbon dioxide (CO2) emissions (from -13 to -26%) but increased diesel particulate matter (PM) (+26 to +27%) when compared to a similar model conventional excavator over the same duty cycle.

  6. Measuring the payback of research activities: a feasible ex-post evaluation methodology in epidemiology and public health.

    Science.gov (United States)

    Aymerich, Marta; Carrion, Carme; Gallo, Pedro; Garcia, Maria; López-Bermejo, Abel; Quesada, Miquel; Ramos, Rafel

    2012-08-01

    Most ex-post evaluations of research funding programs are based on bibliometric methods and, although this approach has been widely used, it only examines one facet of the project's impact, that is, scientific productivity. More comprehensive models of payback assessment of research activities are designed for large-scale projects with extensive funding. The purpose of this study was to design and implement a methodology for the ex-post evaluation of small-scale projects that would take into account both the fulfillment of the projects' stated objectives and other wider benefits to society as payback measures. We used a two-phase ex-post approach to appraise impact for 173 small-scale projects funded in 2007 and 2008 by a Spanish network center for research in epidemiology and public health. In the internal phase we used a questionnaire to query the principal investigator (PI) on the outcomes as well as the actual and potential impact of each project; in the external phase we sent a second questionnaire to external reviewers with the aim of assessing (by peer review) the performance of each individual project. Overall, 43% of the projects were rated as having completed their objectives "totally", and 40% "considerably". The research activities funded were reported by PIs as socially beneficial, with their greatest impact being on research capacity (50% of payback to society) and on knowledge translation (above 11%). The method proposed showed a good discriminating ability that makes it possible to measure, reliably, the extent to which a project's objectives were met as well as the degree to which the project contributed to enhancing the group's scientific performance and its social payback.

  7. A mechanism design approach to bandwidth allocation in tactical data networks

    Science.gov (United States)

    Mour, Ankur

    The defense sector is undergoing a phase of rapid technological advancement in pursuit of its goal of information superiority. This goal depends on a large network of complex interconnected systems - sensors, weapons, soldiers - linked through a maze of heterogeneous networks. The sheer scale and size of these networks prompt behaviors that go beyond conglomerations of systems or 'systems-of-systems'. The lack of a central locus, and the disjointed, competing interests among large clusters of systems, make them characteristic of an Ultra Large Scale (ULS) system. These traits of ULS systems challenge and undermine the fundamental assumptions of today's software and system engineering approaches. In the absence of a centralized controller, it is likely that system users will behave opportunistically to meet their local mission requirements rather than the objectives of the system as a whole. In these settings, methods and tools based on economics and game theory (like mechanism design) are likely to play an important role in achieving globally optimal behavior when the participants behave selfishly. Against this background, this thesis explores the potential of using computational mechanisms to govern the behavior of ultra-large-scale systems and achieve an optimal allocation of constrained computational resources. Our research focuses on improving the quality and accuracy of the common operating picture through the efficient allocation of bandwidth in tactical data networks among self-interested actors, who may resort to strategic behavior dictated by self-interest. This research problem presents the kind of challenges we anticipate when we have to deal with ULS systems and, by addressing this problem, we hope to develop a methodology which will be applicable to ULS systems of the future. We build upon previous work investigating the application of auction-based mechanism design to dynamic, performance-critical and resource-constrained systems of interest
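
    The thesis's actual mechanism is not given in this record; as a minimal illustration of the auction-based idea, the following sketch allocates k identical bandwidth slots to the highest bidders at a uniform price equal to the first losing bid (the unit-demand VCG outcome, under which truthful bidding is a dominant strategy). The bidder names and values are hypothetical.

    ```python
    # Unit-demand auction for k identical bandwidth slots: the top-k bidders
    # win, and each pays the (k+1)-th highest bid.
    def allocate_slots(bids, k):
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winners = [user for user, _ in ranked[:k]]
        price = ranked[k][1] if len(ranked) > k else 0.0
        return winners, price

    bids = {"sensor_A": 9.0, "uav_B": 7.5, "relay_C": 4.0, "node_D": 2.5}
    winners, price = allocate_slots(bids, k=2)
    print(winners, "each pay", price)  # ['sensor_A', 'uav_B'] each pay 4.0
    ```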

  8. Pseudo-differential CMOS analog front-end circuit for wide-bandwidth optical probe current sensor

    Science.gov (United States)

    Uekura, Takaharu; Oyanagi, Kousuke; Sonehara, Makoto; Sato, Toshiro; Miyaji, Kousuke

    2018-04-01

    In this paper, we present a pseudo-differential analog front-end (AFE) circuit for a novel optical probe current sensor (OPCS) aimed at high-frequency power electronics. It employs a regulated cascode transimpedance amplifier (RGC-TIA) to achieve a high gain and a large bandwidth without using an extremely high performance operational amplifier. The AFE circuit is designed in a 0.18 µm standard CMOS technology, achieving a high transimpedance gain of 120 dBΩ and a high cutoff frequency of 16 MHz. The measured slew rate is 70 V/µs and the input-referred current noise is 1.02 pA/√Hz. The magnetic resolution and bandwidth of the OPCS are estimated to be 1.29 mTrms and 16 MHz, respectively; the bandwidth is higher than that of the reported Hall effect current sensor.

  9. Application of Tryptophan Fluorescence Bandwidth-Maximum Plot in Analysis of Monoclonal Antibody Structure.

    Science.gov (United States)

    Huang, Cheng-Yen; Hsieh, Ming-Ching; Zhou, Qinwei

    2017-04-01

    Monoclonal antibodies have become the fastest growing protein therapeutics in recent years. The stability and heterogeneity pertaining to its physical and chemical structures remain a big challenge. Tryptophan fluorescence has been proven to be a versatile tool to monitor protein tertiary structure. By modeling the tryptophan fluorescence emission envelope with log-normal distribution curves, the quantitative measure can be exercised for the routine characterization of monoclonal antibody overall tertiary structure. Furthermore, the log-normal deconvolution results can be presented as a two-dimensional plot with tryptophan emission bandwidth vs. emission maximum to enhance the resolution when comparing samples or as a function of applied perturbations. We demonstrate this by studying four different monoclonal antibodies, which show the distinction on emission bandwidth-maximum plot despite their similarity in overall amino acid sequences and tertiary structures. This strategy is also used to demonstrate the tertiary structure comparability between different lots manufactured for one of the monoclonal antibodies (mAb2). In addition, in the unfolding transition studies of mAb2 as a function of guanidine hydrochloride concentration, the evolution of the tertiary structure can be clearly traced in the emission bandwidth-maximum plot.
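
    As a rough sketch of the quantitative step described above, the code below fits a log-normal band shape to a synthetic emission spectrum and extracts the (emission maximum, bandwidth) pair that would populate such a plot. The band-shape function and all parameters are illustrative assumptions, not the authors' exact model.

    ```python
    # Fit a log-normal envelope to a (synthetic) tryptophan emission spectrum
    # and report the emission maximum and FWHM bandwidth.
    import numpy as np
    from scipy.optimize import curve_fit

    def lognorm_band(lam, amp, mu, sigma):
        # Log-normal band shape as a function of wavelength lam (nm).
        return amp / lam * np.exp(-(np.log(lam) - mu) ** 2 / (2 * sigma ** 2))

    lam = np.linspace(300.0, 420.0, 241)
    true = lognorm_band(lam, 4e4, np.log(345.0), 0.07)          # made-up band
    noisy = true + np.random.default_rng(0).normal(0.0, 0.5, lam.size)

    popt, _ = curve_fit(lognorm_band, lam, noisy, p0=(1e4, np.log(350.0), 0.1))
    fit = lognorm_band(lam, *popt)
    peak = lam[np.argmax(fit)]
    half = fit >= fit.max() / 2
    bandwidth = lam[half][-1] - lam[half][0]                    # FWHM in nm
    print(f"emission maximum ~{peak:.1f} nm, bandwidth ~{bandwidth:.1f} nm")
    ```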

  10. Bandwidths of micro-twisted-pair cables and fusion-spliced SIMM-GRIN fiber

    International Nuclear Information System (INIS)

    Gan, K.K.; Kagan, H.P.; Kass, R.D.; Smith, D.S.

    2007-01-01

    The SLHC is designed to increase the luminosity of the LHC by a factor of 10. In the present ATLAS pixel detector, electrical signals between the pixel modules and the optical modules (opto-boards) are transmitted in ∼1 m of micro-twisted-pair cables. The optical signals between the opto-boards and the off-detector optical modules are transmitted in fiber ribbons. Each fiber link consists of 8 m of rad-hard/low-bandwidth SIMM fiber fusion-spliced to 70 m of rad-tolerant/medium-bandwidth GRIN fiber. We currently transmit optical signals at 80 Mb/s and expect to transmit signals at 1 Gb/s in the SLHC. For the SLHC optical link, we would like to take advantage of some of the design features of the present pixel optical links and the many years of R&D effort and production experience. If the present architecture can transmit signals at the higher speed required by the SLHC, the constraint of requiring no extra service space is automatically satisfied. We have measured the bandwidths of the transmission lines and our preliminary results indicate that the micro-twisted-pair cables can transmit signals up to ∼1 Gb/s and the fusion-spliced fiber ribbon can transmit signals up to ∼2 Gb/s

  11. A new metasurface reflective structure for simultaneous enhancement of antenna bandwidth and gain

    International Nuclear Information System (INIS)

    Habib Ullah, M; Islam, M T

    2014-01-01

    A new bi-layered metasurface reflective structure (MRS) on a high-permittivity, low-loss, ceramic-filled, bio-plastic, sandwich-structured, dielectric substrate is proposed for the simultaneous enhancement of the bandwidth and gain of a dual band patch antenna. By incorporating the MRS with a 4 mm air gap between the MRS and the antenna, the bandwidth and gain of the dual band patch antenna are significantly enhanced. The reflection coefficient (S11 < −10 dB) bandwidth of the proposed MRS-loaded antenna increased by 240% (178%), and the average peak gain improved by 595% (128%) compared to the antenna alone in the lower (upper) band. Incremental improvements of the magnitude and directional patterns have been observed from the measured radiation patterns at the three resonant frequencies of 0.9 GHz, 3.7 GHz and 4.5 GHz. The effects of different configurations of the radiating patch and the ground plane on the reflection coefficient have been analyzed. In addition, the voltage standing wave ratio and input impedance have also been validated using a Smith chart. (paper)

  12. A new metasurface reflective structure for simultaneous enhancement of antenna bandwidth and gain

    Science.gov (United States)

    Ullah, M. Habib; Islam, M. T.

    2014-08-01

    A new bi-layered metasurface reflective structure (MRS) on a high-permittivity, low-loss, ceramic-filled, bio-plastic, sandwich-structured, dielectric substrate is proposed for the simultaneous enhancement of the bandwidth and gain of a dual band patch antenna. By incorporating the MRS with a 4 mm air gap between the MRS and the antenna, the bandwidth and gain of the dual band patch antenna are significantly enhanced. The reflection coefficient (S11 < -10 dB) bandwidth of the proposed MRS-loaded antenna increased by 240% (178%), and the average peak gain improved by 595% (128%) compared to the antenna alone in the lower (upper) band. Incremental improvements of the magnitude and directional patterns have been observed from the measured radiation patterns at the three resonant frequencies of 0.9 GHz, 3.7 GHz and 4.5 GHz. The effects of different configurations of the radiating patch and the ground plane on the reflection coefficient have been analyzed. In addition, the voltage standing wave ratio and input impedance have also been validated using a Smith chart.

  13. Analytical optimization of active bandwidth and quality factor for TOCSY experiments in NMR spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Coote, Paul, E-mail: paul-coote@hms.harvard.edu [Harvard Medical School (United States); Bermel, Wolfgang [Bruker BioSpin GmbH (Germany); Wagner, Gerhard; Arthanari, Haribabu, E-mail: hari@hms.harvard.edu [Harvard Medical School (United States)

    2016-09-15

    Active bandwidth and global quality factor are the two main metrics used to quantitatively compare the performance of TOCSY mixing sequences. Active bandwidth refers to the spectral region over which at least 50 % of the magnetization is transferred via a coupling. Global quality factor scores mixing sequences according to the worst-case transfer over a range of possible mixing times and chemical shifts. Both metrics reward high transfer efficiency away from the main diagonal of a two-dimensional spectrum. They can therefore be used to design mixing sequences that will function favorably in experiments. Here, we develop optimization methods tailored to these two metrics, including precise control of off-diagonal cross peak buildup rates. These methods produce square shaped transfer efficiency profiles, directly matching the desirable properties that the metrics are intended to measure. The optimization methods are analytical, rather than numerical. The two resultant shaped pulses have significantly higher active bandwidth and quality factor, respectively, than all other known sequences. They are therefore highly suitable for use in NMR spectroscopy. We include experimental verification of these improved waveforms on small molecule and protein samples.

  14. ICE-Based Custom Full-Mesh Network for the CHIME High Bandwidth Radio Astronomy Correlator

    Science.gov (United States)

    Bandura, K.; Cliche, J. F.; Dobbs, M. A.; Gilbert, A. J.; Ittah, D.; Mena Parra, J.; Smecher, G.

    2016-03-01

    New generation radio interferometers encode signals from thousands of antenna feeds across large bandwidth. Channelizing and correlating this data requires networking capabilities that can handle unprecedented data rates at reasonable cost. The Canadian Hydrogen Intensity Mapping Experiment (CHIME) correlator processes 8 bits from N=2,048 digitizer inputs across 400 MHz of bandwidth. Measured in N² × bandwidth, it is the largest radio correlator currently being commissioned. Its digital back-end must exchange and reorganize the 6.6 terabit/s produced by its 128 digitizing and channelizing nodes, and feed it to the 256 graphics processing unit (GPU) node spatial correlator in a way that each node obtains data from all digitizer inputs but across a small fraction of the bandwidth (i.e. a 'corner-turn'). In order to maximize performance and reliability of the corner-turn system while minimizing cost, a custom networking solution has been implemented. The system makes use of Field Programmable Gate Array (FPGA) transceivers to implement direct, passive copper, full-mesh, high-speed serial connections between sixteen circuit boards in a crate, to exchange data between crates, and to offload the data to a cluster of 256 GPU nodes using standard 10 Gbit/s Ethernet links. The GPU nodes complete the corner-turn by combining data from all crates and then computing visibilities. Eye diagrams and frame error counters confirm error-free operation of the corner-turn network in both the currently operating CHIME Pathfinder telescope (a prototype for the full CHIME telescope) and a representative fraction of the full CHIME hardware, providing an end-to-end system validation. An analysis of an equivalent corner-turn system built with Ethernet switches instead of custom passive data links is provided.
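
    Viewed in a single address space, the corner-turn is just an axis reorganization; the numpy sketch below shows the data-movement pattern using the node counts quoted above (the per-node input and channel counts are assumptions for illustration, and the real system performs this exchange over the full-mesh network rather than in memory).

    ```python
    # "Corner-turn": each channelizer node holds all frequency channels for
    # its own subset of inputs; each GPU node must end up with all inputs
    # for its own subset of channels.
    import numpy as np

    n_fpga, inputs_per_fpga, n_freq = 128, 16, 1024   # assumed sizes
    data = np.arange(n_fpga * inputs_per_fpga * n_freq).reshape(
        n_fpga, inputs_per_fpga, n_freq)              # [node, input, channel]

    n_gpu = 256
    freq_per_gpu = n_freq // n_gpu
    # Merge the input axes, split the band across GPU nodes:
    corner = data.reshape(n_fpga * inputs_per_fpga, n_gpu, freq_per_gpu)
    corner = corner.transpose(1, 0, 2)                # [gpu, input, channel]
    assert corner.shape == (n_gpu, n_fpga * inputs_per_fpga, freq_per_gpu)
    ```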

  15. Towards Uniform Accelerometry Analysis: A Standardization Methodology to Minimize Measurement Bias Due to Systematic Accelerometer Wear-Time Variation

    Directory of Open Access Journals (Sweden)

    Tarun R. Katapally, Nazeem Muhajarine

    2014-06-01

    Full Text Available Accelerometers are predominantly used to objectively measure the entire range of activity intensities - sedentary behaviour (SED), light physical activity (LPA) and moderate to vigorous physical activity (MVPA). However, studies consistently report results without accounting for systematic accelerometer wear-time variation (within and between participants), jeopardizing the validity of these results. This study describes the development of a standardization methodology to understand and minimize measurement bias due to wear-time variation. Accelerometry is generally conducted over seven consecutive days, with participants' data being commonly considered 'valid' only if wear-time is at least 10 hours/day. However, even within 'valid' data, there could be systematic wear-time variation. To explore this variation, accelerometer data from the Smart Cities, Healthy Kids study (www.smartcitieshealthykids.com) were analyzed descriptively and with repeated-measures multivariate analysis of variance (MANOVA). Subsequently, a standardization method was developed, where case-specific observed wear-time is controlled to an analyst-specified time period. Next, case-specific accelerometer data are interpolated to this controlled wear-time to produce standardized variables. To understand discrepancies owing to wear-time variation, all analyses were conducted pre- and post-standardization. Descriptive analyses revealed systematic wear-time variation, both between and within participants. Pre- and post-standardized descriptive analyses of SED, LPA and MVPA revealed a persistent and often significant trend of wear-time's influence on activity. SED was consistently higher on weekdays before standardization; however, this trend was reversed post-standardization. Even though MVPA was significantly higher on weekdays both pre- and post-standardization, the magnitude of this difference decreased post-standardization. Multivariable analyses with standardized SED, LPA and
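
    One plausible reading of the standardization step is sketched below: activity minutes observed over a participant-day's actual wear-time are linearly rescaled to the analyst-specified wear-time so that days become comparable. The authors' exact interpolation may differ, and all numbers are illustrative.

    ```python
    # Rescale observed activity minutes to an analyst-specified wear-time.
    STANDARD_WEAR_HOURS = 13.0  # assumed analyst-specified value

    def standardize(minutes_by_intensity, observed_wear_hours):
        """Interpolate a day's activity minutes to the standard wear-time."""
        scale = STANDARD_WEAR_HOURS / observed_wear_hours
        return {k: round(v * scale, 1) for k, v in minutes_by_intensity.items()}

    day = {"SED": 420.0, "LPA": 180.0, "MVPA": 35.0}  # minutes over 11.5 h worn
    print(standardize(day, observed_wear_hours=11.5))
    # {'SED': 474.8, 'LPA': 203.5, 'MVPA': 39.6}
    ```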

  16. A method for the possible species discrimination of juvenile gadoids by broad-bandwidth backscattering spectra vs. angle of incidence

    DEFF Research Database (Denmark)

    Lundgren, Bo; Nielsen, J. Rasmus

    2008-01-01

    … alignment of acoustic and optical-reference frames, and automatic position-fitting of fish models to manually marked fix-points on fish images. The software also performs Fourier spectrum analysis and pulse-shape analysis of broad-bandwidth echoes. Therefore, several measurement series on free

  17. Optimisation of the measurement protocols of 129I and 129I/127I. Methodology establishment for the measurement in environmental matrices

    International Nuclear Information System (INIS)

    Frechou, C.

    2000-01-01

    129I is a natural long-lived isotope with a half-life of 15.7 million years, also produced artificially in nuclear power plants. It is released in the liquid and gaseous effluents of nuclear fuel reprocessing plants. 129I is integrated in all biological compartments at different activity levels, depending on their distance from the emission source and their ability to metabolize iodine. The performances of the different 129I and 129I/127I measurement techniques available - Radiochemical Neutron Activation Analysis, Accelerator Mass Spectrometry, direct γ-X spectrometry and liquid scintillation - were evaluated. The associated radiochemical preparation steps of the first two techniques were optimized and adapted to the characteristics of the major environmental matrices. In a first step, the radiochemical protocols were developed and validated. In a second step, intercomparison exercises were conducted on various environmental samples presenting different 129I activity levels. They showed good agreement between the results given by the three techniques on different environmental matrices with activities between 0.2 and 200 Bq·kg⁻¹ dry weight. As a conclusion, a methodology for the measurement of 129I and the 129I/127I ratio in environmental samples is proposed. It includes a decision diagram taking into account the characteristics of the matrices, the detection limits and the turnaround time. A study of the losses of 129I during the calcination of an alga was conducted by direct γ-X spectrometry, and application studies were carried out to measure 129I levels in different biological compartments from various locations: the interspecific variation of 129I activity in different species of seaweed from the French Channel coast under the relative influence of La Hague, 129I levels in bovine thyroids from the Cotentin area, and 129I in vegetal samples collected around the nuclear reprocessing plant of Marcoule. (author)

  18. On the actual bandwidth of some dynamic fiber optic strain/temperature interrogators

    Science.gov (United States)

    Preizler, Rotem R.; Davidi, R.; Motil, Avi; Botsev, Yakov; Hahami, Meir; Tur, Moshe

    2017-04-01

    The measurement accuracy of dynamic fiber-optic sensing interrogators that use frequency scanning to determine the value of the measurand degrades as either the event bandwidth approaches half the instrument sampling frequency or the event dynamic range comes close to the instrument's design value. One main source of error is the common practice of assigning samples taken on a non-uniform grid to a uniform one. Harmonics higher than -20 dB are observed for signal frequencies exceeding 25% of the sampling rate and/or for signal amplitudes higher than 15% of the instrument dynamic range. These findings apply to fiber-Bragg-grating and Brillouin interrogators.
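
    The grid-assignment error is easy to reproduce numerically. The sketch below samples a sinusoid on a deterministically non-uniform grid, analyzes it as if the grid were uniform, and reports the largest spurious spectral component; the sampling rate, timing-offset pattern, and signal frequency are illustrative choices rather than the parameters of any particular interrogator.

    ```python
    # Spurious tones from treating non-uniform samples as uniformly spaced.
    import numpy as np

    fs, f_sig, n = 1000.0, 300.0, 4096           # sample rate (Hz), tone, points
    k = np.arange(n)
    # Deterministic intra-scan timing offsets (fractions of a sample period),
    # repeating every 4 samples, as in a frequency-scanned interrogator:
    offsets = np.tile([0.0, 0.1, 0.2, 0.3], n // 4)
    t_true = (k + offsets) / fs                  # actual sampling instants
    x = np.sin(2 * np.pi * f_sig * t_true)       # tone sampled non-uniformly

    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    spec_db = 20 * np.log10(spec / spec.max())
    freqs = np.fft.rfftfreq(n, 1 / fs)
    mask = np.abs(freqs - f_sig) > 20.0          # look away from the true tone
    print(f"largest spurious component: {spec_db[mask].max():.1f} dBc")
    ```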

  19. BER-3.2 report: Methodology for justification and optimization of protective measures including a case study

    International Nuclear Information System (INIS)

    Hedemann Jensen, P.; Sinkko, K.; Walmod-Larsen, O.; Gjoerup, H.L.; Salo, A.

    1992-07-01

    This report is part of the Nordic BER-3 project's work to propose and harmonize Nordic intervention levels for countermeasures in case of nuclear accidents. It focuses on the methodology for justification and optimization of protective measures in a reactor accident situation with a large release of fission products to the environment. The down-wind situation is very complicated and the dose to the exposed society is almost unpredictable. The task of the radiation protection experts - to advise the decision makers on the doses averted by the different actions at hand - is complicated. That of the decision makers is even more so: on behalf of the society they represent, they must decide whether to follow the advice of their radiation protection experts or to add further arguments - economic or political (or personal) - to their considerations before taking their decisions. Two analysis methods available for handling such situations, cost-benefit analysis and multi-attribute utility analysis, are described in principle and are utilized in a case study: the impacts of a Chernobyl-like accident on the Swedish island of Gotland in the Baltic Sea are analyzed with regard to the acute consequences. The use of the intervention principles found in international guidance (IAEA 91, ICRP 91), which can be summarized as the principles of justification, optimization and avoidance of unacceptable doses, is described, and it is indicated how more intangible factors of a psychological or political character can be handled. (au) (6 tabs., 3 ills., 17 refs.)

  20. Plasma density profiles and finite bandwidth effects on electron heating

    International Nuclear Information System (INIS)

    Spielman, R.B.; Mizuno, K.; DeGroot, J.S.; Bollen, W.M.; Woo, W.

    1980-01-01

    Intense, p-polarized microwaves are incident on an inhomogeneous plasma in a cylindrical waveguide. The microwaves are mainly absorbed by resonant absorption near the critical surface (where the plasma frequency ω_pe equals the microwave frequency ω₀). The localized plasma waves strongly modify the plasma density. Step-plateau density profiles or a cavity are created, depending on the plasma flow speed. Hot electron production is strongly affected by the microwave bandwidth: the hot electron temperature varies as T_H ∝ (Δω/ω)^(-0.25). As the hot electron temperature decreases with increasing driver bandwidth, the hot electron density increases. This increase is such that the heat flux into the overdense region (Q ∝ n_H T_H^(3/2)) is nearly constant

  1. Optical interconnect technologies for high-bandwidth ICT systems

    Science.gov (United States)

    Chujo, Norio; Takai, Toshiaki; Mizushima, Akiko; Arimoto, Hideo; Matsuoka, Yasunobu; Yamashita, Hiroki; Matsushima, Naoki

    2016-03-01

    The bandwidth of information and communication technology (ICT) systems is increasing and is predicted to reach more than 10 Tb/s. However, an electrical interconnect cannot achieve such bandwidth because of its density limits. To solve this problem, we propose two types of high-density optical fiber wiring for backplanes and circuit boards such as interface boards and switch boards. One type uses routed ribbon fiber in a circuit board because it has the ability to be formed into complex shapes to avoid interfering with the LSI and electrical components on the board. The backplane is required to exhibit high density and flexibility, so the second type uses loose fiber. We developed a 9.6-Tb/s optical interconnect demonstration system using embedded optical modules, optical backplane, and optical connector in a network apparatus chassis. We achieved 25-Gb/s transmission between FPGAs via the optical backplane.

  2. Raman scheme for adjustable-bandwidth quantum memory

    International Nuclear Information System (INIS)

    Le Gouët, J.-L.; Berman, P. R.

    2009-01-01

    We propose a scenario of quantum memory for light based on Raman scattering. The storage medium is a vapor and the different spectral components of the input pulse are stored in different atomic velocity classes. One uses appropriate pulses to reverse the resulting Doppler phase shift and to regenerate the input pulse, without distortion, in the backward direction. The different stages of the protocol are detailed and the recovery efficiency is calculated in the semiclassical picture. Since the memory bandwidth is determined by the Raman transition Doppler width, it can be adjusted by changing the angle between the input pulse wave vector and the control beams. The optical depth also depends on the beam angle. As a consequence the available optical depth can be optimized depending on the needed bandwidth. The predicted recovery efficiency is close to 100% for large optical depth.

  3. The Bandwidths of a Matrix. A Survey of Algorithms

    Directory of Open Access Journals (Sweden)

    Mafteiu-Scai Liviu Octavian

    2014-12-01

    Full Text Available The bandwidth, average bandwidth, envelope, profile and antibandwidth of matrices have been subjects of study for at least 45 years. These problems have generated considerable interest over the years because of their practical relevance in areas like solving systems of equations, finite element methods, circuit design, hypertext layout, chemical kinetics, numerical geophysics, etc. In this paper a brief description of these problems is given in terms of their definitions, followed by a comparative study using both approaches: matrix geometry and graph theory. The time evolution of the corresponding algorithms, together with short descriptions of them, is also presented. The work also contains concrete real applications for which a large part of the presented algorithms were developed.
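
    For reference, the two most elementary of these quantities reduce to one-liners. The sketch below takes bandwidth as max |i - j| over nonzero entries and, as one common variant, average bandwidth as the mean of |i - j| over nonzero entries; the survey discusses several definitions, so treat this as an illustrative reading.

    ```python
    # Bandwidth and (one variant of) average bandwidth of a matrix.
    import numpy as np

    def bandwidth(a):
        rows, cols = np.nonzero(a)
        return int(np.max(np.abs(rows - cols))) if rows.size else 0

    def average_bandwidth(a):
        rows, cols = np.nonzero(a)
        return float(np.mean(np.abs(rows - cols))) if rows.size else 0.0

    a = np.array([[4, 1, 0, 0],
                  [1, 4, 0, 2],
                  [0, 0, 4, 1],
                  [0, 2, 1, 4]])
    print(bandwidth(a), average_bandwidth(a))  # 2 0.8
    ```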

  4. Gain-switched all-fiber laser with narrow bandwidth

    DEFF Research Database (Denmark)

    Larsen, Casper; Giesberts, M.; Nyga, S.

    2013-01-01

    Gain-switching of a CW fiber laser is a simple and cost-effective approach to generate pulses using an all-fiber system. We report on the construction of a narrow bandwidth (below 0.1 nm) gain-switched fiber laser and optimize the pulse energy and pulse duration under this constraint. The extracted pulse energy is 20 μJ in a duration of 135 ns at 7 kHz. The bandwidth increases for a higher pump pulse energy and repetition rate, and this sets the limit of the output pulse energy. A single power amplifier is added to raise the peak power to the kW-level and the pulse energy to 230 μJ while keeping...

  5. Bandwidth-sharing in LHCONE, an analysis of the problem

    Science.gov (United States)

    Wildish, T.

    2015-12-01

    The LHC experiments have traditionally regarded the network as an unreliable resource, one which was expected to be a major source of errors and inefficiency at the time their original computing models were derived. Now, however, the network is seen as much more capable and reliable. Data are routinely transferred with high efficiency and low latency to wherever computing or storage resources are available to use or manage them. Although there was sufficient network bandwidth for the experiments’ needs during Run-1, they cannot rely on ever-increasing bandwidth as a solution to their data-transfer needs in the future. Sooner or later they need to consider the network as a finite resource that they interact with to manage their traffic, in much the same way as they manage their use of disk and CPU resources. There are several possible ways for the experiments to integrate management of the network in their software stacks, such as the use of virtual circuits with hard bandwidth guarantees or soft real-time flow-control, with somewhat less firm guarantees. Abstractly, these can all be considered as the users (the experiments, or groups of users within the experiment) expressing a request for a given bandwidth between two points for a given duration of time. The network fabric then grants some allocation to each user, dependent on the sum of all requests and the sum of available resources, and attempts to ensure the requirements are met (either deterministically or statistically). An unresolved question at this time is how to convert the users’ requests into an allocation. Simply put, how do we decide what fraction of a network's bandwidth to allocate to each user when the sum of requests exceeds the available bandwidth? The usual problems of any resource-scheduling system arise here, namely how to ensure the resource is used efficiently and fairly, while still satisfying the needs of the users. Simply fixing quotas on network paths for each user is likely to lead
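
    One classical answer to the allocation question posed here is max-min fairness, computed by progressive filling. The sketch below applies it to a single shared link with made-up demands; this is a textbook baseline, not a scheme proposed in the paper.

    ```python
    # Max-min fair allocation of one link's bandwidth by progressive filling.
    def max_min_fair(capacity, demands):
        alloc = {user: 0.0 for user in demands}
        remaining, unsat = capacity, dict(demands)
        while unsat and remaining > 1e-9:
            share = remaining / len(unsat)       # equal share of what is left
            for user, want in list(unsat.items()):
                give = min(share, want - alloc[user])
                alloc[user] += give
                remaining -= give
                if alloc[user] >= want - 1e-9:   # demand fully satisfied
                    del unsat[user]
        return alloc

    print(max_min_fair(10.0, {"cms": 8.0, "atlas": 4.0, "lhcb": 1.0}))
    # -> lhcb gets 1.0, atlas 4.0, cms the remaining 5.0
    ```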

  6. Passive Mobile Bandwidth Classification Using Short Lived TCP Connections

    OpenAIRE

    Michelinakis, Foivos; Kreitz, Gunnar; Petrocco, Riccardo; Zhang, Boxun; Widmer, Joerg

    2015-01-01

    Consumption of multimedia content is moving from a residential environment to mobile phones. Optimizing Quality of Experience—smooth, quick, and high quality playback—is more difficult in this setting, due to the highly dynamic nature of wireless links. A key requirement for achieving this goal is estimating the available bandwidth of mobile devices. Ideally, this should be done quickly and with low overhead. One challenge is that the majority of connections on mobiles are short-lived TCP con...

  7. Bandwidth Allocation Considering Priorities among Multimedia Components in Mobile Networks

    OpenAIRE

    Shigeki, Shiokawa; Shuji, Tasaka

    2001-01-01

    This paper proposes a bandwidth allocation scheme which improves degradation of communication quality due to handoffs in mobile multimedia networks. In general, a multimedia call consists of several component calls. For example, a video phone call consists of a voice call and a video call. In realistic environments, each component call included in one multimedia call may have different requirements for quality-of-service (QoS) from each other, and priorities among these component calls often ...

  8. Power Versus Bandwidth Efficiency in Wireless Communication: The Economic Perspective

    OpenAIRE

    Akhtman, Jos; Hanzo, Lajos

    2009-01-01

    We carry out a comprehensive analysis of a range of wireless network efficiency considerations. Firstly, we explore the properties and the implications of the power- versus bandwidth-efficiency criteria. Secondly, we perform a detailed top-down analysis of a typical commercial wireless network, which emphasizes the inherent differences between the aforementioned two efficiency metrics, while demonstrating that the appropriate choice of the network optimization criterion can have a profound ef...

  9. Developing Reliable Telemedicine Platforms with Unreliable and Limited Communication Bandwidth

    Science.gov (United States)

    2017-10-01

    Report front matter only: AFRL-SA-WP-TR-2017-0019, Wright-Patterson AFB, OH 45433-7913. Distribution Statement A: approved for public release; distribution is unlimited. Copies may be obtained from the Defense Technical Information Center (DTIC).

  10. High bandwidth second-harmonic generation in partially deuterated KDP

    International Nuclear Information System (INIS)

    Webb, M.S.; Eimerl, D.; Velsko, S.P.

    1992-01-01

    We have experimentally determined the spectrally noncritical phasematching behavior of Type I frequency doubling in KDP and its dependence on deuteration level in partially deuterated KDP. The first-order wavelength sensitivity parameter ∂Δk/∂λ for Type I doubling of 1.053 μm light vanishes for a KD*P crystal with a deuteration level between 10 and 14%. Very high bandwidth frequency doubling of Nd:glass lasers is possible with such a crystal

  11. BMCloud: Minimizing Repair Bandwidth and Maintenance Cost in Cloud Storage

    OpenAIRE

    Yin, Chao; Xie, Changsheng; Wan, Jiguang; Hung, Chih-Cheng; Liu, Jinjiang; Lan, Yihua

    2013-01-01

    To protect data in cloud storage, fault tolerance and efficient recovery become very important. Recent studies have developed numerous solutions based on erasure code techniques to solve this problem using functional repairs. However, there are two limitations to address. The first one is consistency since the Encoding Matrix (EM) is different among clouds. The other one is repairing bandwidth, which is a concern for most of us. We addressed these two problems from both theoretical and practi...

  12. Concurrent measurement of "real-world" stress and arousal in individuals with psychosis: assessing the feasibility and validity of a novel methodology.

    Science.gov (United States)

    Kimhy, David; Delespaul, Philippe; Ahn, Hongshik; Cai, Shengnan; Shikhman, Marina; Lieberman, Jeffrey A; Malaspina, Dolores; Sloan, Richard P

    2010-11-01

    Psychosis has been repeatedly suggested to be affected by increases in stress and arousal. However, there is a dearth of evidence supporting the temporal link between stress, arousal, and psychosis during "real-world" functioning. This paucity of evidence may stem from limitations of current research methodologies. Our aim is to test the feasibility and validity of a novel methodology designed to measure concurrent stress and arousal in individuals with psychosis during "real-world" daily functioning. Twenty patients with psychosis completed a 36-hour ambulatory assessment of stress and arousal. We used the experience sampling method with palm computers to assess stress (10 times per day, 10 AM to 10 PM) along with concurrent ambulatory measurement of cardiac autonomic regulation using a Holter monitor. The clocks of the palm computer and Holter monitor were synchronized, allowing the temporal linking of the stress and arousal data. We used power spectral analysis to determine the parasympathetic contributions to autonomic regulation and sympathovagal balance during the 5 minutes before and after each experience sample. Patients completed 79% of the experience samples (75% with valid concurrent arousal data). Momentary increases in stress were inversely correlated with concurrent parasympathetic activity (ρ = -.27). The results support the feasibility and validity of our methodology in individuals with psychosis. The methodology offers a novel way to study, in high time resolution, the concurrent "real-world" interactions between stress, arousal, and psychosis. The authors discuss the methodology's potential applications and future research directions.

  13. Multi-Population Invariance with Dichotomous Measures: Combining Multi-Group and MIMIC Methodologies in Evaluating the General Aptitude Test in the Arabic Language

    Science.gov (United States)

    Sideridis, Georgios D.; Tsaousis, Ioannis; Al-harbi, Khaleel A.

    2015-01-01

    The purpose of the present study was to extend the model of measurement invariance by simultaneously estimating invariance across multiple populations in the dichotomous instrument case using multi-group confirmatory factor analytic and multiple indicator multiple causes (MIMIC) methodologies. Using the Arabic version of the General Aptitude Test…

  14. Managing high-bandwidth real-time data storage

    Energy Technology Data Exchange (ETDEWEB)

    Bigelow, David D. [Los Alamos National Laboratory; Brandt, Scott A [Los Alamos National Laboratory; Bent, John M [Los Alamos National Laboratory; Chen, Hsing-Bung [Los Alamos National Laboratory

    2009-09-23

    There exist certain systems which generate real-time data at high bandwidth but do not necessarily require the long-term retention of that data in normal conditions. In some cases, the data may not actually be useful, and in others, there may be too much data to permanently retain in long-term storage whether it is useful or not. However, certain portions of the data may be identified as being vitally important from time to time, and must therefore be retained for further analysis or permanent storage without interrupting the ongoing collection of new data. We have developed a system, Mahanaxar, intended to address this problem. It provides quality-of-service guarantees for incoming real-time data streams and simultaneous access to already-recorded data on a best-effort basis utilizing any spare bandwidth. It has built-in mechanisms for reliability and indexing, can scale upwards to meet increasing bandwidth requirements, and handles both small and large data elements equally well. We will show that a prototype version of this system provides better performance than a flat-file (traditional filesystem) based version, particularly with regard to quality-of-service guarantees and hard real-time requirements.

  15. Discrete- and finite-bandwidth-frequency distributions in nonlinear stability applications

    Science.gov (United States)

    Kuehl, Joseph J.

    2017-02-01

    A new "wave packet" formulation of the parabolized stability equations method is presented. This method accounts for the influence of finite-bandwidth-frequency distributions on nonlinear stability calculations. The methodology is motivated by convolution integrals and is found to appropriately represent nonlinear energy transfer between primary modes and harmonics, in particular nonlinear feedback, via a "nonlinear coupling coefficient." It is found that traditional discrete mode formulations overestimate nonlinear feedback by approximately 70%. This results in smaller maximum disturbance amplitudes than those observed experimentally. The new formulation corrects this overestimation, accounts for the generation of side lobes responsible for spectral broadening, and results in disturbance representation more consistent with the experiment than traditional formulations. A Mach 6 flared-cone example is presented.

  16. Preliminary comparative assessment of PM10 hourly measurement results from new monitoring stations type using stochastic and exploratory methodology and models

    Science.gov (United States)

    Czechowski, Piotr Oskar; Owczarek, Tomasz; Badyda, Artur; Majewski, Grzegorz; Rogulski, Mariusz; Ogrodnik, Paweł

    2018-01-01

    The paper presents selected key issues from the preliminary stage of a proposed extended equivalence assessment of measurement results from new portable devices: the comparability of hourly PM10 concentration series with reference station results, using statistical methods. Technical aspects of the new portable meters are presented. Emphasis is placed on a comparability-assessment concept based on stochastic and exploratory methods. The concept rests on the observation that a simple comparison of result series in the time domain is insufficient; the regularity of the series should be compared in three complementary fields of statistical modeling: time, frequency and space. The proposal is based on models of five annual series of measurement results from the new mobile devices and from a WIOS (Provincial Environmental Protection Inspectorate) reference station located in the city of Nowy Sącz. The obtained results indicate both the completeness of the comparison methodology and the high correspondence of the new devices' measurement results with the reference.

  17. The bounds on tracking performance utilising a laser-based linear and angular sensing and measurement methodology for micro/nano manipulation

    International Nuclear Information System (INIS)

    Clark, Leon; Shirinzadeh, Bijan; Tian, Yanling; Zhong, Yongmin

    2014-01-01

    This paper presents an analysis of the tracking performance of a planar three degrees of freedom (DOF) flexure-based mechanism for micro/nano manipulation, utilising a tracking methodology for the measurement of coupled linear and angular motions. The methodology permits trajectories over a workspace with large angular range through the reduction of geometric errors. However, when combining this methodology with feedback control systems, the accuracy of performed manipulations can only be stated within the bounds of the uncertainties in measurement. The dominant sources of error and uncertainty within each sensing subsystem are therefore identified, which leads to a formulation of the measurement uncertainty in the final system outputs, in addition to methods of reducing their magnitude. Specific attention is paid to the analysis of the vision-based subsystem utilised for the measurement of angular displacement. Furthermore, a feedback control scheme is employed to minimise tracking errors, and the coupling of certain measurement errors is shown to have a detrimental effect on the controller operation. The combination of controller tracking errors and measurement uncertainty provides the bounds on the final tracking performance. (paper)

  18. Methodology and measures for preventing unacceptable flow-accelerated corrosion thinning of pipelines and equipment of NPP power generating units

    Science.gov (United States)

    Tomarov, G. V.; Shipkov, A. A.; Lovchev, V. N.; Gutsev, D. F.

    2016-10-01

    Problems of metal flow-accelerated corrosion (FAC) in the pipelines and equipment of the condensate-feeding and wet-steam paths of NPP power-generating units (PGU) are examined. The goals, objectives, and main principles of the methodology for implementing an integrated program of AO Concern Rosenergoatom for preventing unacceptable FAC thinning and increasing the operational flow-accelerated corrosion resistance of NPP equipment and pipelines (EaP) are formulated (hereafter, the Program). The role and potential of Russian software packages for evaluating and predicting FAC rates are shown in solving practical problems of timely detection of unacceptable FAC thinning in elements of the pipelines and equipment of the secondary circuit of NPP PGUs. Information is given concerning the structure, properties, and functions of the software systems for plant personnel support in monitoring and planning the in-service inspection of FAC-thinned elements of pipelines and equipment of the secondary circuit of NPP PGUs, which have been created and implemented at some Russian NPPs equipped with VVER-1000, VVER-440, and BN-600 reactors. It is noted that one of the most important practical results of software packages for supporting NPP personnel on flow-accelerated corrosion consists in revealing elements at risk of intense local FAC thinning. Examples are given of successful practice at some Russian NPPs in using software systems to support personnel in the early detection of secondary-circuit pipeline elements with FAC thinning close to an unacceptable level. Intermediate results of work on the Program are presented, and new tasks set in 2012 as part of the updated Program are outlined. The prospects of the developed methods and tools within the scope of the Program measures at the design and construction stages of NPP PGUs are discussed. The main directions of the work on solving the problems of flow

  19. Measuring cognitive task demands using dual task methodology, subjective self-ratings, and expert judgments : A Validation Study

    NARCIS (Netherlands)

    Révész, Andrea; Michel, Marije; Gilabert, Roger

    2016-01-01

    This study explored the usefulness of dual-task methodology, self-ratings, and expert judgements in assessing task-generated cognitive demands as a way to provide validity evidence for manipulations of task complexity. The participants were 96 students and 61 ESL teachers. The students, 48 English

  20. Measuring the Differences between Traditional Learning and Game-Based Learning Using Electroencephalography (EEG) Physiologically Based Methodology

    Science.gov (United States)

    Chen, Ching-Huei

    2017-01-01

    Students' cognitive states can reflect a learning experience that results in engagement in an activity. In this study, we used electroencephalography (EEG) physiologically based methodology to evaluate students' levels of attention and relaxation, as well as their learning performance within a traditional and game-based learning context. While no…

  1. Measuring Cognitive Task Demands Using Dual-Task Methodology, Subjective Self-Ratings, and Expert Judgments: A Validation Study

    Science.gov (United States)

    Revesz, Andrea; Michel, Marije; Gilabert, Roger

    2016-01-01

    This study explored the usefulness of dual-task methodology, self-ratings, and expert judgments in assessing task-generated cognitive demands as a way to provide validity evidence for manipulations of task complexity. The participants were 96 students and 61 English as a second language (ESL) teachers. The students, 48 English native speakers and…

  2. Importance of methodological standardization for the ektacytometric measures of red blood cell deformability in sickle cell anemia

    NARCIS (Netherlands)

    Renoux, Céline; Parrow, Nermi; Faes, Camille; Joly, Philippe; Hardeman, Max; Tisdale, John; Levine, Mark; Garnier, Nathalie; Bertrand, Yves; Kebaili, Kamila; Cuzzubbo, Daniela; Cannas, Giovanna; Martin, Cyril; Connes, Philippe

    2016-01-01

    Red blood cell (RBC) deformability is severely decreased in patients with sickle cell anemia (SCA), which plays a role in the pathophysiology of the disease. However, investigation of RBC deformability from SCA patients demands careful methodological considerations. We assessed RBC deformability by

  3. Re-use of Low Bandwidth Equipment for High Bit Rate Transmission Using Signal Slicing Technique

    DEFF Research Database (Denmark)

    Wagner, Christoph; Spolitis, S.; Vegas Olmos, Juan José

    Massive fiber-to-the-home network deployment requires never-ending equipment upgrades operating at higher bandwidth. We show an effective signal slicing method, which can reuse low-bandwidth opto-electronic components for optical communications at higher bit rates.

  4. A Statistical Approach for Gain Bandwidth Prediction of Phoenix-Cell-Based Reflectarrays

    Directory of Open Access Journals (Sweden)

    Hassan Salti

    2018-01-01

    A new statistical approach to predict the gain bandwidth of Phoenix-cell-based reflectarrays is proposed. It combines the effects of the two main factors that limit the bandwidth of reflectarrays: spatial phase delays and the intrinsic bandwidth of the radiating cells. As an illustration, the proposed approach is successfully applied to two reflectarrays based on new Phoenix cells.

  5. Data analysis-based autonomic bandwidth adjustment in software defined multi-vendor optical transport networks.

    Science.gov (United States)

    Li, Yajie; Zhao, Yongli; Zhang, Jie; Yu, Xiaosong; Jing, Ruiquan

    2017-11-27

    Network operators generally provide dedicated lightpaths for customers to meet the demand for high-quality transmission. Considering the variation of traffic load, customers usually rent peak bandwidth that exceeds their practical average traffic requirement. In this case, bandwidth provisioning is unmetered and customers have to pay according to the peak bandwidth. If network operators could keep track of traffic load and allocate bandwidth dynamically, bandwidth could be provided as a metered service and customers would pay only for the bandwidth that they actually use. To achieve cost-effective bandwidth provisioning, this paper proposes an autonomic bandwidth adjustment scheme based on data analysis of traffic load. The scheme is implemented in a software defined networking (SDN) controller and is demonstrated in a field trial on multi-vendor optical transport networks. The field trial shows that the proposed scheme can track traffic load and realize autonomic bandwidth adjustment. In addition, a simulation experiment is conducted to evaluate the performance of the proposed scheme, and we investigate the impact of different parameters on autonomic bandwidth adjustment. Simulation results show that the step size and adjustment period have significant influence on bandwidth savings and packet loss: small values of both bring more benefit by tracking traffic variation with high accuracy. For network operators, the scheme can serve as technical support for realizing bandwidth as a metered service in the future.
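
    The abstract does not spell out the adjustment algorithm, so the following minimal Python sketch only illustrates the idea of threshold-based load tracking with the two parameters the simulations vary, step size and adjustment period; all names and values are hypothetical.

        import random

        def autonomic_adjust(load_samples, initial_bw, step=50, headroom=1.2, floor=100):
            """Track traffic load (Mb/s), one sample per adjustment period.

            step     -- adjustment granularity (Mb/s); smaller tracks load more closely
            headroom -- target ratio of allocated bandwidth to measured load
            """
            bw = initial_bw
            allocations = []
            for load in load_samples:
                target = load * headroom
                if target > bw:                      # scale up to avoid packet loss
                    bw += step * ((target - bw) // step + 1)
                elif target < bw - step:             # scale down to save cost
                    bw -= step
                bw = max(bw, floor)
                allocations.append(bw)
            return allocations

        # Hypothetical diurnal traffic profile (Mb/s) with noise
        traffic = [400 + (200 if i % 96 > 32 else 0) + random.uniform(-50, 50)
                   for i in range(192)]
        print(autonomic_adjust(traffic, initial_bw=500)[:8])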

  6. Three-Axis Attitude Estimation With a High-Bandwidth Angular Rate Sensor

    Science.gov (United States)

    Bayard, David S.; Green, Joseph J.

    2013-01-01

    A continuing challenge for modern instrument pointing control systems is to meet the increasingly stringent pointing performance requirements imposed by emerging advanced scientific, defense, and civilian payloads. Instruments such as adaptive optics telescopes, space interferometers, and optical communications make unprecedented demands on precision pointing capabilities. A cost-effective method was developed for increasing the pointing performance for this class of NASA applications. The solution was to develop an attitude estimator that fuses star tracker and gyro measurements with a high-bandwidth angular rotation sensor (ARS). An ARS is a rate sensor whose bandwidth extends well beyond that of the gyro, typically up to 1,000 Hz or higher. The most promising ARS sensor technology is based on a magnetohydrodynamic concept, and has recently become available commercially. The key idea is that the sensor fusion of the star tracker, gyro, and ARS provides a high-bandwidth attitude estimate suitable for supporting pointing control with a fast-steering mirror or other type of tip/tilt correction for increased performance. The ARS is relatively inexpensive and can be bolted directly next to the gyro and star tracker on the spacecraft bus. The high-bandwidth attitude estimator fuses an ARS sensor with a standard three-axis suite comprised of a gyro and star tracker. The estimation architecture is based on a dual-complementary filter (DCF) structure. The DCF takes a frequency- weighted combination of the sensors such that each sensor is most heavily weighted in a frequency region where it has the lowest noise. An important property of the DCF is that it avoids the need to model disturbance torques in the filter mechanization. This is important because the disturbance torques are generally not known in applications. This property represents an advantage over the prior art because it overcomes a weakness of the Kalman filter that arises when fusing more than one rate
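
    The abstract names the estimator structure (a dual-complementary filter) without giving equations; below is a minimal single-axis sketch of the frequency-weighted idea, fusing a low-pass-filtered gyro with the high-pass remainder of the ARS. The crossover frequency and the simulated records are hypothetical, and the actual DCF fuses three sensors in three axes.

        import numpy as np

        def complementary_fuse(gyro_rate, ars_rate, dt, crossover_hz=50.0):
            """Single-axis complementary fusion: fused = LP(gyro) + HP(ARS).
            The weights sum to one at every frequency, so each sensor dominates
            where its noise is lowest and no disturbance-torque model is needed."""
            tau = 1.0 / (2.0 * np.pi * crossover_hz)
            alpha = tau / (tau + dt)                   # one-pole filter coefficient
            lp_gyro = lp_ars = 0.0
            fused = []
            for g, a in zip(gyro_rate, ars_rate):
                lp_gyro = alpha * lp_gyro + (1 - alpha) * g
                lp_ars = alpha * lp_ars + (1 - alpha) * a
                fused.append(lp_gyro + (a - lp_ars))   # low band: gyro; high band: ARS
            return np.array(fused)

        # 2 kHz records: slow slew seen by both sensors, 300 Hz jitter seen only by the ARS
        dt = 5e-4
        t = np.arange(4000) * dt
        jitter = 1e-4 * np.sin(2 * np.pi * 300 * t)
        slew = 0.01 * np.sin(2 * np.pi * 0.5 * t)
        est = complementary_fuse(slew, slew + jitter + 5e-4, dt)   # ARS has a DC bias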

  7. Bandwidth-limited control and ringdown suppression in high-Q resonators.

    Science.gov (United States)

    Borneman, Troy W; Cory, David G

    2012-12-01

    We describe how the transient behavior of a tuned and matched resonator circuit and a ringdown suppression pulse may be integrated into an optimal control theory (OCT) pulse-design algorithm to derive control sequences with limited ringdown that perform a desired quantum operation in the presence of resonator distortions of the ideal waveform. Inclusion of ringdown suppression in numerical pulse optimizations significantly reduces spectrometer deadtime when using high quality factor (high-Q) resonators, leading to increased signal-to-noise ratio (SNR) and sensitivity of inductive measurements. To demonstrate the method, we experimentally measure the free-induction decay of an inhomogeneously broadened solid-state free radical spin system at high Q. The measurement is enabled by using a numerically optimized bandwidth-limited OCT pulse, including ringdown suppression, robust to variations in static and microwave field strengths. We also discuss the applications of pulse design in high-Q resonators to universal control of anisotropic-hyperfine coupled electron-nuclear spin systems via electron-only modulation even when the bandwidth of the resonator is significantly smaller than the hyperfine coupling strength. These results demonstrate how limitations imposed by linear response theory may be vastly exceeded when using a sufficiently accurate system model to optimize pulses of high complexity.

  8. Bandwidth allocation for video under quality of service constraints

    CERN Document Server

    Anjum, Bushra

    2014-01-01

    We present queueing-based algorithms to calculate the bandwidth required for a video stream so that the three main Quality of Service constraints, i.e., end-to-end delay, jitter and packet loss, are ensured. Conversational and streaming video-based applications are becoming a major part of everyday Internet usage. The quality of these applications (QoS), as experienced by the user, depends on three main metrics of the underlying network, namely, end-to-end delay, jitter and packet loss. These metrics are, in turn, directly related to the capacity of the links that the video traffic traverses.
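
    The book's methods are queueing-based; as a flavor of the approach, here is a minimal sketch that sizes a link with textbook M/M/1 (mean delay) and M/M/1/K (loss) formulas. The parameter values are illustrative and not taken from the book.

        def min_bandwidth_mm1(arrival_pps, mean_packet_bits, max_delay_s, max_loss, buffer_pkts):
            """Smallest service rate (bit/s) meeting a mean-delay and a loss target,
            modeling the link as an M/M/1 queue with a finite buffer of K packets."""
            mu = arrival_pps + 1.0 / max_delay_s     # M/M/1 mean sojourn: 1/(mu - lambda)
            while True:
                rho = arrival_pps / mu
                k = buffer_pkts                      # M/M/1/K blocking probability
                p_loss = (1 - rho) * rho ** k / (1 - rho ** (k + 1))
                if p_loss <= max_loss:
                    return mu * mean_packet_bits
                mu *= 1.05                           # add 5% capacity and re-check

        # 1000 packets/s of 12-kbit video packets, 50 ms delay, 0.1% loss, 50-packet buffer
        print(min_bandwidth_mm1(1000.0, 12000.0, 0.05, 1e-3, 50))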

  9. Adaptive slope compensation for high bandwidth digital current mode controller

    DEFF Research Database (Denmark)

    Taeed, Fazel; Nymand, Morten

    2015-01-01

    An adaptive slope compensation method for digital current-mode control of dc-dc converters is proposed in this paper. The compensation slope is used for stabilizing the inner current loop in peak current-mode control. In this method, the compensation slope is adapted to variations in the converter duty cycle. The adaptive slope compensation provides optimum controller operation, in terms of bandwidth, over a wide range of operating points. The operation principle of the controller is discussed, and the proposed controller is implemented in an FPGA to control a 100 W buck converter…
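
    The stability argument behind slope compensation is compact: in peak current-mode control, a perturbation of the peak current is multiplied by (Sf − Se)/(Sn + Se) every switching cycle, so the compensation slope Se must keep that ratio at or below one. A minimal sketch of the resulting duty-cycle-dependent minimum slope follows; the component values are hypothetical and the paper's FPGA implementation is not detailed in the abstract.

        def min_compensation_slope(v_in, v_out, inductance):
            """Minimum Se (A/s) for subharmonic stability of a peak current-mode
            buck converter: perturbations decay when Se >= (Sf - Sn)/2."""
            s_n = (v_in - v_out) / inductance        # inductor-current rising slope
            s_f = v_out / inductance                 # falling slope
            return max(0.0, (s_f - s_n) / 2.0)       # zero when duty cycle < 0.5

        # Adaptive behaviour: Se is recomputed as the operating point moves
        for v_out in (3.3, 6.0, 9.0):                # duty cycles 0.275, 0.5, 0.75 at 12 V in
            print(v_out, min_compensation_slope(12.0, v_out, 33e-6))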

  10. OPTIM, Minimization of Band-Width of Finite Elements Problems

    International Nuclear Information System (INIS)

    Huart, M.

    1977-01-01

    1 - Nature of the physical problem solved: To minimize the band-width of finite element problems. 2 - Method of solution: A surface is constructed from the x-y-coordinates of each node using its node number as z-value. This surface consists of triangles. Nodes are renumbered in such a way as to minimize the surface area. 3 - Restrictions on the complexity of the problem: This program is applicable to 2-D problems. It is dimensioned for a maximum of 1000 elements
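
    OPTIM's own heuristic minimizes the area of the node-number surface described above. The same objective can be seen with a standard renumbering algorithm, reverse Cuthill-McKee, available in SciPy; the matrix below is a toy sparsity pattern, not one of the program's test cases.

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import reverse_cuthill_mckee

        def bandwidth(a):
            """Half-bandwidth: max |i - j| over the nonzero entries of a."""
            i, j = np.nonzero(a)
            return int(np.abs(i - j).max()) if i.size else 0

        # Toy symmetric finite-element-like sparsity pattern
        a = np.eye(6)
        for i, j in [(0, 3), (0, 4), (1, 5), (2, 4)]:
            a[i, j] = a[j, i] = 1.0

        perm = reverse_cuthill_mckee(csr_matrix(a), symmetric_mode=True)
        print(bandwidth(a), bandwidth(a[np.ix_(perm, perm)]))   # before, after renumbering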

  11. Modulator reliability and bandwidth improvement: replacing tetrodes with MOSFETs

    International Nuclear Information System (INIS)

    Donaldson, A.R.

    1982-01-01

    Three types of power MOS field-effect transistors were studied with the intent of replacing a parallel pair of vacuum-tube tetrodes in a linear modulator. The tetrodes have the shortest lifetimes of any tubes in the system. The FETs offer definite performance advantages when compared to bipolar transistors and definite cost advantages when compared to vacuum tubes. Replacement of the tetrodes does, however, require careful consideration of voltage, current and, to a lesser extent, bandwidth capability in order to enhance overall modulator reliability without compromising present performance.

  12. Bandwidth scalable, coherent transmitter based on the parallel synthesis of multiple spectral slices using optical arbitrary waveform generation.

    Science.gov (United States)

    Geisler, David J; Fontaine, Nicolas K; Scott, Ryan P; He, Tingting; Paraschis, Loukas; Gerstel, Ori; Heritage, Jonathan P; Yoo, S J B

    2011-04-25

    We demonstrate an optical transmitter based on dynamic optical arbitrary waveform generation (OAWG) which is capable of creating high-bandwidth (THz) data waveforms in any modulation format using the parallel synthesis of multiple coherent spectral slices. As an initial demonstration, the transmitter uses only 5.5 GHz of electrical bandwidth and two 10-GHz-wide spectral slices to create 100-ns duration, 20-GHz optical waveforms in various modulation formats including differential phase-shift keying (DPSK), quaternary phase-shift keying (QPSK), and eight phase-shift keying (8PSK) with only changes in software. The experimentally generated waveforms showed clear eye openings and separated constellation points when measured using a real-time digital coherent receiver. Bit-error-rate (BER) performance analysis resulted in a BER < 9.8 × 10⁻⁶ for DPSK and QPSK waveforms. Additionally, we experimentally demonstrate three-slice, 4-ns long waveforms that highlight the bandwidth scalable nature of the optical transmitter. The various generated waveforms show that the key transmitter properties (i.e., packet length, modulation format, data rate, and modulation filter shape) are software definable, and that the optical transmitter is capable of acting as a flexible bandwidth transmitter.

  13. Methodology for setup and data processing of mobile air quality measurements to assess the spatial variability of concentrations in urban environments

    International Nuclear Information System (INIS)

    Van Poppel, Martine; Peters, Jan; Bleux, Nico

    2013-01-01

    A case study is presented to illustrate a methodology for mobile monitoring in urban environments. A dataset of UFP, PM2.5 and BC concentrations was collected. We showed that repeated mobile measurements can give insight into the spatial variability of pollutants in different micro-environments in a city. Streets of contrasting traffic intensity showed concentrations increased by a factor of 2–3 for UFP and BC, and by … for PM2.5. The first quartile (P25) of the mobile measurements in an urban background zone appears to be a good estimate of the urban background concentration. The local component of the pollutant concentrations was determined by background correction; the use of background correction reduced the number of runs needed to obtain representative results. The results presented are a first attempt to establish a methodology for the setup and data processing of mobile air quality measurements to assess the spatial variability of concentrations in urban environments.
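
    The background-correction step described above is simple to state in code. A minimal sketch, assuming concentration samples are already grouped into runs; all numbers are hypothetical.

        import numpy as np

        def local_component(mobile_runs, background_zone_samples):
            """Estimate the urban background as the 25th percentile (P25) of the
            urban-background-zone measurements and subtract it from each run."""
            background = np.percentile(background_zone_samples, 25)
            return [np.asarray(run) - background for run in mobile_runs]

        # Hypothetical BC concentrations (ug/m3): two street runs, one background zone
        runs = [[2.1, 3.4, 5.9, 4.2], [1.8, 2.9, 6.3, 3.7]]
        bg_zone = [1.2, 1.5, 2.4, 1.9, 1.3, 1.1, 2.8]
        print(local_component(runs, bg_zone))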

  14. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    Science.gov (United States)

    Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.

    2015-12-01

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid and cached locally on the

  15. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, D. [Fermilab]; Bockelman, B. [Nebraska U.]; Blomer, J. [CERN]; Herner, K. [Fermilab]; Levshina, T. [Fermilab]; Slyz, M. [Fermilab]

    2015-12-23

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called 'alien cache' to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid and cached

  16. Implementation of Bandwidth Management with the Hierarchical Token Bucket (HTB) Queue Discipline on the Linux Operating System

    Directory of Open Access Journals (Sweden)

    Muhammad Nugraha

    2016-09-01

    An important problem in Internet networking is the exhaustion of resources and bandwidth by some users while other users do not receive proper service. To overcome this problem, a traffic control and bandwidth management system must be implemented in the router. In this research, the authors implement the Hierarchical Token Bucket algorithm as a queue discipline (qdisc) to manage bandwidth accurately, so that each user receives a proper share. The result of this research is cheap and efficient bandwidth management using the Hierarchical Token Bucket qdisc on the Linux operating system, able to manage users as desired.
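
    On Linux, HTB is configured through the tc utility. The commands below are a minimal sketch of the kind of hierarchy the paper describes, not the authors' actual configuration: the interface name, rates and class layout are illustrative, and the filters that map users to classes are omitted.

        import subprocess

        DEV = "eth0"                                  # hypothetical interface
        COMMANDS = [
            # Root HTB qdisc; unclassified traffic falls into class 1:30
            f"tc qdisc add dev {DEV} root handle 1: htb default 30",
            # Parent class caps the whole link at 100 Mbit/s
            f"tc class add dev {DEV} parent 1: classid 1:1 htb rate 100mbit",
            # Children get a guaranteed rate and may borrow up to ceil
            f"tc class add dev {DEV} parent 1:1 classid 1:10 htb rate 60mbit ceil 100mbit",
            f"tc class add dev {DEV} parent 1:1 classid 1:20 htb rate 30mbit ceil 100mbit",
            f"tc class add dev {DEV} parent 1:1 classid 1:30 htb rate 10mbit ceil 100mbit",
        ]

        for cmd in COMMANDS:
            subprocess.run(cmd.split(), check=True)   # requires root on Linux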

  17. IMPLEMENTATION OF BANDWIDTH MANAGEMENT WITH THE HIERARCHICAL TOKEN BUCKET (HTB) QUEUE DISCIPLINE ON THE LINUX OPERATING SYSTEM

    Directory of Open Access Journals (Sweden)

    Muhammad Nugraha

    2017-01-01

    An important problem in Internet networking is the exhaustion of resources and bandwidth by some users while other users do not receive proper service. To overcome this problem, a traffic control and bandwidth management system must be implemented in the router. In this research, the authors implement the Hierarchical Token Bucket algorithm as a queue discipline (qdisc) to manage bandwidth accurately, so that each user receives a proper share. The result of this research is cheap and efficient bandwidth management using the Hierarchical Token Bucket qdisc on the Linux operating system, able to manage users as desired.

  18. Theoretical study of amplified spontaneous emission intensity and bandwidth reduction in polymer

    International Nuclear Information System (INIS)

    Hariri, A.; Sarikhani, S.

    2015-01-01

    Amplified spontaneous emission (ASE), including intensity and bandwidth, in a typical example of BuEH-PPV is calculated. For this purpose, the intensity rate equation is used to explain the reported experimental measurements of a BuEH-PPV sample pumped at different pump intensities from I_p = 0.61 MW/cm² to 5.2 MW/cm². Both homogeneously and inhomogeneously broadened transition lines, along with a model based on the geometrically dependent gain coefficient (GDGC), are examined, and it is confirmed that for the reported measurements the homogeneously broadened line is responsible for the light–matter interaction. The calculation explains the frequency spectrum of the ASE output intensity extracted from the sample at different pump intensities, the unsaturated and saturated gain coefficients, and the ASE bandwidth reduction along the propagation direction. Both analytical and numerical calculations for verifying the GDGC model are presented in this paper. Although the introduced model has shown its potential for explaining the ASE behavior in a specific sample, it can be universally used for the ASE study in different active media.

  19. Bandwidth enhancement of a dual band planar monopole antenna using meandered microstrip feeding.

    Science.gov (United States)

    Ahsan, M R; Islam, M T; Habib Ullah, M; Misran, N

    2014-01-01

    A meandered-microstrip-fed circular monopole antenna loaded with vertical slots on a high-dielectric-constant substrate (εr = 15) is proposed in this paper. The performance of the proposed antenna has been experimentally verified by fabricating a printed prototype. The experimental results show that the proposed antenna achieves wider bandwidth with satisfactory gain by introducing meandered-microstrip feeding assisted by a partial ground plane. It is observed that the −10 dB impedance bandwidth of the proposed antenna is 44.4% (600 MHz–1 GHz) at the lower band and 28% (2.25 GHz–2.95 GHz) at the upper band. Measured maximum gains of −1.18 dBi and 4.87 dBi with maximum radiation efficiencies have been observed at the lower and upper bands, respectively. The antenna configuration and parametric study have been carried out with the help of a commercially available computer-aided EM simulator, and good agreement is observed between the simulated and measured results. The performance analysis and the nearly consistent radiation pattern make the proposed antenna a suitable candidate for UHF RFID, WiMAX, and WLAN applications.

  20. BECSI: Bandwidth Efficient Certificate Status Information Distribution Mechanism for VANETs

    Directory of Open Access Journals (Sweden)

    Carlos Gañán

    2013-01-01

    Certificate revocation is a challenging task, especially in mobile network environments such as vehicular ad hoc networks (VANETs). According to the IEEE 1609.2 security standard for VANETs, the public key infrastructure (PKI) will provide this functionality by means of certificate revocation lists (CRLs). When a certificate authority (CA) needs to revoke a certificate, it globally distributes CRLs. Transmitting these lists poses a problem, as they require high update frequencies and a lot of bandwidth. In this article, we propose BECSI, a Bandwidth Efficient Certificate Status Information mechanism to efficiently distribute certificate status information (CSI) in VANETs. By means of Merkle hash trees (MHTs), BECSI allows authenticated CSI to be retrieved not only from the infrastructure but also from vehicles acting as mobile repositories. Since these MHTs are significantly smaller than the CRLs, BECSI reduces the load on the CSI repositories and improves the response time for the vehicles. Additionally, BECSI improves the freshness of the CSI by combining the use of delta-CRLs with MHTs. Thus, vehicles that have cached the most current CRL can download delta-CRLs to have a complete list of revoked certificates. Once a vehicle has the whole list of revoked certificates, it can act as a mobile repository.
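
    A Merkle hash tree over the revoked-certificate identifiers is the core data structure here: its root is what a CA signs, and authenticating against it is far cheaper than shipping a full CRL. A minimal sketch (serial numbers hypothetical; BECSI's actual tree construction and delta-CRL handling are more involved):

        import hashlib

        def sha(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def merkle_root(leaves):
            """Root of a Merkle hash tree over revoked-certificate serials."""
            level = [sha(s.encode()) for s in leaves]
            while len(level) > 1:
                if len(level) % 2:                    # duplicate last node if odd
                    level.append(level[-1])
                level = [sha(level[i] + level[i + 1])
                         for i in range(0, len(level), 2)]
            return level[0]

        revoked = ["serial-042", "serial-107", "serial-256", "serial-300"]
        print(merkle_root(revoked).hex())             # the value a CA would sign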

  1. Bandwidth Extension of Telephone Speech Aided by Data Embedding

    Directory of Open Access Journals (Sweden)

    Sagi Ariel

    2007-01-01

    A system for bandwidth extension of telephone speech, aided by data embedding, is presented. The proposed system uses the transmitted analog narrowband speech signal as a carrier of the side information needed to carry out the bandwidth extension. The upper band of the wideband speech is reconstructed at the receiving end from two components: a synthetic wideband excitation signal, generated from the narrowband telephone speech, and a wideband spectral envelope, parametrically represented and transmitted as embedded data in the telephone speech. We propose a novel data embedding scheme, in which the scalar Costa scheme is combined with an auditory masking model, allowing high-rate transparent embedding while maintaining a low bit error rate. The signal is transformed to the frequency domain via the discrete Hartley transform (DHT) and is partitioned into subbands. Data is embedded in an adaptively chosen subset of subbands by modifying the DHT coefficients. In our simulations, high-quality wideband speech was obtained from speech transmitted over a telephone line (characterized by spectral magnitude distortion, dispersion, and noise), in which side information data is transparently embedded at the rate of 600 information bits/second and with a bit error rate of approximately 3·10⁻⁴. In a listening test, the reconstructed wideband speech was preferred (at different degrees) over conventional telephone speech in 92.5% of the test utterances.

  2. Bandwidth Extension of Telephone Speech Aided by Data Embedding

    Directory of Open Access Journals (Sweden)

    David Malah

    2007-01-01

    A system for bandwidth extension of telephone speech, aided by data embedding, is presented. The proposed system uses the transmitted analog narrowband speech signal as a carrier of the side information needed to carry out the bandwidth extension. The upper band of the wideband speech is reconstructed at the receiving end from two components: a synthetic wideband excitation signal, generated from the narrowband telephone speech, and a wideband spectral envelope, parametrically represented and transmitted as embedded data in the telephone speech. We propose a novel data embedding scheme, in which the scalar Costa scheme is combined with an auditory masking model, allowing high-rate transparent embedding while maintaining a low bit error rate. The signal is transformed to the frequency domain via the discrete Hartley transform (DHT) and is partitioned into subbands. Data is embedded in an adaptively chosen subset of subbands by modifying the DHT coefficients. In our simulations, high-quality wideband speech was obtained from speech transmitted over a telephone line (characterized by spectral magnitude distortion, dispersion, and noise), in which side information data is transparently embedded at the rate of 600 information bits/second and with a bit error rate of approximately 3·10⁻⁴. In a listening test, the reconstructed wideband speech was preferred (at different degrees) over conventional telephone speech in 92.5% of the test utterances.
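
    The DHT used above has a convenient FFT identity, H = Re(X) − Im(X), and is its own inverse up to a factor of N. A minimal sketch of the transform and the subband partitioning (the embedding itself, which modifies selected coefficients under a masking model, is not reproduced here):

        import numpy as np

        def dht(x):
            """Discrete Hartley transform via the identity H = Re(FFT) - Im(FFT)."""
            X = np.fft.fft(x)
            return X.real - X.imag

        def idht(H):
            return dht(H) / len(H)                    # the DHT is self-inverse up to 1/N

        x = np.random.randn(16)
        subbands = np.split(dht(x), 4)                # partition coefficients into subbands
        print(np.allclose(idht(dht(x)), x))           # True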

  3. Bandwidth Optimization in Centralized WLANs for Different Traffic Types

    Directory of Open Access Journals (Sweden)

    Haines RJ

    2007-01-01

    Allocating bandwidth between different forms of coexisting traffic (such as web-browsing, streaming, and telephony) within a wireless LAN is a challenging and interesting problem. Centralized coordination functions in wireless LANs offer several advantages over distributed approaches, having the benefit of a system overview at the controller, but obtaining a stable configuration of bandwidth allocation for the system is nontrivial. We present, review, and compare different mechanisms to achieve this end, and a number of different means of obtaining the configurations themselves. We describe an analytical model of the system under consideration and present two mathematical approaches to derive solutions for any system configuration and deployment, along with an adaptive feedback-based solution. We also describe a comprehensive simulation-based model for the problem, and a prototype that allows comparison of these approaches. Our investigations demonstrate that a self-adaptive dynamic approach far outperforms any static scheme, and that using a mathematical model to produce the configurations themselves confers several advantages.

  4. Flexible power and bandwidth allocation in mobile satellites

    Science.gov (United States)

    Keyes, L. A.

    The introduction of L-band mobile communication services by spot beam satellites creates a payload design challenge due to uncertainty in the location and size of the new market to be served. A combination of payload technologies that allow a flexible allocation of power and bandwidth to any portion of the coverage area is described. Power flexibility is achieved by a novel combination of a low-level beam-forming network and a matrix power module which ensures equal sharing of power among individual amplifiers. This eliminates the loss of efficiency and increased mass when an amplifier associated with a beam must be over-designed to meet uncertainties in power distribution between beams. Flexibility in allocation of bandwidth to beams is achieved by intermediate frequency subdivision of the L-band service categories defined by ITU. These spectral subdivisions are assigned to beams by an IF interconnect matrix having beam ports and filter ports as inputs and outputs, respectively. Two such filter switch matrices are required, one for the inbound L-band to feeder link transponder, and one for the outbound feeder link to L-band transponder.

  5. Are methodological quality and completeness of reporting associated with citation-based measures of publication impact? A secondary analysis of a systematic review of dementia biomarker studies.

    Science.gov (United States)

    Mackinnon, Shona; Drozdowska, Bogna A; Hamilton, Michael; Noel-Storr, Anna H; McShane, Rupert; Quinn, Terry

    2018-03-22

    To determine whether methodological and reporting quality are associated with surrogate measures of publication impact in the field of dementia biomarker studies. We assessed dementia biomarker studies included in a previous systematic review in terms of methodological and reporting quality using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) and Standards for Reporting of Diagnostic Accuracy (STARD), respectively. We extracted additional study and journal-related data from each publication to account for factors shown to be associated with impact in previous research. We explored associations between potential determinants and measures of publication impact in univariable and stepwise multivariable linear regression analyses. We aimed to collect data on four measures of publication impact: two traditional measures (average number of citations per year and 5-year impact factor of the publishing journal) and two alternative measures (the Altmetric Attention Score and counts of electronic downloads). The systematic review included 142 studies. Due to limited data, Altmetric Attention Scores and electronic downloads were excluded from the analysis, leaving traditional metrics as the only analysed outcome measures. We found no relationship between QUADAS and traditional metrics. Citation rates were independently associated with 5-year journal impact factor (β=0.42; p<…), and completeness of reporting was associated with citation rates (β=0.45; p<…). Citation rates and 5-year journal impact factor appear to measure different dimensions of impact. Citation rates were weakly associated with completeness of reporting, while neither traditional metric was related to methodological rigour. Our results suggest that high publication usage and journal outlet are not a guarantee of quality, and readers should critically appraise all papers regardless of presumed impact.

  6. A fast hybrid methodology based on machine learning, quantum methods, and experimental measurements for evaluating material properties

    Science.gov (United States)

    Kong, Chang Sun; Haverty, Michael; Simka, Harsono; Shankar, Sadasivan; Rajan, Krishna

    2017-09-01

    We present a hybrid approach based on both machine learning and targeted ab-initio calculations to determine adhesion energies between dissimilar materials. The goals of this approach are to complement experimental and/or all-ab-initio computational efforts, to identify promising materials rapidly, and to identify in a quantitative manner the relative contributions of the different material attributes affecting adhesion. Applications of the methodology to predict the bulk modulus, yield strength, adhesion and wetting properties of copper (Cu) with other materials, including metals, nitrides and oxides, are discussed in this paper. In the machine learning component of this methodology, the parameters that were chosen can be roughly divided into four types: atomic and crystalline parameters (which are related to specific elements, such as electronegativities and electron densities in Wigner-Seitz cells); bulk material properties (e.g. melting point); mechanical properties (e.g. modulus); and those representing atomic characteristics in ab-initio formalisms (e.g. pseudopotentials). The atomic parameters are defined over one dataset to determine property correlation with published experimental data. We then develop a semi-empirical model across multiple datasets to predict adhesion in material interfaces outside the original datasets. Since adhesion is between two materials, we appropriately use parameters which indicate differences between the elements that comprise the materials. These semi-empirical predictions agree reasonably with the trend in chemical work of adhesion predicted using ab-initio techniques and are used for fast materials screening. For the screened candidates, the ab-initio modeling component provides fundamental understanding of the chemical interactions at the interface, and explains the wetting thermodynamics of thin Cu layers on various substrates. Comparison against ultra-high vacuum (UHV) experiments for well-characterized Cu/Ta and Cu/α-Al2O3 interfaces is

  7. Atmospheric aerosol in an urban area: Comparison of measurement instruments and methodologies and pulmonary deposition assessment

    Energy Technology Data Exchange (ETDEWEB)

    Berico, M; Luciani, A; Formignani, M [ENEA, Centro Ricerche Bologna (Italy). Dip. Ambiente]

    1996-07-01

    In March 1995 a measurement campaign of atmospheric aerosol in the Bologna urban area (Italy) was carried out. A transportable laboratory, set up by the ENEA (Italian National Agency for New Technologies, Energy and the Environment) Environment Department (Bologna), was utilized, with instruments for the measurement of atmospheric aerosol and meteorological parameters. The campaign had a dual purpose: to characterize aerosol in an urban area and to compare different measurement instruments and methodologies. Mass concentration measurements, evaluated over a 23-hour period with a total filter, a PM10 dichotomous sampler and a low-pressure impactor (LPI Berner), provided information about total suspended particles, the respirable fraction and the granulometric parameters of the aerosol, respectively. Eight meteorological parameters, the number concentration of the submicron fraction of the aerosol and the mass concentration of the micron fraction were measured continuously. Then, during a daytime period, several number granulometries of atmospheric aerosol were also estimated by means of a diffusion battery system. Results related to the different measurement methodologies and the granulometric characteristics of the aerosol are presented here. Pulmonary deposition of atmospheric aerosol is finally calculated, using the granulometries provided by the LPI Berner and the ICRP 66 human respiratory tract model.

  8. Exploit the Bandwidth Capacities of the Perfluorinated Graded Index Polymer Optical Fiber for Multi-Services Distribution

    Directory of Open Access Journals (Sweden)

    Paul Alain Rolland

    2011-06-01

    The study reported here deals with the exploitation of the perfluorinated graded-index polymer optical fiber bandwidth to add further services to a home/office network. The fiber properties are examined in order to check whether perfluorinated graded-index plastic optical fiber (PFGI-POF) is suitable to support a multiplexed transmission. Given the high bandwidth-length product of plastic fibers, both at 850 nm and 1,300 nm, an extension of the classical existing baseband network is proposed to achieve a dual concept, allowing the indoor coverage of wireless signals transmitted using Radio over Fiber technology. The simultaneous transmission of a 10 GbE signal and a wireless signal is carried out at 850 nm and 1,300 nm, respectively, on a single plastic fiber using commercially available wavelength division multiplexing devices. The penalties have been evaluated both in the digital (Bit Error Rate measurement) and radiofrequency (Error Vector Magnitude measurement) domains.

  9. Methodology for assessing the probability of corrosion in concrete structures on the basis of half-cell potential and concrete resistivity measurements.

    Science.gov (United States)

    Sadowski, Lukasz

    2013-01-01

    In recent years, the corrosion of steel reinforcement has become a major problem in the construction industry. Therefore, much attention has been given to developing methods of predicting the service life of reinforced concrete structures. The progress of corrosion cannot be visually assessed until a crack or a delamination appears. The corrosion process can be tracked using several electrochemical techniques. Most commonly, the half-cell potential measurement technique is used for this purpose. However, it is generally accepted that it should be supplemented with other techniques. Hence, a methodology for assessing the probability of corrosion in concrete slabs by means of a combination of two methods, that is, the half-cell potential method and the concrete resistivity method, is proposed. An assessment of the probability of corrosion in reinforced concrete structures carried out using the proposed methodology is presented. 200 mm thick, 750 mm × 750 mm reinforced concrete slab specimens were investigated. The potential E_corr and the concrete resistivity ρ were measured at each point of the applied grid. The experimental results indicate that the proposed methodology can be successfully used to assess the probability of corrosion in concrete structures.
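
    A sketch of how the two readings might be combined, using the commonly cited ASTM C876 potential bands (vs. a copper/copper-sulfate electrode) and typical resistivity bands; the paper's own classification grid may differ, so treat the thresholds as assumptions.

        def corrosion_assessment(potential_mv_cse, resistivity_kohm_cm):
            """Classify corrosion probability from half-cell potential (mV vs. CSE)
            and corrosion-rate likelihood from concrete resistivity (kOhm.cm)."""
            if potential_mv_cse > -200:               # ASTM C876 bands (assumed)
                p_corrosion = "low (<10%)"
            elif potential_mv_cse >= -350:
                p_corrosion = "uncertain"
            else:
                p_corrosion = "high (>90%)"

            if resistivity_kohm_cm > 20:              # typical bands (assumed)
                rate = "low corrosion rate likely"
            elif resistivity_kohm_cm >= 10:
                rate = "moderate"
            else:
                rate = "high"
            return p_corrosion, rate

        print(corrosion_assessment(-380.0, 8.5))      # ('high (>90%)', 'high')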

  10. Methodology for Assessing the Probability of Corrosion in Concrete Structures on the Basis of Half-Cell Potential and Concrete Resistivity Measurements

    Directory of Open Access Journals (Sweden)

    Lukasz Sadowski

    2013-01-01

    In recent years, the corrosion of steel reinforcement has become a major problem in the construction industry. Therefore, much attention has been given to developing methods of predicting the service life of reinforced concrete structures. The progress of corrosion cannot be visually assessed until a crack or a delamination appears. The corrosion process can be tracked using several electrochemical techniques. Most commonly, the half-cell potential measurement technique is used for this purpose. However, it is generally accepted that it should be supplemented with other techniques. Hence, a methodology for assessing the probability of corrosion in concrete slabs by means of a combination of two methods, that is, the half-cell potential method and the concrete resistivity method, is proposed. An assessment of the probability of corrosion in reinforced concrete structures carried out using the proposed methodology is presented. 200 mm thick, 750 mm × 750 mm reinforced concrete slab specimens were investigated. The potential E_corr and the concrete resistivity ρ were measured at each point of the applied grid. The experimental results indicate that the proposed methodology can be successfully used to assess the probability of corrosion in concrete structures.

  11. High Bandwidth Optical Links for Micro-Satellite Support

    Science.gov (United States)

    Chao, Tien-Hsin (Inventor); Wilson, Keith E. (Inventor); Coste, Keith (Inventor)

    2016-01-01

    A method, systems, apparatus and device enable high bandwidth satellite communications. An onboard tracking detector, installed in a low-earth orbit satellite, detects a position of an incoming optical beam received/transmitted from a first ground station of one or more ground stations. Tracker electronics determine orientation information of the incoming optical beam based on the position. Control electronics receive the orientation information from the tracker electronics, and control a waveguide drive electronics. The waveguide drive electronics control a voltage that is provided to an electro-optic waveguide beam steering device. The electro-optic waveguide beam steering device steers an outgoing optical beam to one of the one or more ground stations based on the voltage.

  12. Bandwidth Efficient Hybrid Synchronization for Wireless Sensor Network

    DEFF Research Database (Denmark)

    Dnyaneshwar, Mantri; Prasad, Neeli R.; Prasad, Ramjee

    2015-01-01

    Data collection and transmission are the fundamental operations of Wireless Sensor Networks (WSNs). A key challenge in effective data collection and transmission is to schedule and synchronize the activities of the nodes with the global clock. This paper proposes the Bandwidth Efficient Hybrid … in the network and then performs pair-wise synchronization. With node mobility, the structure changes frequently, causing an increase in energy consumption. To mitigate the problem, BESDA aggregates data with the notion of a global timescale throughout the network and uses schedule-based time-division multiple-access (TDMA) techniques as the MAC-layer protocol, reducing packet collisions. Simulation results show that BESDA is energy efficient, with increased throughput, and has less delay compared with the state of the art…

  13. Graphene metascreen for designing compact infrared absorbers with enhanced bandwidth

    KAUST Repository

    Chen, Pai-Yen; Farhat, Mohamed; Bagci, Hakan

    2015-01-01

    We propose a compact, wideband terahertz and infrared absorber, comprising a patterned graphene sheet on a thin metal-backed dielectric slab. This graphene-based nanostructure can achieve a low or negative effective permeability, necessary for realizing the perfect absorption. The dual-reactive property found in both the plasmonic graphene sheet and the grounded high-permittivity slab introduces extra poles into the equivalent circuit model of the system, thereby resulting in a dual-band or broadband magnetic resonance that enhances the absorption bandwidth. More interestingly, the two-dimensional patterned graphene sheet significantly simplifies the design and fabrication processes for achieving resonant magnetic response, and allows the frequency-reconfigurable operation via electrostatic gating.

  14. A Hybrid ACO Approach to the Matrix Bandwidth Minimization Problem

    Science.gov (United States)

    Pintea, Camelia-M.; Crişan, Gloria-Cerasela; Chira, Camelia

    The evolution of human society raises ever more difficult endeavors. For some real-life problems, computing-time restrictions add to their complexity. The Matrix Bandwidth Minimization Problem (MBMP) seeks a simultaneous permutation of the rows and columns of a square matrix in order to keep its nonzero entries close to the main diagonal. The MBMP is a highly investigated NP-complete problem, as it has broad applications in industry, logistics, artificial intelligence and information recovery. This paper describes a new attempt to use the Ant Colony Optimization framework in tackling the MBMP. The introduced model is based on the hybridization of the Ant Colony System technique with new local search mechanisms. Computational experiments confirm a good performance of the proposed algorithm for the considered set of MBMP instances.
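
    The objective the ants minimize is easy to state. A minimal sketch of evaluating a candidate row/column permutation (toy matrix, not one of the benchmark instances):

        import numpy as np

        def matrix_bandwidth(a, perm=None):
            """Bandwidth of a under a simultaneous row/column permutation."""
            if perm is not None:
                a = a[np.ix_(perm, perm)]
            rows, cols = np.nonzero(a)
            return int(np.abs(rows - cols).max())

        a = np.array([[1, 0, 0, 1],
                      [0, 1, 1, 0],
                      [0, 1, 1, 0],
                      [1, 0, 0, 1]])
        print(matrix_bandwidth(a))                     # 3
        print(matrix_bandwidth(a, perm=[0, 3, 1, 2]))  # 1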

  15. Narrow bandwidth detection of vibration signature using fiber lasers

    Science.gov (United States)

    Moore, Sean; Soh, Daniel B.S.

    2018-05-08

    The various technologies presented herein relate to extracting a portion of each pulse in a series of pulses reflected from a target to facilitate determination of a Doppler-shifted frequency for each pulse and, subsequently, a vibration frequency for the series of pulses. Each pulse can have a square-wave configuration, whereby each pulse can be time-gated to facilitate discarding the leading edge and the trailing edge (and associated non-linear effects) of each pulse and accordingly, capture of the central portion of the pulse from which the Doppler-shifted frequency, and ultimately, the vibration frequency of the target can be determined. Determination of the vibration velocity facilitates identification of the target being in a state of motion. The plurality of pulses can be formed from a laser beam (e.g., a continuous wave), the laser beam having a narrow bandwidth.
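
    As a rough software illustration of the gating idea (the patent describes hardware processing; the pulse parameters here are hypothetical):

        import numpy as np

        def gated_doppler(pulse_train, fs, gate=(0.25, 0.75)):
            """Keep the central portion of each pulse, discarding edge transients,
            and estimate the dominant (Doppler-shifted) frequency of each pulse."""
            freqs = []
            for pulse in pulse_train:
                n = len(pulse)
                core = pulse[int(gate[0] * n):int(gate[1] * n)]    # time gate
                spec = np.abs(np.fft.rfft(core * np.hanning(len(core))))
                freqs.append(np.fft.rfftfreq(len(core), 1 / fs)[spec.argmax()])
            return freqs          # pulse-to-pulse variation of these reveals vibration

        fs = 1e5
        t = np.arange(int(fs)) / fs                    # 1-s pulses, 10 kHz carrier
        pulses = [np.sin(2 * np.pi * (10e3 + d) * t) for d in (200.0, -150.0, 90.0)]
        print(gated_doppler(pulses, fs))               # ~[10200.0, 9850.0, 10090.0]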

  16. Graphene metascreen for designing compact infrared absorbers with enhanced bandwidth

    KAUST Repository

    Chen, Pai-Yen

    2015-03-31

    We propose a compact, wideband terahertz and infrared absorber, comprising a patterned graphene sheet on a thin metal-backed dielectric slab. This graphene-based nanostructure can achieve a low or negative effective permeability, necessary for realizing the perfect absorption. The dual-reactive property found in both the plasmonic graphene sheet and the grounded high-permittivity slab introduces extra poles into the equivalent circuit model of the system, thereby resulting in a dual-band or broadband magnetic resonance that enhances the absorption bandwidth. More interestingly, the two-dimensional patterned graphene sheet significantly simplifies the design and fabrication processes for achieving resonant magnetic response, and allows the frequency-reconfigurable operation via electrostatic gating.

  17. Auction-based bandwidth allocation in the Internet

    Science.gov (United States)

    Wei, Jiaolong; Zhang, Chi

    2002-07-01

    It has been widely accepted that auctioning, the pricing approach with minimal information requirements, is a proper tool to manage scarce network resources. Previous works focus on the Vickrey auction, which is incentive compatible in classic auction theory. In the beginning of this paper, the faults of the most representative auction-based mechanisms are discussed. Then a new method called the uniform-price auction (UPA), which has the simplest auction rule, is proposed, and its incentive compatibility in the network environment is also proved. Finally, the basic model is extended to support applications which require minimum bandwidth guarantees for a given time period by introducing a derivative market, completing a market mechanism for network resource allocation that is predictable, riskless, and simple for end-users.
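
    The abstract does not give the UPA rule in detail, so the following is a minimal sketch of one common uniform-price variant, pay-as-clear with the clearing price set by the lowest accepted bid; the bid figures are hypothetical.

        def uniform_price_auction(bids, capacity):
            """Allocate `capacity` bandwidth units among (price, quantity) bids;
            every winner pays the same clearing price."""
            bids = sorted(bids, key=lambda b: b[0], reverse=True)
            remaining, winners, clearing = capacity, [], 0.0
            for price, qty in bids:
                if remaining <= 0:
                    break
                granted = min(qty, remaining)
                winners.append((price, granted))
                clearing = price                       # lowest accepted bid clears
                remaining -= granted
            return clearing, winners

        print(uniform_price_auction([(5, 40), (4, 30), (3, 50), (2, 20)], capacity=100))
        # -> (3, [(5, 40), (4, 30), (3, 30)])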

  18. Deep neural network-based bandwidth enhancement of photoacoustic data.

    Science.gov (United States)

    Gutta, Sreedevi; Kadimesetty, Venkata Suryanarayana; Kalva, Sandeep Kumar; Pramanik, Manojit; Ganapathy, Sriram; Yalavarthy, Phaneendra K

    2017-11-01

    Photoacoustic (PA) signals collected at the boundary of tissue are always band-limited. A deep neural network was proposed to enhance the bandwidth (BW) of the detected PA signal, thereby improving the quantitative accuracy of the reconstructed PA images. A least-squares deconvolution method that utilizes the Tikhonov regularization framework was used for comparison with the proposed network. The proposed method was evaluated using both numerical and experimental data. The results indicate that the proposed method was capable of enhancing the BW of the detected PA signal, which in turn improves the contrast recovery and quality of reconstructed PA images without adding any significant computational burden.
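
    The comparison method, frequency-domain Tikhonov-regularized deconvolution, fits in a few lines; the impulse response and signal below are synthetic stand-ins, not the paper's data.

        import numpy as np

        def tikhonov_deconvolve(y, h, lam=1e-2):
            """Recover a broadband signal x from the band-limited record y = h * x
            with a regularized inverse filter X = conj(H).Y / (|H|^2 + lambda)."""
            n = len(y)
            Y = np.fft.fft(y)
            H = np.fft.fft(h, n)
            X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
            return np.real(np.fft.ifft(X))

        h = np.exp(-np.linspace(-3, 3, 64) ** 2)       # band-limited transducer response
        x = np.zeros(256)
        x[100] = 1.0                                   # ideal broadband PA event
        y = np.convolve(x, h)[:len(x)]                 # detected, band-limited signal
        print(tikhonov_deconvolve(y, h).argmax())      # ~100: event location recovered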

  19. Bandwidth Reservation Using Velocity and Handoff Statistics for Cellular Networks

    Institute of Scientific and Technical Information of China (English)

    Chuan-Lin Zhang; Kam Yiu Lam; Wei-Jia Jia

    2006-01-01

    The percentages of blocking and forced termination rates, as parameters representing quality of service (QoS) requirements, are presented. The relation between the connection statistics of mobile users in a cell and the number of handoffs and new calls in each cell over the next interval is explored. Based on this relation, statistical reservation tactics are proposed. The amount of bandwidth for new calls and handoffs in each cell over the next period is determined using this strategy. Using this method guarantees that the communication system adapts to mobile connection request dynamics. The QoS parameters, forced termination rate and blocking rate, can be maintained steadily even though they may change with the offered load. Numerical experiments demonstrate that this is a practical method with affordable overhead.

  20. Fibre Bragg grating based accelerometer with extended bandwidth

    International Nuclear Information System (INIS)

    Basumallick, Nandini; Biswas, Palas; Dasgupta, Kamal; Bandyopadhyay, Somnath; Chakraborty, Rajib; Chakraborty, Sushanta

    2016-01-01

    We have shown experimentally that the operable bandwidth of a fibre Bragg grating (FBG) based accelerometer can be extended significantly, without compromising its sensitivity, using a post-signal processing technique which involves frequency domain weighting. It has been demonstrated that using the above technique acceleration can be correctly interpreted even when the operating frequency encroaches on the region where the frequency response of the sensor is non-uniform. Two different excitation signals, which we often encounter in structural health monitoring applications, e.g. (i) a signal composed of multi-frequency components and (ii) a sinusoidal excitation with a frequency sweep, have been considered in our experiment. The results obtained have been compared with a piezo accelerometer.
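
    One plausible reading of the frequency-domain weighting, sketched under the assumption that the sensor behaves as a single-degree-of-freedom resonator with known natural frequency and damping (both values hypothetical): divide the record's spectrum by the resonator's magnitude response.

        import numpy as np

        def flatten_response(signal, fs, f_n, zeta=0.02):
            """Divide the spectrum by the SDOF magnitude response so the
            record is usable even near the sensor's resonance."""
            n = len(signal)
            f = np.fft.rfftfreq(n, d=1.0 / fs)
            r = f / f_n                                # frequency ratio
            h = 1.0 / np.sqrt((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2)
            return np.fft.irfft(np.fft.rfft(signal) / h, n)

        fs, f_n = 5000.0, 800.0                        # sample rate, resonance (Hz)
        t = np.arange(0, 1.0, 1.0 / fs)
        r = 600.0 / f_n                                # excitation near resonance
        gain = 1.0 / np.sqrt((1 - r ** 2) ** 2 + (2 * 0.02 * r) ** 2)
        measured = gain * np.sin(2 * np.pi * 600 * t)  # sensor exaggerates the amplitude
        corrected = flatten_response(measured, fs, f_n)
        print(np.round(measured.max(), 2), np.round(corrected.max(), 2))  # ~2.28 vs ~1.0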

  1. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  2. A methodology to determine margins by EPID measurements of patient setup variation and motion as applied to immobilization devices

    International Nuclear Information System (INIS)

    Prisciandaro, Joann I.; Frechette, Christina M.; Herman, Michael G.; Brown, Paul D.; Garces, Yolanda I.; Foote, Robert L.

    2004-01-01

    Assessment of clinic- and site-specific margins is essential for the effective use of three-dimensional and intensity-modulated radiation therapy. An electronic portal imaging device (EPID) based methodology is introduced which allows individual and population-based CTV-to-PTV margins to be determined and compared with traditional margins prescribed during treatment. This method was applied to a patient cohort receiving external beam head and neck radiotherapy under an IRB-approved protocol. Although the full study involved the use of an EPID-based method to assess the impact of (1) simulation technique, (2) immobilization, and (3) surgical intervention on inter- and intrafraction variations of individual and population-based CTV-to-PTV margins, the focus of this paper is on the technique. As an illustration, the methodology is utilized to examine the influence of two immobilization devices, the UON™ thermoplastic mask and the Type-S™ head/neck shoulder immobilization system, on margins. Daily through-port images were acquired for selected fields for each patient with an EPID. To analyze these images, simulation films or digitally reconstructed radiographs (DRRs) were imported into the EPID software. Up to five anatomical landmarks were identified and outlined by the clinician, and up to three of these structures were matched for each reference image. Once the individual-based errors were quantified, the patient results were grouped into populations by matched anatomical structures and immobilization device. The variation within each subgroup was quantified by calculating the systematic and random errors (Σ_sub and σ_sub). Individual patient margins were approximated as 1.65 times the individual-based random error and ranged from 1.1 to 6.3 mm (A-P) and 1.1 to 12.3 mm (S-I) for fields matched on skull and cervical structures, and 1.7 to 10.2 mm (L-R) and 2.0 to 13.8 mm (S-I) for supraclavicular fields. Population-based margins ranging from 5.1 to 6.6 mm (A
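
    The error statistics behind these margins follow directly from the daily EPID displacements. A minimal sketch using the paper's 1.65 × (random error) rule for individual margins (the displacement values are hypothetical):

        import numpy as np

        def setup_error_stats(displacements_by_patient):
            """Systematic error (SD of per-patient means), random error (RMS of
            per-patient SDs), and 1.65*sigma individual margins, all in mm."""
            means = [np.mean(d) for d in displacements_by_patient]
            sds = [np.std(d, ddof=1) for d in displacements_by_patient]
            systematic = np.std(means, ddof=1)              # patient-to-patient
            random_err = np.sqrt(np.mean(np.square(sds)))   # day-to-day
            margins = [1.65 * s for s in sds]               # per-patient margins
            return systematic, random_err, margins

        # Hypothetical A-P displacements (mm) for three patients
        data = [[1.0, -0.5, 2.0, 0.5], [-1.5, -2.0, -0.5, -1.0], [0.0, 1.5, 2.5, 1.0]]
        print(setup_error_stats(data))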

  3. Methodological Issues in the Validation of Implicit Measures: Comment on De Houwer, Teige-Mocigemba, Spruyt, and Moors (2009)

    Science.gov (United States)

    Gawronski, Bertram; LeBel, Etienne P.; Peters, Kurt R.; Banse, Rainer

    2009-01-01

    J. De Houwer, S. Teige-Mocigemba, A. Spruyt, and A. Moors's normative analysis of implicit measures provides an excellent clarification of several conceptual ambiguities surrounding the validation and use of implicit measures. The current comment discusses an important, yet unacknowledged, implication of J. De Houwer et al.'s analysis, namely,…

  4. Testing methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Bender, M.A.

    1990-01-01

    Several methodologies are available for screening human populations for exposure to ionizing radiation. Of these, aberration frequency determined in peripheral blood lymphocytes is the best developed. Individual exposures to large doses can easily be quantitated, and population exposures to occupational levels can be detected. However, determination of exposures to the very low doses anticipated from a low-level radioactive waste disposal site is more problematical. Aberrations occur spontaneously, without known cause. Exposure to radiation induces no new or novel types, but only increases their frequency. The limitations of chromosomal aberration dosimetry for detecting low-level radiation exposures lie mainly in the statistical "signal to noise" problem, the distribution of aberrations among cells and among individuals, and the possible induction of aberrations by other environmental, occupational or medical exposures. However, certain features of the human peripheral lymphocyte-chromosomal aberration system make it useful in screening for certain types of exposures. Future technical developments may make chromosomal aberration dosimetry more useful for low-level radiation exposures. Other methods, measuring gene mutations or even minute changes at the DNA level, while presently less well developed techniques, may eventually become even more practical and sensitive assays for human radiation exposure. 15 refs.

  5. Testing methodologies

    International Nuclear Information System (INIS)

    Bender, M.A.

    1990-01-01

    Several methodologies are available for screening human populations for exposure to ionizing radiation. Of these, aberration frequency determined in peripheral blood lymphocytes is the best developed. Individual exposures to large doses can easily be quantitated, and population exposures to occupational levels can be detected. However, determination of exposures to the very low doses anticipated from a low-level radioactive waste disposal site is more problematical. Aberrations occur spontaneously, without known cause. Exposure to radiation induces no new or novel types, but only increases their frequency. The limitations of chromosomal aberration dosimetry for detecting low-level radiation exposures lie mainly in the statistical "signal to noise" problem, the distribution of aberrations among cells and among individuals, and the possible induction of aberrations by other environmental, occupational or medical exposures. However, certain features of the human peripheral lymphocyte-chromosomal aberration system make it useful in screening for certain types of exposures. Future technical developments may make chromosomal aberration dosimetry more useful for low-level radiation exposures. Other methods, measuring gene mutations or even minute changes at the DNA level, while presently less well developed techniques, may eventually become even more practical and sensitive assays for human radiation exposure. 15 refs

  6. Dynamic Bandwidth Allocation with Effective Utilization of Polling Interval over WDM/TDM PON

    Science.gov (United States)

    Ni, Cuiping; Gan, Chaoqin; Gao, Ziyue

    2014-12-01

    WDM/TDM (wavelength-division multiplexing/time-division multiplexing) PON (passive optical network) appears to be an attractive solution for next-generation optical access networks. Dynamic bandwidth allocation (DBA) plays a crucial role in efficiently and fairly allocating the bandwidth among all users in WDM/TDM PON. In this paper, two dynamic bandwidth allocation schemes (DBA1 and DBA2) are proposed to eliminate the idle time of polling cycles (i.e., the polling interval), improve bandwidth utilization and make full use of bandwidth resources. The two DBA schemes adjust the time slot for sending request information and schedule fairly among users to achieve effective utilization of the polling interval in WDM/TDM PON. Simulation and theoretical analyses verify that the proposed schemes outperform the conventional DBA scheme. We also compare the two schemes in terms of bandwidth utilization and average packet delay to further demonstrate the effectiveness of DBA2.
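    The abstract does not give the grant algorithm itself, so the following is only a minimal sketch of the limited-service grant computation that such DBA schemes build on; the function name, the cap b_max and all byte figures are hypothetical, not taken from the paper.

        def compute_grants(requests, capacity, b_max):
            """Limited-service DBA: each ONU is granted its request capped at
            b_max; leftover capacity is redistributed to still-backlogged ONUs
            so that no part of the polling cycle is left idle.

            requests : {onu_id: queued bytes reported in the last REPORT}
            capacity : total bytes schedulable in one polling cycle
            b_max    : fairness cap per ONU per cycle
            """
            grants = {onu: min(req, b_max) for onu, req in requests.items()}
            leftover = capacity - sum(grants.values())
            # Redistribute spare capacity to ONUs whose demand exceeded the cap,
            # largest deficit first.
            backlogged = {o: requests[o] - grants[o]
                          for o in requests if requests[o] > grants[o]}
            for onu, deficit in sorted(backlogged.items(), key=lambda kv: -kv[1]):
                if leftover <= 0:
                    break
                extra = min(deficit, leftover)
                grants[onu] += extra
                leftover -= extra
            return grants

        print(compute_grants({"onu1": 5000, "onu2": 900, "onu3": 3000},
                             capacity=6000, b_max=2000))
        # {'onu1': 3100, 'onu2': 900, 'onu3': 2000}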

  7. Measuring performance in off-patent drug markets: a methodological framework and empirical evidence from twelve EU Member States.

    Science.gov (United States)

    Kanavos, Panos

    2014-11-01

    This paper develops a methodological framework to help evaluate the performance of generic pharmaceutical policies post-patent expiry or after loss of exclusivity in non-tendering settings, comprising five indicators (generic availability, time delay to and speed of generic entry, number of generic competitors, price developments, and generic volume share evolution), and proposes a series of metrics to evaluate performance. The paper subsequently tests this framework across twelve EU Member States (MS) by using IMS data on 101 patent-expired molecules over the 1998-2010 period. Results indicate that significant variation exists in generic market entry, price competition and generic penetration across the study countries. Size of a geographical market is not a predictor of generic market entry intensity or price decline. Regardless of geographic or product market size, many off-patent molecules lack generic competitors two years after loss of exclusivity. The ranges in each of the five proposed indicators suggest, first, that there are numerous factors, including institutional ones, contributing to the success of generic entry, price decline and market penetration and, second, that MS should seek a combination of supply- and demand-side policies in order to maximise cost savings from generics. Overall, there seems to be considerable potential for faster generic entry, uptake and greater generic competition, particularly for molecules at the lower end of the market. Copyright © 2014. Published by Elsevier Ireland Ltd.
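    As a sketch of how such indicators can be computed from a molecule-level panel, the snippet below derives the five proposed measures with pandas. The file name and every column name are hypothetical (the paper's IMS dataset is proprietary), so this is an illustration of the bookkeeping, not the paper's own code.

        import pandas as pd

        # Hypothetical tidy panel: one row per molecule x country x month.
        # Assumed columns: molecule, country, month, loe_date (loss of
        # exclusivity), n_generics, price_index, generic_volume_share.
        df = pd.read_csv("offpatent_panel.csv", parse_dates=["month", "loe_date"])
        df = df.sort_values("month")

        def indicators(g):
            loe = g.loe_date.iloc[0]
            post = g[g.month >= loe]                     # after loss of exclusivity
            entered = post[post.n_generics > 0]
            first_24m = post[post.month <= loe + pd.DateOffset(years=2)]
            return pd.Series({
                "generic_available": not entered.empty,                      # indicator 1
                "entry_delay_months": None if entered.empty else
                    (entered.month.min() - loe).days / 30.44,                # indicator 2
                "competitors_24m": first_24m.n_generics.max(),               # indicator 3
                "price_drop_24m": None if first_24m.empty else
                    1 - first_24m.price_index.iloc[-1] / post.price_index.iloc[0],  # 4
                "generic_share_24m": None if first_24m.empty else
                    first_24m.generic_volume_share.iloc[-1],                 # indicator 5
            })

        summary = df.groupby(["molecule", "country"]).apply(indicators)
        print(summary.head())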

  8. A new methodology for non-contact accurate crack width measurement through photogrammetry for automated structural safety evaluation

    International Nuclear Information System (INIS)

    Jahanshahi, Mohammad R; Masri, Sami F

    2013-01-01

    In mechanical, aerospace and civil structures, cracks are important defects that can cause catastrophes if neglected. Visual inspection is currently the predominant method for crack assessment. This approach is tedious, labor-intensive, subjective and highly qualitative. An inexpensive alternative to current monitoring methods is to use a robotic system that could perform autonomous crack detection and quantification. To reach this goal, several image-based crack detection approaches have been developed; however, the crack thickness quantification, which is an essential element for a reliable structural condition assessment, has not been sufficiently investigated. In this paper, a new contact-less crack quantification methodology, based on computer vision and image processing concepts, is introduced and evaluated against a crack quantification approach which was previously developed by the authors. The proposed approach in this study utilizes depth perception to quantify crack thickness and, as opposed to most previous studies, needs no scale attachment to the region under inspection, which makes this approach ideal for incorporation with autonomous or semi-autonomous mobile inspection systems. Validation tests are performed to evaluate the performance of the proposed approach, and the results show that the new proposed approach outperforms the previously developed one. (paper)
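    In the simplest pinhole-camera reading, the depth-perception step reduces to scaling the measured pixel count by the working distance. The helper below is a sketch of that conversion only; the function and parameter names are ours, and the paper's actual calibration pipeline is more involved.

        def crack_thickness_mm(n_pixels, depth_mm, focal_length_mm, pixel_pitch_mm):
            """Convert a crack width measured in image pixels to physical units
            with the pinhole-camera model: one pixel subtends
            (pixel_pitch / focal_length) radians, so at working distance
            depth_mm it covers depth * pitch / focal_length millimetres.
            """
            return n_pixels * depth_mm * pixel_pitch_mm / focal_length_mm

        # Example: a 4 px wide crack, 500 mm away, 8 mm lens, 3.45 um pixels
        print(f"{crack_thickness_mm(4, 500.0, 8.0, 0.00345):.3f} mm")  # ~0.863 mm

    The same relation shows why no scale attachment is needed: the depth estimate replaces the reference object that contact methods rely on.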

  9. Addressing the “It Is Just Placebo” Pitfall in CAM: Methodology of a Project to Develop Patient-Reported Measures of Nonspecific Factors in Healing

    Directory of Open Access Journals (Sweden)

    Carol M. Greco

    2013-01-01

    Full Text Available CAM therapies are often dismissed as “no better than placebo”; however, this belief may be overcome through careful analysis of nonspecific factors in healing. To improve trial methodology, we propose that CAM (and conventional) RCTs should evaluate and adjust for the effects of intrapersonal, interpersonal, and environmental factors on outcomes. However, measurement of these is challenging, and there are no brief, precise instruments that are suitable for widespread use in trials and clinical settings. This paper describes the methodology of a project to develop a set of patient-reported instruments that will quantify the nonspecific or “placebo” effects that are in fact specific and active ingredients in healing. The project uses the rigorous instrument-development methodology of the NIH-PROMIS initiative. The methods include (1) integration of patients’ and clinicians’ opinions with existing literature; (2) development of relevant items; (3) calibration of items on large samples; (4) classical test theory and modern psychometric methods to select the most useful items; (5) development of computerized adaptive tests (CATs) that maximize information while minimizing patient burden; and (6) initial validation studies. The instruments will have the potential to revolutionize clinical trials in both CAM and conventional medicine through quantifying contextual factors that contribute to healing.

  10. Measuring alterations in oscillatory brain networks in schizophrenia with resting-state MEG: State-of-the-art and methodological challenges.

    Science.gov (United States)

    Alamian, Golnoush; Hincapié, Ana-Sofía; Pascarella, Annalisa; Thiery, Thomas; Combrisson, Etienne; Saive, Anne-Lise; Martel, Véronique; Althukov, Dmitrii; Haesebaert, Frédéric; Jerbi, Karim

    2017-09-01

    Neuroimaging studies provide evidence of disturbed resting-state brain networks in Schizophrenia (SZ). However, untangling the neuronal mechanisms that subserve these baseline alterations requires measurement of their electrophysiological underpinnings. This systematic review specifically investigates the contributions of resting-state Magnetoencephalography (MEG) in elucidating abnormal neural organization in SZ patients. A systematic literature review of resting-state MEG studies in SZ was conducted. This literature is discussed in relation to findings from resting-state fMRI and EEG, as well as to task-based MEG research in the SZ population. Importantly, methodological limitations are considered and recommendations to overcome current limitations are proposed. The resting-state MEG literature in SZ points towards altered local and long-range oscillatory network dynamics in various frequency bands. Critical methodological challenges with respect to experiment design, data collection and analysis need to be taken into consideration. Spontaneous MEG data show that local and global neural organization is altered in SZ patients. MEG is a highly promising tool to fill in knowledge gaps about the neurophysiology of SZ. However, to reach its fullest potential, basic methodological challenges need to be overcome. MEG-based resting-state power and connectivity findings could be great assets to clinical and translational research in psychiatry, and SZ in particular. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  11. LOFT uncertainty-analysis methodology

    International Nuclear Information System (INIS)

    Lassahn, G.D.

    1983-01-01

    The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear-reactor-safety research program is described and compared with other methodologies established for performing uncertainty analyses.

  12. LOFT uncertainty-analysis methodology

    International Nuclear Information System (INIS)

    Lassahn, G.D.

    1983-01-01

    The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear reactor safety research program is described and compared with other methodologies established for performing uncertainty analyses.

  13. Methodology and experimental setup for measuring short-lived fission product yields in actinide fission induced by charged particles

    International Nuclear Information System (INIS)

    Bellido, A.V.

    1995-07-01

    The theoretical principles and the laboratory set-up for the fission product yield measurements are described. The procedures for the experimental determinations are explained in detail. (author). 43 refs., 5 figs

  14. A Methodology for Measuring the Physiological Strain of Enhanced Soldiers: The 1998 Soldier Combat System Enhancement Study

    National Research Council Canada - National Science Library

    Amos, Denys

    1998-01-01

    ... or enhanced capabilities conducting routine operations in the tropics. Core temperature, mean skin temperature and heart rate are appropriate measures for evaluating the physiological burden of soldier combat system enhancements...

  15. Methodological issues in the estimation of parental time – Analysis of measures in a Canadian time-use survey

    OpenAIRE

    Cara B. Fedick; Shelley Pacholok; Anne H. Gauthier

    2005-01-01

    Extensive small scale studies have documented that when people assume the role of assisting a person with impairments or an older person, care activities account for a significant portion of their daily routines. Nevertheless, little research has investigated the problem of measuring the time that carers spend in care-related activities. This paper contrasts two different measures of care time – an estimated average weekly hours question in the 1998 Australian Survey of Disability, Ageing and...

  16. Manajemen Bandwidth Simple Queue dan Queue Tree pada PT. Endorsindo Makmur Selaras

    OpenAIRE

    Budiman, Arif

    2015-01-01

    The purpose of this study is to analyze and optimize bandwidth management at PT. Endorsindo Makmur Selaras, with the expectation that bandwidth can be distributed evenly to each employee, so that employee performance and the quality of the company improve. The research methods used include analysis methods (a survey of, and interviews with users of, the currently running system) and, to optimize bandwidth management, configuring the proxy using the Queue Tree method. The re...

  17. A Reactance Compensated Three-Device Doherty Power Amplifier for Bandwidth and Back-Off Range Extension

    Directory of Open Access Journals (Sweden)

    Shichang Chen

    2018-01-01

    Full Text Available This paper proposes a new broadband Doherty power amplifier topology with an extended back-off range. A shunted λ/4 short-circuited line or λ/2 open-circuited line, working as a compensating reactance, is introduced into the conventional load modulation network, which greatly improves its bandwidth. The underlying bandwidth-extension mechanism of the proposed configuration is comprehensively analyzed. A three-device Doherty power amplifier is implemented for demonstration, based on Cree's 10 W HEMTs. Measurements show that at least 41% drain efficiency is maintained from 2.0 GHz to 2.6 GHz at 8 dB back-off. In the same operating band, saturation power is larger than 43.6 dBm and drain efficiency is higher than 53%.
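    As background, the compensation idea can be sketched with the textbook input impedance of a short-circuited stub (a generic transmission-line result, not the paper's own derivation). For a stub of characteristic impedance Z_0 cut to a quarter wavelength at the center frequency f_0,

        Z_{\mathrm{in}}(f) = j Z_0 \tan(\beta l), \qquad
        \beta l = \frac{\pi}{2}\,\frac{f}{f_0}, \qquad
        Y_{\mathrm{in}}(f) = -\frac{j}{Z_0}\,\cot\!\left(\frac{\pi}{2}\,\frac{f}{f_0}\right).

    At f = f_0 the shorted λ/4 stub (equivalently, an open λ/2 stub) looks like an open circuit and leaves the load modulation network untouched; away from f_0 it presents a frequency-dependent shunt susceptance that can be chosen, via Z_0, to counteract the dispersion of the quarter-wave load modulation network, which is the sense in which it acts as a compensating reactance.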

  18. Research on sorption behavior of radionuclides under shallow land environment. Mechanism and standard methodologies for measurement of distribution coefficients of radionuclides

    International Nuclear Information System (INIS)

    Sakamoto, Yoshiaki; Tanaka, Tadao; Takebe, Shinichi; Nagao, Seiya; Ogawa, Hiromichi; Komiya, Tomokazu; Hagiwara, Shigeru

    2001-01-01

    This study consists of two categories of research work. One is research on the sorption mechanism of long half-life radionuclides (Technetium-99, TRU elements and U-series radionuclides) on soil and rocks, including the development of a database of radionuclide distribution coefficients. The database of distribution coefficients, with information about measurement conditions such as shaking method, soil characteristics and solution composition, has already been opened to the public (JAERI-DATABASE 20001003). The other is an investigation of a standard methodology for measuring the distribution coefficient of radionuclides on soils, rocks and engineering materials in Japan. (author)
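    For reference, the distribution coefficient tabulated in such databases is normally the standard batch-sorption ratio, shown here in its generic form (the JAERI protocols may differ in detail):

        K_d = \frac{C_0 - C_{\mathrm{eq}}}{C_{\mathrm{eq}}} \cdot \frac{V}{m} \quad [\mathrm{mL/g}]

    where C_0 and C_eq are the initial and equilibrium radionuclide concentrations (or activities) in solution, V is the solution volume, and m is the dry mass of soil or rock. The measurement conditions recorded alongside each K_d (shaking method, solid-to-liquid ratio, solution composition) matter precisely because this single ratio lumps them all together.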

  19. Self-Other Differences in Student Drinking Norms Research: The Role of Impression Management, Self-Deception, and Measurement Methodology.

    Science.gov (United States)

    Melson, Ambrose J; Monk, Rebecca Louise; Heim, Derek

    2016-12-01

    Data-driven student drinking norms interventions are based on reported normative overestimation of the extent and approval of an average student's drinking. Self-reported differences between personal and perceived normative drinking behaviors and attitudes are taken at face value as evidence of actual levels of overestimation. This study investigates whether commonly used data collection methods and socially desirable responding (SDR) may inadvertently impede establishing "objective" drinking norms. U.K. students (N = 421; 69% female; mean age 20.22 years [SD = 2.5]) were randomly assigned to 1 of 3 versions of a drinking norms questionnaire: The standard multi-target questionnaire assessed respondents' drinking attitudes and behaviors (frequency of consumption, heavy drinking, units on a typical occasion) as well as drinking attitudes and behaviors for an "average student." Two deconstructed versions of this questionnaire assessed identical behaviors and attitudes for participants themselves or an "average student." The Balanced Inventory of Desirable Responding was also administered. Students who answered questions about themselves and peers reported more extreme perceived drinking attitudes for the average student compared with those reporting solely on the "average student." Personal and perceived reports of drinking behaviors did not differ between multitarget and single-target versions of the questionnaire. Among those who completed the multitarget questionnaire, after controlling for demographics and weekly drinking, SDR was related positively with the magnitude of difference between students' own reported behaviors/attitudes and those perceived for the average student. Standard methodological practices and socially desirable responding may be sources of bias in peer norm overestimation research. Copyright © 2016 by the Research Society on Alcoholism.

  20. Bandwidth tunable microwave photonic filter based on digital and analog modulation

    Science.gov (United States)

    Zhang, Qi; Zhang, Jie; Li, Qiang; Wang, Yubing; Sun, Xian; Dong, Wei; Zhang, Xindong

    2018-05-01

    A bandwidth tunable microwave photonic filter based on digital and analog modulation is proposed and experimentally demonstrated. The digital modulation is used to broaden the effective gain spectrum and the analog modulation is to get optical lines. By changing the symbol rate of data pattern, the bandwidth is tunable from 50 MHz to 700 MHz. The interval of optical lines is set according to the bandwidth of gain spectrum which is related to the symbol rate. Several times of bandwidth increase are achieved compared to a single analog modulation and the selectivity of the response is increased by 3.7 dB compared to a single digital modulation.

  1. The Prediction of Bandwidth On Need Computer Network Through Artificial Neural Network Method of Backpropagation

    Directory of Open Access Journals (Sweden)

    Ikhthison Mekongga

    2014-02-01

    Full Text Available The need for bandwidth has been increasing recently. This is because the development of internet infrastructure is also increasing, so an economical and efficient provider system is needed. This can be achieved through good planning and a proper system. Prediction of bandwidth consumption is one of the factors that support the planning of an efficient internet service provider system. Bandwidth consumption is predicted using an ANN (artificial neural network), an information processing system with characteristics similar to those of a biological neural network. ANN is chosen to predict bandwidth consumption because it approximates non-linearities well. The variable used in the ANN is historical load data. A bandwidth consumption information system was built using neural networks with a backpropagation algorithm to make the use of bandwidth more efficient in the future, both in the rental rate of the bandwidth and in its usage. Keywords: Forecasting, Bandwidth, Backpropagation
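    As an illustration of the approach (not the paper's network or data), the sketch below trains a one-hidden-layer perceptron by plain backpropagation to forecast the next sample of a synthetic historical-load series from a 24-sample sliding window.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic historical bandwidth load (stand-in for real load data)
        load = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500) + 2.0

        # Sliding window: predict the next sample from the previous 24
        W = 24
        X = np.array([load[i:i + W] for i in range(len(load) - W)])
        y = load[W:]

        # One hidden layer, tanh activation, trained by full-batch backpropagation
        H = 10
        W1 = rng.standard_normal((W, H)) * 0.1; b1 = np.zeros(H)
        W2 = rng.standard_normal(H) * 0.1;      b2 = 0.0
        lr = 0.01

        for epoch in range(200):
            h = np.tanh(X @ W1 + b1)              # forward pass
            pred = h @ W2 + b2
            err = pred - y                        # dLoss/dpred for 0.5*MSE
            # backward pass: gradients of the mean squared error
            gW2 = h.T @ err / len(y); gb2 = err.mean()
            dh = np.outer(err, W2) * (1 - h**2)   # backprop through tanh
            gW1 = X.T @ dh / len(y);  gb1 = dh.mean(axis=0)
            W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

        print("next-sample forecast:",
              float(np.tanh(load[-W:] @ W1 + b1) @ W2 + b2))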

  2. Bandwidth-narrowed Bragg gratings inscribed in double-cladding fiber by femtosecond laser.

    Science.gov (United States)

    Shi, Jiawei; Li, Yuhua; Liu, Shuhui; Wang, Haiyan; Liu, Ningliang; Lu, Peixiang

    2011-01-31

    Bragg gratings with the bandwidth (FWHM) narrowed to 79 pm were inscribed in double-cladding fiber with femtosecond radiation and a phase mask, followed by an annealing treatment. With the annealing temperature below a critical value, the bandwidth of Bragg gratings induced by Type I-IR and Type II-IR index change was narrowed without a reduction of reflectivity. The bandwidth narrowing is due to the transformation of the refractive index modulation profile caused by the annealing treatment. This mechanism was verified by comparing the bandwidth narrowing processes of FBGs written with different power densities.
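    As background for why a smaller index modulation narrows the reflection peak, a commonly quoted approximation for a uniform grating (generic FBG theory, not taken from this paper) is

        \lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda, \qquad
        \frac{\Delta\lambda}{\lambda_B} \approx
        \sqrt{\left(\frac{\delta n}{2\, n_{\mathrm{eff}}}\right)^{2}
            + \left(\frac{\Lambda}{L}\right)^{2}}

    where Λ is the grating period, L the grating length and δn the induced index modulation. Annealing away part of δn narrows Δλ until the 1/L term dominates, which is consistent with the profile-transformation mechanism the authors invoke.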

  3. In situ and laboratory measurements of very low permeability in the Tournemire argillites (Aveyron). Comparison of methodologies and scale effect

    International Nuclear Information System (INIS)

    Boisson, J.Y.; Cabrera, J.

    1998-01-01

    At the request of the Institut de Protection et de Surete Nucleaire (IPSN - Institute of Nuclear Safety and Protection), ANTEA visited the Tournemire site (Aveyron) to carry out a hydraulic characterization of the 200 m-thick Toarcian and Domerian formations accessible by tunnel. Permeability measurements were made using the borehole pulse-test method, either over the whole borehole or across more permeable fractured zones. The tests yielded an approximate value for the hydraulic head and an order of magnitude for the permeability at the 1 to 10 metre scale (10^-11 to 10^-13 m/s). A borehole was then equipped for a long-duration (6 months) measurement of the hydraulic head in the rock body. Laboratory measurements were made on 4 cm-diameter core samples taken from different boreholes. The tests, carried out under triaxial stress, required preliminary saturation-consolidation of the test samples. By applying steady-state flow or hydraulic pulses, it was possible to measure a permeability on the order of 10^-14 m/s for the matrix of the clayey material. The difference between laboratory and in situ values is explained by the presence of fractures in the rock body. Moreover, it seems that the hydraulic conditions of measurement in the field around the hole could have an influence on the final result. (authors)

  4. New methodological approaches to the simultaneous measurement of the 90Sr and 137Cs activity in environmental samples

    Directory of Open Access Journals (Sweden)

    M. V. Zheltonozhska

    2012-12-01

    Full Text Available A non-radiochemical method for measuring the 90Sr and 137Cs activity in environmental samples is proposed. The method is based on spectrometric investigation of the electrons accompanying the decay of 90Sr and 137Cs. Accounting for the contribution of 40K electrons to the total activity of samples from zones with a contamination density of 1-5 Ci/km2 improved the accuracy of the measurements to 15-20% for samples of small rodents (the ratio A(137Cs)/A(90Sr) ranged from 2 to 100) and to 10-15% for samples of soil (the activity of these samples varied by a factor of ten thousand). The results of the spectrometric measurements were confirmed by traditional radiochemical analysis.

  5. Methodological constraints in interpreting serum paraoxonase-1 activity measurements: an example from a study in HIV-infected patients

    Directory of Open Access Journals (Sweden)

    Joven Jorge

    2010-03-01

    Full Text Available Background: Paraoxonase-1 (PON1) is an antioxidant enzyme that attenuates the production of monocyte chemoattractant protein-1 (MCP-1) in vitro. Although oxidation and inflammation are closely related processes, the association between PON1 and MCP-1 has not been completely characterised, probably because the current use of synthetic substrates for PON1 measurement limits the interpretation of the data. In the present study, we explored the relationships between the circulating levels of PON1 and MCP-1 in human immunodeficiency virus-infected patients in relation to the multifunctional capabilities of PON1. Methods: We measured selected variables in 227 patients and in a control group of 409 participants. Serum PON1 esterase and lactonase activities were measured as the rates of hydrolysis of paraoxon and of 5-(thiobutyl)-butyrolactone, respectively. Oxidised LDL and MCP-1 concentrations were determined by enzyme-linked immunosorbent assay. High-density lipoprotein cholesterol, apolipoprotein A-I, and C-reactive protein concentrations were measured by standard automated methods. Results: There were significant relationships between PON1 activity and several indices of oxidation and inflammation in control subjects and in infected patients. However, these relationships varied not only with disease status but also with the type of substrate used for PON1 measurement. Conclusion: The present study is a cautionary tale highlighting that the results of clinical studies on PON1 may vary depending on the methods used as well as the disease studied. Until more specific methods using physiologically akin substrates are developed for PON1 measurement, we suggest the simultaneous employment of at least two different substrates in order to improve the reliability of the results obtained.

  6. Conception and development of an optical methodology applied to long-distance measurement of suspension bridges dynamic displacement

    International Nuclear Information System (INIS)

    Martins, L Lages; Ribeiro, A Silva; Rebordão, J M

    2013-01-01

    This paper describes the conception and development of an optical system applied to suspension bridge structural monitoring, aiming at real-time, long-distance measurement of dynamic three-dimensional displacement, namely in the central section of the main span. The main innovative issues related to this optical approach are described, and a comparison with other optical and non-optical measurement systems is performed. Moreover, a computational simulator tool developed for the optical system design and for validation of the implemented image processing and calculation algorithms is also presented.

  7. Computer vision system approach in colour measurements of foods: Part II. validation of methodology with real foods

    Directory of Open Access Journals (Sweden)

    Fatih TARLAK

    2016-01-01

    Full Text Available The colour of food is one of the most important factors affecting consumers’ purchasing decision. Although there are many colour spaces, the most widely used colour space in the food industry is L*a*b* colour space. Conventionally, the colour of foods is analysed with a colorimeter that measures small and non-representative areas of the food and the measurements usually vary depending on the point where the measurement is taken. This leads to the development of alternative colour analysis techniques. In this work, a simple and alternative method to measure the colour of foods known as “computer vision system” is presented and justified. With the aid of the computer vision system, foods that are homogenous and uniform in colour and shape could be classified with regard to their colours in a fast, inexpensive and simple way. This system could also be used to distinguish the defectives from the non-defectives. Quality parameters of meat and dairy products could be monitored without any physical contact, which causes contamination during sampling.
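    A minimal version of such a computer vision colour measurement can be written with OpenCV. The sketch below averages L*a*b* over a crudely segmented food region; the file name and the background threshold are illustrative, and OpenCV's 8-bit Lab encoding has to be rescaled as shown.

        import cv2
        import numpy as np

        img = cv2.imread("sample_food.png")                  # BGR, 8-bit
        lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)

        # Measure colour only over the food region, here taken as everything
        # that is not near-white background (threshold is illustrative).
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        mask = (gray < 240).astype(np.uint8)

        L, a, b = cv2.split(lab)
        mean_L = cv2.mean(L, mask=mask)[0] * 100.0 / 255.0   # OpenCV stores L* as 0-255
        mean_a = cv2.mean(a, mask=mask)[0] - 128.0           # a* and b* are offset by 128
        mean_b = cv2.mean(b, mask=mask)[0] - 128.0
        print(f"L*={mean_L:.1f}, a*={mean_a:.1f}, b*={mean_b:.1f}")

    Averaging over the whole masked region is exactly what distinguishes this approach from a colorimeter spot reading.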

  8. Methodology for using prompt gamma activation analysis to measure the binary diffusion coefficient of a gas in a porous medium

    International Nuclear Information System (INIS)

    Rios Perez, Carlos A.; Biegalski, Steve R.; Deinert, Mark R.

    2012-01-01

    Highlights: (1) Prompt gamma activation analysis is used to study gas diffusion in a porous system. (2) Diffusion coefficients are determined using prompt gamma activation analysis. (3) Predicted concentrations fit experimental measurements with an R^2 of 0.98. Abstract: Diffusion plays a critical role in determining the rate at which gases migrate through porous systems. Accurate estimates of diffusion coefficients are essential if gas transport is to be accurately modeled, and better techniques are needed that can measure these coefficients non-invasively. Here we present a novel method for using prompt gamma activation analysis to determine the binary diffusion coefficient of a gas in a porous system. Argon diffusion experiments were conducted in a 1 m long, 10 cm diameter, horizontal column packed with SiO2 sand. The temporal variation of argon concentration within the system was measured using prompt gamma activation analysis. The binary diffusion coefficient was obtained by comparing the experimental data with the predictions from a numerical model in which the diffusion coefficient was varied until the sum of squared errors between experimental and model data was minimized. Predictions of argon concentration using the optimal diffusivity fit experimental measurements with an R^2 of 0.983.
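    The fitting step described here is a one-parameter least-squares search. The sketch below reproduces the idea on made-up data, using a semi-infinite constant-source diffusion profile as a stand-in forward model (the authors used a numerical column model, and every number below is hypothetical).

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.special import erfc

        # Hypothetical measurements: argon concentration (normalised to the
        # source) at x = 0.5 m from the inlet, at several times (s).
        t = np.array([3600, 7200, 14400, 28800, 57600], dtype=float)
        c_meas = np.array([0.02, 0.09, 0.21, 0.37, 0.52])
        x = 0.5

        def model(D):
            # 1-D semi-infinite diffusion from a constant-concentration boundary
            return erfc(x / (2.0 * np.sqrt(D * t)))

        def sse(D):
            # sum of squared errors between model and measurement
            return np.sum((model(D) - c_meas) ** 2)

        fit = minimize_scalar(sse, bounds=(1e-7, 1e-3), method="bounded")
        print(f"effective diffusivity ~ {fit.x:.2e} m^2/s, SSE = {fit.fun:.2e}")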

  9. Evaluation of alternative approaches for measuring n-octanol/water partition coefficients for methodologically challenging chemicals (MCCs)

    Science.gov (United States)

    Measurements of n-octanol/water partition coefficients (KOW) for highly hydrophobic chemicals, i.e., greater than 10^8, are extremely difficult and are rarely made, in part because the vanishingly small concentrations in the water phase require extraordinary analytical sensitivity...

  10. Exhaled nitric oxide measurements in the first 2 years of life: Methodological issues, clinical and epidemiological applications

    NARCIS (Netherlands)

    C. Gabriele (Carmelo); F.M. de Benedictis (Fernando Maria); J.C. de Jongste (Johan)

    2009-01-01

    Fractional exhaled nitric oxide (FeNO) is a useful tool to diagnose and monitor eosinophilic bronchial inflammation in asthmatic children and adults. In children younger than 2 years of age FeNO has been successfully measured both with the tidal breathing and with the single breath

  11. Measurement of fat in the ovine milk: comparative study of the results acquired by official methodology and by electr onic equipments

    Directory of Open Access Journals (Sweden)

    Luiz Gustavo de Pellegrini

    2016-06-01

    Full Text Available The aim of this work was to perform a comparative study between the official method recommended by Brazilian law, electronic photometric measurement equipment, and ultrasound spectroscopy equipment for the quantification of total lipids in ovine milk, in order to check which equipment best establishes the lipid level. The experiment took place at the Department of Food Technology of the Federal University of Santa Maria, together with the dairy school plant and the sheep-farming section of the Department of Animal Science. Twelve half-blood Lacaune Lait ewes were used, milked individually from the first to the tenth week of lactation. Milking was performed manually and the analyses took place after refrigeration of the samples. Before the analyses, the samples were homogenized and then evaluated for fat content by three distinct methodologies: the official Brazilian methodology using Gerber's butyrometer (GB), the electronic photometric measurement equipment Milko-Tester® (MT), and the ultrasound spectroscopic equipment Lactoscan 90® (LS); all analyses were performed in triplicate. The reproducibility of the LS equipment was 100% for the analyzed samples, while the MT equipment showed variability in most of the analyzed samples, reproducing the results in just 22.5% of the samples; for the other samples the latter equipment gave 50% overestimated and 27.5% underestimated values. Therefore, the results of this study allow us to conclude that the analysis of ovine milk by ultrasound spectroscopy is efficient for the fat parameter when compared to the official Brazilian methodology.

  12. Vitamin D measurement in the intensive care unit: methodology, clinical relevance and interpretation of a random value.

    Science.gov (United States)

    Krishnan, Anand; Venkatesh, Bala

    2013-08-01

    Vitamin D deficiency, as measured by a random level of 25-hydroxyvitamin D, is very prevalent in critically ill patients admitted to the ICU and is associated with adverse outcomes. Both 25(OH)vitamin D and 1α,25(OH)2D3 are difficult to analyse because of their lipophilic nature, affinity for VDBP and small concentrations. Also, the various tests used to estimate vitamin D levels show significant inter- and intra-assay variability, which significantly affects the veracity of the results obtained and confounds their interpretation. The two main types of assays include those that directly estimate vitamin D levels (HPLC, LC-MS/MS) and competitive binding assays (RIA, EIA). The former methods require skilled operators, with prolonged assay times and increased cost, whereas the latter are cheaper and easy to perform, but with decreased accuracy. The direct assays are not affected by lipophilic substances in plasma and heterophile antibodies, but may overestimate vitamin D levels by measuring the 3-epimers. These problems can be eliminated by adequate standardization of the test using SRMs provided by NIST, as well as by participating in proficiency schemes like DEQAS. It is therefore important to consider the test employed as well as laboratory quality control while interpreting vitamin D results. A single random measurement may not be reflective of the vitamin D status in ICU patients because of changes with fluid administration and intra-day variation in 25-hydroxyvitamin D levels. 1α,25(OH)2D3 may behave differently to 25-hydroxyvitamin D, both in plasma and at tissue level, in inflammatory states. Measurement of tissue 1α,25(OH)2D3 levels may provide the true estimate of vitamin D activity.

  13. Measuring the Impact of Globalization on the Well-being of the Poor: Methodology and an Application to Africa

    OpenAIRE

    Rahman, Tauhidur; Mittelhammer, Ronald C.

    2006-01-01

    Whereas a large number of empirical studies have been devoted to analyzing the relationship between measures of income and globalization (defined by openness to international trade), much less attention has been paid to the analysis of well-being for the various subgroups of population and their causal associations with globalization. To address this gap in the literature, this paper first analyzes the quality of life (QOL) of 'poor' and 'non-poor' population segments of 40 African countries ...

  14. Human and Methodological Sources of Variability in the Measurement of Urinary 8-Oxo-7,8-dihydro-2 '-deoxyguanosine

    Czech Academy of Sciences Publication Activity Database

    Barregard, L.; Moller, P.; Henriksen, T.; Mistry, V.; Koppen, G.; Rössner ml., Pavel; Šrám, Radim; Weimann, A.; Poulsen, H. E.; Nataf, R.; Andreoli, R.; Manini, P.; Marczylo, T.; Lam, P.; Evans, M. D.; Kasai, H.; Kawai, K.; Li, Y. S.; Sakai, K.; Sing, R.; Teichert, F.; Farmer, P. B.; Rozalski, R.; Gackowski, D.; Siomek, A.; Saez, G. T.; Cerda, C.; Brogerg, K.; Lindh, C.; Hossain, M. B.; Haghoost, S.; Hu, CH. W.; Chao, M. R.; Wu, K. Y.; Orhan, H.; Senduran, N.; Smith, R. J.; Santella, R. M.; Su, Y.; Cortez, C.; Yeh, S.; Olinski, R.; Loft, S.; Cooke, M. S.

    2013-01-01

    Roč. 18, č. 18 (2013), s. 2377-2391 ISSN 1523-0864 R&D Projects: GA ČR GAP503/11/0084 Grant - others:European Union Sixth Framework Program(XE) FOOD-CT-2005-513943 /ECNIS/ Institutional support: RVO:68378041 Keywords : oxidative stress * biomarkers * 8-oxodG measurements Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 7.667, year: 2013

  15. Towards an ICF-based clinical measure of functioning in people with ankylosing spondylitis: a methodological exploration.

    Science.gov (United States)

    Cieza, A; Hilfiker, R; Boonen, A; van der Heijde, D; Braun, J; Stucki, G

    2009-01-01

    To explore whether it is possible to construct clinical measures of functioning for patients with ankylosing spondylitis (AS) by integrating information obtained across categories of the International Classification of Functioning, Disability and Health (ICF). Sixty-eight ICF categories that were identified as relevant by patients with AS and that covered body functions, structures, and activity and participation were analysed based on the Rasch model for ordered response options. The following properties were studied: unidimensionality, reliability, fit of the ICF categories to the Rasch model, the appropriateness of the order of the response options of the ICF qualifier, and the targeting between the ICF categories and the person's abilities. After accounting for disordered thresholds and misfitting ICF categories, a clinical measure of functioning for AS was proposed that contained 64 ICF categories. On the basis of a transformation table, the raw scores obtained by adding the answers to the 64 ICF categories can be transformed to the Rasch logit scale and to a meaningful interval scale ranging from zero to 100. For the first time, it has been shown that clinical measures of functioning, in principle, can be constructed based on the comprehensive ICF framework covering body functions and structures and activities and participation domains. The results of this investigation are preliminary and must be validated, but they are promising and can contribute to the acceptance and usefulness of the ICF in clinical practice.

  16. Finite bandwidth, nonlinear convective flow in a mushy layer

    Energy Technology Data Exchange (ETDEWEB)

    Riahi, D N, E-mail: daniel.riahi@utrgv.edu [School of Mathematical and Statistical Sciences, University of Texas Rio Grande Valley, One West University Boulevard, Brownsville, TX 78520 (United States)

    2017-04-15

    Finite amplitude convection with a continuous finite bandwidth of modes in a horizontal mushy layer during the solidification of binary alloys is investigated. We analyze the nonlinear convection for values of the Rayleigh number close to its critical value by using multiple scales and perturbation techniques. Applying a combined temporal and spatial evolution approach, we determine a set of three coupled differential equations for the amplitude functions of the convective modes. A large number of new subcritical or supercritical stable solutions to these equations in the form of steady rolls and hexagons with different horizontal length scales are detected. We find, in particular, that depending on the parameter values and on the magnitude and direction of the wave number vectors for the amplitude functions, hexagons with down-flow or up-flow at the cells’ centers, or rolls, can be stable. Rolls or hexagons with longer horizontal wavelength can be stable at higher amplitudes, and there are cases where hexagons are unstable for any value of the Rayleigh number, while rolls are stable only for values of the Rayleigh number beyond some threshold. We also detected new stable and irregular flow patterns with two different horizontal scales in the form of a superposition of either two sets of hexagons or two sets of inclined rolls. (paper)
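    The "three coupled differential equations for the amplitude functions" are of the generic Landau type for three roll sets at 120°, written here only to fix ideas (the coefficients and scalings are problem-specific and not those of the paper):

        \frac{dA_1}{d\tau} = \gamma A_1 + \delta\,\bar{A}_2\bar{A}_3
          - A_1\left(\beta_1 |A_1|^2 + \beta_2 |A_2|^2 + \beta_2 |A_3|^2\right),

    plus the two cyclic permutations, with γ proportional to (Ra − Ra_c)/Ra_c. Rolls correspond to a single nonzero amplitude, hexagons to |A_1| = |A_2| = |A_3|, and the quadratic δ term is what permits subcritical hexagons with either up- or down-flow at the cell centers.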

  17. Bandwidth-Enhanced Low-Profile Antenna with Parasitic Patches

    Directory of Open Access Journals (Sweden)

    Son Xuat Ta

    2017-01-01

    Full Text Available This paper presents low-profile broadband antennas, which are composed of four parasitic patches placed between planar radiators and a perfect electric conductor ground plane. Two types of planar radiators, a conventional dipole and a crossed dipole, are employed to produce linearly polarized (LP) and circularly polarized (CP) radiations, respectively. The radiator and parasitic patches are realized on thin substrates to lower the cost. Owing to the presence of parasitic patches, the antenna performance improves in terms of profile reduction, resonant frequency decrease, and bandwidth enhancement. These improvements are discussed and confirmed computationally and experimentally. The LP design with the overall dimensions of 120 mm × 120 mm × 16.3 mm (0.64λ0 × 0.64λ0 × 0.087λ0 at 1.6 GHz) has a |S11|  96%. The CP design, which has the same physical size as the LP case, has a |S11|  90%.

  18. New methodology for simultaneous volumetric and calorimetric measurements: Direct determination of {alpha}{sub p} and C{sub p} for liquids under pressure

    Energy Technology Data Exchange (ETDEWEB)

    Casas, L. M. [Departamento de Fisica Aplicada, Facultad de Ciencias Experimentales, Universidad de Vigo, Lagoas Marcosende s/n, 36310 Vigo (Spain); Plantier, F.; Bessieres, D. [Laboratoire de Thermodynamique et Energetique des Fluides Complexes-UMR 5150, Universite de Pau et des Pays de l' Adour, BP 1155, 64013 Pau (France)

    2009-12-15

    A new batch cell has been developed to measure simultaneously both the isobaric thermal expansion and the isobaric heat capacity from calorimetric measurements. The isobaric thermal expansion is directly proportional to the linear displacement of an inner flexible bellows, and the heat capacity is calculated from the calorimetric signal. The apparatus used was a commercial Setaram C-80 calorimeter, which together with this type of vessel can be operated up to 20 MPa and in the temperature range of 303.15-523.15 K. In this work, calibration was carried out using 1-hexanol, and subsequently both thermophysical properties were determined for 3-pentanol, 3-ethyl-3-pentanol, and 1-octanol at atmospheric pressure, 5 and 10 MPa, and from 303.15 to 423.15 K in temperature. Finally, the experimental values were compared with the literature in order to validate this new methodology, which allows a very accurate determination of the isobaric thermal expansion and isobaric heat capacity.

  19. Measuring coral calcification under ocean acidification: methodological considerations for the 45Ca-uptake and total alkalinity anomaly technique

    Directory of Open Access Journals (Sweden)

    Stephanie Cohen

    2017-09-01

    Full Text Available As the oceans become less alkaline due to rising CO2 levels, deleterious consequences are expected for calcifying corals. Predicting how coral calcification will be affected by on-going ocean acidification (OA) requires an accurate assessment of CaCO3 deposition and an understanding of the relative importance that decreasing calcification and/or increasing dissolution play for the overall calcification budget of individual corals. Here, we assessed the compatibility of the 45Ca-uptake and total alkalinity (TA) anomaly techniques as measures of gross and net calcification (GC and NC, respectively) to determine coral calcification at pHT 8.1 and 7.5. Considering the differing buffering capacity of seawater at both pH values, we were also interested in how strongly coral calcification alters the seawater carbonate chemistry under prolonged incubation in sealed chambers, potentially interfering with physiological functioning. Our data indicate that NC estimates by TA are erroneously ~5% and ~21% higher than GC estimates from 45Ca for ambient and reduced pH, respectively. Considering also previous data, we show that the consistent discrepancy between both techniques across studies is not constant, but largely depends on the absolute value of CaCO3 deposition. Deriving rates of coral dissolution from the difference between NC and GC was not possible, and we advocate a more direct approach for the future by simultaneously measuring skeletal calcium influx and efflux. Substantial changes in carbonate system parameters for incubation times beyond two hours in our experiment demonstrate the necessity to test and optimize experimental incubation setups when measuring coral calcification in closed systems, especially under OA conditions.
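    The TA anomaly technique rests on the stoichiometric fact that precipitating one mole of CaCO3 removes two equivalents of total alkalinity, so net calcification over an incubation is conventionally computed as (generic form; normalization choices vary between studies):

        NC = -\frac{\Delta TA \cdot V}{2\,\Delta t\, S}

    where ΔTA is the measured alkalinity change over the incubation, V the chamber volume, Δt the incubation time, and S the normalizing quantity (e.g., skeletal surface area or buoyant weight). Because dissolution adds alkalinity back, this yields a net figure, which is why it can only be compared with the gross 45Ca-uptake estimate with care.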

  20. The 'CommTech' Methodology: A Demand-Driven Approach to Efficient, Productive and Measurable Technology Transfer

    Science.gov (United States)

    Horsham, Gray A. P.

    1998-01-01

    Market research sources were used to initially gather primary technological problems and needs data from non-aerospace companies in targeted industry sectors. The company-supplied information served as input data to activate or start-up an internal, phased match-making process. This process was based on technical-level relationship exploration followed by business-level agreement negotiations, and culminated with project management and execution. Space Act Agreements represented near-term outputs. Company product or process commercialization derived from Lewis support and measurable economic effects represented far-term outputs.

  1. 3-D Printed Fabry–Pérot Resonator Antenna with Paraboloid-Shape Superstrate for Wide Gain Bandwidth

    Directory of Open Access Journals (Sweden)

    Qiang Chen

    2017-11-01

    Full Text Available A three-dimensional (3-D) printed Fabry–Pérot resonator antenna (FPRA), designed with a paraboloid-shape superstrate for wide gain bandwidth, is proposed. In comparison with the commonly adopted planar superstrate, the paraboloid-shape superstrate is able to provide multiple resonant heights and thus satisfy the resonant condition of the FPRA in a wide frequency band. An FPRA working at 6 GHz is designed, fabricated, and tested. Considering the fabrication difficulty caused by its complex structure, the prototype antenna was fabricated using 3-D printing technology, i.e., all components of the prototype antenna were printed with photopolymer resin and then treated by a surface metallization process. Measurement results agree well with the simulation results, and show that the 3-D printed FPRA has a |S11| < −10 dB impedance bandwidth of 12.4% and a gain of 16.8 dBi at its working frequency of 6 GHz. Moreover, in comparison with the planar superstrate adopted in traditional FPRAs, the paraboloid-shape superstrate of the proposed FPRA significantly improves the 3-dB gain bandwidth from 6% to 22.2%.
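    The "multiple resonant heights" argument follows from the textbook Fabry–Pérot resonance condition for a cavity of height h between a partially reflective superstrate and the ground plane (standard FPRA theory, not specific to this paper):

        \varphi_{\mathrm{PRS}}(f) + \varphi_{\mathrm{G}}(f) - \frac{4\pi}{\lambda}\, h = -2N\pi,
        \qquad N = 0, 1, 2, \ldots

    where φ_PRS and φ_G are the reflection phases of the superstrate and the ground plane. With a paraboloid-shape superstrate, h varies with radial position, so different annular zones of the aperture satisfy the condition at different frequencies, which is what widens the gain bandwidth.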

  2. Theoretical and Experimental Study of Radial Velocity Generation for Extending Bandwidth of Magnetohydrodynamic Angular Rate Sensor at Low Frequency

    Directory of Open Access Journals (Sweden)

    Yue Ji

    2015-12-01

    Full Text Available The magnetohydrodynamic angular rate sensor (MHD ARS) has received much attention for its ultra-low noise over an ultra-broad bandwidth and its impact resistance in harsh environments; however, its poor performance at low frequency hinders its use over long durations. This paper presents a modified MHD ARS combining the Coriolis effect with the MHD effect to extend the measurement scope throughout the whole bandwidth, in which an appropriate radial flow velocity should be provided to satisfy the simplified model of the modified MHD ARS. A method that generates the radial velocity with an MHD pump in the MHD ARS is proposed. A device is designed to study the radial flow velocity generated by the MHD pump. The influence of structural and physical parameters is studied by numerical simulation and experiment on the device. The analytic expressions of the velocity generated by the energizing current drawn from simulation and experiment are consistent, which demonstrates the effectiveness of the method of generating radial velocity. The study can be applied to generate and control radial velocity in the modified MHD ARS, which is essential for combining the two effects throughout the whole bandwidth.

  3. Fast sorption measurements of volatile organic compounds on building materials: Part 1 – Methodology developed for field applications

    Directory of Open Access Journals (Sweden)

    M. Rizk

    2016-03-01

    Full Text Available A Proton Transfer Reaction-Mass Spectrometer (PTR-MS has been coupled to the outlet of a Field and Laboratory Emission Cell (FLEC, to measure volatile organic compounds (VOC concentration during a sorption experiments (Rizk et al., this issue [1]. The limits of detection of the PTR-MS for three VOCs are presented for different time resolution (2, 10 and 20 s. The mass transfer coefficient was calculated in the FLEC cavity for the different flow rates. The concentration profile obtained from a sorption experiment performed on a gypsum board and a vinyl flooring are also presented in comparison with the profile obtained for a Pyrex glass used as a material that do not present any sorption behavior (no sink. Finally, the correlation between the concentration of VOCs adsorbed on the surface of the gypsum board at equilibrium (Cse and the concentration of VOCs Ce measured in the gas phase at equilibrium is presented for benzene, C8 aromatics and toluene.

  4. Development of a methodology to measure the effect of ergot alkaloids on forestomach motility using real-time wireless telemetry

    Science.gov (United States)

    Egert, Amanda; Klotz, James; McLeod, Kyle; Harmon, David

    2014-10-01

    The objectives of these experiments were to characterize rumen motility patterns of cattle fed once daily using a real-time wireless telemetry system, determine when to measure rumen motility with this system, and determine the effect of ruminal dosing of ergot alkaloids on rumen motility. Ruminally cannulated Holstein steers (n = 8) were fed a basal diet of alfalfa cubes once daily. Rumen motility was measured by monitoring real-time pressure changes within the rumen using wireless telemetry and pressure transducers. Experiment 1 consisted of three 24-h rumen pressure collections beginning immediately after feeding. Data were recorded, stored, and analyzed using iox2 software and the rhythmic analyzer. All motility variables differed (P content samples were taken on d 15. Baseline (P = 0.06) and peak (P = 0.04) pressure were lower for E+ steers. Water intake tended (P = 0.10) to be less for E+ steers the first 8 hour period after feeding. The E+ seed treatment at this dosage under thermoneutral conditions did not significantly affect rumen motility, ruminal fill, or dry matter of rumen contents.

  5. Measuring the accomplishments of public participation programs: Overview of a methodological study performed for DOE's Office of Environmental Management

    International Nuclear Information System (INIS)

    Schweitzer, M.; Carnes, S.A.; Peelle, E.B.; Wolfe, A.K.

    1997-01-01

    Recently, staff at Oak Ridge National Laboratory performed a study for the Office of Intergovernmental and Public Accountability within the U.S. Department of Energy's (DOE) Office of Environmental Management (EM), examining how to measure the success of public participation programs. While the study began with a thorough literature review, the primary emphasis of this research effort was on getting key stakeholders to help identify attributes of successful public participation in EM activities and to suggest how those attributes might be measured. Interviews were conducted at nine DOE sites that provided substantial variety in terms of geographic location, types of environmental management activities undertaken, the current life-cycle stage of those EM efforts, and the public participation mechanisms utilized. Approximately 12 to 15 oral interviews were conducted at each site, and each respondent also was asked to complete a written survey. Those interviewed included: non-regulatory state and local government officials; project managers and public participation staff for DOE and its management and operations contractors; non-government groups concerned with environmental protection, public safety, and health issues; federal and state environmental regulators; business organizations; civic groups; and other interested parties. While this study examined only those public participation programs sponsored by DOE, the resulting findings also have applicability to the public involvement efforts sponsored by many other public and private sector organizations

  6. A high control bandwidth design method for aalborg inverter under weak grid condition

    DEFF Research Database (Denmark)

    Wu, Weimin; Zhou, Cong; Wang, Houqin

    2017-01-01

    Aalborg Inverter is a kind of high efficient Buck-Boost inverter. Since it may work in “Buck-Boost” mode, the control bandwidth should be high enough to ensure a good performance under any grid condition. However, during the “Boost” operation, the control bandwidth depends much on the grid...

  7. A Hybrid OFDM-TDM Architecture with Decentralized Dynamic Bandwidth Allocation for PONs

    Directory of Open Access Journals (Sweden)

    Taner Cevik

    2013-01-01

    Full Text Available One of the major challenges of passive optical networks is to achieve a fair arbitration mechanism that will prevent possible collisions from occurring at the upstream channel when multiple users attempt to access the common fiber at the same time. Therefore, in this study we mainly focus on fair bandwidth allocation among users, and present a hybrid Orthogonal Frequency Division Multiplexed/Time Division Multiplexed architecture with a dynamic bandwidth allocation scheme that provides satisfying service qualities to the users depending on their varying bandwidth requirements. Unnecessary delays in centralized schemes occurring during the bandwidth assignment stage are eliminated by utilizing a decentralized approach. Instead of sending bandwidth demands to the optical line terminal (OLT), which is the only competent authority, each optical network unit (ONU) runs the same bandwidth demand determination algorithm. ONUs inform each other via a signaling channel about the status of their queues. This information is fed to the bandwidth determination algorithm, which is run by each ONU in a distributed manner. Furthermore, Light Load Penalty, which is a phenomenon in optical communications, is mitigated by limiting the amount of bandwidth that an ONU can demand.
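    The key requirement of such a decentralized scheme is that every ONU, given the same shared queue reports, computes byte-identical grants. The sketch below shows one deterministic way to achieve that (proportional shares with a fixed tie-break order); it is our illustration, not the paper's algorithm, and all names and numbers are hypothetical.

        def local_grant_schedule(queue_reports, capacity):
            """Every ONU runs this same deterministic function on the queue-state
            vector shared over the signaling channel, so all ONUs derive an
            identical schedule without waiting for a grant from the OLT.

            queue_reports : {onu_id: backlog in bytes}, identical at every ONU
            capacity      : bytes available in the upstream frame
            """
            total = sum(queue_reports.values())
            if total <= capacity:
                return dict(queue_reports)        # everyone sends everything
            # Proportional fairness with deterministic ordering, so independent
            # computations agree byte-for-byte.
            grants, allocated = {}, 0
            for onu in sorted(queue_reports):
                share = queue_reports[onu] * capacity // total
                grants[onu] = share
                allocated += share
            # Hand the integer-division remainder out in the same fixed order.
            for onu in sorted(queue_reports):
                if allocated >= capacity:
                    break
                grants[onu] += 1
                allocated += 1
            return grants

        print(local_grant_schedule({"onu1": 8000, "onu2": 2000, "onu3": 6000}, 8000))
        # {'onu1': 4000, 'onu2': 1000, 'onu3': 3000}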

  8. High-Q Variable Bandwidth Passive Filters for Software Defined Radio

    NARCIS (Netherlands)

    Arkesteijn, V.J.; Klumperink, Eric A.M.; Nauta, Bram

    2001-01-01

    An important aspect of Software Defined Radio is the ability to define the bandwidth of the filter that selects the desired channel. This paper describes a technique for channel filtering, in which two passive filters are combined to obtain a variable bandwidth. Passive filters have the advantage of

  9. High-Q variable bandwidth passive filters for Software Defined Radio

    NARCIS (Netherlands)

    Arkesteijn, V.J.; Klumperink, Eric A.M.; Nauta, Bram

    An important aspect of Software Defined Radio is the ability to define the bandwidth of the filter that selects the desired channel. This paper describes a technique for channel filtering, in which two passive filters are combined to obtain a variable bandwidth. Passive filters have the advantage of

  10. Bandwidth limitations in current mode and voltage mode integrated feedback amplifiers

    DEFF Research Database (Denmark)

    Bruun, Erik

    1995-01-01

    loop bandwidth remains constant for a feedback amplifier. The constant-bandwidth relations of such amplifier designs are reviewed in this paper and they are combined with the constraints imposed by technology when the feedback amplifier is to be designed in an integrated technology. From this analysis...

  11. Improvement of the bandwidth of the transient digitizers in the LIDAR Thomson scattering diagnostic on JET

    International Nuclear Information System (INIS)

    Kristensen, E.

    1990-06-01

    The main limitation on the spatial resolution of the LIDAR Thomson scattering diagnostic on the JET tokamak is due to the narrow bandwidth of the detection system. The transient digitizers, Tektronix 7912AD, are the main contributors to the narrow bandwidth. It is shown how the digitizers can be modified to improve the response time from approx. 480 to 410 ps. (author)

  12. A Hybrid OFDM-TDM Architecture with Decentralized Dynamic Bandwidth Allocation for PONs

    Science.gov (United States)

    Cevik, Taner

    2013-01-01

    One of the major challenges of passive optical networks is to achieve a fair arbitration mechanism that will prevent possible collisions from occurring at the upstream channel when multiple users attempt to access the common fiber at the same time. Therefore, in this study we mainly focus on fair bandwidth allocation among users, and present a hybrid Orthogonal Frequency Division Multiplexed/Time Division Multiplexed architecture with a dynamic bandwidth allocation scheme that provides satisfying service qualities to the users depending on their varying bandwidth requirements. Unnecessary delays in centralized schemes occurring during bandwidth assignment stage are eliminated by utilizing a decentralized approach. Instead of sending bandwidth demands to the optical line terminal (OLT) which is the only competent authority, each optical network unit (ONU) runs the same bandwidth demand determination algorithm. ONUs inform each other via signaling channel about the status of their queues. This information is fed to the bandwidth determination algorithm which is run by each ONU in a distributed manner. Furthermore, Light Load Penalty, which is a phenomenon in optical communications, is mitigated by limiting the amount of bandwidth that an ONU can demand. PMID:24194684

  13. The Commtech Methodology: A Demand-Driven Approach to Efficient, Productive, and Measurable Technology Transfer and Commercialization

    Science.gov (United States)

    Horsham, Gary A. P.

    1999-01-01

    This paper presents a comprehensive review and assessment of a demonstration technology transfer and commercialization program called "CommTech". The program was conceived and initiated in early to mid-fiscal year 1995, and extended roughly three years into the future. Market research sources were used to initially gather primary technological problems and needs data from non-aerospace companies in three targeted industry sectors: environmental, surface transportation, and bioengineering. Company-supplied information served as input data to activate or start-up an internal, phased matchmaking process. This process was based on technical-level relationship exploration followed by business-level agreement negotiations, and culminated with project management and execution. Space Act Agreements represented near-term outputs. Company product or process commercialization derived from NASA Glenn support and measurable economic effects represented far-term outputs.

  14. Development of an accurate pH measurement methodology for the pore fluids of low pH cementitious materials

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, M. C.; Garcia Calvo, J. L. [The Spanish National Research Council (CSIC), Madrid (Spain); Walker, C. [Japan Atomic Energy Agency (JAEA), Ibaraki (Japan)] [and others

    2012-08-15

    The main objective of this project has been the development of an agreed set of protocols for the pH measurement of the pore fluid of a low pH cementitious material. Three protocols have been developed (Chapter 2), a reference method, based on pore fluid expression (PFE), and two routine methods with and without filtering, based on Ex Situ Leaching (ESL) procedures. Templates have been designed on which to record details of the pH measurement for the reference (PFE) method (Appendix C) and the routine (ESL) methods without and with filtering (Appendix D). Preliminary protocols were based on a broad review of the literature (Appendix A) and refined through a series of test experiments of the more critical parameters (Appendix B). After definition of the preliminary protocols, two phases of interlaboratory tests were performed. The first phase (Chapter 3) used the same low pH cement paste and enabled the nine participating laboratories to use, become familiar with and to identify any problems/uncertainties in the preliminary protocols. The reported pH values were subjected to a statistical analysis of the (within laboratory) repeatability and (between-laboratory) reproducibility and so provided a reliability test of the preliminary protocols. The second phase (Chapter 4) of interlaboratory tests used four different candidate low pH cementitious materials in the same nine laboratories, which allowed testing, validation and comparison of the reported pH values, which were obtained using the final protocols for the reference (PFE) and routine (ESL) methods by statistical analysis. The proposed final protocols (Chapter 2) have resulted in the reported pH values having low deviation and high reproducibility and repeatability. This will allow confidence in the pH value when selecting a candidate low pH cementitious material to be used in the engineered component of a high-level nuclear waste repository.

  15. Development of an accurate pH measurement methodology for the pore fluids of low pH cementitious materials

    International Nuclear Information System (INIS)

    Alonso, M. C.; Garcia Calvo, J. L.; Walker, C.

    2012-08-01

    The main objective of this project has been the development of an agreed set of protocols for measuring the pH of the pore fluid of a low pH cementitious material. Three protocols have been developed (Chapter 2): a reference method, based on pore fluid expression (PFE), and two routine methods, with and without filtering, based on Ex Situ Leaching (ESL) procedures. Templates have been designed on which to record details of the pH measurement for the reference (PFE) method (Appendix C) and the routine (ESL) methods without and with filtering (Appendix D). Preliminary protocols were based on a broad review of the literature (Appendix A) and refined through a series of test experiments on the more critical parameters (Appendix B). After definition of the preliminary protocols, two phases of interlaboratory tests were performed. The first phase (Chapter 3) used the same low pH cement paste and enabled the nine participating laboratories to use the preliminary protocols, become familiar with them, and identify any problems or uncertainties in them. The reported pH values were subjected to a statistical analysis of the (within-laboratory) repeatability and (between-laboratory) reproducibility, providing a reliability test of the preliminary protocols. The second phase (Chapter 4) of interlaboratory tests used four different candidate low pH cementitious materials in the same nine laboratories, allowing the pH values reported using the final protocols for the reference (PFE) and routine (ESL) methods to be tested, validated, and compared by statistical analysis. The proposed final protocols (Chapter 2) have resulted in reported pH values with low deviation and high repeatability and reproducibility. This will allow confidence in the pH value when selecting a candidate low pH cementitious material to be used in the engineered component of a high-level nuclear waste repository.
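
    The statistical analysis described in records 14 and 15 can be illustrated with the classical one-way decomposition (in the style of ISO 5725) into within-laboratory repeatability and between-laboratory reproducibility. The sketch below uses invented pH values rather than data from the interlaboratory tests, and assumes a balanced design with the same number of replicates per laboratory.

```python
# Illustrative computation of the repeatability standard deviation (s_r)
# and reproducibility standard deviation (s_R) for interlaboratory pH
# results. All numbers are invented for the example.
import statistics

# pH of the same low pH cement paste, measured in triplicate by each lab.
labs = {
    "lab_A": [11.02, 11.05, 11.01],
    "lab_B": [11.10, 11.08, 11.12],
    "lab_C": [10.98, 11.00, 10.97],
}

n = len(next(iter(labs.values())))  # replicates per lab (balanced design)
lab_means = {k: statistics.mean(v) for k, v in labs.items()}

# Repeatability variance: pooled within-lab variance.
s_r2 = statistics.mean(statistics.variance(v) for v in labs.values())

# Between-lab variance: scatter of lab means, corrected for within-lab noise.
s_L2 = max(statistics.variance(lab_means.values()) - s_r2 / n, 0.0)

# Reproducibility variance combines both components.
s_R2 = s_r2 + s_L2

print(f"repeatability   s_r = {s_r2 ** 0.5:.3f} pH units")
print(f"reproducibility s_R = {s_R2 ** 0.5:.3f} pH units")
```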

  16. Methodological NMR imaging developments to measure cerebral perfusion; Developpements methodologiques en IRM pour la mesure de perfusion cerebrale

    Energy Technology Data Exchange (ETDEWEB)

    Pannetier, N.

    2010-12-15

    This work focuses on acquisition techniques and physiological models that allow cerebral perfusion to be characterized by MRI. The arterial input function (AIF), on which many models are based, is measured by an optical imaging technique at the carotid artery in rats. The reproducibility and repeatability of the AIF are discussed and a model function is proposed. Two techniques for measuring the vessel size index (VSI) in rats bearing a glioma are then compared. The reference technique, using a USPIO contrast agent (CA), is compared with the dynamic approach, which estimates this parameter during the passage of a bolus of Gd; the latter technique has the advantage of being usable clinically. The results obtained at 4.7 T by both approaches are similar, and the use of VSI in clinical protocols at high field is strongly encouraged. The mechanisms involved (R1 and R2* relaxivities) were then studied using a multi-gradient-echo approach. A multi-echo spiral sequence is developed, and a method that allows refocusing between echoes is presented. This sequence is used to characterize the impact of R1 effects during the passage of two successive injections of Gd. Finally, a tool is developed for simulating the NMR signal on a 2D geometry, taking into account the permeability of the BBB and the diffusion of the CA in the interstitial space. At short TE, the effect of diffusion on the signal is negligible; in contrast, the effects of diffusion and permeability can be separated at long echo times. We also show that, during extravasation of the CA, the local magnetic field homogenization due to the decreasing magnetic susceptibility difference at vascular interfaces is quickly balanced by the perturbations induced by the increasing magnetic susceptibility difference at cellular interfaces in the extravascular compartment. (author)
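
    Record 16 states that a model function for the AIF is proposed but does not reproduce it. As a hedged stand-in, the sketch below fits the gamma-variate form commonly used to model bolus passage in perfusion MRI; the functional form, parameter values, and data are illustrative assumptions, not the thesis's actual model.

```python
# Fit a gamma-variate model to a synthetic bolus curve standing in for a
# measured AIF: C(t) = A * (t - t0)^alpha * exp(-(t - t0)/beta) for t > t0.
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    dt = np.clip(t - t0, 0.0, None)  # zero concentration before bolus arrival
    return A * dt ** alpha * np.exp(-dt / beta)

t = np.linspace(0, 60, 120)  # seconds (invented sampling)
true = gamma_variate(t, A=5.0, t0=8.0, alpha=2.0, beta=3.0)
noisy = true + np.random.default_rng(0).normal(0, 0.2, t.size)

# Initial guesses keep the fit in a physically plausible region.
popt, _ = curve_fit(gamma_variate, t, noisy, p0=[1.0, 5.0, 1.5, 2.0])
print("fitted A, t0, alpha, beta:", np.round(popt, 2))
```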

  17. Reliability and Validity of Digital Imagery Methodology for Measuring Starting Portions and Plate Waste from School Salad Bars

    Science.gov (United States)

    Bean, Melanie K; Raynor, Hollie A; Thornton, Laura M; Sova, Alexandra; Dunne Stewart, Mary; Mazzeo, Suzanne E

    2018-04-12

    Scientifically sound methods for investigating dietary consumption patterns from self-serve salad bars are needed to inform school policies and programs. This study examined the reliability and validity of digital imagery for determining starting portions and plate waste of self-serve salad bar vegetables (which have variable starting portions) compared with manual weights. In a laboratory setting, 30 mock salads with 73 vegetables were made, and consumption was simulated. Each component (initial and removed portion) was weighed, and photographs of weighed reference portions and of pre- and post-consumption mock salads were taken. Seven trained independent raters visually assessed the images to estimate starting portions to the nearest ¼ cup and the percentage consumed in 20% increments. These values were converted to grams for comparison with the weighed values. Intraclass correlations between weighed and digital imagery-assessed portions and plate waste were used to assess interrater reliability and validity; Pearson's correlations between weights and digital imagery assessments were also examined, and paired-samples t tests were used to evaluate mean differences (in grams) between digital imagery-assessed portions and measured weights. Interrater reliabilities were excellent for both starting portions and plate waste. For accuracy, intraclass correlations were moderate, with lower accuracy for determining starting portions of leafy greens than of other vegetables; the accuracy of digital imagery-assessed plate waste, however, was excellent. Digital imagery assessments did not differ significantly from measured weights for estimating overall vegetable starting portions or waste, although they slightly underestimated starting portions (by 3.5 g) and waste (by 2.1 g) of leafy greens. This investigation provides preliminary support for the use of digital imagery in estimating starting portions and plate waste from school salad bars. Results might inform
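
    Record 17 reports interrater reliability via intraclass correlations. As an illustration only (the study's data and choice of ICC variant are not given here), the sketch below computes a two-way random-effects, single-measure ICC(2,1) from a salads-by-raters matrix of estimated weights; all numbers are invented.

```python
# ICC(2,1): two-way random effects, absolute agreement, single measure,
# computed from the standard ANOVA mean squares.
import numpy as np

ratings = np.array([  # rows = mock salads, columns = raters (grams, invented)
    [52.0, 50.0, 55.0],
    [31.0, 33.0, 30.0],
    [78.0, 75.0, 80.0],
    [40.0, 42.0, 41.0],
])
n, k = ratings.shape
grand = ratings.mean()

ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()   # between salads
ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()   # between raters
ss_total = ((ratings - grand) ** 2).sum()

ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

icc_2_1 = (ms_rows - ms_err) / (
    ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
)
print(f"ICC(2,1) = {icc_2_1:.3f}")
```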

  18. Cognitive interviewing methodology in the development of a pediatric item bank: a Patient Reported Outcomes Measurement Information System (PROMIS) study

    Directory of Open Access Journals (Sweden)

    DeWalt Darren A

    2009-01-01

    Background The evaluation of patient-reported outcomes (PROs) in health care has seen greater use in recent years, and methods to improve the reliability and validity of PRO instruments are advancing. This paper discusses the cognitive interviewing procedures employed by the Patient Reported Outcomes Measurement Information System (PROMIS) pediatrics group for the purpose of developing a dynamic, electronic item bank for field testing with children and adolescents using novel computer technology. The primary objective of this study was to conduct cognitive interviews with children and adolescents to gain feedback on items measuring physical functioning, emotional health, social health, fatigue, pain, and asthma-specific symptoms. Methods A total of 88 cognitive interviews were conducted with 77 children and adolescents across two sites on 318 items. From this initial item bank, 25 items were deleted and 35 were revised and underwent a second round of cognitive interviews. A total of 293 items were retained for field testing. Results Children as young as 8 years of age were able to comprehend the majority of items, response options, directions, and the recall period, and were able to identify language that was difficult for them to understand. Cognitive interviews indicated comprehension problems with several items, which led to alternative wording for those items. Conclusion Children ages 8–17 years were able to comprehend most item stems and response options in the present study. Field testing with the resulting items and response options is presently being conducted as part of the PROMIS Pediatric Item Bank development process.

  19. Simultaneous measurement of milk intake and total energy expenditure in mixed-fed infants: Methodological approach and prediction of total body water

    International Nuclear Information System (INIS)

    Wells, J.C.K.; Davies, P.S.W.; Coward, W.A.

    2000-01-01

    Evaluation of the energy metabolism that underlies the new WHO breast-fed growth reference requires simultaneous measurement of milk volume intake (MVI) and total energy expenditure (TEE) by stable isotope methodologies. In young infants, such data are collected without difficulty using the dose-to-the-infant method. In older infants, whose breast milk is supplemented with non-milk foods, MVI must be measured by dosing the mother instead of the infant; this procedure would interfere with a simple measurement of infant TEE using the standard dose-to-the-infant method. Theoretically, the difficulty can be resolved by dosing the mother with deuterium and the infant with 18-oxygen, and using curve-peeling methods to calculate the infant deuterium kinetics, as sketched below. We propose to ascertain whether such an approach is viable in practice, such that MVI, TEE and body composition could all be measured simultaneously in mixed-fed infants. Where MVI in older infants is measured on its own, infant body water must be predicted in order to estimate the deuterium dilution space. Using a database of 234 infants aged 1.5 to 12 months, we provide new predictive equations by which such values may be obtained. (author)
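
    The curve-peeling step mentioned above can be sketched as the classical graphical procedure: fit the slow exponential to the late tail of the enrichment curve, subtract it, and fit the fast component to the early residual. The data, rate constants, and cut-off days below are invented for illustration and do not come from the study.

```python
# Classical curve peeling of a bi-exponential tracer curve (invented data).
import numpy as np

days = np.arange(1, 15, dtype=float)
# Synthetic enrichment: slow maternal turnover plus a faster infant component.
enrich = 120.0 * np.exp(-0.08 * days) + 60.0 * np.exp(-0.45 * days)

# Peel 1: the late tail (here, day 8 onward) is dominated by the slow pool,
# so a straight-line fit to log(enrichment) recovers its rate and intercept.
tail = days >= 8
b_slow, log_a_slow = np.polyfit(days[tail], np.log(enrich[tail]), 1)
slow = np.exp(log_a_slow) * np.exp(b_slow * days)

# Peel 2: subtract the slow component and fit the early residual,
# which isolates the fast component.
resid = enrich - slow
early = days <= 5
b_fast, log_a_fast = np.polyfit(days[early], np.log(resid[early]), 1)

print(f"slow pool rate: {-b_slow:.3f}/day, fast pool rate: {-b_fast:.3f}/day")
```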

  20. On methodology

    DEFF Research Database (Denmark)

    Cheesman, Robin; Faraone, Roque

    2002-01-01

    This is an English version of the methodology chapter in the authors' book "El caso Berríos: Estudio sobre información errónea, desinformación y manipulación de la opinión pública".