WorldWideScience

Sample records for model utilized measures

  1. A framework to utilize turbulent flux measurements for mesoscale models and remote sensing applications

    Directory of Open Access Journals (Sweden)

    W. Babel

    2011-05-01

    Meteorologically measured fluxes of energy and matter between the surface and the atmosphere originate from a source area of certain extent, located in the upwind sector of the device. The spatial representativeness of such measurements is strongly influenced by the heterogeneity of the landscape. The footprint concept is capable of linking observed data with spatial heterogeneity. This study aims at upscaling eddy covariance derived fluxes to a grid size of 1 km edge length, which is typical for mesoscale models or low resolution remote sensing data.

    Here an upscaling strategy is presented, utilizing footprint modelling and SVAT modelling as well as observations from a target land-use area. The general idea of this scheme is to model fluxes from adjacent land-use types and combine them with the measured flux data to yield a grid-representative flux according to the land-use distribution within the grid cell. The performance of the upscaling routine is evaluated with real datasets, which are considered to be land-use-specific fluxes in a grid cell. The measurements above rye and maize fields stem from the LITFASS experiment 2003 in Lindenberg, Germany, and the respective modelled time series were derived with the SVAT model SEWAB. Contributions from each land-use type to the observations are estimated using a forward Lagrangian stochastic model. A representation error is defined as the error in flux estimates made when accepting the measurements unchanged as the grid-representative flux, ignoring flux contributions from other land-use types within the respective grid cell.

    Results show that this representation error can be reduced by up to 56% when applying the spatial integration. This shows the potential for further application of this strategy, although the absolute differences between flux observations from rye and maize were so small that the spatial integration would be rejected in a real situation. Corresponding thresholds for…
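
    The tile approach sketched above reduces to an area-weighted average of land-use-specific fluxes. A minimal sketch with assumed area fractions and flux values (the rye flux standing in for the tower measurement, the others for SEWAB output; none of the numbers are LITFASS-2003 values):

      # Grid-representative flux as an area-weighted mosaic of land-use fluxes.
      area_fraction = {"rye": 0.6, "maize": 0.3, "water": 0.1}   # assumed mix
      flux = {"rye": 120.0, "maize": 150.0, "water": 40.0}       # W m-2

      grid_flux = sum(area_fraction[lu] * flux[lu] for lu in area_fraction)
      naive_grid_flux = flux["rye"]            # accept the tower flux unchanged
      representation_error = naive_grid_flux - grid_flux
      print(grid_flux, representation_error)   # 121.0 and -1.0 W m-2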

  2. Evidence evaluation: measure Z corresponds to human utility judgments better than measure L and optimal-experimental-design models.

    Science.gov (United States)

    Rusconi, Patrice; Marelli, Marco; D'Addario, Marco; Russo, Selena; Cherubini, Paolo

    2014-05-01

    Evidence evaluation is a crucial process in many human activities, spanning from medical diagnosis to impression formation. The present experiments investigated which, if any, normative model best conforms to people's intuition about the value of the obtained evidence. Psychologists, epistemologists, and philosophers of science have proposed several models to account for people's intuition about the utility of the obtained evidence with respect either to a focal hypothesis or to a constellation of hypotheses. We pitted against each other the so-called optimal-experimental-design models (i.e., Bayesian diagnosticity, log₁₀ diagnosticity, information gain, Kullback-Leibler distance, probability gain, and impact) and measures L and Z to compare their ability to describe humans' intuition about the value of the obtained evidence. Participants received words-and-numbers scenarios concerning 2 hypotheses and binary features. They were asked to evaluate the utility of "yes" and "no" answers to questions about features possessed in different proportions (i.e., the likelihoods) by 2 types of extraterrestrial creatures (corresponding to 2 mutually exclusive and exhaustive hypotheses). Participants evaluated either how helpful an answer was or how much an answer decreased/increased their beliefs with respect either to a single hypothesis or to both hypotheses. We fitted mixed-effects models and used Akaike information criterion and Bayesian information criterion values to compare the competing models of the value of the obtained evidence. Overall, the experiments showed that measure Z was the best-fitting model of participants' judgments of the value of obtained answers. We discuss the implications for the human hypothesis-evaluation process.
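
    For readers who want the candidate measures side by side, a minimal sketch under stated assumptions: Z in the Crupi-Tentori-Gonzalez form and L in the Kemeny-Oppenheim form (the forms commonly used in this literature), plus two of the optimal-experimental-design measures, for two mutually exclusive and exhaustive hypotheses and a binary answer:

      import math

      def posterior(prior_h1, lik_h1, lik_h2):
          """Bayes' rule; lik_i = P(answer | Hi)."""
          p_answer = prior_h1 * lik_h1 + (1 - prior_h1) * lik_h2
          return prior_h1 * lik_h1 / p_answer

      def measure_z(prior, post):          # confirmation measure Z
          d = post - prior
          return d / (1 - prior) if d >= 0 else d / prior

      def measure_l(lik_h1, lik_h2):       # likelihood-ratio measure L
          return (lik_h1 - lik_h2) / (lik_h1 + lik_h2)

      def kl_distance(prior, post):        # an OED model, for comparison
          pairs = [(post, prior), (1 - post, 1 - prior)]
          return sum(p * math.log2(p / q) for p, q in pairs if p > 0)

      prior = 0.5                          # two equiprobable creature types
      lik_h1, lik_h2 = 0.7, 0.3            # P("yes" | H1) and P("yes" | H2)
      post = posterior(prior, lik_h1, lik_h2)
      print(measure_z(prior, post), measure_l(lik_h1, lik_h2),
            abs(post - prior),             # impact, another OED measure
            kl_distance(prior, post))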

  3. Modeling strategy to identify patients with primary immunodeficiency utilizing risk management and outcome measurement.

    Science.gov (United States)

    Modell, Vicki; Quinn, Jessica; Ginsberg, Grant; Gladue, Ron; Orange, Jordan; Modell, Fred

    2017-06-01

    This study seeks to generate analytic insights into risk management and the probability of an identifiable primary immunodeficiency defect. The Jeffrey Modell Centers Network database, the Jeffrey Modell Foundation's 10 Warning Signs, the 4 Stages of Testing Algorithm, physician-reported clinical outcomes, programs of physician education and public awareness, the SPIRIT® Analyzer, and newborn screening, taken together, generate P values of less than 0.05. This indicates that the results do not occur by chance and that there is a better than 95% probability that the data are valid. The objectives are to improve patients' quality of life while generating significant reductions in cost. The advances of the world's experts, aligned with these JMF programs, can generate analytic insights into risk management and the probability of an identifiable primary immunodeficiency defect. This strategy reduces the uncertainties related to primary immunodeficiency risks, as we can screen, test, identify, and treat undiagnosed patients. We can also address regional differences and prevalence, age, gender, treatment modalities, and sites of care, as well as economic benefits. These tools support high net benefits, substantial financial savings, and significant reductions in cost. All stakeholders, including patients, clinicians, pharmaceutical companies, third-party payers, and government healthcare agencies, must address the earliest possible precise diagnosis, appropriate intervention and treatment, and stringent control of healthcare costs through risk assessment and outcome measurement. An affected patient is entitled to nothing less, and stakeholders are responsible for utilizing the tools currently available. Implementation offers a significant challenge to the entire primary immunodeficiency community.

  4. Lagrangian Particle Dispersion Model Intercomparison and Evaluation Utilizing Measurements from Controlled Tracer Release Experiments

    Science.gov (United States)

    Hegarty, J. D.; Draxler, R.; Stein, A. F.; Brioude, J.; Eluszkiewicz, J.; Mountain, M.; Nehrkorn, T.; Andrews, A. E.

    2012-12-01

    The accuracy of greenhouse gas (GHG) fluxes estimated using inverse methods is highly dependent on the fidelity of the atmospheric transport model employed. Lagrangian particle dispersion models (LPDMs) driven by customized meteorological output from mesoscale models have emerged as a powerful tool in inverse GHG estimates at policy-relevant regional and urban scales, for several reasons: 1) Mesoscale meteorology can be available at higher resolution than in most global models, and therefore has the potential to be more realistic, 2) the Lagrangian approach minimizes numerical diffusion present in Eulerian models and is thus better able to represent transport in the near-field of measurement locations, and 3) the Lagrangian approach offers an efficient way to compute the grid-scale adjoint of the transport model ("footprints") by running transport backwards in time. Motivated by these considerations, we intercompare three widely used LPDMs (HYSPLIT, STILT, and FLEXPART) driven by identical meteorological input from the Weather Research and Forecasting (WRF) model against measurements from the controlled tracer release experiments (ready-testbed.arl.noaa.gov/HYSPLIT_datem.php). Our analysis includes statistical assessments of each LPDM in terms of its ability to simulate the observed tracer concentrations, reversibility, and sensitivity to the WRF configuration, particularly with regard to the simulation of the planetary boundary layer.
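
    The DATEM archive referenced above evaluates such runs with standard plume statistics; a hedged sketch of three of the usual ones (the exact suite and thresholds used in this intercomparison are not given in the abstract):

      import numpy as np

      def fractional_bias(obs, sim):
          """FB in [-2, 2]; 0 means unbiased mean concentration."""
          return 2.0 * (sim.mean() - obs.mean()) / (sim.mean() + obs.mean())

      def figure_of_merit_in_space(obs, sim, thresh=0.1):
          """Percent overlap of measured and simulated plumes above a threshold."""
          hit = np.sum((obs > thresh) & (sim > thresh))
          either = np.sum((obs > thresh) | (sim > thresh))
          return 100.0 * hit / either

      obs = np.array([0.0, 0.3, 1.2, 0.8, 0.0, 0.1])   # sampled tracer, arb. units
      sim = np.array([0.1, 0.5, 0.9, 1.1, 0.2, 0.0])   # one LPDM's prediction
      print(np.corrcoef(obs, sim)[0, 1], fractional_bias(obs, sim),
            figure_of_merit_in_space(obs, sim))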

  5. Utilizing patch and site level greenhouse-gas concentration measurements in tandem with the prognostic model, ecosys

    Science.gov (United States)

    Morin, T. H.; Rey Sanchez, C.; Bohrer, G.; Riley, W. J.; Angle, J.; Mekonnen, Z. A.; Stefanik, K. C.; Wrighton, K. C.

    2016-12-01

    Estimates of wetland greenhouse gas (GHG) budgets currently have large uncertainties. While wetlands are the largest source of natural methane (CH4) emissions worldwide, they are also important carbon dioxide (CO2) sinks. Determining the GHG budget of a wetland is challenging, particularly because wetlands have intrinsically temporally and spatially heterogeneous land cover patterns and complex dynamics of CH4 production and emission. These issues pose challenges to both measuring and modeling GHG budgets of wetlands. To improve wetland GHG flux predictability, we utilized the ecosys model to predict CH4 fluxes from a natural temperate estuarine wetland in northern Ohio. Multiple patches of terrain (including Typha spp. and Nelumbo lutea) were represented as separate grid cells in the model. Cells were initialized with measured values but were allowed to evolve dynamically in response to meteorological, hydrological, and thermodynamic conditions. Trace gas surface emissions were predicted as the end result of microbial activity, physical transport, and plant processes. Corresponding to each model grid cell, measurements of dissolved gas concentrations were conducted with pore-water dialysis samplers (peepers). The peeper measurements were taken via a series of tubes, providing an undisturbed observation of the pore-water concentrations of in situ dissolved gases along a vertical gradient. Non-steady-state chambers and a flux tower provided both patch-level and integrated site-level fluxes of CO2 and CH4. New Typha chambers were also developed to enclose entire plants and segregate the plant fluxes from soil/water fluxes. We expect ecosys to predict the seasonal and diurnal fluxes of CH4 within each land cover type and to resolve where CH4 is generated within the soil column and its transmission mechanisms. We demonstrate the need for detailed information at both the patch and site level when using models to predict whole-wetland, ecosystem-scale GHG budgets.

  6. Making Meaningful Measurement in Survey Research: A Demonstration of the Utility of the Rasch Model. IR Applications. Volume 28

    Science.gov (United States)

    Royal, Kenneth D.

    2010-01-01

    Quality measurement is essential in every form of research, including institutional research and assessment. This paper addresses the erroneous assumptions institutional researchers often make with regard to survey research and provides an alternative method to producing more valid and reliable measures. Rasch measurement models are discussed and…
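
    For context, the dichotomous Rasch model places persons and items on a common logit (interval) scale, which is what gives the "more valid and reliable measures" claim its force; a minimal sketch:

      import math

      def rasch_probability(theta, b):
          """Dichotomous Rasch model: P(endorse) for person ability theta
          and item difficulty b, both in logits."""
          return 1.0 / (1.0 + math.exp(-(theta - b)))

      # A respondent 1 logit above an item's difficulty endorses it with p ~ 0.73:
      print(rasch_probability(theta=0.5, b=-0.5))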

  7. Maximizing the utility of radio spectrum: Broadband spectrum measurements and occupancy model for use by cognitive radio

    Science.gov (United States)

    Petrin, Allen J.

    Radio spectrum is a vital national asset; proper management of this finite resource is essential to the operation and development of telecommunications, radio-navigation, radio astronomy, and passive remote sensing services. To maximize the utility of the radio spectrum, knowledge of its current usage is beneficial. As a result, several spectrum studies have been conducted in urban Atlanta, suburban Atlanta, and rural North Carolina. These studies improve upon past spectrum studies by resolving spectrum usage by nearly all of its possible parameters: frequency, time, polarization, azimuth, and location type. The continuous frequency range from 400 MHz to 7.2 GHz was measured with a custom-designed system. More than 8 billion spectrum measurements were taken over several months of observation. A multi-parameter spectrum usage detection method was developed and analyzed with data from the spectrum studies. This method was designed to exploit all the characteristics of the spectral information that was available from the spectrum studies. Analysis of the spectrum studies showed significant levels of underuse. The level of spectrum usage in time and azimuthal space was determined to be only 6.5% for urban Atlanta, 5.3% for suburban Atlanta, and 0.8% for rural North Carolina. Most of the frequencies measured never experienced usage. Interference was detected in several protected radio astronomy and sensitive radio navigation bands. A cognitive radio network architecture to share spectrum with fixed microwave systems was developed. The architecture uses a broker-based sharing method to control spectrum access and investigate interference issues.
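
    The occupancy figures quoted above are duty cycles: the fraction of observations in which detected power exceeds a decision threshold. A schematic energy-detection version (the study's actual multi-parameter detector also exploits polarization and azimuth; the threshold margin here is an assumption):

      import numpy as np

      def band_occupancy(power_dbm, noise_floor_dbm, margin_db=6.0):
          """Fraction of sweeps in which a band is judged occupied."""
          return float(np.mean(power_dbm > noise_floor_dbm + margin_db))

      sweeps = -100.0 + 15.0 * np.random.rand(10000)   # synthetic received power, dBm
      print(100.0 * band_occupancy(sweeps, noise_floor_dbm=-95.0))  # percent occupied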

  8. The Service Utility Model in Service Management

    Institute of Scientific and Technical Information of China (English)

    LI Yan; ZHOU Wen-an; SONG Jun-de

    2005-01-01

    Aiming to provide a measurable service Quality of Service (QoS) evaluation method for service inventory management, this paper proposes a new mobile Service Utility Model (SUM) that incorporates service-layer and business-layer elements into the service utility profile, and proposes a self-adaptive service inventory management algorithm as a QoS control scheme based on SUM. Simulation results show that service inventory utility is fully reflected by SUM and that whole-system efficiency is greatly increased by using SUM as the adaptation rule.

  9. Clinical utility of measures of breathlessness.

    Science.gov (United States)

    Cullen, Deborah L; Rodak, Bernadette

    2002-09-01

    The clinical utility of measures of dyspnea has been debated in the health care community. Although breathlessness can be evaluated with various instruments, the most effective dyspnea measurement tool for patients with chronic lung disease or for measuring treatment effectiveness remains uncertain. Understanding the evidence for the validity and reliability of these instruments may provide a basis for appropriate clinical application. The objective was to evaluate instruments designed to measure breathlessness, either as single-symptom or multidimensional instruments, based on psychometric foundations such as validity, reliability, and discriminative and evaluative properties. Classification of each dyspnea measurement instrument will recommend clinical application in terms of exercise, benchmarking patients, activities of daily living, patient outcomes, clinical trials, and responsiveness to treatment. Eleven dyspnea measurement instruments were selected. Each instrument was assessed as discriminative or evaluative and then analyzed as to its psychometric properties and purpose of design. Descriptive data from all studies were described according to their primary patient application (ie, chronic obstructive pulmonary disease, asthma, or other patient populations). The Borg Scale and the Visual Analogue Scale are applicable to exertion and thus can be applied to any cardiopulmonary patient to determine dyspnea. All other measures were determined appropriate for chronic obstructive pulmonary disease, whereas the Shortness of Breath Questionnaire can be applied to cystic fibrosis and lung transplant patients. The most appropriate utility for all instruments was measuring the effects on activities of daily living and for benchmarking patient progress. Instruments that quantify function and health-related quality of life have great utility for documenting outcomes but may be limited as to documenting treatment responsiveness in terms of clinically important changes. The dyspnea…

  10. Erosive Burning Study Utilizing Ultrasonic Measurement Techniques

    Science.gov (United States)

    Furfaro, James A.

    2003-01-01

    A 6-segment subscale motor was developed to generate a range of internal environments in which multiple propellants could be characterized for erosive burning. The motor test bed was designed to provide a high Mach number, high mass flux environment. Propellant regression rates were monitored for each segment utilizing ultrasonic measurement techniques. These data were obtained for three propellants, RSRM, ETM-03, and Castor® IVA, which span two propellant types, PBAN (polybutadiene acrylonitrile) and HTPB (hydroxyl-terminated polybutadiene). The characterization of these propellants indicates a remarkably similar erosive burning response to the induced flow environment. Propellant burn rates for each type had a conventional response with respect to pressure up to a bulk flow velocity threshold. Each propellant, however, had a unique threshold at which it would experience an increase in observed propellant burn rate. Above the observed threshold each propellant again demonstrated a similar enhanced burn rate response corresponding to the local flow environment.
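
    The behavior described, conventional pressure dependence below a flow threshold and flow-driven enhancement above it, can be captured by a threshold-augmented burn rate law. A sketch with illustrative coefficients (not the measured RSRM, ETM-03, or Castor IVA values):

      def burn_rate(p_mpa, u_flow, a=5.0e-3, n=0.35, u_th=200.0, k=1.5e-3):
          """Saint-Robert law a*P^n plus a threshold-style erosive term."""
          r = a * p_mpa ** n                  # conventional response to pressure
          if u_flow > u_th:                   # propellant-specific velocity threshold
              r *= 1.0 + k * (u_flow - u_th)  # enhancement scales with local flow
          return r

      print(burn_rate(7.0, 150.0), burn_rate(7.0, 400.0))  # below vs. above threshold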

  11. Measuring equity in disability and healthcare utilization in Afghanistan.

    Science.gov (United States)

    Trani, Jean-Francois; Barbou-des-Courieres, Cecile

    2012-01-01

    This paper analyses equity in health and healthcare utilization in Afghanistan based on a representative national household survey. Equitable access is a cornerstone of Afghan health policy. We measured socioeconomic-related equity in access to public health care using disability (because people with disabilities are poorer and more likely to use health care) and a concentration index (CI) and its decomposition. Socioeconomic-related equity in healthcare utilization was measured using a probit model and compared with an OLS model providing the horizontal inequity index (HI). We found a low rate of healthcare facility utilization (25%). Disabled persons use healthcare facilities more and have higher medical expenses. Disability is more frequently associated with older age, unemployed heads of household and lower education. The CI of disability is 0.0221, indicating a pro-rich distribution of health. This pro-rich effect is higher in small households (the CI decreases with household size, -0.0048) and in safe areas (0.0059). The CI of healthcare utilization is -0.0159, indicating a slightly pro-poor distribution of healthcare utilization, but, overall, there is no difference in healthcare utilization by wealth status. Our study does not show major socioeconomic-related inequity in disability and healthcare utilization in Afghanistan. This is due to the extreme and pervasive poverty found in Afghanistan. The absence of inequity in health access is explained by the uniform poverty of the population and the difficulty of accessing BPHS (basic package of health services) facilities, despite alarming health indicators.
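
    The concentration index used here is conventionally computed as twice the covariance between the health variable and the fractional socioeconomic rank, divided by the variable's mean. A minimal sketch with synthetic data:

      import numpy as np

      def concentration_index(health, ses):
          """CI = 2*cov(h, r)/mean(h); r is the fractional rank by ses.
          CI > 0 means `health` is concentrated among the better-off."""
          n = len(ses)
          ranks = np.empty(n)
          ranks[np.argsort(ses)] = np.arange(n)
          r = (ranks + 0.5) / n
          return 2.0 * np.cov(health, r, bias=True)[0, 1] / health.mean()

      rng = np.random.default_rng(1)
      wealth = rng.lognormal(size=1000)
      disability = (rng.random(1000) < 0.03 + 0.02 * (wealth > np.median(wealth))).astype(float)
      print(concentration_index(disability, wealth))  # positive: pro-rich, as in the study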

  12. Deriving minimal models for resource utilization

    NARCIS (Netherlands)

    te Brinke, Steven; Bockisch, Christoph; Bergmans, Lodewijk; Malakuti Khah Olun Abadi, Somayeh; Aksit, Mehmet; Katz, Shmuel

    2013-01-01

    We show how compact Resource Utilization Models (RUMs) can be extracted from concrete, overly detailed models of systems or sub-systems in order to model energy-aware software. Using the Counterexample-Guided Abstraction Refinement (CEGAR) approach, along with model-checking tools, abstract models…

  13. Utilizing computer models for optimizing classroom acoustics

    Science.gov (United States)

    Hinckley, Jennifer M.; Rosenberg, Carl J.

    2002-05-01

    The acoustical conditions in a classroom play an integral role in establishing an ideal learning environment. Speech intelligibility is dependent on many factors, including speech loudness, room finishes, and background noise levels. The goal of this investigation was to use computer modeling techniques to study the effect of acoustical conditions on speech intelligibility in a classroom. This study focused on a simulated classroom which was generated using the CATT-acoustic computer modeling program. The computer was utilized as an analytical tool in an effort to optimize speech intelligibility in a typical classroom environment. The factors that were focused on were reverberation time, location of absorptive materials, and background noise levels. Speech intelligibility was measured with the Rapid Speech Transmission Index (RASTI) method.
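
    The reverberation-time leg of such a study typically starts from Sabine's formula, which ray-tracing models like CATT then refine; a worked version:

      def sabine_rt60(volume_m3, total_absorption_m2):
          """Sabine's formula: T60 = 0.161 * V / A (metric sabins)."""
          return 0.161 * volume_m3 / total_absorption_m2

      # A 200 m3 classroom with 50 m2 of absorption reverberates ~0.64 s;
      # adding 30 m2 of absorptive ceiling brings it to ~0.40 s.
      print(sabine_rt60(200.0, 50.0), sabine_rt60(200.0, 80.0))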

  14. The predictive validity of prospect theory versus expected utility in health utility measurement.

    Science.gov (United States)

    Abellan-Perpiñan, Jose Maria; Bleichrodt, Han; Pinto-Prades, Jose Luis

    2009-12-01

    Most health care evaluations today still assume expected utility even though the descriptive deficiencies of expected utility are well known. Prospect theory is the dominant descriptive alternative to expected utility. This paper tests whether prospect theory leads to better health evaluations than expected utility. The approach is purely descriptive: we explore how simple measurements together with prospect theory and expected utility predict choices and rankings between more complex stimuli. For decisions involving risk, prospect theory is significantly more consistent with rankings and choices than expected utility. This conclusion no longer holds when we use prospect theory utilities and expected utilities to predict intertemporal decisions. The latter finding cautions against the common assumption in health economics that health state utilities are transferable across decision contexts. Our results suggest that the standard gamble, and algorithms based on it, should not be used to value health.
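
    For risky prospects over health outcomes, the two competing evaluations can be written compactly. A sketch assuming the common power value function and Tversky-Kahneman (1992) probability weighting for gains (the paper's own estimated parameters may differ):

      def eu(outcomes, probs, u):
          """Expected utility."""
          return sum(p * u(x) for x, p in zip(outcomes, probs))

      def pt_value(outcomes, probs, v, w):
          """Cumulative prospect theory for gains: decision weights are
          differences of the transformed decumulative distribution."""
          total, cum = 0.0, 0.0
          for x, p in sorted(zip(outcomes, probs), key=lambda t: -t[0]):
              total += (w(cum + p) - w(cum)) * v(x)
              cum += p
          return total

      v = lambda x: x ** 0.88                        # assumed power value function
      w = lambda p: p ** 0.61 / (p ** 0.61 + (1 - p) ** 0.61) ** (1 / 0.61)
      gamble = ([10.0, 0.0], [0.5, 0.5])             # e.g., years in full health
      print(eu(*gamble, u=v), pt_value(*gamble, v=v, w=w))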

  15. Continuous utility factor in segregation models

    Science.gov (United States)

    Roy, Parna; Sen, Parongama

    2016-02-01

    We consider the constrained Schelling model of social segregation in which the utility factor of agents strictly increases and nonlocal jumps of the agents are allowed. In the present study, the utility factor u is defined in a way such that it can take continuous values and depends on the tolerance threshold as well as the fraction of unlike neighbors. Two models are proposed: in model A the jump probability is determined by the sign of u only, which makes it equivalent to the discrete model. In model B the actual values of u are considered. Model A and model B are shown to differ drastically as far as segregation behavior and phase transitions are concerned. In model A, although segregation can be achieved, the cluster sizes are rather small. Also, a frozen state is obtained in which steady states comprise many unsatisfied agents. In model B, segregated states with much larger cluster sizes are obtained. The correlation function is calculated to show quantitatively that larger clusters occur in model B. Moreover for model B, no frozen states exist even for very low dilution and small tolerance parameter. This is in contrast to the unconstrained discrete model considered earlier where agents can move even when utility remains the same. In addition, we also consider a few other dynamical aspects which have not been studied in segregation models earlier.
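
    One hedged reading of the two jump rules (the paper's exact functional form of u is not reproduced in the abstract, so the forms below are illustrative):

      import random

      def utility_factor(frac_unlike, tolerance):
          """Continuous utility factor: positive when the fraction of unlike
          neighbours is below the tolerance threshold (an assumed form)."""
          return tolerance - frac_unlike

      def accept_jump(du, model):
          """Accept a nonlocal jump given the utility change du it produces."""
          if model == "A":                  # sign only: reduces to the discrete model
              return du > 0
          return random.random() < min(1.0, max(0.0, du))  # model B: magnitude matters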

  16. Utility Independence of Multiattribute Utility Theory is Equivalent to Standard Sequence Invariance of Conjoint Measurement

    NARCIS (Netherlands)

    H. Bleichrodt (Han); J.N. Doctor (Jason); M. Filko (Martin); P.P. Wakker (Peter)

    2011-01-01

    Utility independence is a central condition in multiattribute utility theory, where attributes of outcomes are aggregated in the context of risk. The aggregation of attributes in the absence of risk is studied in conjoint measurement. In conjoint measurement, standard sequences have been…

  17. Utility measurement in healthcare: the things I never got to.

    Science.gov (United States)

    Torrance, George W

    2006-01-01

    The present article provides a brief historical background on the development of utility measurement and cost-utility analysis in healthcare. It then outlines a number of research ideas in this field that the author never got to. The first idea is extremely fundamental. Why is health economics the only application of economics that does not use the discipline of economics? And, more importantly, what discipline should it use? Research ideas are discussed to investigate precisely the underlying theory and axiom systems of both Paretian welfare economics and the decision-theoretical utility approach. Can the two approaches be integrated or modified in some appropriate way so that they better reflect the needs of the health field? The investigation is described both for the individual and societal levels. Constructing a 'Robinson Crusoe' society of only a few individuals with different health needs, preferences and willingness to pay is suggested as a method for gaining insight into the problem. The second idea concerns the interval property of utilities and, therefore, QALYs. It specifically concerns the important requirement that changes of equal magnitude anywhere on the utility scale, or alternatively on the QALY scale, should be equally desirable. Unfortunately, one of the original restrictions on utility theory states that such comparisons are not permitted by the theory. It is shown, in an important new finding, that while this restriction applies in a world of certainty, it does not in a world of uncertainty, such as healthcare. Further research is suggested to investigate this property under both certainty and uncertainty. Other research ideas that are described include: the development of a precise axiomatic basis for the time trade-off method; the investigation of chaining as a method of preference measurement with the standard gamble or time trade-off; the development and training of a representative panel of the general public to improve the completeness…

  18. A New Filtering Algorithm Utilizing Radial Velocity Measurement

    Institute of Scientific and Technical Information of China (English)

    LIU Yan-feng; DU Zi-cheng; PAN Quan

    2005-01-01

    Pulse Doppler radar measurements consist of range, azimuth, elevation and radial velocity. Most radar tracking algorithms in engineering utilize only the position measurements. The extended Kalman filter with radial velocity measurement is presented, and a new filtering algorithm utilizing the radial velocity measurement is proposed to improve tracking results; a theoretical analysis is also given. Simulation results of the new algorithm, the converted measurement Kalman filter, and the extended Kalman filter are compared. The effectiveness of the new algorithm is verified by the simulation results.
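
    The gain from the radial velocity channel shows up in the measurement model: it is the only component that couples the velocity states directly to the measurement. A sketch of the nonlinear measurement function, with a finite-difference Jacobian standing in for the analytic one an EKF would normally use:

      import numpy as np

      def h(state):
          """[x, y, z, vx, vy, vz] -> [range, azimuth, elevation, radial velocity]."""
          p, v = state[:3], state[3:]
          rng = np.linalg.norm(p)
          az = np.arctan2(p[1], p[0])
          el = np.arcsin(p[2] / rng)
          rdot = p @ v / rng                 # line-of-sight projection of velocity
          return np.array([rng, az, el, rdot])

      def jacobian(f, x, eps=1e-6):
          """Numerical Jacobian of f at x."""
          fx = f(x)
          return np.column_stack([(f(x + eps * e) - fx) / eps for e in np.eye(len(x))])

      x = np.array([10e3, 5e3, 2e3, -150.0, 80.0, 5.0])
      print(h(x), jacobian(h, x).shape)      # 4 measurements, 4x6 Jacobian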

  19. Precision mass measurements utilizing beta endpoints

    CERN Document Server

    Moltz, D M; Kern, B D; Noma, H; Ritchie, B G; Toth, K S

    1981-01-01

    A technique for precise determination of beta endpoints with an intrinsic germanium detector has been developed. The energy calibration is derived from gamma-ray photopeak measurements. This analysis procedure has been checked with a ²⁷Si source produced in a (p, n) reaction on a ²⁷Al target and subsequently applied to mass-separated samples of ⁷⁶Rb, ⁷⁷Rb and ⁷⁸Rb. Results indicate errors <50 keV are obtainable. (29 refs).

  20. The Utility of Ada for Army Modeling

    Science.gov (United States)

    1990-04-10

    34 Ada " for Ada Lovelace (1815-1851), a mathematician who worked with Charles Babbage on his difference and analytic engines.9 Later in 1979, the HOLWG...OF ADA FOR ARMY MODELING BY COLONEL MICHAEL L. YOCOM DISTRIBUTION STATEMENT A: Approved for publie releases distribution is unlimited. 1% LF-, EC TE...TITLE (ad Subtitle) a. TYPE OF REPORT & PERIOD COVERED The Utility of Ada for Army Modeling Individual Study Project 6 PERFORMING ORG. REPORT NUMBER

  1. Network Bandwidth Utilization Forecast Model on High Bandwidth Network

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl; Sim, Alex

    2014-07-07

    With the increasing number of geographically distributed scientific collaborations and the growth in data size, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
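
    A hedged sketch of the STL + ARIMA pipeline named above, on synthetic 5-minute SNMP-style samples (the paper's data source, model orders, and seasonal period are not specified in the abstract beyond "STL and ARIMA"):

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA
      from statsmodels.tsa.seasonal import STL

      idx = pd.date_range("2014-01-01", periods=2880, freq="5min")
      period = 288                                  # one day of 5-minute samples
      util = pd.Series(50 + 20 * np.sin(2 * np.pi * np.arange(2880) / period)
                       + np.random.randn(2880), index=idx)

      # Decompose, model the deseasonalized series, then re-add the seasonal cycle.
      seasonal = STL(util, period=period).fit().seasonal
      arima = ARIMA((util - seasonal).to_numpy(), order=(1, 1, 1)).fit()
      steps = 12
      forecast = arima.forecast(steps) + seasonal.iloc[-period:].iloc[:steps].to_numpy()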

  2. Study on utilizing ultrasonic for measurement of sediment concentration distribution

    Institute of Scientific and Technical Information of China (English)

    Jia Chunjuan; Tang Maoguan

    1998-01-01

    In the course of sedimentation research, the measurement of sediment concentration and its distribution is very important. At present, most traditional methods are arduous and cannot measure the sediment timely and continuously. In order to seek a new measurement method, this paper reports on the use of ultrasonic measurement. When an ultrasonic wave propagates along the depth in an aqueous suspension, the intensity scattered by the sediment particles changes with depth and sediment concentration. Based on this principle,…

  3. Evaluating the Utility of Adjoint-based Inverse Modeling with Aircraft and Surface Measurements during ARCTAS-CARB to Constrain Wildfire Emissions of Black Carbon

    Science.gov (United States)

    Henze, D. K.; Guerrette, J.; Bousserez, N.

    2016-12-01

    Wildfires contribute significantly to regional haze events globally, and they are potentially becoming more commonplace with increasing droughts due to climate change. Aerosol emissions from wildfires are highly uncertain, with global annual totals varying by a factor of 2 to 3 and regional rates varying by up to a factor of 10. At the high resolution required to predict PM2.5 exposure events, this variance is attributable to differences in methodology, differing land cover datasets, spatial variation in fire locations, and limited understanding of fast transient fire behavior. Here we apply an adjoint-based online chemical inverse modeling tool, WRFDA-Chem, to constrain black carbon aerosol (BC) emissions from fires during the 2008 ARCTAS-CARB field campaign. We identify several weaknesses in the prior diurnal distribution of emissions, including a missing early morning emission peak associated with local, persistent, large-scale forest fires. On 22 June 2008, aircraft observations are able to reduce the spread between FINNv1.0 and QFEDv2.4r8 from ×3.5 to ×2.1. On 23 and 24 June, the spread is reduced from ×3.4 to ×1.4. Using posterior error estimates, we found that emission variance improvements are limited to a small footprint surrounding the measurements. Relative biomass burning (BB) emission variances are reduced by up to 35% near aircraft flight paths and up to 60% near IMPROVE surface sites. Due to the spatial variation of observations on multiple days, and the heterogeneous biomass burning errors on daily scales, cross-validation was not successful. Future high-resolution measurements need to be carefully planned to characterize biomass burning emission errors and control for day-to-day variation. In general, the 4D-Var inversion framework would benefit from reduced wall-time. For the problem presented, incremental 4D-Var requires 20 hours on 96 cores to reach practical optimization convergence and generate the posterior covariance matrix for a 24-hour assimilation window.
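
    For reference, the strong-constraint 4D-Var cost function minimized in this kind of inversion, in generic form (WRFDA-Chem's exact control-variable details are not given in the abstract):

      J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
                    + \tfrac{1}{2}\sum_{k}\bigl(H_k(\mathbf{x})-\mathbf{y}_k\bigr)^{\mathsf{T}}\mathbf{R}_k^{-1}\bigl(H_k(\mathbf{x})-\mathbf{y}_k\bigr)

    Here x is the control vector (the BC emission scaling factors), x_b the prior (e.g., FINN or QFED), B the prior error covariance, y_k the aircraft and IMPROVE observations in time bin k, H_k the model-plus-observation operator, and R_k the observation error covariance; the posterior covariance behind the quoted variance reductions is approximated from the inverse Hessian of J.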

  4. Modeling utilization distributions in space and time.

    Science.gov (United States)

    Keating, Kim A; Cherry, Steve

    2009-07-01

    W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed.
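
    The wrapped Cauchy kernel and the product-kernel estimator are compact enough to sketch; the bandwidths below are assumptions, not the paper's fitted values:

      import numpy as np

      def wrapped_cauchy(theta, mu, rho):
          """Wrapped Cauchy density on the circle; rho in (0,1) sets concentration."""
          return (1 - rho**2) / (2 * np.pi * (1 + rho**2 - 2 * rho * np.cos(theta - mu)))

      def ud(x, y, doy, fixes, h=150.0, rho=0.9):
          """Product-kernel UD in (x, y, day-of-year): Gaussian kernels in space,
          wrapped Cauchy in circular time. `fixes` holds telemetry rows (x, y, doy)."""
          ang, ang_i = 2 * np.pi * doy / 365.0, 2 * np.pi * fixes[:, 2] / 365.0
          gx = np.exp(-0.5 * ((x - fixes[:, 0]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
          gy = np.exp(-0.5 * ((y - fixes[:, 1]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
          return float(np.mean(gx * gy * wrapped_cauchy(ang, ang_i, rho)))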

  5. Animal Models Utilized in HTLV-1 Research.

    Science.gov (United States)

    Panfil, Amanda R; Al-Saleem, Jacob J; Green, Patrick L

    2013-01-01

    Since the isolation and discovery of human T-cell leukemia virus type 1 (HTLV-1) over 30 years ago, researchers have utilized animal models to study HTLV-1 transmission, viral persistence, virus-elicited immune responses, and HTLV-1-associated disease development (ATL, HAM/TSP). Non-human primates, rabbits, rats, and mice have all been used to help understand HTLV-1 biology and disease progression. Non-human primates offer a model system that is phylogenetically similar to humans for examining viral persistence. Viral transmission, persistence, and immune responses have been widely studied using New Zealand White rabbits. The advent of molecular clones of HTLV-1 has offered the opportunity to assess the importance of various viral genes in rabbits, non-human primates, and mice. Additionally, over-expression of viral genes using transgenic mice has helped uncover the importance of Tax and Hbz in the induction of lymphoma and other lymphocyte-mediated diseases. HTLV-1 inoculation of certain strains of rats results in histopathological features and clinical symptoms similar to those of humans with HAM/TSP. Transplantation of certain types of ATL cell lines in immunocompromised mice results in lymphoma. Recently, "humanized" mice have been used to model ATL development for the first time. Not all HTLV-1 animal models develop disease, and those that do vary in consistency depending on the type of monkey, strain of rat, or even type of ATL cell line used. However, the progress made using animal models cannot be overstated, as it has led to insights into the mechanisms regulating viral replication, viral persistence, disease development, and, most importantly, model systems to test disease treatments.

  6. A New Preference Reversal in Health Utility Measurement

    NARCIS (Netherlands)

    H. Bleichrodt (Han); J.L. Pinto (Jose Luis)

    2007-01-01

    A central assumption in health utility measurement is that preferences are invariant to the elicitation method that is used. This assumption is challenged by preference reversals. Previous studies have observed preference reversals between choice and matching tasks and between choice and…

  7. Modeling typical performance measures

    NARCIS (Netherlands)

    Weekers, Anke Martine

    2009-01-01

    In the educational, employment, and clinical context, attitude and personality inventories are used to measure typical performance traits. Statistical models are applied to obtain latent trait estimates. Often the same statistical models as those used in maximum performance measurement are applied…

  8. Modeling regulated water utility investment incentives

    Science.gov (United States)

    Padula, S.; Harou, J. J.

    2014-12-01

    This work attempts to model the infrastructure investment choices of privatized water utilities subject to rate-of-return and price-cap regulation. The goal is to understand how regulation influences water companies' investment decisions, such as their desire to engage in transfers with neighbouring companies. We formulate a profit-maximization capacity expansion model that finds the schedule of new supply, demand management and transfer schemes that maintain the annual supply-demand balance and maximize a company's profit under the 2010-15 price control process in England. Regulatory incentives for cost savings are also represented in the model. These include the CIS scheme for capital expenditure (capex) and incentive allowance schemes for operating expenditure (opex). The profit-maximizing investment program (what to build, when, and at what size) is compared with the least-cost program (the social optimum). We apply this formulation to several water companies in South East England to model performance and sensitivity to water network particulars. Results show that if companies are able to outperform the regulatory assumption on the cost of capital, a capital bias can be generated, due to the fact that capital expenditure, contrary to opex, can be remunerated through the companies' regulatory capital value (RCV). The occurrence of the 'capital bias' and its magnitude depend on the extent to which a company can finance its investments at a rate below the allowed cost of capital. The bias can be reduced by the regulatory penalties for underperformance on capital expenditure (the CIS scheme). Sensitivity analysis can be applied by varying the CIS penalty to see how, and to what extent, this impacts the capital-bias effect. We show how regulatory changes could potentially be devised to partially remove the 'capital bias' effect. Solutions potentially include allowing for incentives on total expenditure rather than separately for capex and opex, and allowing…

  9. Utilization of Multispectral Images for Meat Color Measurements

    DEFF Research Database (Denmark)

    Trinderup, Camilla Himmelstrup; Dahl, Anders Lindbjerg; Carstensen, Jens Michael;

    2013-01-01

    This short paper describes how multispectral imaging for color measurement can be utilized in an efficient and descriptive way by meat scientists. The basis of the study is meat color measurements performed with a multispectral imaging system as well as with a standard colorimeter. It is described how different color spaces can enhance the purpose of the analysis, whether that is investigation of a single sample or a comparison between samples. Moreover, the study describes how a simple segmentation can be applied to the multispectral images in order to reach a more descriptive measure of color and color variance than what is obtained by the standard colorimeter.

  10. The Health Utilities Index (HUI®): concepts, measurement properties and applications

    Directory of Open Access Journals (Sweden)

    Horsman John

    2003-10-01

    This is a review of the Health Utilities Index (HUI®) multi-attribute health-status classification systems, and single- and multi-attribute utility scoring systems. HUI refers to both the HUI Mark 2 (HUI2) and HUI Mark 3 (HUI3) instruments. The classification systems provide compact but comprehensive frameworks within which to describe health status. The multi-attribute utility functions provide all the information required to calculate single-summary scores of health-related quality of life (HRQL) for each health state defined by the classification systems. The use of HUI in clinical studies for a wide variety of conditions in a large number of countries is illustrated. HUI provides comprehensive, reliable, responsive and valid measures of health status and HRQL for subjects in clinical studies. Utility scores of overall HRQL for patients are also used in cost-utility and cost-effectiveness analyses. Population norm data are available from numerous large general population surveys. The widespread use of HUI facilitates the interpretation of results and permits comparisons of disease and treatment outcomes, and comparisons of long-term sequelae at the local, national and international levels.
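
    HUI-type scoring systems combine single-attribute utilities through a multiplicative multi-attribute function. A schematic version with hypothetical weights (not the published HUI2/HUI3 coefficients):

      def multiplicative_mau(u_attrs, k, c):
          """u = (prod_i(1 + c*k_i*u_i) - 1)/c, with c solving
          1 + c = prod_i(1 + c*k_i) so full function on every attribute scores 1."""
          prod = 1.0
          for u_i, k_i in zip(u_attrs, k):
              prod *= 1.0 + c * k_i * u_i
          return (prod - 1.0) / c

      # Hypothetical 3-attribute example: k = [0.5, 0.4, 0.3] gives c ~ -0.452,
      # so perfect levels score ~1.0:
      print(multiplicative_mau([1.0, 1.0, 1.0], k=[0.5, 0.4, 0.3], c=-0.452))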

  11. Modelling of biomass utilization for energy purpose

    Energy Technology Data Exchange (ETDEWEB)

    Grzybek, Anna (ed.)

    2010-07-01

    …the overall farm structure, the division of farm land into several separate subfields per farm, overpopulated villages, and very high employment in agriculture (about 27% of all employees in the national economy work in agriculture). Farmers have a low education level. In towns, 34% of the population has secondary education; in rural areas, only 15-16%. Less than 2% of inhabitants of rural areas have higher education. The structure of land use is as follows: arable land 11.5%, meadows and pastures 25.4%, forests 30.1%. Poland requires implementation of technical and technological progress to intensify agricultural production. The reason for competition for agricultural land is the maintenance of the current consumption level alongside the allocation of part of agricultural production to energy purposes. Agricultural land is going to be a key factor for biofuels production. This publication presents research results for the Project PL0073 'Modelling of energetical biomass utilization for energy purposes', financed from the Norwegian Financial Mechanism and the European Economic Area Financial Mechanism. The publication aims to bring the reader closer to the problems connected with the cultivation of energy plants and to dispel myths concerning these problems. Replacing fossil fuels with biomass for heat and electricity production could contribute significantly to reducing carbon dioxide emissions. Moreover, biomass crops and biomass utilization for energy purposes play an important role in diversifying agricultural production as rural areas are transformed. Widening agricultural production enables the creation of new jobs. Sustainable development is going to be the fundamental rule for the evolution of Polish agriculture in the long term. Energy use of biomass integrates perfectly into this evolution, especially at the local level. There are two facts. The first is that interest in energy crops in Poland is increasing…

  12. Propeller aircraft interior noise model utilization study and validation

    Science.gov (United States)

    Pope, L. D.

    1984-01-01

    Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.

  13. Comparative utility of disability progression measures in PPMS

    Science.gov (United States)

    Cutter, Gary R.; Giovannoni, Gavin; Uitdehaag, Bernard M.J.; Wolinsky, Jerry S.; Davis, Mat D.; Steinerman, Joshua R.; Knappertz, Volker

    2017-01-01

    Objective: To assess the comparative utility of disability progression measures in primary progressive MS (PPMS) using the PROMiSe trial data set. Methods: Data for patients randomized to placebo (n = 316) in the PROMiSe trial were included in this analysis. Disability was assessed using change in single (Expanded Disability Status Scale [EDSS], timed 25-foot walk [T25FW], and 9-hole peg test [9HPT]) and composite disability measures (EDSS/T25FW, EDSS/9HPT, and EDSS/T25FW/9HPT). Cumulative and cross-sectional unconfirmed disability progression (UDP) and confirmed disability progression (CDP; sustained for 3 months) rates were assessed at 12 and 24 months. Results: CDP rates defined by a ≥20% increase in T25FW were higher than those defined by EDSS score at 12 and 24 months. CDP rates defined by T25FW or EDSS score were higher than those defined by 9HPT score. The 3-part composite measure was associated with more CDP events (41.4% and 63.9% of patients at 12 and 24 months, respectively) than the 2-part measure (EDSS/T25FW [38.5% and 59.5%, respectively]) and any single measure. Cumulative UDP and CDP rates were higher than cross-sectional rates. Conclusions: The T25FW or composite measures of disability may be more sensitive to disability progression in patients with PPMS and should be considered as the primary endpoint for future studies of new therapies. CDP may be the preferred measure in classic randomized controlled trials in which cumulative disability progression rates are evaluated; UDP may be feasible for cross-sectional studies. PMID:28680915
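
    A schematic version of the 3-month confirmation rule for a single patient's timed 25-foot walk series (the trial's exact visit-window handling is simplified here; the ≥20% criterion comes from the abstract):

      def confirmed_progression(times, scores, baseline, rel_increase=0.20, confirm_days=90):
          """Return the onset day of the first >= rel_increase worsening over baseline
          that is sustained through a visit >= confirm_days later, else None."""
          worse = [s >= baseline * (1.0 + rel_increase) for s in scores]
          for i in range(len(times)):
              if not worse[i]:
                  continue
              later = [j for j in range(i + 1, len(times))
                       if times[j] - times[i] >= confirm_days]
              if later and all(worse[k] for k in range(i + 1, later[0] + 1)):
                  return times[i]
          return None

      # Monthly visits (day, seconds), baseline 6.0 s: progression confirmed at day 30.
      print(confirmed_progression([0, 30, 60, 90, 120], [6.1, 7.4, 7.5, 7.6, 7.4], 6.0))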

  14. Reflections on Improvement of Utility Model System in China

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Ever since the Patent Law was launched in China in 1985, the utility model patent has played a very important role. Over the past two decades, the utility model system has played an active part in encouraging invention-creations and in promoting the progress and development of science and technology.

  15. Development of the multi-attribute Adolescent Health Utility Measure (AHUM)

    Directory of Open Access Journals (Sweden)

    Beusterien Kathleen M

    2012-08-01

    Objective: Obtain utilities (preferences) for a generalizable set of health states experienced by older children and adolescents who receive therapy for chronic health conditions. Methods: A health state classification system, the Adolescent Health Utility Measure (AHUM), was developed based on generic health status measures and input from children with Hunter syndrome and their caregivers. The AHUM contains six dimensions with 4-7 severity levels: self-care, pain, mobility, strenuous activities, self-image, and health perceptions. Using the time trade-off (TTO) approach, a UK population sample provided utilities for 62 of the 16,800 AHUM states. A mixed effects model was used to estimate utilities for the AHUM states. The AHUM was applied to trial NCT00069641 of idursulfase for Hunter syndrome and its extension (NCT00630747). Results: Observations (i.e., utilities) totaled 3,744 (12 × 312 participants), with between 43 and 60 for each health state except for the best and worst states, which had 312 observations each. The mean utilities for the best and worst AHUM states were 0.99 and 0.41, respectively. The random effects model was statistically significant (p …). Discussion: The AHUM health state classification system may be used in future research to enable calculation of quality-adjusted life expectancy for applicable health conditions.

  16. Latent Utility Shocks in a Structural Empirical Asset Pricing Model

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Raahauge, Peter

    We consider a random utility extension of the fundamental Lucas (1978) equilibrium asset pricing model. The resulting structural model leads naturally to a likelihood function. We estimate the model using U.S. asset market data from 1871 to 2000, using both dividends and earnings as state variables. We find that current dividends do not forecast future utility shocks, whereas current utility shocks do forecast future dividends. The estimated structural model produces a sequence of predicted utility shocks which provide better forecasts of future long-horizon stock market returns than the classical dividend-price ratio. Keywords: random utility, asset pricing, maximum likelihood, structural model, return predictability.

  17. Dynamic material strength measurement utilizing magnetically applied pressure-shear

    Directory of Open Access Journals (Sweden)

    Alexander C.S.

    2012-08-01

    Magnetically applied pressure-shear (MAPS) is a recently developed technique for measuring dynamic material strength, developed at Sandia National Laboratories utilizing magneto-hydrodynamic (MHD) drive pulsed power systems. MHD drive platforms generate high pressures by passing a large current through a pair of parallel plate conductors which, in essence, form a single-turn magnet coil. Lorentz forces resulting from the interaction of the self-generated magnetic field and the drive current repel the plates and result in a high-pressure ramp wave propagating in the conductors. This is the principle by which the Sandia Z Machine operates for dynamic material testing. MAPS relies on the addition of a second, external magnetic field applied orthogonally to both the drive current and the self-generated magnetic field. The interaction of the drive current and this external field results in a shear wave being induced directly in the conductors. Thus both longitudinal and shear stresses are generated. These stresses are coupled to a sample material of interest, where shear strength is probed by determining the maximum transmissible shear stress in the state defined by the longitudinal compression. Both longitudinal and transverse velocities are measured via a specialized velocity interferometer system for any reflector (VISAR). Pressure and shear strength of the sample are calculated directly from the VISAR data. Results of tests on several materials at modest pressures (~10 GPa) will be presented and discussed.

  18. Evaluation of Usability Utilizing Markov Models

    Science.gov (United States)

    Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane

    2012-01-01

    Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
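
    The abstract does not reproduce its model, but the standard absorbing-chain calculation behind this kind of usability evaluation is short: with Q the transition probabilities among transient interface states, the fundamental matrix gives the expected number of steps to task completion. All values below are hypothetical:

      import numpy as np

      # Transitions among three transient screens; the missing probability mass
      # in each row is absorption (task completed or abandoned).
      Q = np.array([[0.1, 0.6, 0.1],
                    [0.2, 0.1, 0.5],
                    [0.1, 0.2, 0.1]])
      N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix (I - Q)^-1
      print(N.sum(axis=1))               # expected steps to absorption from each screen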

  19. The effects of high-sugar ryegrass/red clover silage diets on intake, production, digestibility, and N utilization in dairy cows, as measured in vivo and predicted by the NorFor model.

    Science.gov (United States)

    Bertilsson, J; Åkerlind, M; Eriksson, T

    2017-10-01

    Grass silage-based diets often result in poor nitrogen utilization when fed to dairy cows. Perennial ryegrass cultivars with high concentrations of water-soluble carbohydrates (WSC) have proven potential for correcting this imbalance when fed fresh, and have also been shown to increase feed intake, milk production, and N utilization. The possibility of achieving corresponding effects with silage-based diets was investigated in change-over experiments in an incomplete block design with 16 (yr 1) or 12 (yr 2) Swedish Red dairy cows in mid lactation. Measurements of N excretion and rumen parameters were performed on subgroups of 8 and 4 cows, respectively. In yr 1, 2 ryegrass cultivars (standard = Fennema; high-WSC = Aberdart) and 2 cuts (first and second) were compared. In all treatments, ryegrass silage was mixed 75/25 on a dry matter (DM) basis with red clover silage before feeding out. In yr 2, 1 basic mixture from the different cuts of these 2 cultivars was used, and the experimental factors were red clover silage inclusion (25 or 50%) and sucrose addition (0 or 10%) on a silage DM basis. Differences in WSC concentration between the silage mixtures in yr 1 were minor, whereas the differences between cuts were more substantial: 100 compared with 111 g/kg of DM for first-cut silage and 39 compared with 47 g/kg of DM for second-cut silage. The silages fed in yr 2 had a WSC concentration of 115 or 102 g/kg of DM (25 or 50% red clover, respectively), but when sucrose was added, WSC concentration reached 198 and 189 g/kg of DM, respectively. Milk production (kg/d) did not differ between treatments in either year. Red clover inclusion at 50% of silage DM increased milk protein. Nitrogen efficiency (milk N/feed N) increased from 0.231 to 0.254 with sucrose inclusion in yr 2 (average for the 2 red clover levels). Overall rumen pH was 5.99, and increased sucrose level did not affect pH level or daily pH pattern. Sucrose addition reduced neutral detergent fiber digestibility…

  1. Economic principles and fundamental model of the sustainable utilization of ecological resources

    Institute of Scientific and Technical Information of China (English)

    Du Jinpei; Li Lin

    2006-01-01

    By analyzing the basic rules and measurement principles of the sustainable utilization of ecological resources and constructing its mathematical model, this paper points out that the sustainable utilization of ecological resources is, in essence, the repeated application of a two-period model for the effective dynamic allocation of ecological resources. It also points out that in order to realize the sustainable utilization of ecological resources we must follow the basic principle of non-decreasing ecological capital, and it puts forward corresponding standards, measures, policies and proposals.
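
    A hedged formalization of the "double-period model" the abstract refers to, written in the usual two-period resource-allocation form:

      \max_{c_1}\; U(c_1) + \frac{U(c_2)}{1+\rho}
      \quad\text{s.t.}\quad S_2 = G(S_1 - c_1),\;\; c_2 \le S_2,\;\; S_2 \ge S_1

    where S_t is the ecological capital stock, G(.) its natural regeneration, rho the discount rate, and S_2 >= S_1 encodes the non-decreasing ecological capital principle; rolling this two-period problem forward period after period yields the dynamic allocation the abstract describes.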

  2. Measuring and modelling concurrency

    Directory of Open Access Journals (Sweden)

    Larry Sawers

    2013-02-01

    This article explores three critical topics discussed in the recent debate over concurrency (overlapping sexual partnerships): measurement of the prevalence of concurrency, mathematical modelling of concurrency and HIV epidemic dynamics, and measuring the correlation between HIV and concurrency. The focus of the article is the concurrency hypothesis – the proposition that a presumed high prevalence of concurrency explains sub-Saharan Africa's exceptionally high HIV prevalence. Recent surveys using improved questionnaire design show reported concurrency ranging from 0.8% to 7.6% in the region. Even after adjusting for plausible levels of reporting errors, appropriately parameterized sexual network models of HIV epidemics do not generate sustainable epidemic trajectories (i.e., trajectories that avoid epidemic extinction) at levels of concurrency found in recent surveys in sub-Saharan Africa. Efforts to support the concurrency hypothesis with a statistical correlation between HIV incidence and concurrency prevalence are not yet successful. Two decades of efforts to find evidence in support of the concurrency hypothesis have failed to build a convincing case.

  3. Heat Transmission Coefficient Measurements in Buildings Utilizing a Heat Loss Measuring Device

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt

    2013-01-01

    … to optimize the energy performance. This paper presents a method for measuring heat loss by utilizing a U-value meter. The U-value meter measures the heat transfer in W/(m²·K) and has been used in several projects to upgrade the energy performance in temperate regions. The U-value meter was also … and mechanical ventilation in the “warm countries” contribute to an enormous energy consumption and corresponding CO2 emission. In order to establish the best basis for upgrading the energy performance, it is important to make measurements of the heat losses at different places on a building facade, in order …

  4. Measuring the utility of losses by means of the tradeoff method

    NARCIS (Netherlands)

    Fennema, H; Van Assen, M

    1998-01-01

    This paper investigates the shape of the utility function for losses. From a rational point of view, it can be argued that utility should be concave. Empirically, measurements of the utility for losses show mixed results, but most evidence supports convex rather than concave utilities. However, these m…

  5. A workflow learning model to improve geovisual analytics utility.

    Science.gov (United States)

    Roth, Robert E; Maceachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and its utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on …

  6. Fiscal 1995 coal production/utilization technology promotion subsidy/clean coal technology promotion business/regional model survey. Study report on "Environmental load reduction measures: feasibility study of a coal utilization eco/energy supply system" (interim report); 1995 nendo sekitan seisan riyo gijutsu shinkohi hojokin clean coal technology suishin jigyo chiiki model chosa. "Kankyo fuka teigen taisaku: sekitan riyo eko energy kyokyu system no kanosei chosa" chosa hokokusho (chukan hokoku)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    Coal utilization is expected to grow substantially under the long-term energy supply/demand plan. To further expand future coal utilization, however, it is indispensable to reduce the environmental loads of coal's total use in combination with other energy sources. This work extends the regional model surveys on environmental load reduction measures using highly cleaned coal that were conducted in fiscal 1993 and 1994. Concretely, a model system was assumed that combines facilities for the mixed combustion of coal with other energy sources (hulls, bagasse, waste, etc.) with facilities for the effective use of combustion ash, and the potential of this model system to reduce environmental loads was studied. The technology for the mixed combustion of coal with other energy sources is still at a developmental stage, with no established domestic examples. Mixed combustion of coal with other energy sources is therefore an important field, highly useful for future energy supply/demand and environmental issues. 34 refs., 27 figs., 48 tabs.

  7. Utility of Social Modeling for Proliferation Assessment - Preliminary Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Garill A.; Gastelum, Zoe N.; Brothers, Alan J.; Thompson, Sandra E.

    2009-06-01

    This Preliminary Assessment draft report presents the results of a literature search and preliminary assessment of the body of research, analysis methods, models, and data deemed to be relevant to the Utility of Social Modeling for Proliferation Assessment research. This report provides: 1) a description of the problem space and the kinds of information pertinent to the problem space, 2) a discussion of key relevant or representative literature, 3) a discussion of models and modeling approaches judged to be potentially useful to the research, and 4) the next steps of this research that will be pursued based on this preliminary assessment. This draft report represents a technical deliverable for the NA-22 Simulations, Algorithms, and Modeling (SAM) program; specifically, it is the Task 1 deliverable for project PL09-UtilSocial-PD06, Utility of Social Modeling for Proliferation Assessment. This project investigates non-traditional use of social and cultural information to improve nuclear proliferation assessment, including nonproliferation assessment, proliferation resistance assessments, safeguards assessments, and other related studies. These assessments often use and create technical information about a State’s posture towards proliferation, the vulnerability of a nuclear energy system to an undesired event, and the effectiveness of safeguards. This project will find and fuse social and technical information by explicitly considering the role of cultural, social, and behavioral factors relevant to proliferation. The aim of this research is to describe and demonstrate whether and how social science modeling has utility in proliferation assessment.

  8. Estimation of utility values from visual analog scale measures of health in patients undergoing cardiac surgery

    Directory of Open Access Journals (Sweden)

    Oddershede L

    2014-01-01

    Lars Oddershede,1,2 Jan Jesper Andreasen,1 Lars Ehlers2 1Department of Cardiothoracic Surgery, Center for Cardiovascular Research, Aalborg University Hospital, Aalborg, Denmark; 2Danish Center for Healthcare Improvements, Faculty of Social Sciences and Faculty of Health Sciences, Aalborg University, Aalborg East, Denmark Introduction: In health economic evaluations, mapping can be used to estimate utility values from other health outcomes in order to calculate quality-adjusted life-years. Currently, no methods exist to map visual analog scale (VAS) scores to utility values. This study aimed to develop and propose a statistical algorithm for mapping five dimensions of health, measured on VASs, to utility scores in patients suffering from cardiovascular disease. Methods: Patients undergoing coronary artery bypass grafting at Aalborg University Hospital in Denmark were asked to score their health using the five VAS items (mobility, self-care, ability to perform usual activities, pain, and presence of anxiety or depression) and the EuroQol 5 Dimensions questionnaire. Regression analysis was used to estimate four mapping models from patients' age, sex, and the self-reported VAS scores. Prediction errors were compared between mapping models and on subsets of the observed utility scores. Agreement between predicted and observed values was assessed using Bland–Altman plots. Results: Random effects generalized least squares (GLS) regression yielded the best results when quadratic terms of VAS scores were included. Mapping models fitted using the Tobit model and censored least absolute deviation regression did not appear superior to GLS regression. The mapping models were able to explain approximately 63%–65% of the variation in the observed utility scores. The mean absolute error of predictions increased as the observed utility values decreased. Conclusion: We concluded that it was possible to predict utility scores from VAS scores of the five …
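
    As a hedged, minimal sketch of how such a mapping model can be fitted in Python with statsmodels (the study's exact specification is not reproduced in the record; the file name, column names, and the random-intercept approximation of random-effects GLS are illustrative assumptions):

      # Map five 0-100 VAS items (plus age and sex) to utility scores with a
      # random-intercept model and quadratic VAS terms. Column names and the
      # data file are assumptions, not the study's.
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("vas_eq5d.csv")  # hypothetical: one row per assessment

      items = ["mobility", "selfcare", "activities", "pain", "anxiety"]
      for item in items:
          df[item + "_sq"] = df[item] ** 2

      formula = ("utility ~ age + sex + " + " + ".join(items)
                 + " + " + " + ".join(i + "_sq" for i in items))

      # A random intercept per patient stands in for random-effects GLS.
      result = smf.mixedlm(formula, df, groups=df["patient_id"]).fit()
      print(result.summary())

      df["predicted_utility"] = result.predict(df)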

  9. Multiobjective financial planning model for electric-utility rate regulation

    Energy Technology Data Exchange (ETDEWEB)

    Linke, C.M.; Whitford, D.T.

    1983-08-01

    The interests of the three parties to the regulatory process (investors in an electric utility, consumers, and regulators) are often in conflict. Investors are concerned with shareholder wealth maximization, while consumers desire dependable service at low rates. If the desired end product of regulation is to establish rates that balance the interests of consumers and investors, then a financial planning model is needed that accurately reflects the multi-objective nature of the regulatory decision process. This article develops such a multi-objective programming model for examining the efficient trade-offs available to utility regulators in setting rates of return. 8 references, 2 figures, 7 tables.

  10. The changing utility workforce and the emergence of building information modeling in utilities

    Energy Technology Data Exchange (ETDEWEB)

    Saunders, A. [Autodesk Inc., San Rafael, CA (United States)

    2010-07-01

    Utilities are faced with the extensive replacement of a workforce that is now reaching retirement age. New personnel will have varying skill levels and different expectations in relation to design tools. This paper discussed methods of facilitating knowledge transfer from the retiring workforce to new staff using rules-based design software. It was argued that while nothing can replace the experiential knowledge of long-term engineers, software with built-in validations can accelerate training and building information modelling (BIM) processes. Younger personnel will expect a user interface paradigm that is based on their past gaming and work experiences. Visualization, simulation, and modelling approaches were reviewed. 3 refs.

  11. Continuous utility factor in segregation models: a few surprises

    CERN Document Server

    Roy, Parna

    2015-01-01

    We consider the constrained Schelling model of social segregation, which allows non-local jumps of the agents. In the present study, the utility factor u is defined in such a way that it can take continuous values and depends on the tolerance threshold as well as the fraction of unlike neighbours. Two models are proposed: in model A the jump probability is determined by the sign of u only, which makes it equivalent to the discrete model. In model B the actual values of u are considered. Model A and model B are shown to differ drastically as far as segregation behaviour and phase transitions are concerned. The constrained model B turns out to be as efficient as the unconstrained discrete model, if not more so. In addition, we also consider a few other dynamical aspects which have not been studied in segregation models earlier.

  12. The Sustainable Energy Utility (SEU) Model for Energy Service Delivery

    Science.gov (United States)

    Houck, Jason; Rickerson, Wilson

    2009-01-01

    Climate change, energy price spikes, and concerns about energy security have reignited interest in state and local efforts to promote end-use energy efficiency, customer-sited renewable energy, and energy conservation. Government agencies and utilities have historically designed and administered such demand-side measures, but innovative…

  13. DISTRIBUTED PROCESSING TRADE-OFF MODEL FOR ELECTRIC UTILITY OPERATION

    Science.gov (United States)

    Klein, S. A.

    1994-01-01

    The Distributed Processing Trade-off Model for Electric Utility Operation is based upon a study performed for the California Institute of Technology's Jet Propulsion Laboratory. This study presented a technique that addresses the question of trade-offs between expanding a communications network or expanding the capacity of distributed computers in an electric utility Energy Management System (EMS). The technique resulted in the development of a quantitative assessment model that is presented as a Lotus 1-2-3 worksheet. The model gives EMS planners a macroscopic tool for evaluating distributed processing architectures and the major technical and economic trade-offs, as well as interactions, within these architectures. The model inputs (which may be varied according to application and need) include geographic parameters, data flow and processing workload parameters, operator staffing parameters, and technology/economic parameters. The model's outputs are total cost in various categories and a number of intermediate cost and technical calculation results, as well as graphical presentations of cost versus percent distribution for various parameters. The model was developed in 1986 and has been implemented on an IBM PC in the Lotus 1-2-3 spreadsheet environment. Also included with the spreadsheet model are a number of representative but hypothetical utility system examples.

  14. Unified Model for Generation Complex Networks with Utility Preferential Attachment

    Institute of Scientific and Technical Information of China (English)

    WU Jian-Jun; GAO Zi-You; SUN Hui-Jun

    2006-01-01

    In this paper, based on utility preferential attachment, we propose a new unified model to generate different network topologies such as scale-free, small-world and random networks. Moreover, a new network structure named the super scale network is found, which has a monopoly characteristic in our simulation experiments. Finally, the characteristics of this new network are given.
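
    The record does not give the paper's utility function, so the following is only a hedged sketch of utility-weighted preferential attachment in plain Python; mixing degree with a random intrinsic "fitness" is an illustrative assumption. Sweeping alpha between 0 and 1 is one way such a generator can interpolate between random-like and scale-free-like topologies.

      import random

      def utility_pa_network(n, m, alpha=0.5, seed=0):
          """Grow an n-node network; each new node links to m existing nodes,
          chosen with probability proportional to a utility score
          utility(v) = alpha * degree(v) + (1 - alpha) * fitness(v)."""
          rng = random.Random(seed)
          nodes = list(range(m + 1))                      # seed clique
          edges = [(i, j) for i in nodes for j in nodes if i < j]
          degree = {i: m for i in nodes}
          fitness = {i: rng.random() for i in nodes}
          for new in range(m + 1, n):
              weights = [alpha * degree[v] + (1 - alpha) * fitness[v]
                         for v in nodes]
              total = sum(weights)
              targets = set()
              while len(targets) < m:                     # roulette-wheel picks
                  r = rng.uniform(0, total)
                  acc = 0.0
                  for v, w in zip(nodes, weights):
                      acc += w
                      if acc >= r:
                          targets.add(v)
                          break
              degree[new] = 0
              fitness[new] = rng.random()
              for t in targets:
                  edges.append((new, t))
                  degree[new] += 1
                  degree[t] += 1
              nodes.append(new)
          return edges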

  15. Optimization Model to Enhance Sustainable Utilization of Resources

    Institute of Scientific and Technical Information of China (English)

    ZHAO Guo-hao; SHEN Tu-jing

    2002-01-01

    Resources are the material foundation for sustainable development. Enhancing resource utilization is the key factor in realizing sustainable development. Based on the idea of sustainable development, this paper establishes a model to distribute resources effectively and proposes a method for studies on sustainable development.

  16. A Utility-Based Approach to Some Information Measures

    Directory of Open Access Journals (Sweden)

    Sven Sandow

    2007-01-01

    We review a decision theoretic, i.e., utility-based, motivation for entropy and Kullback-Leibler relative entropy, the natural generalizations that follow, and various properties of these generalized quantities. We then consider these generalized quantities in an easily interpreted special case. We show that the resulting quantities share many of the properties of entropy and relative entropy, such as the data processing inequality and the second law of thermodynamics. We formulate an important statistical learning problem – probability estimation – in terms of a generalized relative entropy. The solution of this problem reflects general risk preferences via the utility function; moreover, the solution is optimal in a sense of robust absolute performance.
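
    The article's generalized definitions are not reproduced in the record; as a hedged illustration of the utility-based reading, relative entropy already has a decision-theoretic meaning for a logarithmic-utility (Kelly) investor:

      \[
      D(p\,\|\,q)
      = \sum_x p(x)\,\log\frac{p(x)}{q(x)}
      = \mathbb{E}_p[\log p(X)] - \mathbb{E}_p[\log q(X)],
      \]

    which is exactly the expected log-wealth growth such an investor gives up by betting according to a model \(q\) when outcomes are really governed by \(p\). Replacing \(\log\) with a general utility function \(u\) is the route to generalized entropies of the kind the article reviews.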

  17. Nondestructive measurement of esophageal biaxial mechanical properties utilizing sonometry

    Science.gov (United States)

    Aho, Johnathon M.; Qiang, Bo; Wigle, Dennis A.; Tschumperlin, Daniel J.; Urban, Matthew W.

    2016-07-01

    Malignant esophageal pathology typically requires resection of the esophagus and reconstruction to restore foregut continuity. Reconstruction options are limited and morbid. The esophagus represents a useful target for tissue engineering strategies based on its relative simplicity in comparison to other organs. The ideal tissue-engineered conduit would have sufficient and ideally matched mechanical tolerances to native esophageal tissue. Current methods for mechanical testing of esophageal tissues, both in vivo and ex vivo, are typically destructive, alter tissue conformation, ignore anisotropy, or cannot be performed in fluid media. The aim of this study was to investigate the biomechanical properties of swine esophageal tissues through nondestructive testing utilizing sonometry ex vivo. This method allows determination of tissue biomechanical properties, particularly longitudinal and circumferential moduli and strain energy functions. The relative contributions of the mucosal-submucosal and muscular layers are compared to composite esophagi. Swine thoracic esophageal tissues (n = 15) were tested by pressure loading, using a continuous pressure pump system to generate stress. Preconditioning of the tissue was performed by pressure loading with the pump system and pre-straining the tissue to in vivo length before data were recorded. Sonometry using piezocrystals was utilized to determine longitudinal and circumferential strain on five composite esophagi. Similarly, five mucosa-submucosal and five muscular layers from thoracic esophagi were tested independently. The results are consistent with reported uniaxial and biaxial mechanical tests and with reported strain-energy analyses; in addition, the method provides high-resolution displacements, preserves native architectural structure, and allows assessment of biomechanical properties in fluid media. This method may be of use to characterize the mechanical properties of tissue engineered esophageal …

  18. Population Propensity Measurement Model

    Science.gov (United States)

    1993-12-01

    [Record abstract garbled in extraction; the recoverable fragments list binary codebook variables used by the model — DQ702 "taken elementary algebra", DQ703 "taken plane geometry", DQ70 "taken computer science" (digit lost), DQ706 "taken intermediate algebra", DQ707 "taken trigonometry", each coded 1/0 — plus a note that separate models distribute the arrival of applicants over fiscal years, quarters, or months, the primary obstacle in these models being shifting the …]

  19. Improving surgeon utilization in an orthopedic department using simulation modeling

    Directory of Open Access Journals (Sweden)

    Simwita YW

    2016-10-01

    Yusta W Simwita, Berit I Helgheim Department of Logistics, Molde University College, Molde, Norway Purpose: Worldwide, more than two billion people lack appropriate access to surgical services due to the mismatch between existing human resources and patient demand. Improving utilization of the existing workforce capacity can reduce the gap between surgical demand and available workforce capacity. In this paper, the authors use discrete event simulation to explore the care process at an orthopedic department. Our main focus is improving utilization of surgeons while minimizing patient wait time. Methods: The authors collaborated with orthopedic department personnel to map the current operations of the orthopedic care process in order to identify factors that influence poor surgeon utilization and high patient waiting time. The authors used an observational approach to collect data. The developed model was validated by comparing the simulation output with actual patient data collected from the studied orthopedic care process. The authors developed a proposal scenario to show how to improve surgeon utilization. Results: The simulation results showed that if ancillary services could be performed before the start of clinic examination services, the orthopedic care process could be highly improved, that is, improved surgeon utilization and reduced patient waiting time. Simulation results demonstrate that with improved surgeon utilization, up to a 55% increase in future demand can be accommodated without patients exceeding the current waiting time at this clinic, thus improving patient access to health care services. Conclusion: This study shows how simulation modeling can be used to improve health care processes. This study was limited to a single care process; however, the findings can be applied to improve other orthopedic care processes with similar operational characteristics. Keywords: waiting time, patient, health care process

  20. Measurement of nuclear fuel pin hydriding utilizing epithermal neutron scattering

    Energy Technology Data Exchange (ETDEWEB)

    Miller, W.H. [Univ. of Missouri, Columbia, MO (United States); Farkas, D.M.; Lutz, D.R. [General Electric Co., Pleasanton, CA (United States)

    1996-12-31

    The measurement of hydrogen or zirconium hydriding in fuel cladding has long been of interest to the nuclear power industry. The detection of this hydrogen currently requires either destructive analysis (with sensitivities down to 1 µg/g) or nondestructive thermal neutron radiography (with sensitivities on the order of a few weight percent). The detection of hydrogen in metals can also be determined by measuring the slowing down of neutrons as they collide and rapidly lose energy via scattering with hydrogen. This phenomenon is the basis for the "notched neutron spectrum" technique, also referred to as the Hysen method. This technique has been improved with the "modified" notched neutron spectrum technique, which has demonstrated detection of hydrogen below 1 µg/g in steel. The technique is nondestructive and can be used on radioactive materials. It is proposed that this technique be applied to the measurement of hydriding in zirconium fuel pins. This paper summarizes a method for such measurements.
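
    The physics behind the sensitivity to hydrogen is standard neutron slowing-down theory (added here as context, not quoted from the record): in an elastic collision with a nucleus of mass number \(A\), the post-collision neutron energy \(E'\) satisfies

      \[
      \alpha E \le E' \le E,
      \qquad
      \alpha = \left(\frac{A-1}{A+1}\right)^{2},
      \]

    so for hydrogen (\(A = 1\), \(\alpha = 0\)) a neutron can lose essentially all of its energy in a single collision, whereas for a heavy nucleus such as zirconium \(\alpha \approx 0.96\) and the energy loss per collision is small. Even trace hydrogen therefore carves a measurable "notch" into the epithermal neutron spectrum.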

  1. The zero condition: a simplifying assumption in QALY measurement and multiattribute utility

    NARCIS (Netherlands)

    P.P. Wakker; J.M. Miyamoto; H. Bleichrodt; H.J.M. Peters

    1998-01-01

    This paper studies the implications of the "zero-condition" for multiattribute utility theory. The zero-condition simplifies the measurement and derivation of the Quality Adjusted Life Year (QALY) measure commonly used in medical decision analysis. For general multiattribute utility theory, no simpl…

  2. Stock Selection for Portfolios Using Expected Utility-Entropy Decision Model

    Directory of Open Access Journals (Sweden)

    Jiping Yang

    2017-09-01

    Yang and Qiu proposed, and recently improved, an expected utility-entropy (EU-E) measure of risk and decision model. When segregation holds, Luce et al. derived an expected utility term plus a constant multiplied by the Shannon entropy as the representation of risky choices, further demonstrating the reasonableness of the EU-E decision model. In this paper, we apply the EU-E decision model to selecting the set of stocks to be included in portfolios. We first select 7 and 10 stocks from the 30 component stocks of the Dow Jones Industrial Average index, and then derive and compare the efficient portfolios in the mean-variance framework. The results imply that efficient portfolios composed of 7 (or 10) stocks selected using the EU-E model with intermediate intervals of the trade-off coefficient are more efficient than those composed of the sets of stocks selected using the expected utility model. Furthermore, the efficient portfolios of 7 (or 10) stocks selected by the EU-E decision model have almost the same efficient frontier as that of the sample of all stocks. This suggests the necessity of incorporating both expected utility and Shannon entropy when making risky decisions, further demonstrating the importance of Shannon entropy as a measure of uncertainty, as well as the applicability of the EU-E model as a decision-making model.
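
    The exact Yang–Qiu normalization is not given in the record; the following is a hedged Python sketch of an EU-E-style score in which a trade-off coefficient balances expected utility against a histogram estimate of Shannon entropy (all names and data are illustrative):

      import numpy as np

      def eu_e_score(returns, trade_off=0.5, bins=10, u=np.log1p):
          """Score = lambda * E[u(r)] - (1 - lambda) * H(r); simplified form."""
          r = np.asarray(returns)
          expected_utility = u(r).mean()
          counts, _ = np.histogram(r, bins=bins)
          p = counts[counts > 0] / counts.sum()
          entropy = -(p * np.log(p)).sum()
          return trade_off * expected_utility - (1 - trade_off) * entropy

      # Example: pick the 7 highest-scoring stocks from a synthetic returns
      # matrix (rows = days, columns = tickers).
      rng = np.random.default_rng(1)
      returns = rng.normal(0.0005, 0.01, size=(250, 30))
      scores = [eu_e_score(returns[:, j]) for j in range(30)]
      selected = np.argsort(scores)[-7:]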

  3. Measuring Constraint-Set Utility for Partitional Clustering Algorithms

    Science.gov (United States)

    Davidson, Ian; Wagstaff, Kiri L.; Basu, Sugato

    2006-01-01

    Clustering with constraints is an active area of machine learning and data mining research. Previous empirical work has convincingly shown that adding constraints to clustering improves the performance of a variety of algorithms. However, in most of these experiments, results are averaged over different randomly chosen constraint sets from a given set of labels, thereby masking interesting properties of individual sets. We demonstrate that constraint sets vary significantly in how useful they are for constrained clustering; some constraint sets can actually decrease algorithm performance. We create two quantitative measures, informativeness and coherence, that can be used to identify useful constraint sets. We show that these measures can also help explain differences in performance for four particular constrained clustering algorithms.
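
    As a hedged sketch of the paper's idea: informativeness can be approximated as the fraction of constraints that a default, unconstrained clustering violates; the coherence computation below is a simplified distance-based proxy, not the paper's projection-based definition.

      import numpy as np
      from sklearn.cluster import KMeans

      def informativeness(X, must_link, cannot_link, k):
          """Fraction of constraints violated by an unconstrained clustering."""
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
          violated = sum(labels[i] != labels[j] for i, j in must_link)
          violated += sum(labels[i] == labels[j] for i, j in cannot_link)
          total = len(must_link) + len(cannot_link)
          return violated / total if total else 0.0

      def coherence_proxy(X, must_link, cannot_link):
          """Coherent sets: must-link pairs closer than cannot-link pairs."""
          dist = lambda i, j: np.linalg.norm(X[i] - X[j])
          ml = [dist(i, j) for i, j in must_link]
          cl = [dist(i, j) for i, j in cannot_link]
          if not ml or not cl:
              return 1.0
          return float(np.mean([m < c for m in ml for c in cl]))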

  4. The utility of resilience as a conceptual framework for understanding and measuring LGBTQ health.

    Science.gov (United States)

    Colpitts, Emily; Gahagan, Jacqueline

    2016-04-06

    Historically, lesbian, gay, bisexual, transgender and queer (LGBTQ) health research has focused heavily on the risks for poor health outcomes, obscuring the ways in which LGBTQ populations maintain and improve their health across the life course. In this paper we argue that informing culturally competent health policy and systems requires shifting the LGBTQ health research evidence base away from deficit-focused approaches toward strengths-based approaches to understanding and measuring LGBTQ health. We recently conducted a scoping review with the aim of exploring strengths-based approaches to LGBTQ health research. Our team found that the concept of resilience emerged as a key conceptual framework. This paper discusses a subset of our scoping review findings on the utility of resilience as a conceptual framework in understanding and measuring LGBTQ health. The findings of our scoping review suggest that the ways in which resilience is defined and measured in relation to LGBTQ populations remains contested. Given that LGBTQ populations have unique lived experiences of adversity and discrimination, and may also have unique factors that contribute to their resilience, the utility of heteronormative and cis-normative models of resilience is questionable. Our findings suggest that there is a need to consider further exploration and development of LGBTQ-specific models and measures of resilience that take into account structural, social, and individual determinants of health and incorporate an intersectional lens. While we fully acknowledge that the resilience of LGBTQ populations is central to advancing LGBTQ health, there remains much work to be done before the concept of resilience can be truly useful in measuring LGBTQ health.

  5. Quality of life, health status, and health service utilization related to a new measure of health literacy: FLIGHT/VIDAS.

    Science.gov (United States)

    Ownby, Raymond L; Acevedo, Amarilis; Jacobs, Robin J; Caballero, Joshua; Waldrop-Valverde, Drenna

    2014-09-01

    Researchers have identified significant limitations in some currently used measures of health literacy. The purpose of this paper is to present data on the relation of health-related quality of life, health status, and health service utilization to performance on a new measure of health literacy in a nonpatient population. The new measure was administered to 475 English- and Spanish-speaking community-dwelling volunteers along with existing measures of health literacy and assessments of health-related quality of life, health status, and healthcare service utilization. Relations among measures were assessed via correlations, and health status and utilization were tested across levels of health literacy using ANCOVA models. The new health literacy measure is significantly related to existing measures of health literacy as well as to participants' health-related quality of life. Persons with lower levels of health literacy reported more health conditions, more frequent physical symptoms, and greater healthcare service utilization. The new measure of health literacy is valid and shows relations to measures of conceptually related constructs such as quality of life and health behaviors. FLIGHT/VIDAS may be useful to researchers and clinicians interested in a computer-administered and -scored measure of health literacy.

  6. [Neurobiological disturbances in insomnia: clinical utility of objective sleep measures].

    Science.gov (United States)

    Pejović-Nikolić, Slobodanka

    2011-01-01

    Numerous studies conducted over the last 40 years assessing different physiological domains, including autonomic nervous system status, whole-body and brain metabolic rate, and stress and immune system activity, suggest that (1) insomnia is a disorder of physiologic hyperarousal present throughout the 24-hour sleep-wake cycle, and (2) insomnia and sleep loss are two distinct states. Central nervous system hyperarousal, either pre-existing and/or induced by psychiatric pathology and worsened by stressful events, as well as the aging- and menopause-related physiological decline of sleep mechanisms, appears to be at the core of this common sleep disorder. Finally, there is emerging evidence that the levels of physiologic hyperarousal are directly related to the degree of polysomnographically documented sleep disturbance in insomnia patients, suggesting that objective measures of sleep duration in insomnia may be a useful marker of the medical severity of the disorder.

  7. Quantitation of and measurements utilizing the sphenoid ridge.

    Science.gov (United States)

    Tubbs, R Shane; Salter, E George; Oakes, W Jerry

    2007-03-01

    The sphenoid ridge (posterior aspect of the lesser wings) is encountered in many intracranial procedures. Increased knowledge of its morphology and relationships is, therefore, of importance to the neurosurgeon and clinician who appreciate imaging of this anatomical region. We have quantitated this part of the sphenoid bone in dry human skulls (35) and made cadaveric (15) measurements between its parts and surrounding neuroanatomical structures in all three cranial fossae. The length of the left and right lesser wings was on average 4.2 and 4 cm, respectively. The mean widths of this bony part at its midline, midpoint, and lateral point (crista alaris) were 1.5 cm, 2.0 cm, and 2 mm, respectively. From the crista alaris, the mean distances to the crista galli, V3 at its exit through the foramen ovale, the entrance of the oculomotor nerve into the cavernous sinus, the middle meningeal artery at its emergence from the foramen spinosum, and the facial and vestibulocochlear nerves at the internal auditory meatus were 4.9, 4.5, 5, 4.7, and 6.1 cm, respectively. From the midpoint of the lesser wing, the mean distances to the same five structures were 4.2, 2.9, 3, 3.4, and 4.7 cm, respectively. From the anterior clinoid process of the lesser wing, the mean distances to the same five structures were 4.3, 2.8, 1, 3.3, and 4.1 cm, respectively. Additional measurements between the parts of the sphenoid ridge and surrounding anatomical structures may assist the surgeon who operates in this region or the clinician who …

  8. A Unified Derivation of Classical Subjective Expected Utility Models through Cardinal Utility

    NARCIS (Netherlands)

    Zank, H.; Wakker, P.P.

    1999-01-01

    Classical foundations of expected utility were provided by Ramsey, de Finetti, von Neumann & Morgenstern, Anscombe & Aumann, and others. These foundations describe preference conditions to capture the empirical content of expected utility. The assumed preference conditions, however, vary among the m…

  9. Utilizing MRI to measure the transcytolemmal water exchange rate for the rat brain

    Science.gov (United States)

    Quirk, James D.; Bretthorst, G. Larry; Neil, Jeffrey J.

    2001-05-01

    Understanding the exchange of water between the intra- and extracellular compartments of the brain is important both for understanding basic physiology and for the interpretation of numerous MRI results. However, due to experimental difficulties, this basic property has proven difficult to measure in vivo. In our experiments, we will track overall changes in the relaxation rate constant of water in the rat brain following the administration of gadoteridol, a relaxation agent, to the extracellular compartment. From these changes, we will utilize probability theory and Markov chain Monte Carlo simulations to infer the compartment-specific water exchange and relaxation rate constants. Due to the correlated nature of these parameters and our inability to observe them independently, intelligent model selection is critical. Through analysis of simulated data sets, we refine our choice of model and method of data collection to optimize applicability to the in vivo situation.
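
    A hedged sketch of the two-compartment (two-site exchange) relaxation model that such analyses typically build on — the authors' exact parameterization is not reproduced in the record:

      \[
      \frac{d}{dt}
      \begin{pmatrix} m_i \\ m_e \end{pmatrix}
      =
      \begin{pmatrix}
      -(R_{1i} + k_{ie}) & k_{ei} \\
      k_{ie} & -(R_{1e} + k_{ei})
      \end{pmatrix}
      \begin{pmatrix} m_i \\ m_e \end{pmatrix},
      \qquad
      p_i\, k_{ie} = p_e\, k_{ei},
      \]

    where \(m_i, m_e\) are the deviations of intra- and extracellular longitudinal magnetization from equilibrium, \(R_{1i}, R_{1e}\) are the compartmental relaxation rate constants (with \(R_{1e}\) raised by gadoteridol), \(k_{ie}, k_{ei}\) are the exchange rate constants, and \(p_i, p_e\) are the water population fractions. The observed signal is biexponential with apparent rates given by the eigenvalues of the matrix, which is why the parameters are strongly correlated and careful Bayesian model selection is attractive.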

  10. Rodent models of diabetic nephropathy: their utility and limitations.

    Science.gov (United States)

    Kitada, Munehiro; Ogura, Yoshio; Koya, Daisuke

    2016-01-01

    Diabetic nephropathy is the most common cause of end-stage renal disease. Therefore, novel therapies for the suppression of diabetic nephropathy must be developed. Rodent models are useful for elucidating the pathogenesis of diseases and testing novel therapies, and many type 1 and type 2 diabetic rodent models have been established for the study of diabetes and diabetic complications. Streptozotocin (STZ)-induced diabetic animals are widely used as a model of type 1 diabetes. Akita diabetic mice that have an Ins2+/C96Y mutation and OVE26 mice that overexpress calmodulin in pancreatic β-cells serve as genetic models of type 1 diabetes. In addition, db/db mice, KK-Ay mice, Zucker diabetic fatty rats, Wistar fatty rats, Otsuka Long-Evans Tokushima Fatty rats and Goto-Kakizaki rats serve as rodent models of type 2 diabetes. An animal model of diabetic nephropathy should exhibit progressive albuminuria and a decrease in renal function, as well as the characteristic histological changes in the glomeruli and the tubulointerstitial lesions that are observed in cases of human diabetic nephropathy. A rodent model that strongly exhibits all these features of human diabetic nephropathy has not yet been developed. However, the currently available rodent models of diabetes can be useful in the study of diabetic nephropathy by increasing our understanding of the features of each diabetic rodent model. Furthermore, the genetic background and strain of each mouse model result in differences in susceptibility to diabetic nephropathy with albuminuria and the development of glomerular and tubulointerstitial lesions. Therefore, the validation of an animal model reproducing human diabetic nephropathy will significantly facilitate our understanding of the underlying genetic mechanisms that contribute to the development of diabetic nephropathy. In this review, we focus on rodent models of diabetes and discuss the utility and limitations of these models for the study of diabetic …

  11. Novel design and sensitivity analysis of displacement measurement system utilizing knife edge diffraction for nanopositioning stages.

    Science.gov (United States)

    Lee, ChaBum; Lee, Sun-Kyu; Tarbutton, Joshua A

    2014-09-01

    This paper presents a novel design and sensitivity analysis of a knife edge-based optical displacement sensor that can be embedded in nanopositioning stages. The measurement system consists of a laser, two knife edge locations, two photodetectors, and auxiliary optical components in a simple configuration. The knife edge is installed on the stage parallel to its moving direction, and two separated laser beams are incident on the knife edges. While the stage is in motion, the directly transmitted and diffracted light at each knife edge is superposed, producing interference at the detector. The interference is measured with two photodetectors in a differential amplification configuration. The performance of the proposed sensor was mathematically modeled, and the effect of the optical and mechanical parameters (wavelength, beam diameter, distances from laser to knife edge to photodetector, and knife edge topography) on sensor outputs was investigated to obtain a novel analytical method to predict linearity and sensitivity. According to the model, all parameters except the beam diameter have a significant influence on the measurement range and sensitivity of the proposed sensing system. To validate the model, two types of knife edges with different edge topography were used in the experiment. Increased measurement sensitivity can be obtained by using a shorter wavelength, a smaller sensor distance, and higher edge quality. The model was experimentally validated, and the results showed good agreement with the theoretical estimates. This sensor is expected to be easily implemented in nanopositioning stage applications at low cost, and the mathematical model introduced here can be used for the design and performance estimation of knife edge-based sensors.
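
    For context, the classical Fresnel result for diffraction at a straight edge (a standard optics formula, not quoted from the paper) shows how the parameters named above enter:

      \[
      I(v) = \frac{I_0}{2}\left\{\left[\tfrac{1}{2} + C(v)\right]^{2}
           + \left[\tfrac{1}{2} + S(v)\right]^{2}\right\},
      \qquad
      v = x\,\sqrt{\frac{2}{\lambda}\left(\frac{1}{d_1} + \frac{1}{d_2}\right)},
      \]

    where \(C\) and \(S\) are the Fresnel integrals, \(x\) is the lateral position of the edge relative to the beam axis, \(\lambda\) is the wavelength, and \(d_1, d_2\) are the laser-to-edge and edge-to-photodetector distances — consistent with the abstract's finding that wavelength and the sensor distances, more than beam diameter, govern range and sensitivity.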

  12. Malliavin's calculus in insider models: Additional utility and free lunches

    OpenAIRE

    2002-01-01

    We consider simple models of financial markets with regular traders and insiders possessing some extra information hidden in a random variable which is accessible to the regular trader only at the end of the trading interval. The problems we focus on are the calculation of the additional utility of the insider and a study of his free lunch possibilities. The information drift, i.e. the drift to eliminate in order to preserve the martingale property in the insider's filtration, turns out to be...

  13. Rigorously testing multialternative decision field theory against random utility models.

    Science.gov (United States)

    Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg

    2014-06-01

    Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly chose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions.
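
    A hedged, minimal simulation of an MDFT-style accumulation process may clarify what "sequential sampling" means here; the matrices and parameter values are illustrative assumptions, not the paper's fitted models.

      import numpy as np

      def mdft_choice(M, S, theta=1.0, w=(0.5, 0.5), max_steps=10_000, seed=0):
          """M: options x attributes values; S: feedback (inhibition) matrix.
          Attention switches stochastically between attributes; the first
          preference state to reach the threshold theta wins."""
          rng = np.random.default_rng(seed)
          n = M.shape[0]
          C = np.eye(n) - np.ones((n, n)) / n      # contrast matrix
          P = np.zeros(n)                          # preference states
          for _ in range(max_steps):
              attr = rng.choice(len(w), p=w)       # momentary attention
              P = S @ P + C @ M[:, attr]           # feedback plus new valence
              if P.max() >= theta:
                  break
          return int(P.argmax())

      # Three options on two attributes (e.g., quality and price), with mild
      # lateral inhibition between options.
      M = np.array([[1.0, 3.0], [3.0, 1.0], [0.9, 3.1]])
      S = 0.95 * np.eye(3) - 0.02 * (np.ones((3, 3)) - np.eye(3))
      choices = [mdft_choice(M, S, seed=s) for s in range(500)]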

  14. European utilities. What are the advantages, limits and perspectives of the multi-utilities model?; Les utilities europeennes. Quels sont les atouts, les limites et les perspectives du modele multi-utilities?

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-01-01

    For several years, European utilities (public service companies) have been broadening their offer. They now cover the whole domain of collective services (mainly electricity, natural gas, waste, and water) and propose bundled offers. European utilities are also deepening their range of offers with the management of flows (fluid supplies) and services (energy) and the outsourcing of infrastructures (cogeneration units, wastewater treatment plants, etc.). The multi-utilities model (the combination of at least two collective services within a single portfolio of activities) has become a standard. This strategic evolution, implemented over several years in Europe, has been made possible by regulatory changes: the opening of gas and electricity markets to competition, stricter environmental standards, and the delegation of services. While the multi-utilities concept appears attractive, several questions remain unanswered: what are the real synergies between collective service activities? Is the multi-utilities model viable? How should such a model be financed? How should bundled offers be constructed and commercialized? What are the commercial and financial results of the multi-utilities model? What are the alternatives to this model? What is the strategic future of European utilities? This study takes precise stock of the European collective services markets. It analyzes the strategy of the European groups that have adopted the multi-utilities model and shows the advantages and limits of this strategic model. The study includes an analysis of the financial performance of the 20 main European groups, which supports two scenarios for the sector's mid-term evolution. (J.S.)

  15. Rodent models of diabetic nephropathy: their utility and limitations

    Directory of Open Access Journals (Sweden)

    Kitada M

    2016-11-01

    … significantly facilitate our understanding of the underlying genetic mechanisms that contribute to the development of diabetic nephropathy. In this review, we focus on rodent models of diabetes and discuss the utility and limitations of these models for the study of diabetic nephropathy. Keywords: diabetic nephropathy, rodent model, albuminuria, mesangial matrix expansion, tubulointerstitial fibrosis

  16. Measurement Error Models in Astronomy

    CERN Document Server

    Kelly, Brandon C

    2011-01-01

    I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.

  17. A structured review of health utility measures and elicitation in advanced/metastatic breast cancer

    Directory of Open Access Journals (Sweden)

    Hao Y

    2016-06-01

    Yanni Hao,1 Verena Wolfram,2 Jennifer Cook2 1Novartis Pharmaceuticals, East Hanover, NJ, USA; 2Adelphi Values, Bollington, UK Background: Health utilities are increasingly incorporated in health economic evaluations. Different elicitation methods, direct and indirect, have been established in the past. This study examined the evidence on health utility elicitation previously reported in advanced/metastatic breast cancer and aimed to link these results to the requirements of reimbursement bodies. Methods: Searches were conducted using a detailed search strategy across several electronic databases (MEDLINE, EMBASE, Cochrane Library, and EconLit), online sources (the Cost-effectiveness Analysis Registry and the Health Economics Research Center), and web sites of health technology assessment (HTA) bodies. Publications were selected based on the search strategy and the overall study objectives. Results: A total of 768 publications were identified in the searches, and 26 publications, comprising 18 journal articles and eight submissions to HTA bodies, were included in the evidence review. Most journal articles derived utilities from the European Quality of Life Five-Dimensions questionnaire (EQ-5D). Other utility measures, such as the direct methods standard gamble (SG), time trade-off (TTO), and visual analog scale (VAS), were less frequently used. Several studies described mapping algorithms to generate utilities from disease-specific health-related quality of life (HRQOL) instruments such as the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire – Core 30 (EORTC QLQ-C30), the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire – Breast Cancer 23 (EORTC QLQ-BR23), the Functional Assessment of Cancer Therapy – General questionnaire (FACT-G), and the Utility-Based Questionnaire-Cancer (UBQ-C); most used the EQ-5D as the reference. Sociodemographic factors that affect health utilities, such as age, sex …

  18. On Modeling CPU Utilization of MapReduce Applications

    CERN Document Server

    Rizvandi, Nikzad Babaii; Zomaya, Albert Y

    2012-01-01

    In this paper, we present an approach to predicting the total CPU utilization, in terms of CPU clock ticks, of applications running on the MapReduce framework. Our approach has two key phases: profiling and modeling. In the profiling phase, an application is run several times with different sets of MapReduce configuration parameters to profile the total CPU clock ticks of the application on a given platform. In the modeling phase, multiple linear regression is used to map the sets of MapReduce configuration parameters (number of mappers, number of reducers, size of the file system (HDFS), and size of the input file) to the total CPU clock ticks of the application. This derived model can be used to predict the total CPU requirements of the same application when using the MapReduce framework on the same platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. Three standard applications (WordCount, Exim Mainlog parsing and Terasort) are used to evaluate our modeling technique on pseu…
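
    A hedged sketch of the modeling phase (the profiling numbers are synthetic and the column layout is an assumption):

      import numpy as np

      # Profiling runs: n_mappers, n_reducers, hdfs_block_mb, input_size_gb
      profiles = np.array([
          [4,  2,  64, 1.0],
          [8,  2,  64, 2.0],
          [8,  4, 128, 2.0],
          [16, 4, 128, 4.0],
          [16, 8, 256, 8.0],
      ])
      cpu_ticks = np.array([2.1e11, 3.9e11, 3.7e11, 7.2e11, 1.4e12])

      # Multiple linear regression with an intercept term.
      X = np.column_stack([np.ones(len(profiles)), profiles])
      beta, *_ = np.linalg.lstsq(X, cpu_ticks, rcond=None)

      # Predict total CPU clock ticks for a new configuration (same platform).
      new_cfg = np.array([1, 12, 4, 128, 3.0])
      predicted = new_cfg @ beta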

  19. Generic Model to Send Secure Alerts for Utility Companies

    Directory of Open Access Journals (Sweden)

    Perez–Díaz J.A.

    2010-04-01

    In some industries, such as logistics and banking services, the use of automated systems that deliver critical business information anytime and anywhere plays an important role in the decision-making process. This paper introduces a "generic model to send secure alerts and notifications", which operates as middleware between enterprise data sources and their mobile users. The model uses the Short Message Service (SMS) as its main mobile messaging technology, but is open to new types of messaging technologies. It is interoperable with existing information systems, can store any kind of information about alerts or notifications at different levels of granularity, and offers different types of notification (as an alert when critical business problems occur, as a notification on a periodic basis, or as a two-way query). Notification rules can be customized by end users according to their preferences. The model provides a security framework for cases where information requires confidentiality, and it is extensible to existing and new messaging technologies (e-mail, MMS, etc.). It is platform, mobile operator, and hardware independent. Currently, our solution is being used at the Comisión Federal de Electricidad (Mexico's utility company) to deliver secure alerts related to critical events registered in the main power generation plants of our country.

  20. Economic analysis of open space box model utilization in spacecraft

    Science.gov (United States)

    Mohammad, Atif F.; Straub, Jeremy

    2015-05-01

    The amount of stored data about space grows larger every day, and the utilization of Big Data and related tools to perform ETL (Extract, Transform and Load) applications will soon be pervasive in the space sciences. We have entered a crucial time in which using Big Data can be the difference (for terrestrial applications) between organizations underperforming and outperforming their peers. The same is true for NASA and other space agencies, as well as for individual missions and the highly competitive process of mission data analysis and publication. In most industries, established competitors and new entrants alike will use data-driven approaches to revolutionize and capture the value of Big Data archives. The Open Space Box Model is poised to take the proverbial "giant leap", as it provides autonomic data processing and communications for spacecraft. Economic value generated from such data processing can be found in terrestrial organizations in every sector, such as healthcare and retail; retailers, for example, use sensor-driven embedded data from products within their stores and warehouses to determine how these products are actually used in the real world.

  1. Utilizing Photogrammetry and Strain Gage Measurement to Characterize Pressurization of an Inflatable Module

    Science.gov (United States)

    Valle, Gerard D.; Selig, Molly; Litteken, Doug; Oliveras, Ovidio

    2012-01-01

    This paper documents the integration of a large hatch penetration into an inflatable module, as well as the comparison of analytical load predictions with measured results utilizing strain measurement. Strain was measured by photogrammetry and by strain gages mounted to selected clevises that interface with the structural webbings. Bench testing showed good correlation between strain measurement obtained from an extensometer and photogrammetric measurement, especially after the fabric has transitioned through the low load/high strain region of the curve. Test results for the full-scale torus showed mixed results in the lower load, and thus lower strain, regions. Overall, strain, and thus load, measured by strain gages and photogrammetry tracked fairly well with analytical predictions. Methods and areas of improvement are discussed.

  2. Awareness of Occupational Injuries and Utilization of Safety Measures among Welders in Coastal South India

    Directory of Open Access Journals (Sweden)

    S Ganesh Kumar

    2013-10-01

    Background: Awareness of occupational hazards and their safety precautions among welders is an important health issue, especially in developing countries. Objective: To assess the awareness of occupational hazards and utilization of safety measures among welders in coastal South India. Methods: A cross-sectional study was conducted among 209 welders in Puducherry, South India. Baseline characteristics, awareness of health hazards, and safety measures and their availability to and utilization by the participants were assessed using a pre-tested structured questionnaire. Results: The majority of the studied welders were aged between 20 and 40 years (n=160, 76.6%) and had 1-10 years of education (n=181, 86.6%). They were more aware of hazards (n=174, 83.3%) than of safety measures (n=134, 64.1%). The majority of the studied welders utilized at least one protective measure in the preceding week (n=200, 95.7%). Many of them had more than 5 years of experience (n=175, 83.7%); however, only 20% of them had institutional training (n=40, 19.1%). Age group, education level, and utilization of safety measures were significantly associated with awareness of hazards in univariate analysis (p<0.05). Conclusion: Awareness of occupational hazards and utilization of safety measures are low among welders in coastal South India, which highlights the importance of strengthening safety regulatory services for this group of workers.

  3. Utility covariances and context effects in conjoint MNP models

    NARCIS (Netherlands)

    Haaijer, M.E.; Wedel, M.; Vriens, M.; Wansbeek, T.J.

    1998-01-01

    Experimental conjoint choice analysis is among the most frequently used methods for measuring and analyzing consumer preferences. The data from such experiments have typically been analyzed with the Multinomial Logit (MNL) model. However, there are several problems associated with the standard MNL …

  4. Models of Credit Risk Measurement

    OpenAIRE

    Hagiu Alina

    2011-01-01

    Credit risk is defined as the risk of financial loss caused by a counterparty's failure to meet its obligations. According to statistics, for financial institutions credit risk is much more important than market risk, and reduced diversification of credit risk is the main cause of bank failures. Only recently did the banking industry begin to measure credit risk in the context of a portfolio, along with the development of risk management that started with value-at-risk (VaR) models. Once measured, credit risk can be diversif…
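
    As a hedged illustration of the value-at-risk idea the abstract refers to (historical simulation on synthetic daily P&L; a real credit VaR would be computed from a portfolio loss distribution):

      import numpy as np

      rng = np.random.default_rng(7)
      pnl = rng.normal(0, 1e5, size=1000)   # synthetic daily P&L history
      var_99 = -np.percentile(pnl, 1)       # 99% one-day historical VaR
      print(f"99% 1-day VaR: {var_99:,.0f}")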

  5. Mechanical Vibrations Modeling and Measurement

    CERN Document Server

    Schmitz, Tony L

    2012-01-01

    Mechanical Vibrations: Modeling and Measurement describes essential concepts in vibration analysis of mechanical systems. It incorporates the required mathematics, experimental techniques, fundamentals of modal analysis, and beam theory into a unified framework that is written to be accessible to undergraduate students, researchers, and practicing engineers. To unify the various concepts, a single experimental platform is used throughout the text to provide experimental data and evaluation. Engineering drawings for the platform are included in an appendix. Additionally, MATLAB programming solutions are integrated into the content throughout the text. This book also: discusses model development using frequency response function measurements; presents a clear connection between continuous beam models and finite degree of freedom models; includes MATLAB code to support numerical examples that are integrated into the text narrative; and uses mathematics to support vibrations theory and emphasizes the practical significanc…

  6. The utility of methacholine airway responsiveness measurements in evaluating anti-asthma drugs

    NARCIS (Netherlands)

    Inman, MD; Hamilton, AL; Kerstjens, HAM; Watson, RM; O'Byrne, PM

    1998-01-01

    Background: Measurements of airway responsiveness are frequently used to evaluate anti-asthma drugs. Objective: This study investigated the utility of methacholine airway responsiveness measurements in evaluating anti-asthma medications, both in terms of a bronchoprotective effect and the ability to …

  7. Division Quilts: A Measurement Model

    Science.gov (United States)

    Pratt, Sarah S.; Lupton, Tina M.; Richardson, Kerri

    2015-01-01

    As teachers seek activities to assist students in understanding division as more than just the algorithm, they find many examples of division as fair sharing. However, teachers have few activities to engage students in a quotative (measurement) model of division. Efraim Fischbein and his colleagues (1985) defined two types of whole-number…

  8. Measuring Resource Utilization: A Systematic Review of Validated Self-Reported Questionnaires.

    Science.gov (United States)

    Leggett, Laura E; Khadaroo, Rachel G; Holroyd-Leduc, Jayna; Lorenzetti, Diane L; Hanson, Heather; Wagg, Adrian; Padwal, Raj; Clement, Fiona

    2016-03-01

    A variety of methods may be used to obtain costing data. Although administrative data are most commonly used, the data available in these datasets are often limited. An alternative method of obtaining costing data is through self-reported questionnaires. Currently, there are no systematic reviews that summarize self-reported resource utilization instruments from the published literature. The aim of the study was to identify validated self-report healthcare resource use instruments and to map their attributes. A systematic review was conducted. The search identified articles using terms like "healthcare utilization" and "questionnaire." All abstracts and full texts were considered in duplicate. For inclusion, studies had to assess the validity of a self-reported resource use questionnaire, report original data, include adult populations, and the questionnaire had to be publicly available. Data such as the type of resource utilization assessed by each questionnaire and validation findings were extracted from each study. In all, 2343 unique citations were retrieved; 2297 were excluded during abstract review. Forty-six studies were reviewed in full text, and 15 studies were included in this systematic review. Six assessed resource utilization of patients with chronic conditions; 5 assessed mental health service utilization; 3 assessed resource utilization by a general population; and 1 assessed utilization in older populations. The most frequently measured resources included visits to general practitioners and inpatient stays; nonmedical resources were least frequently measured. Self-reported questionnaires on resource utilization had good agreement with administrative data, although visits to general practitioners, outpatient days, and nurse visits had poorer agreement. Self-reported questionnaires are a valid method of collecting data on healthcare resource utilization.

  10. Self-report measures of patient utility: should we trust them?

    Science.gov (United States)

    Hanita, M

    2000-05-01

    As self-reports, measures of patient utility are susceptible to the effects of cognitive biases in patients. This article presents often overlooked problems in these measures by outlining cognitive processes involved in patient self-report. It is argued that these measures: 1) require overly complex mental operations; 2) fail to elicit thoughtful responses by default; 3) may be biased by patients' mood; 4) are affected by both researchers' choice of measurement instruments and patients' choice of judgment strategies; 5) tend to reflect the disproportionate influence of patients' values that happen to be recallable at the time of measurement; and 6) are affected by patients' fear of regret. It is suggested that solutions for these problems should involve: a) improving the methods of administration; b) developing measures that are less taxing to patients; and c) redefining the concept of patient utility as judged, as opposed to retrieved, evaluation. Published by Elsevier Science Inc., 2000.

  11. Measuring Health Utilities in Children and Adolescents: A Systematic Review of the Literature.

    Directory of Open Access Journals (Sweden)

    Dominic Thorrington

    The objective of this review was to evaluate the use of all direct and indirect methods used to estimate health utilities in both children and adolescents. Utilities measured pre- and post-intervention are combined with the time over which health states are experienced to calculate quality-adjusted life years (QALYs). Cost-utility analyses (CUAs) estimate the cost-effectiveness of health technologies based on their costs and benefits using QALYs as a measure of benefit. The accurate measurement of QALYs is dependent on using appropriate methods to elicit health utilities. We sought studies that measured health utilities directly from patients or their proxies. We did not exclude those studies that also included adults in the analysis, but excluded those studies focused only on adults. We evaluated 90 studies from a total of 1,780 selected from the databases. 47 (52%) studies were CUAs incorporated into randomised clinical trials; 23 (26%) were health-state utility assessments; 8 (9%) validated methods; and 12 (13%) compared existing or new methods. 22 unique direct or indirect calculation methods were used a total of 137 times. Direct calculation through standard gamble, time trade-off and visual analogue scale was used 32 times. The EuroQol EQ-5D was the most frequently-used single method, selected for 41 studies. 15 of the methods used were generic methods and the remaining 7 were disease-specific. 48 of the 90 studies (53%) used some form of proxy, with 26 (29%) using proxies exclusively to estimate health utilities. Several child- and adolescent-specific methods are still being developed and validated, leaving many studies using methods that have not been designed or validated for use in children or adolescents. Several studies failed to justify using proxy respondents rather than administering the methods directly to the patients. Only two studies examined missing responses to the methods administered with respect to the patients' ages.

  12. Advanced Utility Simulation Model, Report of Sensitivity Testing, Calibration, and Model Output Comparisons (Version 3.0)

    Science.gov (United States)

    The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing, comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...

  13. Utility of the Canadian Occupational Performance Measure as an admission and outcome measure in interdisciplinary community-based geriatric rehabilitation

    DEFF Research Database (Denmark)

    Larsen, Anette Enemark; Carlsson, Gunilla

    2012-01-01

    In a community-based geriatric rehabilitation project, the Canadian Occupational Performance Measure (COPM) was used to develop a coordinated, interdisciplinary, and client-centred approach focusing on occupational performance. The purpose of this study was to evaluate the utility of the COPM...

  14. Survey review of models for use in market penetration analysis: utility sector focus

    Energy Technology Data Exchange (ETDEWEB)

    Groncki, P.J.; Kydes, A.S.; Lamontagne, J.; Marcuse, W.; Vinjamuri, G.

    1980-11-01

    The ultimate benefits of federal expenditures in research and development for new technologies are dependent upon the degree of acceptance of these technologies. Market penetration considerations are central to the problem of quantifying the potential benefits. These benefits are inputs to the selection process of projects competing for finite R&D funds. Market penetration is the gradual acceptance of a new commodity or technology. The Office of Coal Utilization is concerned with the specialized area of market penetration of new electric power generation technologies for both replacement and new capacity. The common measure of market penetration is the fraction of the market serviced by the challenging technology at each time point considered. The methodologies for estimating market penetration are divided into three generic classes: integrated energy/economy modeling systems, utility capacity expansion models, and technology substitution models. In general, the integrated energy/economy modeling systems have three advantages: they provide internally consistent macro energy-economy scenarios; they account for the effect of prices on demand by fuel form; and they explicitly capture the effects of population growth and the level and structure of economic activity on energy demand. A variety of deficiencies appear in most energy-economy systems models. All of the methodologies may be applied at some level to questions of market penetration of new technologies in the utility sector; the choice of methods for a particular analysis must be conditioned by the scope of the analysis, data availability, and the relative cost of alternative analysis.

  15. Clinical Utility of the DSM-5 Alternative Model of Personality Disorders

    DEFF Research Database (Denmark)

    Bach, Bo; Markon, Kristian; Simonsen, Erik

    2015-01-01

    In Section III, Emerging Measures and Models, DSM-5 presents an Alternative Model of Personality Disorders, which is an empirically based model of personality pathology measured with the Level of Personality Functioning Scale (LPFS) and the Personality Inventory for DSM-5 (PID-5). These novel instruments assess level of personality impairment and pathological traits. The personality profiles of six characteristic patients were inspected (involving a comparison of presenting problems, history, and diagnoses) and used to formulate treatment considerations. We also considered 6 specific personality disorder types that could be derived from the profiles as defined in the DSM-5 Section III criteria. Using the LPFS and PID-5, we were able to characterize the cases in a meaningful and useful manner. Our evaluation generally supported the utility for clinical purposes of the Alternative Model for Personality Disorders in Section III of the DSM-5, although it also identified some areas for refinement.

  16. Modeling non-monotone risk aversion using SAHARA utility functions

    NARCIS (Netherlands)

    A. Chen; A. Pelsser; M. Vellekoop

    2011-01-01

    We develop a new class of utility functions, SAHARA utility, with the distinguishing feature that it allows absolute risk aversion to be non-monotone and implements the assumption that agents may become less risk averse for very low values of wealth. The class contains the well-known exponential and

  17. Utilization of old vibro-acoustic measuring equipment to grasp basic concepts of vibration measurements

    DEFF Research Database (Denmark)

    Darula, Radoslav

    2013-01-01

    The aim of the paper is to show that even old vibro-acoustic (analog) equipment can be used as very suitable teaching equipment to grasp basic principles of measurements in an era when measurement equipment is more-or-less treated as a ‘black box’, i.e. the user cannot see directly how the measurement is processed; he or she just sets some parameters in a software package and clicks a virtual button.

  18. A structured review of health utility measures and elicitation in advanced/metastatic breast cancer.

    Science.gov (United States)

    Hao, Yanni; Wolfram, Verena; Cook, Jennifer

    2016-01-01

    Health utilities are increasingly incorporated in health economic evaluations. Different elicitation methods, direct and indirect, have been established in the past. This study examined the evidence on health utility elicitation previously reported in advanced/metastatic breast cancer and aimed to link these results to requirements of reimbursement bodies. Searches were conducted using a detailed search strategy across several electronic databases (MEDLINE, EMBASE, Cochrane Library, and EconLit databases), online sources (Cost-effectiveness Analysis Registry and the Health Economics Research Center), and web sites of health technology assessment (HTA) bodies. Publications were selected based on the search strategy and the overall study objectives. A total of 768 publications were identified in the searches, and 26 publications, comprising 18 journal articles and eight submissions to HTA bodies, were included in the evidence review. Most journal articles derived utilities from the European Quality of Life Five-Dimensions questionnaire (EQ-5D). Other utility measures, such as the direct methods standard gamble (SG), time trade-off (TTO), and visual analog scale (VAS), were less frequently used. Several studies described mapping algorithms to generate utilities from disease-specific health-related quality of life (HRQOL) instruments such as European Organization for Research and Treatment of Cancer Quality of Life Questionnaire - Core 30 (EORTC QLQ-C30), European Organization for Research and Treatment of Cancer Quality of Life Questionnaire - Breast Cancer 23 (EORTC QLQ-BR23), Functional Assessment of Cancer Therapy - General questionnaire (FACT-G), and Utility-Based Questionnaire-Cancer (UBQ-C); most used EQ-5D as the reference. Sociodemographic factors that affect health utilities, such as age, sex, income, and education, as well as disease progression, choice of utility elicitation method, and country settings, were identified within the journal articles. Most

  19. Utilization of old vibro-acoustic measuring equipment to grasp basic concepts of vibration measurements

    DEFF Research Database (Denmark)

    Darula, Radoslav

    2013-01-01

    The aim of the paper is to show that even old vibro-acoustic (analog) equipment can be used as very suitable teaching equipment to grasp basic principles of measurements in an era when measurement equipment is more-or-less treated as a ‘black box’, i.e. the user cannot see directly how...

  20. Assessing the Utility of a Daily Log for Measuring Principal Leadership Practice

    Science.gov (United States)

    Camburn, Eric M.; Spillane, James P.; Sebastian, James

    2010-01-01

    Purpose: This study examines the feasibility and utility of a daily log for measuring principal leadership practice. Setting and Sample: The study was conducted in an urban district with approximately 50 principals. Approach: The log was assessed against two criteria: (a) Is it feasible to induce strong cooperation and high response rates among…

  1. Rethinking the dependent variable in voting behavior: On the measurement and analysis of electoral utilities

    NARCIS (Netherlands)

    Eijk, van der Cees; Brug, van der Wouter; Kroh, Martin; Franklin, Mark

    2006-01-01

    As a dependent variable, party choice did not lend itself to analysis by means of powerful multivariate methods until the coming of discrete-choice models, most notably conditional logit and multinomial logit. These methods involve estimating effects on party preferences (utilities) that are post ho

  2. Clinical utility of the DSM-5 alternative model of personality disorders: six cases from practice.

    Science.gov (United States)

    Bach, Bo; Markon, Kristian; Simonsen, Erik; Krueger, Robert F

    2015-01-01

    In Section III, Emerging Measures and Models, DSM-5 presents an Alternative Model of Personality Disorders, which is an empirically based model of personality pathology measured with the Level of Personality Functioning Scale (LPFS) and the Personality Inventory for DSM-5 (PID-5). These novel instruments assess level of personality impairment and pathological traits. Objective. A number of studies have supported the psychometric qualities of the LPFS and the PID-5, but the utility of these instruments in clinical assessment and treatment has not been extensively evaluated. The goal of this study was to evaluate the clinical utility of this alternative model of personality disorders. Method. We administered the LPFS and the PID-5 to psychiatric outpatients diagnosed with personality disorders and other nonpsychotic disorders. The personality profiles of six characteristic patients were inspected (involving a comparison of presenting problems, history, and diagnoses) and used to formulate treatment considerations. We also considered 6 specific personality disorder types that could be derived from the profiles as defined in the DSM-5 Section III criteria. Results. Using the LPFS and PID-5, we were able to characterize the 6 cases in a meaningful and useful manner with regard to understanding and treatment of the individual patient and to match the cases with 6 relevant personality disorder types. Implications for ease of use, communication, and psychotherapy are discussed. Conclusion. Our evaluation generally supported the utility for clinical purposes of the Alternative Model for Personality Disorders in Section III of the DSM-5, although it also identified some areas for refinement. (Journal of Psychiatric Practice 2015;21:3-25).

  3. User Utility Oriented Queuing Model for Resource Allocation in Cloud Environment

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    2015-01-01

    Resource allocation is one of the most important research topics in server systems. In the cloud environment, there are massive hardware resources of different kinds, and many kinds of services are usually run on virtual machines of the cloud server. In addition, the cloud environment is commercialized, so economic factors should also be considered. In order to deal with the commercialization and virtualization of the cloud environment, we proposed a user-utility-oriented queuing model for task scheduling. Firstly, we modeled task scheduling in the cloud environment as an M/M/1 queuing system. Secondly, we classified utility into time utility and cost utility and built a linear programming model to maximize the total utility of both. Finally, we proposed a utility-oriented algorithm to maximize the total utility. Extensive experiments validate the effectiveness of our proposed model.
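
    The queuing formulation in this record can be made concrete in a few lines of code. The sketch below is an illustrative reading of the abstract, not the authors' algorithm: tasks arrive at rate lam and are served at rate mu in an M/M/1 queue, a weighted sum of time utility and cost utility is formed, and the service rate with the highest total utility is selected. The utility shapes, weights, and price constant are invented for the example.

      # M/M/1 mean response time: W = 1 / (mu - lam), valid only for lam < mu
      def mm1_mean_response_time(lam, mu):
          if lam >= mu:
              raise ValueError("queue is unstable unless lam < mu")
          return 1.0 / (mu - lam)

      def total_utility(lam, mu, w_time=0.6, w_cost=0.4, price_per_rate=0.1):
          t = mm1_mean_response_time(lam, mu)
          time_utility = 1.0 / (1.0 + t)            # shorter response, higher utility
          cost_utility = 1.0 - price_per_rate * mu  # faster service costs more
          return w_time * time_utility + w_cost * cost_utility

      # choose the service rate that maximizes total utility for a given load
      lam = 4.0
      best_mu = max((m / 10.0 for m in range(45, 100)),
                    key=lambda mu: total_utility(lam, mu))
      print(best_mu, total_utility(lam, best_mu))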

  4. Goal-Programming Model Based on the Utility Function of the Decision-maker

    Institute of Scientific and Technical Information of China (English)

    WANG Zhi-jiang

    2001-01-01

    Based on an analysis of the problems in the traditional GP model, this paper develops a goal-programming model that incorporates the utility function of the decision-maker and compares this model with the one presented in reference [1].
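
    To make the goal-programming idea tangible, here is a minimal weighted goal-programming sketch. It is a generic textbook formulation, not the model from this record: two decision variables, two numeric goals, and non-negative deviation variables whose weighted sum is minimized; all coefficients and weights are invented.

      # requires scipy; variables are [x1, x2, d1-, d1+, d2-, d2+]
      from scipy.optimize import linprog

      c = [0, 0, 1, 1, 2, 2]            # assumed weights on goal deviations
      A_eq = [
          [1, 1, 1, -1, 0, 0],          # x1 + x2 + d1- - d1+ = 10   (goal 1)
          [2, 1, 0, 0, 1, -1],          # 2*x1 + x2 + d2- - d2+ = 12 (goal 2)
      ]
      b_eq = [10, 12]
      res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
      print(res.x[:2], res.fun)         # here both goals can be met exactly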

  5. Utility of WHOQOL-BREF in measuring quality of life in Sickle Cell Disease

    Directory of Open Access Journals (Sweden)

    Reid Marvin E

    2009-08-01

    Background: Sickle cell disease is the commonest genetic disorder in Jamaica and most likely exerts numerous effects on the quality of life (QOL) of those afflicted with it. The WHOQOL-Bref, which is a commonly utilized generic measure of quality of life, has never previously been utilized in this population. We have sought to study its utility in this disease population. Methods: 491 patients with sickle cell disease were administered the questionnaire including demographics, WHOQOL-Bref, Short Form-36 (SF-36), Flanagan's Quality of Life Scale (QOLS) and measures of disease severity at their routine health maintenance visits to the sickle cell unit. Internal consistency reliabilities, construct validity and "known groups" validity of the WHOQOL-Bref, and its domains, were examined and then compared to those of the other instruments. Results: All three instruments had good internal consistency, ranging from 0.70 to 0.93 for the WHOQOL-Bref (except the 'social relationships' domain), 0.86–0.93 for the SF-36, and 0.88 for the QOLS. None of the instruments showed any marked floor or ceiling effects except the SF-36 'physical health' and 'role limitations' domains. The WHOQOL-Bref scale also had moderate concurrent validity and showed strong "known groups" validity. Conclusion: This study has shown good psychometric properties of the WHOQOL-Bref instrument in determining QOL of those with sickle cell disease. Its utility in this regard is comparable to that of the SF-36 and QOLS.

  6. Expected Utility and Catastrophic Risk in a Stochastic Economy-Climate Model

    NARCIS (Netherlands)

    Ikefuji, M.; Laeven, R.J.A.; Magnus, J.R.; Muris, C.H.M.

    2010-01-01

    In the context of extreme climate change, we ask how to conduct expected utility analysis in the presence of catastrophic risks. Economists typically model decision making under risk and uncertainty by expected utility with constant relative risk aversion (power utility); statisticians typically
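
    The power (CRRA) utility mentioned in this record is easy to state in code. The following sketch is only a reminder of the standard definition plus a Monte Carlo expected-utility estimate under an assumed lognormal consumption distribution; the distribution and risk-aversion values are arbitrary.

      import numpy as np

      def crra_utility(c, gamma):
          """u(c) = c**(1-gamma) / (1-gamma); the log-utility limit at gamma = 1."""
          c = np.asarray(c, dtype=float)
          if gamma == 1.0:
              return np.log(c)
          return c ** (1.0 - gamma) / (1.0 - gamma)

      rng = np.random.default_rng(0)
      consumption = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
      for gamma in (0.5, 1.0, 2.0, 4.0):
          print(gamma, crra_utility(consumption, gamma).mean())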

  7. Emergency Preparedness Education for Nurses: Core Competency Familiarity Measured Utilizing an Adapted Emergency Preparedness Information Questionnaire.

    Science.gov (United States)

    Georgino, Madeline M; Kress, Terri; Alexander, Sheila; Beach, Michael

    2015-01-01

    The purpose of this project was to measure trauma nurse improvement in familiarity with emergency preparedness and disaster response core competencies as originally defined by the Emergency Preparedness Information Questionnaire after a focused educational program. An adapted version of the Emergency Preparedness Information Questionnaire was utilized to measure familiarity of nurses with core competencies pertinent to first responder capabilities. This project utilized a pre- and postsurvey descriptive design and integrated education sessions into the preexisting, mandatory "Trauma Nurse Course" at a large, level I trauma center. A total of 63 nurses completed the intervention during May and September 2014 sessions. Overall, all 8 competencies demonstrated significant (P < .001; 98% confidence interval) improvements in familiarity. In conclusion, this pilot quality improvement project demonstrated a unique approach to educating nurses to be more ready and comfortable when treating victims of a disaster.

  8. Unbounded Utility for Savage's "Foundations of Statistics," and Other Models

    NARCIS (Netherlands)

    P.P. Wakker (Peter)

    1993-01-01

    A general procedure for extending finite-dimensional "additive-like" representations for binary relations to infinite-dimensional "integral-like" representations is developed by means of a condition called truncation-continuity. The restriction of boundedness of utility, met throughout t

  9. Unbounded utility for Savage's "Foundations of statistics," and other models

    NARCIS (Netherlands)

    Wakker, P.

    1993-01-01

    A general procedure for extending finite-dimensional additive-like representations to infinite-dimensional integral-like representations is developed by means of a condition called truncation-continuity. The restriction of boundedness of utility, met throughout the literature, can now be dispensed w

  10. Mathematical model of a utility firm. Executive summary

    Energy Technology Data Exchange (ETDEWEB)

    1983-08-21

    The project was aimed at developing an understanding of the economic and behavioral processes that take place within a utility firm and outside it. This executive summary, one of five documents, gives the project goals and objectives, outlines the subject areas of investigation, discusses the findings and results, and finally considers applications within the electric power industry and future research directions. (DLC)

  11. Electric utility capacity expansion and energy production models for energy policy analysis

    Energy Technology Data Exchange (ETDEWEB)

    Aronson, E.; Edenburn, M.

    1997-08-01

    This report describes electric utility capacity expansion and energy production models developed for energy policy analysis. The models use the same principles (life cycle cost minimization, least operating cost dispatching, and incorporation of outages and reserve margin) as comprehensive utility capacity planning tools, but are faster and simpler. The models were not designed for detailed utility capacity planning, but they can be used to accurately project trends on a regional level. Because they use the same principles as comprehensive utility capacity expansion planning tools, the models are more realistic than utility modules used in present policy analysis tools. They can be used to help forecast the effects energy policy options will have on future utility power generation capacity expansion trends and to help formulate a sound national energy strategy. The models make renewable energy source competition realistic by giving proper value to intermittent renewable and energy storage technologies, and by competing renewables against each other as well as against conventional technologies.
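
    The "least operating cost dispatching" principle this abstract names can be illustrated with a merit-order toy model: plants are sorted by variable cost and loaded cheapest first until demand is met. The plant list and costs below are invented and are not taken from the report.

      plants = [  # (name, capacity_MW, variable_cost_$_per_MWh)
          ("nuclear", 1000, 12.0),
          ("coal", 800, 25.0),
          ("gas_cc", 600, 40.0),
          ("gas_peaker", 300, 90.0),
      ]

      def dispatch(demand_mw):
          schedule, remaining = [], demand_mw
          for name, cap, cost in sorted(plants, key=lambda p: p[2]):
              gen = min(cap, remaining)
              if gen > 0:
                  schedule.append((name, gen, cost))
              remaining -= gen
          if remaining > 0:
              raise ValueError("demand exceeds installed capacity")
          return schedule

      print(dispatch(2100))   # nuclear and coal run flat out, gas_cc covers the rest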

  12. A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models

    Energy Technology Data Exchange (ETDEWEB)

    Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-06-01

    In this report, we will present a descriptive and organizational framework for incremental and fundamental changes to regulatory and utility business models in the context of clean energy public policy goals. We will also discuss the regulated utility's role in providing value-added services that relate to distributed energy resources, identify the "openness" of customer information and utility networks necessary to facilitate change, and discuss the relative risks, and the shifting of risks, for utilities and customers.

  13. The utility of Earth system Models of Intermediate Complexity

    NARCIS (Netherlands)

    Weber, S.L.

    2010-01-01

    Intermediate-complexity models are models which describe the dynamics of the atmosphere and/or ocean in less detail than conventional General Circulation Models (GCMs). At the same time, they go beyond the approach taken by atmospheric Energy Balance Models (EBMs) or ocean box models by

  14. Clinical utility of current-generation dipole modelling of scalp EEG.

    Science.gov (United States)

    Plummer, C; Litewka, L; Farish, S; Harvey, A S; Cook, M J

    2007-11-01

    To investigate the clinical utility of current-generation dipole modelling of scalp EEG in focal epilepsies seen commonly in clinical practice. Scalp EEG recordings from 10 patients with focal epilepsy, five with Benign Focal Epilepsy of Childhood (BFEC) and five with Mesial Temporal Lobe Epilepsy (MTLE), were used for interictal spike dipole modelling using Scan 4.3 and CURRY 5.0. Optimum modelling parameters for EEG source localisation (ESL) were sought by the step-wise application of various volume conductor (forward) and dipole (inverse) models. Best-fit ESL solutions (highest explained forward-fit to measured data variance) were used to characterise best-fit forward and inverse models, regularisation effect, additional electrode effect, single-to-single spike and single-to-averaged spike variability, and intra- and inter-operator concordance. Inter-parameter relationships were examined. Computation times and interface problems were recorded. For both BFEC and MTLE, the best-fit forward model was the finite element method interpolated (FEMi) model, while the best-fit single dipole models were the rotating non-regularised and the moving regularised models. When combined, these forward-inverse models appeared to offer clinically meaningful ESL results when referenced to an averaged cortex overlay, best-fit dipoles localising to the central fissure region in BFEC and to the basolateral temporal region in MTLE. Single-to-single spike and single-to-averaged spike measures of concordance for dipole location and orientation were stronger for BFEC versus MTLE. The use of an additional pair of inferior temporal electrodes in MTLE directed best-fit dipoles towards the basomesial temporal region. Inverse correlations were noted between unexplained variance (RD) and dipole strength (Amp), RD and signal to noise ratio (SNR), and SNR and confidence ellipsoid (CE) volume. Intra- and inter-operator levels of agreement were relatively robust for dipole location and orientation

  15. Utility of ketone measurement in the prevention, diagnosis and management of diabetic ketoacidosis.

    Science.gov (United States)

    Misra, S; Oliver, N S

    2015-01-01

    Ketone measurement is advocated for the diagnosis of diabetic ketoacidosis and assessment of its severity. Assessing the evidence base for ketone measurement in clinical practice is challenging because multiple methods are available but there is a lack of consensus about which is preferable. Evaluating the utility of ketone measurement is additionally problematic because of variability in the biochemical definition of ketoacidosis internationally and in the proposed thresholds for ketone measures. This has led to conflicting guidance from expert bodies on how ketone measurement should be used in the management of ketoacidosis. The development of point-of-care devices that can reliably measure the capillary blood ketone β-hydroxybutyrate (BOHB) has widened the spectrum of applications of ketone measurement, but whether the evidence base supporting these applications is robust enough to warrant their incorporation into routine clinical practice remains unclear. The imprecision of capillary blood ketone measures at higher values, the lack of availability of routine laboratory-based assays for BOHB and the continued cost-effectiveness of urine ketone assessment prompt further discussion on the role of capillary blood ketone assessment in ketoacidosis. In the present article, we review the various existing methods of ketone measurement, the precision of capillary blood ketone as compared with other measures, its diagnostic accuracy in predicting ketoacidosis and other clinical applications including prevention, assessment of severity and resolution of ketoacidosis.

  16. Modeling energy flexibility of low energy buildings utilizing thermal mass

    DEFF Research Database (Denmark)

    Foteinaki, Kyriaki; Heller, Alfred; Rode, Carsten

    2016-01-01

    Electricity demand can be shifted from on-peak hours to off-peak hours to match production patterns, and buildings could act as flexibility suppliers to the energy system through this load shifting potential, provided that the large thermal mass of the building stock could be utilized for energy storage. In the present study, the load shifting potential of an apartment of a low energy building in Copenhagen is assessed, utilizing the heat storage capacity of the thermal mass when the heating system is switched off to relieve the energy system. It is shown that when using a 4-hour preheating period before switching off the heating ... the external envelope and the thermal capacity of the internal walls are identified as the main parameters that affect the load shifting potential of the apartment.
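
    The preheating idea can be sketched with a one-resistance, one-capacitance (1R1C) building model: heat the apartment above the normal set-point, switch the heating off, and let the indoor temperature decay toward the outdoor temperature with time constant RC. The parameter values below are invented for illustration and are not taken from the study.

      import math

      C = 40e6      # effective thermal capacitance [J/K]   (assumed)
      R = 0.005     # envelope thermal resistance [K/W]     (assumed)
      T_OUT = 0.0   # outdoor temperature [degC]

      def free_float(t_start, hours):
          """Indoor temperature after the heating is switched off."""
          tau = R * C   # time constant of the exponential decay [s]
          return T_OUT + (t_start - T_OUT) * math.exp(-hours * 3600.0 / tau)

      for preheat in (20.0, 22.0):   # normal vs. preheated starting temperature
          print(preheat, [round(free_float(preheat, h), 2) for h in (1, 2, 4)])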

  17. A knowledge based model of electric utility operations. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-08-11

    This report consists of an appendix that provides a documentation and help capability for an analyst using the developed expert system of electric utility operations running in CLIPS. This capability is provided through a separate package running under the WINDOWS operating system and keyed to provide displays of text, graphics, and mixed text and graphics that explain and elaborate on the specific decisions being made within the knowledge based expert system.

  18. Simultaneous measurements of umbilical uptake, fetal utilization rate, and fetal turnover rate of glucose.

    Science.gov (United States)

    Hay, W W; Sparks, J W; Quissell, B J; Battaglia, F C; Meschia, G

    1981-06-01

    Fetal umbilical glucose uptake was compared with simultaneous measurements of glucose turnover and utilization rates in 12 pregnant sheep, at a mean of 137 days gestational age (range, 118–146 days). Umbilical glucose uptake was calculated by application of the Fick principle. Fetal glucose turnover rate was measured by a primed-constant infusion of [14C]- and [3H]glucose (glucose turnover rate = tracer infusion rate divided by fetal glucose specific activity). The calculation of fetal glucose utilization rate required subtraction of the loss of tracer to the placenta from the tracer infusion rate, thus defining the net tracer entry into the fetus for direct comparison with the net umbilical glucose uptake. In fed, normoglycemic sheep, these measurements demonstrated statistical equivalence of umbilical glucose uptake rate (4.77 mg.min-1.kg-1 ± 0.34 SE) and glucose utilization rate ([14C]glucose, 5.58 mg.min-1.kg-1 ± 0.54 SE; and [3H]glucose, 7.19 mg.min-1.kg-1 ± 1.24 SE) when tested by two-way analysis of variance (P greater than 0.1). In three fasted, hypoglycemic sheep, the umbilical glucose uptake rate fell to 1.43 mg.min-1.kg-1 ± 0.56 SE, which was considerably lower than the simultaneous glucose utilization rate ([14C]glucose, 4.78 mg.min-1.kg-1 ± 0.48 SE; and [3H]glucose, 6.81 mg.min-1.kg-1 ± 2.19 SE). Thus, in the normoglycemic, late-gestation fetal lamb, there appears to be little glucogenesis, whereas glucogenesis may become significant during fasting-induced fetal hypoglycemia.
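
    The tracer arithmetic described in this abstract reduces to two divisions, which the toy numbers below (invented, with schematic units) make explicit: turnover is tracer infusion rate over specific activity, and utilization additionally subtracts the tracer lost to the placenta.

      tracer_infusion = 2.0e5        # tracer infusion rate [dpm/min]         (assumed)
      specific_activity = 4.0e4      # fetal glucose specific activity [dpm/mg]
      placental_loss = 4.0e4         # tracer lost to the placenta [dpm/min]

      turnover = tracer_infusion / specific_activity                        # 5.0 mg/min
      utilization = (tracer_infusion - placental_loss) / specific_activity  # 4.0 mg/min
      print(turnover, utilization)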

  19. Assessing the effect of the VHA PCMH model on utilization patterns among veterans with PTSD.

    Science.gov (United States)

    Randall, Ian; Maynard, Charles; Chan, Gary; Devine, Beth; Johnson, Chris

    2017-05-01

    The Veterans Health Administration (VHA) implemented a patient-centered medical home (PCMH)-based Patient Aligned Care Teams (PACT) model in 2010. We examined its effects on the utilization of health services among US veterans with posttraumatic stress disorder (PTSD). We analyzed VHA clinical and administrative data to conduct an interrupted time series study. Encounter-level data were obtained for the period of April 1, 2005, through March 31, 2014. We identified 642,660 veterans with PTSD who were assigned to either a high- or low-PCMH implementation group using a validated VHA PCMH measurement instrument. We measured the effect of high-PCMH implementation on the count of hospitalizations and primary care, specialty care, specialty mental health, emergency department (ED), and urgent care encounters compared with low-PCMH implementation. We fit a multilevel, mixed-effects, negative binomial regression model and estimated average marginal effects and incidence rate ratios. Compared with patients in low-PCMH implementation clinics, patients who received care in high-PCMH implementation clinics experienced a decrease in hospitalizations (incremental effect [IE], -0.036; 95% confidence interval [CI], -0.0371 to -0.0342), a decrease in specialty mental health encounters (IE, -0.009; 95% CI, -0.009 to -0.008), a decrease in urgent care encounters (IE, -0.210; 95% CI, -0.212 to -0.207), and a decrease in ED encounters (IE, -0.056; 95% CI, -0.057 to -0.054). High PCMH implementation positively affected utilization patterns by reducing downstream use of high-cost inpatient and specialty services. Future research should investigate whether a reduction in utilization of health services indeed results in higher levels of virtual and non-face-to-face access, or if the PACT model has reduced necessary access to care.
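
    For readers unfamiliar with the model class named here, the following is a minimal sketch, on simulated data, of a negative binomial regression of encounter counts on a high- versus low-implementation indicator. It is not the paper's multilevel specification, and the data-generating numbers are arbitrary.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(42)
      n = 5000
      high_pcmh = rng.integers(0, 2, size=n)        # 1 = high-implementation clinic
      lam = np.exp(0.5 - 0.2 * high_pcmh)           # true rate, lower under high PCMH
      # gamma mixing of the Poisson rate yields overdispersed (NB-like) counts
      encounters = rng.poisson(lam * rng.gamma(2.0, 0.5, size=n))

      X = sm.add_constant(high_pcmh.astype(float))
      fit = sm.GLM(encounters, X, family=sm.families.NegativeBinomial()).fit()
      print(np.exp(fit.params[1]))                  # approximate incidence rate ratio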

  20. Modeling and Optimizing Energy Utilization of Steel Production Process: A Hybrid Petri Net Approach

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2013-01-01

    The steel industry is responsible for nearly 9% of anthropogenic energy utilization in the world, so it is urgent to reduce the total energy utilization of the steel industry under the huge pressure to reduce energy consumption and CO2 emissions. Meanwhile, steel manufacturing is a typical continuous-discrete process with multiple procedures, objects, constraints, and machines coupled, which makes energy management rather difficult. In order to study the energy flow within the real steel production process, this paper presents a new modeling and optimization method for the process based on Hybrid Petri Nets (HPN). Firstly, we introduce a detailed description of HPN. Then the real steel production process from one typical integrated steel plant is transformed into a Hybrid Petri Net model as a case, and from this model we obtain a series of constraints for our optimization model. In consideration of the real process situation, we pick steel production, energy efficiency, and self-made gas surplus as the main optimization goals. Afterwards, a fuzzy linear programming method is conducted to obtain the multiobjective optimization results. Finally, some measures are suggested to improve this low-efficiency, high-cost process structure.
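
    The token-game semantics underlying Petri nets can be shown in a few lines. The sketch below keeps only the discrete part (the paper's hybrid nets also carry continuous places for energy flows), and the two-transition "steel" net is a made-up miniature, not the plant model from the paper.

      # transition -> (tokens consumed per place, tokens produced per place)
      net = {
          "charge_furnace": ({"scrap": 1, "energy": 2}, {"hot_metal": 1}),
          "cast_slab":      ({"hot_metal": 1, "energy": 1}, {"slab": 1}),
      }
      marking = {"scrap": 3, "energy": 8, "hot_metal": 0, "slab": 0}

      def enabled(t):
          pre, _ = net[t]
          return all(marking[p] >= k for p, k in pre.items())

      def fire(t):
          pre, post = net[t]
          for p, k in pre.items():
              marking[p] -= k
          for p, k in post.items():
              marking[p] += k

      while any(enabled(t) for t in net):
          fire(next(t for t in net if enabled(t)))
      print(marking)   # marking left when the net deadlocks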

  1. Utilization of coincidence criteria in absolute length measurements by optical interferometry in vacuum and air

    Science.gov (United States)

    Schödel, R.

    2015-08-01

    Traceability of length measurements to the international system of units (SI) can be realized by using optical interferometry, making use of the well-known frequencies of monochromatic light sources mentioned in the Mise en Pratique for the realization of the metre. At some national metrology institutes, such as the Physikalisch-Technische Bundesanstalt (PTB) in Germany, the absolute length of prismatic bodies (e.g. gauge blocks) is realized by so-called gauge-block interference comparators. At PTB, a number of such imaging phase-stepping interference comparators exist, including specialized vacuum interference comparators, each equipped with three highly stabilized laser light sources. The length of a material measure is expressed as a multiple of each wavelength. The large number of integer interference orders can be extracted by the method of exact fractions, in which the coincidence of the lengths resulting from the different wavelengths is utilized as a criterion. The unambiguous extraction of the integer interference orders is an essential prerequisite for correct length measurements. This paper critically discusses coincidence criteria and their validity for three modes of absolute length measurements: 1) measurements under vacuum, in which the wavelengths can be identified with the vacuum wavelengths; 2) measurements in air, in which the air refractive index is obtained from environmental parameters using an empirical equation; and 3) measurements in air, in which the air refractive index is obtained interferometrically by utilizing a vacuum cell placed along the measurement pathway. For case 3), which corresponds to PTB's Kösters-Comparator for long gauge blocks, the unambiguous determination of integer interference orders related to the air refractive index could be improved by about a factor of ten when an 'overall dispersion value', suggested in this paper, is used as a coincidence criterion.
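
    The method of exact fractions lends itself to a compact numerical sketch: the interferometer supplies only the fractional fringe order for each wavelength, and the integer order is recovered by scanning candidate lengths near the nominal value and keeping the one whose predicted fractions coincide with all measured ones. Wavelengths, the test length, and the search window below are illustrative, and the simple minimum-mismatch rule is a stand-in for the coincidence criteria the paper analyzes.

      lams = [633.0e-9, 612.0e-9, 543.0e-9]   # vacuum wavelengths [m]  (assumed)
      true_len = 0.100000150                  # "unknown" length used to fake data [m]
      fracs = [(2 * true_len / l) % 1.0 for l in lams]   # measured fractional orders

      nominal, window = 0.1, 5e-6             # prior knowledge of the length [m]
      n_lo = int(2 * (nominal - window) / lams[0])
      n_hi = int(2 * (nominal + window) / lams[0])

      def frac_dist(a, b):                    # cyclic distance between fractions
          d = abs(a - b) % 1.0
          return min(d, 1.0 - d)

      best = min(
          ((n0 + fracs[0]) * lams[0] / 2 for n0 in range(n_lo, n_hi + 1)),
          key=lambda L: max(frac_dist((2 * L / l) % 1.0, f)
                            for l, f in zip(lams[1:], fracs[1:])),
      )
      print(best)   # recovers ~0.100000150 m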

  2. Utility of Arden Syntax for Representation of Fuzzy Logic in Clinical Quality Measures.

    Science.gov (United States)

    Jenders, Robert A

    2015-01-01

    Prior work has established that fuzzy logic is prevalent in clinical practice guidelines and that Arden Syntax is suitable for representing clinical quality measures (CQMs). Approved since then, Arden Syntax v2.9 (2012) has formal constructs for fuzzy logic even as new formalisms are proposed to represent quality logic. Determine the prevalence of fuzzy logic in CQMs and assess the utility of a contemporary version of Arden Syntax for representing them. Linguistic variables were tabulated in the 329 Assessing Care of the Vulnerable Elderly (ACOVE-3) CQMs, and these logic statements were encoded in Arden Syntax. In a total of 392 CQMs, linguistic variables occurred in 30.6%, and Arden Syntax could be used to represent these formally. Fuzzy logic occurs commonly in CQMs, and Arden Syntax offers particular utility for the representations of these constructs.
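
    Since Arden Syntax itself is a specialized language, here is the flavor of a fuzzy linguistic variable in plain Python instead: a guideline term such as "elderly" becomes a gradual membership function rather than a crisp age cut-off, and fuzzy conjunction is taken as the minimum. The anchor ages and the example criterion are invented.

      def ramp(x, lo, hi):
          """0 below lo, 1 above hi, linear in between (a simple membership)."""
          if x <= lo:
              return 0.0
          if x >= hi:
              return 1.0
          return (x - lo) / (hi - lo)

      elderly = lambda age: ramp(age, 60.0, 75.0)                        # assumed anchors
      meets_criterion = lambda age, frailty: min(elderly(age), frailty)  # fuzzy AND
      for age in (55, 65, 80):
          print(age, elderly(age), meets_criterion(age, 0.7))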

  3. Abbreviated quality of life scales for schizophrenia: comparison and utility of two brief community functioning measures.

    Science.gov (United States)

    Fervaha, Gagan; Foussias, George; Siddiqui, Ishraq; Agid, Ofer; Remington, Gary

    2014-04-01

    The Heinrichs-Carpenter Quality of Life Scale (QLS) is the most extensively used real-world community functioning scale in schizophrenia research. However, the extensive time required to administer it and the inclusion of items that overlap conceptually with negative symptoms limit its use across studies. The present study examined the validity and utility of two abbreviated QLS measures against the full QLS excluding negative symptom items. The sample included 1427 patients with schizophrenia who completed the baseline visit in the CATIE study. The validity of the two abbreviated QLS measures (7-item and 4-item) was examined against the full QLS, excluding the intrapsychic foundations subscale, using correlation analysis. The utility of the abbreviated measures was explored by examining associations between the functioning scales and clinical variables and longitudinal change. Both abbreviated QLS measures were highly predictive of the full QLS (both r=0.91, p<0.001), supporting their use in schizophrenia research, especially when assessment of functional outcome is not the focus. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Validation of the SF-6D Health State Utilities Measure in Lower Extremity Sarcoma

    Directory of Open Access Journals (Sweden)

    Kenneth R. Gundle

    2014-01-01

    Aim. Health state utilities measures are preference-weighted patient-reported outcome (PRO) instruments that facilitate comparative effectiveness research. One such measure, the SF-6D, is generated from the Short Form 36 (SF-36). This report describes a psychometric evaluation of the SF-6D in a cross-sectional population of lower extremity sarcoma patients. Methods. Patients with lower extremity sarcoma from a prospective database who had completed the SF-36 and Toronto Extremity Salvage Score (TESS) were eligible for inclusion. Computed SF-6D health states were given preference weights based on a prior valuation. The primary outcome was correlation between the SF-6D and TESS. Results. In 63 pairs of surveys in a lower extremity sarcoma population, the mean preference-weighted SF-6D score was 0.59 (95% CI 0.4–0.81). The distribution of SF-6D scores approximated a normal curve (skewness = 0.11). There was a positive correlation between the SF-6D and TESS (r=0.75, P<0.01). Respondents who reported walking aid use had lower SF-6D scores (0.53 versus 0.61, P=0.03). Five respondents underwent amputation, with lower SF-6D scores that approached significance (0.48 versus 0.6, P=0.06). Conclusions. The SF-6D health state utilities measure demonstrated convergent validity without evidence of ceiling or floor effects. The SF-6D is a health state utilities measure suitable for further research in sarcoma patients.

  5. Pathologists' roles in clinical utilization management. A financing model for managed care.

    Science.gov (United States)

    Zhao, J J; Liberman, A

    2000-03-01

    In ancillary or laboratory utilization management, the roles of pathologists have not been explored fully in managed care systems. Two possible reasons may account for this: pathologists' potential contributions have not been defined clearly, and effective measurement of and reasonable compensation for the pathologist's contribution remains vague. The responsibilities of pathologists in clinical practice may include clinical pathology and laboratory services (which have long been well-defined and are compensated according to a resource-based relative value system-based coding system), laboratory administration, clinical utilization management, and clinical research. Although laboratory administration services have been compensated with mechanisms such as percentage of total service revenue or fixed salary, the involvement of pathologists seems less today than in the past, owing to increased clinical workload and time constraints in an expanding managed care environment, especially in community hospital settings. The lack of financial incentives or appropriate compensation mechanisms for the services likely accounts for the current situation. Furthermore, the importance of pathologist-driven utilization management in laboratory services lacks recognition among hospital administrators, managed care executives, and pathologists themselves, despite its potential benefits for reducing cost and enhancing quality of care. We propose a financial compensation model for such services and summarize its advantages.

  6. Measurement and Modeling: Infectious Disease Modeling

    NARCIS (Netherlands)

    Kretzschmar, MEE

    2016-01-01

    After some historical remarks about the development of mathematical theory for infectious disease dynamics we introduce a basic mathematical model for the spread of an infection with immunity. The concepts of the model are explained and the model equations are derived from first principles. Using th
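
    The "basic mathematical model for the spread of an infection with immunity" referred to here is usually the SIR system, which the short sketch below integrates numerically; the transmission and recovery rates are arbitrary demo values.

      import numpy as np
      from scipy.integrate import odeint

      def sir(y, t, beta, gamma):
          s, i, r = y
          ds = -beta * s * i               # mass-action new infections
          di = beta * s * i - gamma * i
          dr = gamma * i                   # recovery confers immunity
          return [ds, di, dr]

      beta, gamma = 0.4, 0.1               # basic reproduction number R0 = 4
      t = np.linspace(0.0, 120.0, 121)
      sol = odeint(sir, [0.999, 0.001, 0.0], t, args=(beta, gamma))
      print("peak prevalence:", sol[:, 1].max())
      print("final susceptible fraction:", sol[-1, 0])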

  7. Utilization of AERONET polarimetric measurements for improving retrieval of aerosol microphysics: GSFC, Beijing and Dakar data analysis

    Science.gov (United States)

    Fedarenka, Anton; Dubovik, Oleg; Goloub, Philippe; Li, Zhengqiang; Lapyonok, Tatyana; Litvinov, Pavel; Barel, Luc; Gonzalez, Louis; Podvin, Thierry; Crozel, Didier

    2016-08-01

    The study presents efforts to include polarimetric data in the routine inversion of radiometric ground-based measurements for characterization of atmospheric aerosols, and an analysis of the advantages obtained in retrieval results. First, to operationally process the large amount of polarimetric data, a data preparation tool was developed. The AERONET inversion code, adapted for inversion of both intensity and polarization measurements, was used for processing. Second, in order to estimate the effect of utilizing polarimetric information on aerosol retrieval results, both synthetic data and real measurements were processed using the developed routine and analyzed. The sensitivity study was carried out using simulated data based on three main aerosol models: desert dust, urban industrial, and urban clean aerosols. The test investigated the effects of utilizing polarization data in the presence of random noise, bias in measurements of optical thickness, and angular pointing shift. The results demonstrate the advantage of utilizing polarization data in the case of aerosols with a pronounced concentration of fine particles. Further, an extended set of AERONET observations was processed. Data for three sites were used: GSFC, USA (clean urban aerosol dominated by fine particles), Beijing, China (polluted industrial aerosol characterized by a pronounced mixture of both fine and coarse modes), and Dakar, Senegal (desert dust dominated by coarse particles). The results revealed a considerable advantage of applying polarimetric data for characterizing fine-mode-dominated aerosols, including industrial pollution (Beijing). The use of polarization corrects the particle size distribution by decreasing the overestimated fine mode and increasing the coarse mode. It also increases the underestimated real part of the refractive index and improves the retrieval of the fraction of spherical particles due to the high sensitivity of polarization to particle shape

  8. National coal utilization assessment: modeling long-term coal production with the Argonne coal market model

    Energy Technology Data Exchange (ETDEWEB)

    Dux, C.D.; Kroh, G.C.; VanKuiken, J.C.

    1977-08-01

    The Argonne Coal Market Model was developed as part of the National Coal Utilization Assessment, a comprehensive study of coal-related environmental, health, and safety impacts. The model was used to generate long-term coal market scenarios that became the basis for comparing the impacts of coal-development options. The model has a relatively high degree of regional detail concerning both supply and demand. Coal demands are forecast by a combination of trend and econometric analysis and then input exogenously into the model. Coal supply in each region is characterized by a linearly increasing function relating increments of new mine capacity to the marginal cost of extraction. Rail-transportation costs are econometrically estimated for each supply-demand link. A quadratic programming algorithm is used to calculate flow patterns that minimize consumer costs for the system.
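
    A drastically simplified version of the flow computation this abstract describes can be written as a small transportation program: choose shipments from supply regions to demand regions minimizing extraction plus rail cost. The sketch uses constant marginal costs, so it is a linear stand-in for the model's quadratic program with linearly increasing extraction costs, and all the numbers are invented.

      from scipy.optimize import linprog

      supply_cost = [20.0, 28.0]             # $/ton extraction in regions A and B
      rail_cost = [[6.0, 11.0],              # $/ton A->East, A->West
                   [9.0, 4.0]]               # $/ton B->East, B->West
      demand = [120.0, 80.0]                 # tons required in East and West
      capacity = [150.0, 100.0]              # mine capacity per supply region

      # flow variables flattened as [A_E, A_W, B_E, B_W]
      c = [supply_cost[s] + rail_cost[s][d] for s in range(2) for d in range(2)]
      A_eq = [[1, 0, 1, 0],                  # East demand met exactly
              [0, 1, 0, 1]]                  # West demand met exactly
      A_ub = [[1, 1, 0, 0],                  # region A capacity
              [0, 0, 1, 1]]                  # region B capacity
      res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand,
                    bounds=[(0, None)] * 4)
      print(res.x, res.fun)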

  9. Co-firing straw and coal in a 150-MWe utility boiler: in situ measurements

    DEFF Research Database (Denmark)

    Hansen, P. F.B.; Andersen, Karin Hedebo; Wieck-Hansen, K.;

    1998-01-01

    A 2-year demonstration program is carried out by the Danish utility I/S Midtkraft at a 150-MWe PF-boiler unit reconstructed for co-firing straw and coal. As a part of the demonstration program, a comprehensive in situ measurement campaign was conducted during the spring of 1996 in collaboration...... deposition propensities and high temperature corrosion during co-combustion of straw and coal in PF-boilers. Danish full scale results from co-firing straw and coal, the test facility and test program, and the potential theoretical support from the Technical University of Denmark are presented in this paper...

  10. About the parametrizations utilized to perform magnetic moments measurements using the transient field technique

    Science.gov (United States)

    Gómez, A. M.; Torres, D. A.

    2016-07-01

    The experimental study of nuclear magnetic moments, using the Transient Field technique, makes use of spin-orbit hyperfine interactions to generate strong magnetic fields, above the kilo-Tesla regime, capable of creating a precession of the nuclear spin. A theoretical description of such magnetic fields is still under development, and the use of parametrizations remains a common way to address the lack of theoretical information. In this contribution, a review of the main parametrizations utilized in measurements of nuclear magnetic moments is presented, and the challenges of creating a theoretical description from first principles are discussed.

  11. Utility and limitations of measures of health inequities: a theoretical perspective

    Directory of Open Access Journals (Sweden)

    Olakunle Alonge

    2015-09-01

    What is already known on this subject? Various measures have been used to quantify health inequities among populations in recent times; most of these measures were derived to capture socioeconomic inequalities in health. These different measures do not always lend themselves to common interpretation by policy makers and health managers because they each reflect limited aspects of the concept of health inequities. What does this study add? To inform a more appropriate application of the different measures currently used in quantifying health inequities, this article explicates common theories underlying the definition of health inequities and uses this understanding to show the utility and limitations of these different measures. It also suggests some key features of an ideal indicator based on the conceptual understanding, with the hope of influencing future efforts to develop more robust measures of health inequities. The article also provides a conceptual 'product label' for the common measures of health inequities to guide users and 'consumers' in making more robust inferences and conclusions. This paper examines common approaches for quantifying health inequities and assesses the extent to which they incorporate key theories necessary for explicating the definition of health inequity. The first theoretical analysis examined the distinction between inter-individual and inter-group health inequalities as measures of health inequities. The second analysis considered the notion of fairness in health inequalities from different philosophical perspectives. To understand the extent to which different measures of health inequities incorporate these theoretical explanations, four criteria were used to assess each measure: (1) Does the indicator demonstrate inter-group or inter-individual health inequalities, or both? (2) Does it reflect health inequalities in relation to socioeconomic position? (3) Is it sensitive to the absolute transfer of

  12. Estimating Independent Locally Shifted Random Utility Models for Ranking Data

    Science.gov (United States)

    Lam, Kar Yin; Koning, Alex J.; Franses, Philip Hans

    2011-01-01

    We consider the estimation of probabilistic ranking models in the context of conjoint experiments. By using approximate rather than exact ranking probabilities, we avoided the computation of high-dimensional integrals. We extended the approximation technique proposed by Henery (1981) in the context of the Thurstone-Mosteller-Daniels model to any…

  13. Recent advances in modeling nutrient utilization in ruminants

    NARCIS (Netherlands)

    Kebreab, E.; Dijkstra, J.; Bannink, A.; France, J.

    2009-01-01

    Mathematical modeling techniques have been applied to study various aspects of the ruminant, such as rumen function, post-absorptive metabolism and product composition. This review focuses on advances made in modeling rumen fermentation and its associated rumen disorders, and energy and nutrient uti

  14. Comparative assessment of diverse strategies for malaria vector population control based on measured rates at which mosquitoes utilize targeted resource subsets.

    Science.gov (United States)

    Killeen, Gerry F; Kiware, Samson S; Seyoum, Aklilu; Gimnig, John E; Corliss, George F; Stevenson, Jennifer; Drakeley, Christopher J; Chitnis, Nakul

    2014-08-28

    Eliminating malaria requires vector control interventions that dramatically reduce adult mosquito population densities and survival rates. Indoor applications of insecticidal nets and sprays are effective against an important minority of mosquito species that rely heavily upon human blood and habitations for survival. However, complementary approaches are needed to tackle a broader diversity of less human-specialized vectors by killing them at other resource targets. Impacts of strategies that target insecticides to humans or animals can be rationalized in terms of biological coverage of blood resources, quantified as proportional coverage of all blood resources mosquito vectors utilize. Here, this concept is adapted to enable impact prediction for diverse vector control strategies based on measurements of utilization rates for any definable, targetable resource subset, even if that overall resource is not quantifiable. The usefulness of this approach is illustrated by deriving utilization rate estimates for various blood, resting site, and sugar resource subsets from existing entomological survey data. Reported impacts of insecticidal nets upon human-feeding vectors, and insecticide-treated livestock upon animal-feeding vectors, are approximately consistent with model predictions based on measured utilization rates for those human and animal blood resource subsets. Utilization rates for artificial sugar baits compare well with blood resources, and are consistent with observed impact when insecticide is added. While existing data was used to indirectly measure utilization rates for a variety of resting site subsets, by comparison with measured rates of blood resource utilization in the same settings, current techniques for capturing resting mosquitoes underestimate this quantity, and reliance upon complex models with numerous input parameters may limit the applicability of this approach. While blood and sugar consumption can be readily quantified using existing
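
    The "biological coverage" notion is, at its core, a weighted sum: the share of all blood meals taken on a targeted resource subset, scaled by how much of that subset an intervention actually protects. The utilization shares and coverage rates below are invented placeholders, not the paper's field estimates.

      blood_sources = {   # assumed fraction of all blood meals per host subset
          "humans_indoors_asleep": 0.45,   # reachable by insecticidal nets
          "humans_outdoors":       0.15,
          "cattle":                0.30,   # reachable by insecticide-treated livestock
          "other_animals":         0.10,
      }

      def biological_coverage(protected_subsets, use_rate):
          """use_rate = fraction of the protected subsets actually covered."""
          share = sum(blood_sources[s] for s in protected_subsets)
          return share * use_rate

      print(biological_coverage(["humans_indoors_asleep"], use_rate=0.8))  # nets
      print(biological_coverage(["cattle"], use_rate=0.6))                 # livestock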

  15. Rodent models of cardiopulmonary bypass: utility in improving perioperative outcomes

    NARCIS (Netherlands)

    de Lange, F.

    2008-01-01

    Despite advances in surgical and anesthesia techniques, subtle neurologic injury still remains an important complication after cardiac surgery. Because the causes are multifactorial and complex, research in an appropriate small animal model for cardiopulmonary bypass (CPB) is warranted. This thesis

  16. Kinetic models of cell growth, substrate utilization and bio ...

    African Journals Online (AJOL)

    2008-05-02

    A simple model was proposed using the logistic equation for the growth and the Leudeking-Piret equation for product formation ... (melanoidin), which may create many problems and also ... where the constant µ is defined as the specific growth rate (Equation 1) ...
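
    The named model family combines logistic biomass growth with Leudeking-Piret product kinetics and a yield-plus-maintenance substrate balance; a generic version, with invented rate constants, looks like this.

      import numpy as np
      from scipy.integrate import odeint

      mu_max, x_max = 0.3, 5.0      # 1/h, g/L                           (assumed)
      alpha, beta = 2.0, 0.05       # growth- and non-growth-associated terms
      y_xs, m_s = 0.5, 0.02         # biomass yield on substrate, maintenance

      def rates(y, t):
          x, p, s = y
          dx = mu_max * x * (1.0 - x / x_max)   # logistic growth
          dp = alpha * dx + beta * x            # Leudeking-Piret product formation
          ds = -(dx / y_xs + m_s * x)           # substrate utilization
          return [dx, dp, ds]

      t = np.linspace(0.0, 48.0, 200)
      sol = odeint(rates, [0.1, 0.0, 30.0], t)
      print(sol[-1])   # final biomass, product, substrate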

  17. Animal models of obsessive–compulsive disorder: utility and limitations

    Science.gov (United States)

    Alonso, Pino; López-Solà, Clara; Real, Eva; Segalàs, Cinto; Menchón, José Manuel

    2015-01-01

    Obsessive–compulsive disorder (OCD) is a disabling and common neuropsychiatric condition of poorly known etiology. Many attempts have been made in the last few years to develop animal models of OCD with the aim of clarifying the genetic, neurochemical, and neuroanatomical basis of the disorder, as well as of developing novel pharmacological and neurosurgical treatments that may help to improve the prognosis of the illness. The latter goal is particularly important given that around 40% of patients with OCD do not respond to currently available therapies. This article summarizes strengths and limitations of the leading animal models of OCD including genetic, pharmacologically induced, behavioral manipulation-based, and neurodevelopmental models according to their face, construct, and predictive validity. On the basis of this evaluation, we discuss that currently labeled “animal models of OCD” should be regarded not as models of OCD but, rather, as animal models of different psychopathological processes, such as compulsivity, stereotypy, or perseverance, that are present not only in OCD but also in other psychiatric or neurological disorders. Animal models might constitute a challenging approach to study the neural and genetic mechanism of these phenomena from a trans-diagnostic perspective. Animal models are also of particular interest as tools for developing new therapeutic options for OCD, with the greatest convergence focusing on the glutamatergic system, the role of ovarian and related hormones, and the exploration of new potential targets for deep brain stimulation. Finally, future research on neurocognitive deficits associated with OCD through the use of analogous animal tasks could also provide a genuine opportunity to disentangle the complex etiology of the disorder. PMID:26346234

  18. Utilization of remote sensing observations in hydrologic models

    Science.gov (United States)

    Ragan, R. M.

    1977-01-01

    Most of the remote sensing related work in hydrologic modeling has centered on modifying existing models to take advantage of the capabilities of new sensor techniques. There has been enough success with this approach to ensure that remote sensing is a powerful tool in modeling watershed processes. Unfortunately, many of the models in use were designed without recognizing the growth of remote sensing technology. Thus, their parameters were selected to be map or field-crew definable. It is believed that the real benefits will come through the evolution of new models having new parameters that are developed specifically to take advantage of our capabilities in remote sensing. The ability to define hydrologically active areas could have a significant impact. The ability to define soil moisture and the evolution of new techniques to estimate evapotranspiration could significantly modify our approach to hydrologic modeling. Still, without a major educational effort to develop an understanding of the techniques used to extract parameter estimates from remote sensing data, the potential offered by this new technology will not be achieved.

  19. Tritium Specific Adsorption Simulation Utilizing the OSPREY Model

    Energy Technology Data Exchange (ETDEWEB)

    Veronica Rutledge; Lawrence Tavlarides; Ronghong Lin; Austin Ladshaw

    2013-09-01

    During the processing of used nuclear fuel, volatile radionuclides will be discharged to the atmosphere if no recovery processes are in place to limit their release. The volatile radionuclides of concern are 3H, 14C, 85Kr, and 129I. Methods are being developed, via adsorption and absorption unit operations, to capture these radionuclides. It is necessary to model these unit operations to aid in the evaluation of technologies and in the future development of an advanced used nuclear fuel processing plant. A collaboration between Fuel Cycle Research and Development Offgas Sigma Team member INL and a NEUP grant including ORNL, Syracuse University, and Georgia Institute of Technology has been formed to develop off-gas models and support off-gas research. This report discusses the development of a tritium-specific adsorption model. The model was developed using the OSPREY model, integrating it with a fundamental-level isotherm model developed under the NEUP grant and with experimental data provided by that grant.

  20. Utilization of an ultrasound beam steering angle for measurements of tissue displacement vector and lateral displacement

    Directory of Open Access Journals (Sweden)

    Chikayoshi Sumi

    2010-09-01

    of a lateral displacement. However, for displacement vector measurements to describe complex tissue motions (eg, cardiac motion), if the axial coordinate corresponds to the depth direction in the target tissue, an ideal steering angle will be 45°. A two-dimensional echo simulation shows that for the block-matching methods, LM yields more accurate displacement vector measurements than ASTA, whereas with MAM and MDM using a moving average and a mirror setting and 1D methods, ASTA yields more accurate lateral displacement measurements than LM. The block-matching method requires fewer calculations than the moving average method; however, a lower degree of accuracy is obtained. As with LM, multidimensional measurement methods yield more accurate measurements with ASTA than the corresponding 1D measurement methods. Summarizing, for displacement vector measurements or lateral displacement measurements using the multidimensional measurement methods, the ranking of the degree of measurement accuracy and stability is ASTA with a mirror setting > LM with a moving average > LM with block matching > ASTA with block matching. Because every tissue has its own motion (heart, liver, etc.) and occasionally obstacles, such as bones, interfere with the measurements, the target tissue will determine the selection of the proper beamforming method, with a choice between LM and ASTA. As previously clarified for use with LM, an appropriate displacement measurement method should also be selected for use with ASTA according to the echo signal-to-noise ratio, the required spatial resolution and the required calculation speed. ASTA, together with LM, can potentially enable the utilization of new aspects of displacement measurements. Keywords: steering angle, lateral modulation, displacement vector measurement, lateral displacement measurement

  1. Method and apparatus for measuring gravitational acceleration utilizing a high temperature superconducting bearing

    Energy Technology Data Exchange (ETDEWEB)

    Hull, John R. (Downers Grove, IL)

    2000-01-01

    Gravitational acceleration is measured in all spatial dimensions with improved sensitivity by utilizing a high temperature superconducting (HTS) gravimeter. The HTS gravimeter is comprised of a permanent magnet suspended in a spaced relationship from a high temperature superconductor, and a cantilever having a mass at its free end is connected to the permanent magnet at its fixed end. The permanent magnet and superconductor combine to form a bearing platform with extremely low frictional losses, and the rotational displacement of the mass is measured to determine gravitational acceleration. Employing a high temperature superconductor component has the significant advantage of having an operating temperature at or below 77 K, whereby cooling may be accomplished with liquid nitrogen.

  3. Utilization of Software Tools for Uncertainty Calculation in Measurement Science Education

    Science.gov (United States)

    Zangl, Hubert; Zine-Zine, Mariam; Hoermaier, Klaus

    2015-02-01

    Despite its importance, uncertainty is often neglected by practitioners in the design of systems, even in safety-critical applications. Thus, problems arising from uncertainty may only be identified late in the design process, leading to additional costs. Although numerous tools exist to support uncertainty calculation, reasons for their limited usage in early design phases may be low awareness of the existence of the tools and insufficient training in their practical application. We present a teaching philosophy that addresses uncertainty from the very beginning of teaching measurement science, in particular with respect to the utilization of software tools. The developed teaching material is based on the GUM method and makes use of uncertainty toolboxes in the simulation environment. Based on examples in measurement science education, we discuss advantages and disadvantages of the proposed teaching philosophy and include feedback from students.
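
    As a small illustration of the GUM method the abstract refers to (a minimal sketch; the measurement function and uncertainty values are invented for the example, and real toolboxes additionally handle correlated inputs and Monte Carlo propagation):

        import numpy as np

        # GUM-style combined standard uncertainty of y = f(x) for uncorrelated
        # inputs, with sensitivity coefficients from central finite differences.
        def combined_uncertainty(f, x, u, h=1e-6):
            x = np.asarray(x, float)
            sens = np.empty_like(x)
            for i in range(x.size):
                dx = np.zeros_like(x)
                dx[i] = h * max(abs(x[i]), 1.0)
                sens[i] = (f(x + dx) - f(x - dx)) / (2 * dx[i])  # df/dx_i
            return float(np.sqrt(np.sum((sens * np.asarray(u, float)) ** 2)))

        # Example: power P = V^2 / R, with u(V) = 0.02 V and u(R) = 0.5 ohm.
        P = lambda z: z[0] ** 2 / z[1]
        print(combined_uncertainty(P, x=[12.0, 100.0], u=[0.02, 0.5]))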

  4. The clinical utility of CK-MB measurement in patients suspected of acute coronary syndrome.

    Science.gov (United States)

    Kim, Jaehyup; Hashim, Ibrahim A

    2016-05-01

    This study aims to assess the clinical utility of CK-MB measurement in patients suspected of acute coronary syndrome (ACS). All CK-MB and troponin T measurements performed during the study period were reviewed, including cases in which CK-MB concentrations were increased whereas troponin T concentrations were negative, and cases with a positive troponin T result alongside a normal CK-MB result. In this latter group, the discordant normal CK-MB lowered suspicion for ACS in only 22 cases (2.7%). The most common interpretations for an isolated positive troponin were demand ischemia and impaired renal function. In most cases, discordant CK-MB results were not considered a significant finding. In the setting of suspected ACS, CK-MB has limited clinical impact when contemporary troponin assay results are available.

  5. Work Measurement Techniques Utilized by The Building Industry in The Midlands Province Of Zimbabwe

    Directory of Open Access Journals (Sweden)

    Tirivavi Moyo

    2014-07-01

    The Zimbabwean construction industry, in both the private and public sectors, is characterized by cost and time overruns. Whilst the causes are innumerable, labour productivity control, through the use of effective work measurement techniques, is paramount, as labour constitutes a considerable portion of any construction project. It is therefore expedient to investigate the work measurement techniques utilized by the industry. The focus was on the Midlands Province, which hosts a considerable number of mining entities undergoing building construction growth on the back of significant investments since 2009. The survey was undertaken through interview-administered questionnaires given to Construction Industry Federation of Zimbabwe registered companies that are resident in the province and to those that have undertaken or are undertaking construction projects within the same province. Construction companies in the Midlands Province have overwhelmingly, albeit inappropriately, used the estimating technique, as reported by 95% of the respondents. The outputs generated from the use of this technique are significantly different from the actual outputs, directly causing time overruns on the project sites. The other methods, time study at 33%, work sampling at 10% and synthesis at 5%, have been sparingly utilized. The results from the use of time study and work sampling in combination with the estimating technique are within the allowable limits, and hence these projects have no time-overrun concerns emanating from the use of these techniques.

  6. Predictive Modeling of Defibrillation utilizing Hexahedral and Tetrahedral Finite Element Models: Recent Advances

    Science.gov (United States)

    Triedman, John K.; Jolley, Matthew; Stinstra, Jeroen; Brooks, Dana H.; MacLeod, Rob

    2008-01-01

    ICD implants may be complicated by body size and anatomy. One approach to this problem has been the adoption of creative, extracardiac implant strategies using standard ICD components. Because data on the safety or efficacy of such ad hoc implant strategies are lacking, we have developed image-based finite element models (FEMs) to compare electric fields and expected defibrillation thresholds (DFTs) using standard and novel electrode locations. In this paper, we review recently published studies by our group using such models, and progress in meshing strategies to improve efficiency and visualization. Our preliminary observations predict that there may be large changes in DFTs with clinically relevant variations of electrode placement. Extracardiac ICDs of various lead configurations are predicted to be effective in both children and adults. This approach may aid both ICD development and patient-specific optimization of electrode placement, but the simplified nature of current models dictates further development and validation prior to clinical or industrial utilization. PMID:18817926

  7. Icodextrin enhances survival in an intraperitoneal ovarian cancer murine model utilizing gene therapy.

    Science.gov (United States)

    Rocconi, Rodney P; Numnum, Michael T; Zhu, Zeng B; Lu, Baogen; Wang, Minghui; Rivera, Angel A; Stoff-Khalili, Mariam; Alvarez, Ronald D; Curiel, David T; Makhija, Sharmila

    2006-12-01

    Icodextrin, a novel glucose polymer solution utilized for peritoneal dialysis, has been demonstrated to prolong the retention of intraperitoneal (IP) instillation volumes in comparison to standard PBS solutions. In an animal model of ovarian cancer, we explored whether a survival advantage exists when utilizing icodextrin rather than PBS as a delivery solution for an infectivity-enhanced virotherapy approach. Initial experiments evaluated whether icodextrin would adversely affect replication of a clinical-grade infectivity-enhanced conditionally replicative adenovirus (Delta24-RGD). Virus was added to prepared blinded solutions of PBS or icodextrin (20%) and then evaluated in vitro in various human ovarian cancer cell lines (SKOV3.ip1, PA-1, and Hey) and in vivo in a SKOV3.ip1 human ovarian cancer IP murine model. Viral replication was measured by detecting adenovirus E4 gene levels utilizing QRT-PCR. Survival was subsequently evaluated in a separate SKOV3.ip1 ovarian cancer IP murine model. Cohorts of mice were treated in blinded fashion with PBS alone, icodextrin alone, PBS+Delta24-RGD, or icodextrin+Delta24-RGD. Survival data were plotted on a Kaplan-Meier curve and statistical calculations performed using the log-rank test. There was no adverse effect of icodextrin on vector replication in either the ovarian cancer cell lines or the murine model tumor samples evaluated. Median survival in the IP-treated animal cohorts was 23 days for the PBS group, 40 days for the icodextrin group, 65 days for the PBS+Delta24-RGD group, and 105 days for icodextrin+Delta24-RGD (p=0.023). Of note, 5 of the 10 mice in the icodextrin+Delta24-RGD group were alive at the end of the study period, all without evidence of tumor (120 days). These experiments suggest that the use of dialysates such as icodextrin may further enhance the therapeutic effects of novel IP virotherapy and other gene therapy strategies for ovarian cancer. Phase I studies utilizing icodextrin-based virotherapy for ovarian cancer are

  8. Directional wave measurements and modelling

    Digital Repository Service at National Institute of Oceanography (India)

    Anand, N.M.; Nayak, B.U.; Bhat, S.S.; SanilKumar, V.

    Some of the results obtained from analysis of the monsoon directional wave data measured over 4 years in shallow waters off the west coast of India are presented. The directional spectrum computed from the time series data seems to indicate...

  9. Measuring corporate social responsibility using composite indices: Mission impossible? The case of the electricity utility industry

    Directory of Open Access Journals (Sweden)

    Juan Diego Paredes-Gazquez

    2016-01-01

    Corporate social responsibility is a multidimensional concept that is often measured using diverse indicators. Composite indices can aggregate these single indicators into one measurement. This article aims to identify the key challenges in constructing a composite index for measuring corporate social responsibility. The process is illustrated by the construction of a composite index for measuring social outcomes in the electricity utility industry. The sample consisted of seventy-four companies from twenty-three different countries, and one special administrative region, operating in the industry in 2011. The findings show that (1) the unavailability of information about corporate social responsibility, (2) the particular characteristics of this information, and (3) the weighting of indicators are the main obstacles when constructing the composite index. We highlight that an effective composite index should have a clear objective, a solid theoretical background and a robust structure. In a practical sense, researchers should reconsider how they use composite indices to measure corporate social responsibility, as more transparency and stringency are needed when constructing these tools.

  10. Utility of skinfold thickness measurement in non-ambulatory patients with Duchenne muscular dystrophy.

    Science.gov (United States)

    Ishizaki, Masatoshi; Kedoin, Chika; Ueyama, Hidetsugu; Maeda, Yasushi; Yamashita, Satoshi; Ando, Yukio

    2017-01-01

    Nutritional disorders in Duchenne muscular dystrophy (DMD) worsen the medical condition. In particular, obesity is a serious problem that increases the risk of cardiomyopathy and affects nursing care. However, it is often difficult to evaluate body fatness in the advanced stages of DMD. Skinfold thickness measurement is a classical method to evaluate body fatness and is easily performed, even for bed-bound patients at home. We aimed to investigate the utility of skinfold thickness measurement in non-ambulatory DMD patients. Twenty-two patients with non-ambulatory, steroid-naive DMD, ranging in age from 12 to 47 years, were evaluated by body mass index (BMI), blood tests, measurement of triceps skinfold thickness (TSF), and abdominal computed tomography (CT) measurement of the areas of both subcutaneous and visceral fat. TSF showed good correlation with BMI (r = 0.80). These results suggest that skinfold thickness measurement may be applicable as a screening tool in clinical practice, where CT and magnetic resonance imaging assessment is often difficult in patients with advanced DMD.

  11. Utilization-Based Modeling and Optimization for Cognitive Radio Networks

    Science.gov (United States)

    Liu, Yanbing; Huang, Jun; Liu, Zhangxiong

    The cognitive radio technique promises to manage and allocate the scarce radio spectrum in highly varying and disparate modern environments. This paper considers a cognitive radio scenario composed of two queues for the primary (licensed) users and cognitive (unlicensed) users. Based on a Markov process, the system state equations are derived and an optimization model for the system is proposed. Next, the system performance is evaluated through calculations, which show the rationality of our system model. Furthermore, a discussion of different parameter settings for the system is presented based on the experimental results.
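
    A minimal sketch of the kind of Markov analysis described (not the paper's actual two-queue model; the states, rates, and preemption rule below are illustrative assumptions):

        import numpy as np

        # One licensed channel shared by primary (licensed) and cognitive
        # (unlicensed) users as a small continuous-time Markov chain.
        # States: 0 = idle, 1 = cognitive user, 2 = primary user.
        lam_p, lam_s = 2.0, 3.0  # illustrative arrival rates
        mu_p, mu_s = 4.0, 5.0    # illustrative service rates
        Q = np.array([
            [-(lam_p + lam_s), lam_s,          lam_p],
            [mu_s,            -(mu_s + lam_p), lam_p],   # primary preempts
            [mu_p,             0.0,            -mu_p],
        ])

        # Steady state: solve pi Q = 0 with sum(pi) = 1, replacing one
        # balance equation by the normalization constraint.
        A = np.vstack([Q.T[:-1], np.ones(3)])
        pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
        print(dict(idle=pi[0], cognitive=pi[1], primary=pi[2]))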

  12. User-owned utility models for rural electrification

    Energy Technology Data Exchange (ETDEWEB)

    Waddle, D.

    1997-12-01

    The author discusses the history of rural electric cooperatives (RECs) in the United States, and the broader question of whether such organizations can serve as a model for rural electrification in other countries. The author points out the features of such cooperatives which have given them stability and strength, and emphasizes that for such programs to succeed, many of these same features must be present. He argues that the cooperative model is not outdated, but that it needs strong local support and a governmental structure which is supportive, and in particular not hostile.

  13. Recursive inter-generational utility in global climate risk modeling

    Energy Technology Data Exchange (ETDEWEB)

    Minh, Ha-Duong [Centre International de Recherche sur l' Environnement et le Developpement (CIRED-CNRS), 75 - Paris (France); Treich, N. [Institut National de Recherches Agronomiques (INRA-LEERNA), 31 - Toulouse (France)

    2003-07-01

    This paper distinguishes between relative risk aversion and resistance to inter-temporal substitution in climate risk modeling. Stochastic recursive preferences are introduced in a stylized numeric climate-economy model using preliminary IPCC 1998 scenarios. It shows that higher risk aversion increases the optimal carbon tax. Higher resistance to inter-temporal substitution alone has the same effect as increasing the discount rate, provided that the risk is not too large. We discuss the implications of these findings for the debate on discounting and sustainability under uncertainty. (author)
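
    The distinction the abstract draws is commonly formalized with recursive (Epstein-Zin) preferences; the following is a standard textbook form, given as an assumption since the paper's exact specification is not reproduced here:

        U_t = \left[ (1-\beta)\, c_t^{1-\rho}
              + \beta \left( \mathbb{E}_t \left[ U_{t+1}^{1-\gamma} \right] \right)^{\frac{1-\rho}{1-\gamma}} \right]^{\frac{1}{1-\rho}}

    where \gamma is the coefficient of relative risk aversion, \rho the resistance to inter-temporal substitution (the inverse of the elasticity of inter-temporal substitution), and \beta the discount factor. Setting \gamma = \rho collapses this to standard time-additive expected utility, which is why the two attitudes cannot be separated in the classical model.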

  14. Transgenic models of Alzheimer's disease: better utilization of existing models through viral transgenesis.

    Science.gov (United States)

    Platt, Thomas L; Reeves, Valerie L; Murphy, M Paul

    2013-09-01

    Animal models have been used for decades in the Alzheimer's disease (AD) research field and have been crucial for the advancement of our understanding of the disease. Most models are based on familial AD mutations of genes involved in the amyloidogenic process, such as the amyloid precursor protein (APP) and presenilin 1 (PS1). Some models also incorporate mutations in tau (MAPT) known to cause frontotemporal dementia, a neurodegenerative disease that shares some elements of neuropathology with AD. While these models are complex, they fail to display pathology that perfectly recapitulates that of the human disease. Unfortunately, this level of pre-existing complexity creates a barrier to the further modification and improvement of these models. However, as the efficacy and safety of viral vectors improve, they are becoming a widely used alternative to germline genetic modification. In this review we discuss how this approach can be used to better utilize common mouse models in AD research. This article is part of a Special Issue entitled: Animal Models of Disease.

  15. On the Utility of Island Models in Dynamic Optimization

    DEFF Research Database (Denmark)

    Lissovoi, Andrei; Witt, Carsten

    2015-01-01

    to λ = O(n^(1-ε)), the (1+λ) EA is still not able to track the optimum of Maze. If the migration interval is increased, the algorithm is able to track the optimum even for logarithmic λ. Finally, the relationship of τ, λ, and the ability of the island model to track the optimum is investigated more closely.

  16. Modeling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  18. Animal models of β-hemoglobinopathies: utility and limitations

    Directory of Open Access Journals (Sweden)

    McColl B

    2016-11-01

    Bradley McColl, Jim Vadolas Cell and Gene Therapy Laboratory, Murdoch Childrens Research Institute, Royal Children’s Hospital, Parkville, VIC, Australia Abstract: The structural and functional conservation of hemoglobin throughout mammals has made the laboratory mouse an exceptionally useful organism in which to study both the protein and the individual globin genes. Early researchers looked to the globin genes as an excellent model in which to examine gene regulation – bountifully expressed and displaying a remarkably consistent pattern of developmental activation and silencing. In parallel with the growth of research into expression of the globin genes, mutations within the β-globin gene were identified as the cause of the β-hemoglobinopathies such as sickle cell disease and β-thalassemia. These lines of enquiry stimulated the development of transgenic mouse models, first carrying individual human globin genes and then substantial human genomic fragments incorporating the multigenic human β-globin locus and regulatory elements. Finally, mice were devised carrying mutant human β-globin loci on genetic backgrounds deficient in the native mouse globins, resulting in phenotypes of sickle cell disease or β-thalassemia. These years of work have generated a group of model animals that display many features of the β-hemoglobinopathies and provided enormous insight into the mechanisms of gene regulation. Substantive differences in the expression of human and mouse globins during development have also come to light, revealing the limitations of the mouse model, but also providing opportunities to further explore the mechanisms of globin gene regulation. In addition, animal models of β-hemoglobinopathies have demonstrated the feasibility of gene therapy for these conditions, now showing success in human clinical trials. Such models remain in use to dissect the molecular events of globin gene regulation and to identify novel treatments based

  19. An Algebraic Graphical Model for Decision with Uncertainties, Feasibilities, and Utilities

    CERN Document Server

    Pralet, C; Verfaillie, G; 10.1613/jair.2151

    2011-01-01

    Numerous formalisms and dedicated algorithms have been designed in the last decades to model and solve decision making problems. Some formalisms, such as constraint networks, can express "simple" decision problems, while others are designed to take into account uncertainties, unfeasible decisions, and utilities. Even in a single formalism, several variants are often proposed to model different types of uncertainty (probability, possibility...) or utility (additive or not). In this article, we introduce an algebraic graphical model that encompasses a large number of such formalisms: (1) we first adapt previous structures from Friedman, Chu and Halpern for representing uncertainty, utility, and expected utility in order to deal with generic forms of sequential decision making; (2) on these structures, we then introduce composite graphical models that express information via variables linked by "local" functions, thanks to conditional independence; (3) on these graphical models, we finally define a simple class ...

  20. ASPEN+ and economic modeling of equine waste utilization for localized hot water heating via fast pyrolysis

    Science.gov (United States)

    ASPEN Plus based simulation models have been developed to design a pyrolysis process for the on-site production and utilization of pyrolysis oil from equine waste at the Equine Rehabilitation Center at Morrisville State College (MSC). The results indicate that utilization of all available Equine Reh...

  1. Rolling Resistance Measurement and Model Development

    DEFF Research Database (Denmark)

    Andersen, Lasse Grinderslev; Larsen, Jesper; Fraser, Elsje Sophia;

    2015-01-01

    There is an increased focus worldwide on understanding and modeling rolling resistance because reducing the rolling resistance by just a few percent will lead to substantial energy savings. This paper reviews the state of the art of rolling resistance research, focusing on measuring techniques, surface and texture modeling, contact models, tire models, and macro-modeling of rolling resistance.

  2. Rodent models of diabetic nephropathy: their utility and limitations

    OpenAIRE

    Kitada M; Ogura Y; Koya D

    2016-01-01

    Munehiro Kitada,1,2 Yoshio Ogura,2 Daisuke Koya1,2 1Division of Anticipatory Molecular Food Science and Technology, Medical Research Institute, 2Department of Diabetology and Endocrinology, Kanazawa Medical University, Uchinada, Ishikawa, Japan Abstract: Diabetic nephropathy is the most common cause of end-stage renal disease. Therefore, novel therapies for the suppression of diabetic nephropathy must be developed. Rodent models are useful for elucidating the pathogenesis of diseases and test...

  3. Comparative utility of disability progression measures in PPMS: Analysis of the PROMiSe data set.

    Science.gov (United States)

    Koch, Marcus W; Cutter, Gary R; Giovannoni, Gavin; Uitdehaag, Bernard M J; Wolinsky, Jerry S; Davis, Mat D; Steinerman, Joshua R; Knappertz, Volker

    2017-07-01

    To assess the comparative utility of disability progression measures in primary progressive MS (PPMS) using the PROMiSe trial data set. Data for patients randomized to placebo (n = 316) in the PROMiSe trial were included in this analysis. Disability was assessed using change in single (Expanded Disability Status Scale [EDSS], timed 25-foot walk [T25FW], and 9-hole peg test [9HPT]) and composite disability measures (EDSS/T25FW, EDSS/9HPT, and EDSS/T25FW/9HPT). Cumulative and cross-sectional unconfirmed disability progression (UDP) and confirmed disability progression (CDP; sustained for 3 months) rates were assessed at 12 and 24 months. CDP rates defined by a ≥20% increase in T25FW were higher than those defined by EDSS score at 12 and 24 months. CDP rates defined by T25FW or EDSS score were higher than those defined by 9HPT score. The 3-part composite measure was associated with more CDP events (41.4% and 63.9% of patients at 12 and 24 months, respectively) than the 2-part measure (EDSS/T25FW [38.5% and 59.5%, respectively]) and any single measure. Cumulative UDP and CDP rates were higher than cross-sectional rates. The T25FW or composite measures of disability may be more sensitive to disability progression in patients with PPMS and should be considered as the primary endpoint for future studies of new therapies. CDP may be the preferred measure in classic randomized controlled trials in which cumulative disability progression rates are evaluated; UDP may be feasible for cross-sectional studies.

  4. Identifying damage locations under ambient vibrations utilizing vector autoregressive models and Mahalanobis distances

    Science.gov (United States)

    Mosavi, A. A.; Dickey, D.; Seracino, R.; Rizkalla, S.

    2012-01-01

    This paper presents a study for identifying damage locations in an idealized steel bridge girder using ambient vibration measurements. A sensitive damage feature is proposed in the context of statistical pattern recognition to address the damage detection problem. The study utilizes an experimental program that consists of a two-span continuous steel beam subjected to ambient vibrations. The vibration responses of the beam are measured along its length under simulated ambient vibrations and different healthy/damaged conditions of the beam. The ambient vibration is simulated using a hydraulic actuator, and damage is induced by cutting portions of the flange at two locations. Multivariate vector autoregressive models were fitted to the vibration response time histories measured at the multiple sensor locations. The damage feature identifies the damage location by applying Mahalanobis distances to the coefficients of the vector autoregressive models. A linear discriminant criterion was used to evaluate the amount of variation in the damage features obtained for different sensor locations with respect to the healthy condition of the beam. The analyses indicate that the highest variations in the damage features were coincident with the sensors located closest to the damage. The presented method showed promising sensitivity for identifying the damage location even when the induced damage was very small.
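
    A minimal sketch of the feature-extraction idea (toy data and a simplified VAR(1) fit without intercepts; the authors' actual pipeline, model orders, and discriminant analysis are not reproduced here):

        import numpy as np

        def var1_coefficients(y):
            """Least-squares fit of y[t] = A @ y[t-1] + e[t]; returns A flattened."""
            past, present = y[:-1], y[1:]
            A, *_ = np.linalg.lstsq(past, present, rcond=None)
            return A.T.ravel()

        # Baseline: coefficient vectors from many "healthy" response windows
        rng = np.random.default_rng(0)
        healthy = [var1_coefficients(rng.standard_normal((500, 3)))  # 3 sensors
                   for _ in range(50)]
        mu = np.mean(healthy, axis=0)
        VI = np.linalg.pinv(np.cov(np.array(healthy).T))  # guards against singularity

        def damage_feature(window):
            """Mahalanobis distance of a window's VAR coefficients from baseline."""
            d = var1_coefficients(window) - mu
            return float(np.sqrt(d @ VI @ d))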

  5. Factors Impacting Student Service Utilization at Ontario Colleges: Key Performance Indicators as a Measure of Success: A Niagara College View

    Science.gov (United States)

    Veres, David

    2015-01-01

    Student success in Ontario College is significantly influenced by the utilization of student services. At Niagara College there has been a significant investment in student services as a strategy to support student success. Utilizing existing KPI data, this quantitative research project is aimed at measuring factors that influence both the use of…

  6. Ultrasonic transmission measurements in the characterization of viscoelasticity utilizing polymeric waveguides

    Science.gov (United States)

    Bause, Fabian; Rautenberg, Jens; Feldmann, Nadine; Webersen, Manuel; Claes, Leander; Gravenkamp, Hauke; Henning, Bernd

    2016-10-01

    For the numerical simulation of acoustic wave propagation in (measurement) systems and their design, the use of reliable material models and material parameters is a central issue. Especially in polymers, acoustic material parameters cannot be evaluated based on quasistatically measured parameters, such as those specified in data sheets by the manufacturers. In this work, a measurement method is presented which quantifies, for a given polymeric material sample, a complex-valued and frequency-dependent material model. A novel three-dimensional approach for modeling viscoelasticity is introduced. The material samples are designed as hollow cylindrical waveguides to account for the high damping characteristics of the polymers under test and to provide an axisymmetric structure for good performance of waveguide modeling and reproducible coupling conditions arising from the smaller coupling area in the experiment. Ultrasonic transmission measurements are carried out between the parallel faces of the sample. To account for the frequency dependency of the material properties, five different transducer pairs with ascending central frequencies from 750 kHz to 2.5 MHz are used. After passing through the sample, each of the five received signals contains information on the material parameters, which are determined in an inverse procedure. The solution of the inverse problem is carried out by iterative comparison of innovative forward SBFEM-based simulations of the entire measurement system with the experimentally determined measurement data. For a given solution of the inverse problem, an estimate of the measurement uncertainty of each identified material parameter is calculated. Moreover, a second measurement setup, based on laser-acoustic excitation of Lamb modes in plate-shaped specimens, is presented. Using this setup, the identified material properties can be verified on samples with a varied geometry, but made from the same material.

  7. Modeling the utility of binaural cues for underwater sound localization.

    Science.gov (United States)

    Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo

    2014-06-01

    The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances.

  8. Utility of salivary enzyme immunoassays for measuring estradiol and testosterone in adolescents: a pilot study.

    Science.gov (United States)

    Amatoury, Mazen; Lee, Jennifer W; Maguire, Ann M; Ambler, Geoffrey R; Steinbeck, Katharine S

    2016-04-09

    We investigated the utility of enzyme immunoassay kits for measuring low levels of salivary estradiol and testosterone in adolescents and objectively assessed the prevalence of blood contamination. Endocrine patients provided plasma and saliva for estradiol (females) or testosterone (males) assay. Saliva samples were also tested with a blood contamination kit. Picomolar levels of salivary estradiol in females failed to show any significant correlation with plasma values (r=0.20, p=0.37). The nanomolar levels of salivary testosterone in males showed a strong correlation (r=0.78, p<0.001). A significant number of saliva samples had blood contamination. After exclusion, correlations remained non-significant for estradiol, but strengthened for testosterone (r=0.88, p<0.001). The salivary estradiol enzyme immunoassay is not clinically informative at low levels. Users should interpret clinical saliva results with caution due to potential blood contamination. Our data support the utility of the salivary testosterone enzyme immunoassay for monitoring adolescent boys on hormone developmental therapy.

  9. UTILITY OF MECHANISTIC MODELS FOR DIRECTING ADVANCED SEPARATIONS RESEARCH & DEVELOPMENT ACTIVITIES: Electrochemically Modulated Separation Example

    Energy Technology Data Exchange (ETDEWEB)

    Schwantes, Jon M.

    2009-06-01

    The objective of this work was to demonstrate the utility of mechanistic computer models designed to simulate actinide behavior for use in efficiently and effectively directing advanced laboratory R&D activities associated with developing advanced separations methods.

  10. Accessing and Utilizing Remote Sensing Data for Vectorborne Infectious Diseases Surveillance and Modeling

    Science.gov (United States)

    Kiang, Richard; Adimi, Farida; Kempler, Steven

    2008-01-01

    Background: The transmission of vectorborne infectious diseases is often influenced by environmental, meteorological and climatic parameters, because the vector life cycle depends on these factors. For example, the geophysical parameters relevant to malaria transmission include precipitation, surface temperature, humidity, elevation, and vegetation type. Because these parameters are routinely measured by satellites, remote sensing is an important technological tool for predicting, preventing, and containing a number of vectorborne infectious diseases, such as malaria, dengue, West Nile virus, etc. Methods: A variety of NASA remote sensing data can be used for modeling vectorborne infectious disease transmission. We will discuss both the well known and less known remote sensing data, including Landsat, AVHRR (Advanced Very High Resolution Radiometer), MODIS (Moderate Resolution Imaging Spectroradiometer), TRMM (Tropical Rainfall Measuring Mission), ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), EO-1 (Earth Observing One) ALI (Advanced Land Imager), and the SIESIP (Seasonal to Interannual Earth Science Information Partner) dataset. Giovanni is a Web-based application developed by the NASA Goddard Earth Sciences Data and Information Services Center. It provides a simple and intuitive way to visualize, analyze, and access vast amounts of Earth science remote sensing data. After remote sensing data are obtained, a variety of techniques, including generalized linear models and artificial-intelligence-oriented methods, can be used to model the dependency of disease transmission on these parameters. Results: The processes of accessing, visualizing and utilizing precipitation data using Giovanni, and acquiring other data at additional websites, are illustrated. Malaria incidence time series for some parts of Thailand and Indonesia are used to demonstrate that malaria incidences are reasonably well modeled with generalized linear models and artificial
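
    A minimal sketch of the generalized-linear-model step (synthetic placeholder data, not the study's Thailand/Indonesia datasets; covariate names are illustrative):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 60                                    # five years of monthly data
        precip = rng.gamma(2.0, 50.0, n)          # e.g. TRMM rainfall (mm)
        lst = rng.normal(30.0, 2.0, n)            # e.g. MODIS surface temp (C)
        cases = rng.poisson(np.exp(0.01 * precip - 0.05 * (lst - 30.0) + 2.0))

        # Poisson GLM of monthly malaria cases on remote-sensing covariates
        X = sm.add_constant(np.column_stack([precip, lst]))
        model = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
        print(model.summary())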

  11. Crew Autonomy Measures and Models (CAMM) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — SA Technologies will employ a two-part solution including measures and models for evaluating crew autonomy in exploratory space missions. An integrated measurement...

  12. Analysis and Flexible Structural Modeling for Oscillating Wing Utilizing Aeroelasticity

    Institute of Scientific and Technical Information of China (English)

    Shao Ke; Wu Zhigang; Yang Chao

    2008-01-01

    A design approach for oscillating-wing aircraft that makes use of the modal characteristics of the natural vibration of flexible structures is proposed. A series of equations concerning the oscillating wing of flexible structures is derived. The kinetic equation for aerodynamic force coupled with elastic movement is set up, and the relevant formulae are derived. The unsteady aerodynamic term in these formulae is revised. The design principle, design process and range of application of this oscillating-wing analytical method are elaborated. A flexible structural oscillating wing model is set up, and the corresponding time response analysis and frequency response analysis are conducted. The analytical results indicate that adopting the new driving approach for the oscillating wing avoids flutter problems and is able to produce propulsive force. Furthermore, it consumes much less power than a fixed wing generating the same lift.

  13. A customer satisfaction model for a utility service industry

    Science.gov (United States)

    Jamil, Jastini Mohd; Nawawi, Mohd Kamal Mohd; Ramli, Razamin

    2016-08-01

    This paper explores the effect of Image, Customer Expectation, Perceived Quality and Perceived Value on Customer Satisfaction, and investigates the effect of Image and Customer Satisfaction on Customer Loyalty for a mobile phone provider in Malaysia. The results of this research are based on data gathered online from international students at a public university in Malaysia. Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to analyze the data collected on the international students' perceptions. The results show that Image and Perceived Quality have a significant impact on Customer Satisfaction. Image and Customer Satisfaction were also found to be significantly related to Customer Loyalty. However, no significant relationships were found between Customer Expectation and Customer Satisfaction, Perceived Value and Customer Satisfaction, or Customer Expectation and Perceived Value. We hope that the findings may assist the mobile phone provider in the production and promotion of their services.

  14. A Utility-Based Reputation Model for the Internet of Things

    OpenAIRE

    Aziz, Benjamin; Fremantle, Paul; Wei, Rui; Arenas, Alvaro

    2016-01-01

    The MQTT protocol has emerged over the past decade as a key protocol for a number of low power and lightweight communication scenarios including machine-to-machine and the Internet of Things. In this paper we develop a utility-based reputation model for MQTT, where we can assign a reputation score to participants in a network based on monitoring their behaviour. We mathematically define the reputation model using utility functions on...
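
    As a rough sketch of what a utility-based reputation update could look like (purely illustrative; the paper's mathematical definition is truncated in the excerpt above and is not reproduced here):

        # Exponentially weighted reputation update: each observed behaviour is
        # scored by a utility function in [0, 1] and blended into the running score.
        def update_reputation(reputation, utility, weight=0.1):
            return (1 - weight) * reputation + weight * utility

        rep = 0.5                       # neutral prior for a new MQTT client
        for u in (1.0, 1.0, 0.0, 1.0):  # utilities of observed behaviours
            rep = update_reputation(rep, u)
        print(round(rep, 3))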

  15. Laser shaft alignment measurement model

    Science.gov (United States)

    Mo, Chang-tao; Chen, Changzheng; Hou, Xiang-lin; Zhang, Guoyu

    2007-12-01

    The track of a laser beam on the photosensitive surface of a receiver forms a closed curve when the driving shaft and the driven shaft rotate with the same angular velocity and direction of rotation. The coordinates of an arbitrary point on the curve are determined by the relative position of the two shafts. Based on this observation, a mathematical model of laser alignment is set up. By using a data acquisition system and a data processing model for a laser alignment meter with a single laser beam and a detector, and based on the installation parameters stored in the computer, the state parameters between the two shafts can be obtained through calculation and correction. The correction data for the four feet under the chassis of the apparatus being adjusted, moved in the horizontal and the vertical plane, can then be calculated. This instructs us how to move the apparatus to align the shafts.

  16. On how access to an insurance market affects investments in safety measures, based on the expected utility theory

    Energy Technology Data Exchange (ETDEWEB)

    Bjorheim Abrahamsen, Eirik, E-mail: eirik.b.abrahamsen@uis.n [University of Stavanger, 4036 Stavanger (Norway); Asche, Frank [University of Stavanger, 4036 Stavanger (Norway)

    2011-03-15

    This paper focuses on how access to an insurance market should influence investments in safety measures in accordance with the ruling paradigm for decision-making under uncertainty, the expected utility theory. We show that access to an insurance market will in most situations influence investments in safety measures. For an expected utility maximizer, an overinvestment in safety measures is likely if access to an insurance market is ignored, while an underinvestment in safety measures is likely if insurance is purchased without paying attention to the possibility of reducing the probability and/or consequences of an accidental event through safety measures.
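
    A stylized version of the trade-off (this formulation is an illustration, not the authors' model): let m be spending on a safety measure that reduces the accident probability p(m), with p'(m) < 0, initial wealth w and loss L. Without insurance, an expected utility maximizer solves

        \max_m \; (1 - p(m))\, u(w - m) + p(m)\, u(w - m - L)

    whereas with full insurance at an actuarially fair premium p(m)L the objective becomes u(w - m - p(m)L), with first-order condition -p'(m)\,L = 1: the measure is funded exactly up to the point where one more unit of spending reduces the fair premium by one unit. Comparing the two optima illustrates why ignoring the insurance market tends to produce overinvestment, while buying insurance without accounting for p(m) tends to produce underinvestment.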

  17. Vadose zone measurement and modeling

    OpenAIRE

    Hopmans, J.W.; V. Clausnitzer; K.I. Kosugi; Nielsen,D.R.; Somma, F.

    1997-01-01

    The following treatise is a summary of some of the ongoing research activities in the soil physics program at the University of California, Davis. Each of the four listed areas will be presented at the Workshop on special topics on soil physics and crop modeling in Piracicaba at the University of Sao Paulo. We limit ourselves to a general overview of each area, but will present a more thorough discussion with examples at the Workshop.

  18. Measuring the Capacity Utilization of Public District Hospitals in Tunisia: Using Dual Data Envelopment Analysis Approach

    Directory of Open Access Journals (Sweden)

    Chokri Arfa

    2017-01-01

    Background: Public district hospitals (PDHs) in Tunisia are not operating at full plant capacity and underutilize their operating budget. Methods: Individual PDH capacity utilization (CU) is measured for 2000 and 2010 using a dual data envelopment analysis (DEA) approach with shadow-price input and output restrictions. The CU is estimated for 101 of 105 PDHs in 2000 and 94 of 105 PDHs in 2010. Results: On average, unused capacity is estimated at 18% in 2010 vs. 13% in 2000. 26% of PDHs underutilized their operating budget in 2010 vs. 21% in 2000. Conclusion: Inadequate supply, health care quality and the lack of operating budget should be tackled to reduce unmet users' needs and the bypassing of the PDHs and, thus, to increase their CU.
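
    For readers unfamiliar with DEA, the following is a minimal input-oriented CCR efficiency sketch (the plain envelopment form with invented hospital data; the paper's dual formulation with shadow-price restrictions is more involved):

        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, j0):
            """Input-oriented CCR efficiency of unit j0.
            X: (m inputs x n units), Y: (s outputs x n units)."""
            (m, n), s = X.shape, Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]                    # minimize theta
            A_in = np.hstack([-X[:, [j0]], X])             # X @ lam <= theta * x0
            A_out = np.hstack([np.zeros((s, 1)), -Y])      # Y @ lam >= y0
            b_ub = np.r_[np.zeros(m), -Y[:, j0]]
            res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b_ub,
                          bounds=[(None, None)] + [(0, None)] * n, method="highs")
            return res.x[0]

        # Toy data: 2 inputs (beds, budget) and 1 output (admissions), 4 units
        X = np.array([[20.0, 30.0, 40.0, 20.0], [5.0, 8.0, 10.0, 6.0]])
        Y = np.array([[100.0, 120.0, 200.0, 90.0]])
        print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])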

  19. Business Process Modelling for Measuring Quality

    NARCIS (Netherlands)

    Heidari, F.; Loucopoulos, P.; Brazier, F.M.

    2013-01-01

    Business process modelling languages facilitate presentation, communication and analysis of business processes with different stakeholders. This paper proposes an approach that drives specification and measurement of quality requirements and in doing so relies on business process models as representations.

  1. A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models

    Energy Technology Data Exchange (ETDEWEB)

    Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-06-01

    Many regulators, utilities, customer groups, and other stakeholders are reevaluating existing regulatory models and the roles and financial implications for electric utilities in the context of today’s environment of increasing distributed energy resource (DER) penetrations, forecasts of significant T&D investment, and relatively flat or negative utility sales growth. When this is coupled with predictions about fewer grid-connected customers (i.e., customer defection), there is growing concern about the potential for serious negative impacts on the regulated utility business model. Among states engaged in these issues, the range of topics under consideration is broad. Most of these states are considering whether approaches that have been applied historically to mitigate the impacts of previous “disruptions” to the regulated utility business model (e.g., energy efficiency) as well as to align utility financial interests with increased adoption of such “disruptive technologies” (e.g., shareholder incentive mechanisms, lost revenue mechanisms) are appropriate and effective in the present context. A handful of states are presently considering more fundamental changes to regulatory models and the role of regulated utilities in the ownership, management, and operation of electric delivery systems (e.g., New York “Reforming the Energy Vision” proceeding).

  2. Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process

    Science.gov (United States)

    Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.

    2016-12-01

    Spatially distributed continuous-simulation hydrologic models have a large number of parameters open to adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness, measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, and RMSE, can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions arising from grossly different parameter sets. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions that evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but the resulting parameter sets also maintain and utilize available expert knowledge, yielding more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) models within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being calibrated simultaneously. As a result, high degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.
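
    For reference, the two efficiency metrics named above have these standard definitions (a sketch; the study's exact objective functions are not reproduced here):

        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency: 1 is perfect; 0 matches the obs mean."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def kge(obs, sim):
            """Kling-Gupta efficiency (2009 form): 1 is a perfect fit."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            r = np.corrcoef(obs, sim)[0, 1]     # linear correlation
            alpha = sim.std() / obs.std()       # variability ratio
            beta = sim.mean() / obs.mean()      # bias ratio
            return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)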

  3. A systematic review of the psychometric properties of self-report research utilization measures used in healthcare

    Directory of Open Access Journals (Sweden)

    Squires Janet E

    2011-07-01

    Background: In healthcare, a gap exists between what is known from research and what is practiced. Understanding this gap depends upon our ability to robustly measure research utilization. Objectives: The objectives of this systematic review were: to identify self-report measures of research utilization used in healthcare, and to assess the psychometric properties (acceptability, reliability, and validity) of these measures. Methods: We conducted a systematic review of literature reporting use or development of self-report research utilization measures. Our search included multiple databases, ancestry searches, and a hand search. Acceptability was assessed by examining time to complete the measure and missing data rates. Our approach to reliability and validity assessment followed that outlined in the Standards for Educational and Psychological Testing. Results: Of 42,770 titles screened, 97 original studies (108 articles) were included in this review. The 97 studies reported on the use or development of 60 unique self-report research utilization measures. Seven of the measures were assessed in more than one study. Study samples consisted of healthcare providers (92 studies) and healthcare decision makers (5 studies). No studies reported data on acceptability of the measures. Reliability was reported in 32 (33%) of the studies, representing 13 of the 60 measures. Internal consistency (Cronbach's alpha) reliability was reported in 31 studies; values exceeded 0.70 in 29 studies. Test-retest reliability was reported in 3 studies, with Pearson's r coefficients > 0.80. No validity information was reported for 12 of the 60 measures. The remaining 48 measures were classified into a three-level validity hierarchy according to the number of validity sources reported in 50% or more of the studies using the measure. Level one measures (n = 6) reported evidence from any three (out of four possible) Standards validity sources (which, in the case of single item

  4. Top-level modeling of an als system utilizing object-oriented techniques

    Science.gov (United States)

    Rodriguez, L. F.; Kang, S.; Ting, K. C.

    The possible configuration of an Advanced Life Support (ALS) System capable of supporting human life for long-term space missions continues to evolve as researchers investigate potential technologies and configurations. To facilitate the decision process, the development of acceptable, flexible, and dynamic mathematical computer modeling tools capable of system-level analysis is desirable. Object-oriented techniques have been adopted to develop a dynamic top-level model of an ALS system. This approach has several advantages; among these, object-oriented abstractions of systems are inherently modular in architecture. Thus, models can initially be somewhat simplistic, while allowing for adjustments and improvements. In addition, by coding the model in Java, the model can be implemented via the World Wide Web, greatly encouraging its utilization. Systems analysis is further enabled with the utilization of a readily available backend database containing information supporting the model. The subsystem models of the ALS system model include Crew, Biomass Production, Waste Processing and Resource Recovery, Food Processing and Nutrition, and the Interconnecting Space. Each subsystem model and an overall model have been developed. Presented here are the procedure utilized to develop the modeling tool, the vision of the modeling tool, and the current focus for each of the subsystem models.
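
    A minimal sketch of the modular, object-oriented architecture described (class names follow the subsystems listed above, but the interface, rates, and units are illustrative assumptions, not the actual NASA model; it is written here in Python rather than the Java mentioned in the abstract):

        class Subsystem:
            def step(self, dt: float) -> None:
                raise NotImplementedError

        class Crew(Subsystem):
            def __init__(self, size: int):
                self.size, self.o2_consumed = size, 0.0
            def step(self, dt: float) -> None:
                self.o2_consumed += 0.84 * self.size * dt  # ~kg O2 per person-day

        class BiomassProduction(Subsystem):
            def __init__(self, area_m2: float):
                self.area_m2, self.o2_produced = area_m2, 0.0
            def step(self, dt: float) -> None:
                self.o2_produced += 0.025 * self.area_m2 * dt  # illustrative rate

        # The top-level model simply steps each subsystem over the mission time.
        subsystems = [Crew(size=4), BiomassProduction(area_m2=40.0)]
        for day in range(10):
            for s in subsystems:
                s.step(dt=1.0)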

  5. Seamless Location Measuring System with Wifi Beacon Utilized and GPS Receiver based Systems in Both of Indoor and Outdoor Location Measurements

    OpenAIRE

    Kohei Arai

    2015-01-01

    A seamless location measuring system using WiFi beacons and GPS receivers for both indoor and outdoor location measurement is proposed. Experiments conducted indoors and outdoors show that the location measurement accuracy is around 2-3 meters for the designated locations in both environments.

  6. Standard Model measurements with the ATLAS detector

    Directory of Open Access Journals (Sweden)

    Hassani Samira

    2015-01-01

    Full Text Available Various Standard Model measurements have been performed in proton-proton collisions at centre-of-mass energies of √s = 7 and 8 TeV using the ATLAS detector at the Large Hadron Collider. A review of a selection of the latest results of electroweak measurements, W/Z production in association with jets, jet physics and soft QCD is given. Measurements are in general found to be well described by the Standard Model predictions.

  7. An Analysis/Synthesis System of Audio Signal with Utilization of an SN Model

    Directory of Open Access Journals (Sweden)

    G. Rozinaj

    2004-12-01

    Full Text Available An SN (sinusoids plus noise) model is a spectral model in which the periodic components of the sound are represented by sinusoids with time-varying frequencies, amplitudes and phases. The remaining non-periodic components are represented by a filtered noise. The sinusoidal model utilizes physical properties of musical instruments and the noise model utilizes the human inability to perceive the exact spectral shape or the phase of stochastic signals. SN modeling can be applied in compression, transformation, separation of sounds, etc. The designed system is based on methods used in SN modeling. We have proposed a model that achieves good results in audio perception. Although many systems do not save phases of the sinusoids, they are important for better modelling of transients, for the computation of the residual and, last but not least, for stereo signals, too. One of the fundamental properties of the proposed system is the ability of signal reconstruction not only from the amplitude but from the phase point of view as well.
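
    As a concrete illustration of the decomposition, the fragment below is a minimal Python sketch (not the authors' system) of SN synthesis: a sum of sinusoids with time-varying amplitude and explicit phase tracks, plus white noise shaped by a crude low-pass filter. All parameter values are arbitrary placeholders.

        import numpy as np

        fs = 16000                          # sample rate (Hz), assumed
        t = np.arange(fs) / fs              # one second of samples

        # Periodic part: sinusoids with time-varying amplitude
        partials = [(440.0, 1.0), (880.0, 0.5), (1320.0, 0.25)]  # (Hz, amp)
        periodic = np.zeros_like(t)
        for f, a in partials:
            amp = a * np.linspace(1.0, 0.3, t.size)   # decaying envelope
            phase = 2 * np.pi * f * t                 # phase track is kept,
            periodic += amp * np.cos(phase)           # not discarded

        # Stochastic part: white noise through a one-pole low-pass filter
        rng = np.random.default_rng(0)
        noise = rng.standard_normal(t.size)
        filtered = np.empty_like(noise)
        filtered[0] = noise[0]
        for n in range(1, noise.size):
            filtered[n] = 0.95 * filtered[n - 1] + 0.05 * noise[n]

        signal = periodic + 0.1 * filtered            # SN model output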

  8. Using Analytical and Numerical Modeling to Assess the Utility of Groundwater Monitoring Parameters at Carbon Capture, Utilization, and Storage Sites

    Science.gov (United States)

    Porse, S. L.; Hovorka, S. D.; Young, M.; Zeidouni, M.

    2012-12-01

    Carbon capture, utilization, and storage (CCUS) is becoming an important bridge to commercial geologic sequestration (GS) to help reduce anthropogenic CO2 emissions. While CCUS at brownfield sites (i.e. mature oil and gas fields) has operational advantages over GS at greenfield sites (i.e. saline formations), such as the use of existing well infrastructure, previous site activities can add a layer of complexity that must be accounted for when developing groundwater monitoring protection networks. Extensive work has been done on developing monitoring networks at GS sites for CO2 accounting and groundwater protection. However, appropriate monitoring strategies for commercial brownfield sites are still being developed. The goals of this research are to address the added monitoring complexity by adapting simple analytical and numerical models to test these approaches using two common subsurface monitoring parameters, pressure and aqueous geochemistry. The analytical pressure model solves for diffusivity in radial coordinates and the leakage rate derived from Darcy's law. The aqueous geochemical calculation computer program PHREEQC solves the advection-reaction-dispersion equation for 1-D transport and mixing of fluids. The research was conducted at a CO2 enhanced oil recovery (EOR) field on the Gulf Coast of Texas. We modeled the performance over time of one monitoring well from the EOR field using physical and operational data, including lithology and water chemistry samples, and formation pressure data. We explored through statistical analyses the probability of leakage detection using the analytical and numerical methods by varying the monitoring well location spatially and vertically with respect to a leaky fault. Preliminary results indicate that a pressure-based subsurface monitoring system provides a better probability of leakage detection than geochemistry alone, but together these monitoring parameters can improve the chances of leakage detection
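
    The abstract's analytical approach pairs a radial pressure-diffusivity solution with a Darcy-flow leakage estimate. The snippet below sketches only the generic Darcy part, a volumetric leakage rate through a conductive fault zone, with made-up parameter values; it is not the authors' model.

        def darcy_leakage_rate(k, area, dp, mu, length):
            """Volumetric flow rate Q = k * A * dp / (mu * L) (Darcy's law)."""
            return k * area * dp / (mu * length)

        # Hypothetical fault-zone values (SI units)
        q = darcy_leakage_rate(
            k=1e-13,        # permeability, m^2
            area=10.0,      # cross-sectional flow area, m^2
            dp=5e5,         # pressure difference across the leaky interval, Pa
            mu=1e-3,        # brine viscosity, Pa*s
            length=100.0,   # flow-path length, m
        )
        print(f"leakage rate: {q:.2e} m^3/s")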

  9. Radio propagation measurement and channel modelling

    CERN Document Server

    Salous, Sana

    2013-01-01

    While there are numerous books describing modern wireless communication systems that contain overviews of radio propagation and radio channel modelling, there are none that contain detailed information on the design, implementation and calibration of radio channel measurement equipment, the planning of experiments and the in-depth analysis of measured data. The book begins with an explanation of the fundamentals of radio wave propagation and progresses through a series of topics, including the measurement of radio channel characteristics, radio channel sounders, measurement strategies

  10. Dispersion modeling of accidental releases of toxic gases - utility for the fire brigades.

    Science.gov (United States)

    Stenzel, S.; Baumann-Stanzer, K.

    2009-09-01

    Several air dispersion models are available for prediction and simulation of the hazard areas associated with accidental releases of toxic gases. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for effective presentation of results. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. The research project RETOMOD (reference scenario calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Viennese fire brigade, OMV Refining & Marketing GmbH and Synex Ries & Greßlehner GmbH. RETOMOD was funded by the KIRAS safety research program of the Austrian Ministry of Transport, Innovation and Technology (www.kiras.at). The main tasks of this project were: 1. Sensitivity study and optimization of the meteorological input for modeling of the hazard areas (human exposure) during accidental toxic releases. 2. Comparison of several model packages (based on reference scenarios) in order to estimate their utility for the fire brigades. For the purpose of our study the following models were tested and compared: ALOHA (Areal Locations of Hazardous Atmospheres, EPA), MEMPLEX (Keudel av-Technik GmbH), Trace (Safer System), Breeze (Trinity Consulting), SAM (Engineering office Lohmeyer). A set of reference scenarios for chlorine, ammonia, butane and petrol was processed with the models above in order to predict and estimate the human exposure during the event. Furthermore, the application of the observation-based analysis and forecasting system INCA, developed at the Central Institute for Meteorology and Geodynamics (ZAMG), in case of toxic release was

  11. National Utility Financial Statement model (NUFS). Volume III of III: software description. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1981-10-29

    This volume contains a description of the software comprising the National Utility Financial Statement Model (NUFS). This is the third of three volumes describing NUFS provided by ICF Incorporated under contract DEAC-01-79EI-10579. The three volumes are entitled: model overview and description, user's guide, and software guide.

  12. User Guide and Documentation for Five MODFLOW Ground-Water Modeling Utility Programs

    Science.gov (United States)

    Banta, Edward R.; Paschke, Suzanne S.; Litke, David W.

    2008-01-01

    This report documents five utility programs designed for use in conjunction with ground-water flow models developed with the U.S. Geological Survey's MODFLOW ground-water modeling program. One program extracts calculated flow values from one model for use as input to another model. The other four programs extract model input or output arrays from one model and make them available in a form that can be used to generate an ArcGIS raster data set. The resulting raster data sets may be useful for visual display of the data or for further geographic data processing. The utility program GRID2GRIDFLOW reads a MODFLOW binary output file of cell-by-cell flow terms for one (source) model grid and converts the flow values to input flow values for a different (target) model grid. The spatial and temporal discretization of the two models may differ. The four other utilities extract selected 2-dimensional data arrays in MODFLOW input and output files and write them to text files that can be imported into an ArcGIS geographic information system raster format. These four utilities require that the model cells be square and aligned with the projected coordinate system in which the model grid is defined. The four raster-conversion utilities are: CBC2RASTER, which extracts selected stress-package flow data from a MODFLOW binary output file of cell-by-cell flows; DIS2RASTER, which extracts cell-elevation data from a MODFLOW Discretization file; MFBIN2RASTER, which extracts array data from a MODFLOW binary output file of head or drawdown; and MULT2RASTER, which extracts array data from a MODFLOW Multiplier file.
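
    To make the array-to-raster step concrete, here is a small hypothetical Python sketch (not part of the USGS programs themselves) that writes a 2-D model array in the ESRI ASCII raster format that ArcGIS can import. As the report requires, square, axis-aligned cells are assumed.

        import numpy as np

        def array_to_esri_ascii(arr, xll, yll, cellsize, nodata=-9999.0):
            """Render a 2-D model array as an ESRI ASCII raster string."""
            nrows, ncols = arr.shape
            header = (f"NCOLS {ncols}\nNROWS {nrows}\n"
                      f"XLLCORNER {xll}\nYLLCORNER {yll}\n"
                      f"CELLSIZE {cellsize}\nNODATA_VALUE {nodata}\n")
            body = "\n".join(" ".join(f"{v:g}" for v in row) for row in arr)
            return header + body

        heads = np.array([[10.2, 10.1], [10.0, 9.8]])   # toy head array
        print(array_to_esri_ascii(heads, xll=0.0, yll=0.0, cellsize=100.0))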

  13. Deriving utility scores for co-morbid conditions: a test of the multiplicative model for combining individual condition scores

    Directory of Open Access Journals (Sweden)

    Le Petit Christel

    2006-10-01

    Full Text Available Abstract Background The co-morbidity of health conditions is becoming a significant health issue, particularly as populations age, and presents important methodological challenges for population health research. For example, the calculation of summary measures of population health (SMPH) can be compromised if co-morbidity is not taken into account. One popular co-morbidity adjustment used in SMPH computations relies on a straightforward multiplicative combination of the severity weights for the individual conditions involved. While the convenience and simplicity of the multiplicative model are attractive, its appropriateness has yet to be formally tested. The primary objective of the current study was therefore to examine the empirical evidence in support of this approach. Methods The present study drew on information on the prevalence of chronic conditions and a utility-based measure of health-related quality of life (HRQoL), namely the Health Utilities Index Mark 3 (HUI3), available from Cycle 1.1 of the Canadian Community Health Survey (CCHS; 2000–01). Average HUI3 scores were computed for both single and co-morbid conditions, and were also purified by statistically removing the loss of functional health due to health problems other than the chronic conditions reported. The co-morbidity rule was specified as a multiplicative combination of the purified average observed HUI3 utility scores for the individual conditions involved, with the addition of a synergy coefficient s for capturing any interaction between the conditions not explained by the product of their utilities. The fit of the model to the purified average observed utilities for the co-morbid conditions was optimized using ordinary least squares regression to estimate s. Replicability of the results was assessed by applying the method to triple co-morbidities from the CCHS cycle 1.1 database, as well as to double and triple co-morbidities from cycle 2.1 of the CCHS (2003–04). Results
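
    A minimal sketch of the estimation idea, under the assumption (made here purely for illustration; the paper's exact specification may differ) that the model takes the form U12 = s * U1 * U2 and that s is obtained by least squares through the origin over observed co-morbid pairs:

        import numpy as np

        # Purified average utilities: (U1, U2, observed co-morbid utility U12)
        # Values are fabricated for illustration only.
        pairs = np.array([
            [0.85, 0.78, 0.68],
            [0.90, 0.70, 0.65],
            [0.80, 0.80, 0.62],
            [0.75, 0.88, 0.64],
        ])
        u1, u2, u12 = pairs.T
        x = u1 * u2                  # multiplicative prediction without synergy

        # OLS through the origin for U12 = s * U1 * U2
        s = np.sum(x * u12) / np.sum(x * x)
        print(f"estimated synergy coefficient s = {s:.3f}")
        print("residuals:", u12 - s * x)

    A fitted s close to 1 would support the plain multiplicative rule; systematic departures would indicate interaction between conditions beyond the product of their utilities.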

  14. A generalized measurement model to quantify health: the multi-attribute preference response model.

    Directory of Open Access Journals (Sweden)

    Paul F M Krabbe

    Full Text Available After 40 years of deriving metric values for health status or health-related quality of life, the effective quantification of subjective health outcomes is still a challenge. Here, two of the best measurement tools, the discrete choice and the Rasch model, are combined to create a new model for deriving health values. First, existing techniques to value health states are briefly discussed followed by a reflection on the recent revival of interest in patients' experience with regard to their possible role in health measurement. Subsequently, three basic principles for valid health measurement are reviewed, namely unidimensionality, interval level, and invariance. In the main section, the basic operation of measurement is then discussed in the framework of probabilistic discrete choice analysis (random utility model and the psychometric Rasch model. It is then shown how combining the main features of these two models yields an integrated measurement model, called the multi-attribute preference response (MAPR model, which is introduced here. This new model transforms subjective individual rank data into a metric scale using responses from patients who have experienced certain health states. Its measurement mechanism largely prevents biases such as adaptation and coping. Several extensions of the MAPR model are presented. The MAPR model can be applied to a wide range of research problems. If extended with the self-selection of relevant health domains for the individual patient, this model will be more valid than existing valuation techniques.
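
    To give the flavor of the two ingredients being combined, the snippet below sketches their shared logistic core: a Rasch-type item response probability and a binary random-utility (logit) choice probability. This is a generic textbook illustration, not the MAPR model's actual specification.

        import math

        def rasch_prob(theta, b):
            """Rasch model: P(affirm item) for ability theta, difficulty b."""
            return 1.0 / (1.0 + math.exp(-(theta - b)))

        def logit_choice_prob(v_a, v_b):
            """Random utility model: P(choose A over B), systematic utilities v."""
            return math.exp(v_a) / (math.exp(v_a) + math.exp(v_b))

        print(rasch_prob(theta=0.5, b=-0.2))        # ~0.67
        print(logit_choice_prob(v_a=1.0, v_b=0.0))  # ~0.73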

  15. Adolescent idiopathic scoliosis screening for school, community, and clinical health promotion practice utilizing the PRECEDE-PROCEED model

    Directory of Open Access Journals (Sweden)

    Wyatt Lawrence A

    2005-11-01

    Full Text Available Abstract Background Screening for adolescent idiopathic scoliosis (AIS is a commonly performed procedure for school children during the high risk years. The PRECEDE-PROCEDE (PP model is a health promotion planning model that has not been utilized for the clinical diagnosis of AIS. The purpose of this research is to study AIS in the school age population using the PP model and its relevance for community, school, and clinical health promotion. Methods MEDLINE was utilized to locate AIS data. Studies were screened for relevance and applicability under the auspices of the PP model. Where data was unavailable, expert opinion was utilized based on consensus. Results The social assessment of quality of life is limited with few studies approaching the long-term effects of AIS. Epidemiologically, AIS is the most common form of scoliosis and leading orthopedic problem in children. Behavioral/environmental studies focus on discovering etiologic relationships yet this data is confounded because AIS is not a behavioral. Illness and parenting health behaviors can be appreciated. The educational diagnosis is confounded because AIS is an orthopedic disorder and not behavioral. The administration/policy diagnosis is hindered in that scoliosis screening programs are not considered cost-effective. Policies are determined in some schools because 26 states mandate school scoliosis screening. There exists potential error with the Adam's test. The most widely used measure in the PP model, the Health Belief Model, has not been utilized in any AIS research. Conclusion The PP model is a useful tool for a comprehensive study of a particular health concern. This research showed where gaps in AIS research exist suggesting that there may be problems to the implementation of school screening. Until research disparities are filled, implementation of AIS screening by school, community, and clinical health promotion will be compromised. Lack of data and perceived importance by

  16. Utilizing Gaze Behavior for Inferring Task Transitions Using Abstract Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Daniel Fernando Tello Gamarra

    2016-12-01

    Full Text Available We demonstrate an improved method for utilizing observed gaze behavior and show that it is useful in inferring hand movement intent during goal directed tasks. The task dynamics and the relationship between hand and gaze behavior are learned using an Abstract Hidden Markov Model (AHMM. We show that the predicted hand movement transitions occur consistently earlier in AHMM models with gaze than those models that do not include gaze observations.
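
    The core machinery referenced here, inferring a hidden task state from a stream of observations, can be sketched with the standard HMM forward recursion. The authors' AHMM is hierarchical (abstract states over primitive ones); the flat filter below, with invented transition and emission numbers, shows only the basic idea.

        import numpy as np

        # Toy 2-state task model: states = ("reach", "grasp")
        T = np.array([[0.9, 0.1],      # state transition probabilities
                      [0.2, 0.8]])
        E = np.array([[0.7, 0.3],      # P(gaze cue | state); columns = cue 0/1
                      [0.1, 0.9]])
        pi = np.array([0.5, 0.5])

        def forward(obs):
            """Return P(state | observations so far) via the forward algorithm."""
            alpha = pi * E[:, obs[0]]
            alpha /= alpha.sum()
            for o in obs[1:]:
                alpha = (alpha @ T) * E[:, o]
                alpha /= alpha.sum()
            return alpha

        print(forward([0, 0, 1, 1]))   # belief shifts toward "grasp"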

  17. Utility of local health registers in measuring perinatal mortality: a case study in rural Indonesia.

    Science.gov (United States)

    Burke, Leona; Suswardany, Dwi Linna; Michener, Keryl; Mazurki, Setiawaty; Adair, Timothy; Elmiyati, Catur; Rao, Chalapati

    2011-03-17

    Perinatal mortality is an important indicator of obstetric and newborn care services. Although the vast majority of global perinatal mortality is estimated to occur in developing countries, there is a critical paucity of reliable data at the local level to inform health policy, plan health care services, and monitor their impact. This paper explores the utility of information from village health registers to measure perinatal mortality at the sub district level in a rural area of Indonesia. A retrospective pregnancy cohort for 2007 was constructed by triangulating data from antenatal care, birth, and newborn care registers in a sample of villages in three rural sub districts in Central Java, Indonesia. For each pregnancy, birth outcome and first week survival were traced and recorded from the different registers, as available. Additional local death records were consulted to verify perinatal mortality, or identify deaths not recorded in the health registers. Analyses were performed to assess data quality from registers, and measure perinatal mortality rates. Qualitative research was conducted to explore knowledge and practices of village midwives in register maintenance and reporting of perinatal mortality. Field activities were conducted in 23 villages, covering a total of 1759 deliveries that occurred in 2007. Perinatal mortality outcomes were 23 stillbirths and 15 early neonatal deaths, resulting in a perinatal mortality rate of 21.6 per 1000 live births in 2007. Stillbirth rates for the study population were about four times the rates reported in the routine Maternal and Child Health program information system. Inadequate awareness and supervision, and alternate workload were cited by local midwives as factors resulting in inconsistent data reporting. Local maternal and child health registers are a useful source of information on perinatal mortality in rural Indonesia. Suitable training, supervision, and quality control, in conjunction with computerisation to
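
    The reported rate can be reproduced from the figures in the abstract. Dividing the 38 perinatal deaths (23 stillbirths plus 15 early neonatal deaths) by the 1759 traced deliveries yields the published 21.6 per 1000, so that denominator is used in this short worked calculation:

        stillbirths = 23
        early_neonatal_deaths = 15
        deliveries = 1759            # deliveries traced in the 23 villages

        pmr = (stillbirths + early_neonatal_deaths) / deliveries * 1000
        print(f"perinatal mortality rate: {pmr:.1f} per 1000")   # -> 21.6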

  18. Utilizing The Synergy of Airborne Backscatter Lidar and In-Situ Measurements for Evaluating CALIPSO

    Directory of Open Access Journals (Sweden)

    Tsekeri Alexandra

    2016-01-01

    Full Text Available Airborne campaigns dedicated to satellite validation are crucial for effective global aerosol monitoring. CALIPSO is currently the only active remote sensing satellite mission acquiring the vertical profiles of the aerosol backscatter and extinction coefficients. Here we present a method for CALIPSO evaluation that combines lidar and in-situ airborne measurements. The limitations of the method have to do mainly with the in-situ instrumentation capabilities and the hydration modelling. We also discuss the future implementation of our method in the ICE-D campaign (Cape Verde, August 2015).

  19. Measuring Model Rocket Engine Thrust Curves

    Science.gov (United States)

    Penn, Kim; Slaton, William V.

    2010-01-01

    This paper describes a method and setup to quickly and easily measure a model rocket engine's thrust curve using a computer data logger and force probe. Horst describes using Vernier's LabPro and force probe to measure the rocket engine's thrust curve; however, the method of attaching the rocket to the force probe is not discussed. We show how a…

  1. Improving Localization Accuracy: Successive Measurements Error Modeling

    Directory of Open Access Journals (Sweden)

    Najah Abu Ali

    2015-07-01

    Full Text Available Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of the positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter.
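
    A minimal sketch of the idea: fit AR coefficients to a 1-D position-error series via the Yule-Walker equations and predict one step ahead. The data below are synthetic and the model order is arbitrary; the paper works with real vehicle traces and a Gauss–Markov formulation.

        import numpy as np

        def yule_walker(x, p):
            """Solve the Yule-Walker equations for AR(p) coefficients."""
            x = x - x.mean()
            n = len(x)
            r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
            R = np.array([[r[abs(i - j)] for j in range(p)]
                          for i in range(p)])        # Toeplitz autocovariance
            return np.linalg.solve(R, r[1:])

        rng = np.random.default_rng(1)
        # Synthetic correlated error series (AR(1) with phi = 0.8)
        e = np.zeros(500)
        for t in range(1, e.size):
            e[t] = 0.8 * e[t - 1] + rng.standard_normal()

        phi = yule_walker(e, p=2)
        pred = phi[0] * e[-1] + phi[1] * e[-2]       # one-step-ahead prediction
        print("AR coefficients:", phi, "next-step prediction:", pred)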

  2. Models Used for Measuring Customer Engagement

    Directory of Open Access Journals (Sweden)

    Mihai TICHINDELEAN

    2013-12-01

    Full Text Available The purpose of the paper is to define and measure customer engagement as a forming element of relationship marketing theory. In the first part of the paper, the authors review the marketing literature regarding the concept of customer engagement and summarize the main models for measuring it. One probability model (the Pareto/NBD model) and one parametric model (the RFM model), specific to the customer acquisition phase, are theoretically detailed. The second part of the paper is an application of the RFM model; the authors demonstrate that there is no statistically significant variation within the clusters formed on two different data sets (training and test set) if the cluster centroids of the training set are used as initial cluster centroids for the second test set.
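
    A compact sketch of the application described: cluster customers on recency/frequency/monetary (RFM) features, then reuse the training centroids to initialize clustering of a second data set. The data are synthetic and scikit-learn is assumed available; the paper's actual data and cluster count may differ.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(42)
        # Synthetic RFM tables: columns = recency (days), frequency, monetary
        train = rng.uniform([1, 1, 10], [365, 50, 5000], size=(200, 3))
        test = rng.uniform([1, 1, 10], [365, 50, 5000], size=(200, 3))

        km_train = KMeans(n_clusters=3, n_init=10, random_state=0).fit(train)

        # Reuse the training centroids as initial centroids for the test set
        km_test = KMeans(n_clusters=3, init=km_train.cluster_centers_,
                         n_init=1).fit(test)

        print("train centroids:\n", km_train.cluster_centers_)
        print("test centroids:\n", km_test.cluster_centers_)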

  3. Improvement Design of Biochip Towards High Stable Bioparticle Detection Utilizing Dielectrophoresis Impedance Measurement

    Institute of Scientific and Technical Information of China (English)

    Huang Haibo; Qian Cheng; Li Xiangpeng; Chen Liguo; Xu Wenkui; Zheng Liang; Sun Lining

    2015-01-01

    Dielectrophoresis impedance measurement (DEPIM) is a powerful tool for bioparticle detection due to its advantages of high efficiency, label-free operation and low cost. However, the strong electric field may decrease the viability of the bioparticle, thus leading to instability of the impedance measurement. A new design of biochip is presented with high-stability bioparticle detection capabilities by using both negative dielectrophoresis (nDEP) and travelling wave dielectrophoresis (twDEP). In the biochip, a spiral electrode is arranged on the top of the channel, while a detector is arranged on the bottom of the channel. The factors influencing the DEP force and twDEP force are investigated using the basic principle of DEP, based on which the relationship between the Clausius-Mossotti (CM) factor and the frequency of the electric field is obtained. The two-dimensional model of the biochip is built using Comsol Multiphysics. Electric potential distribution, force distribution and particle trajectory in the channel are then obtained using the simulation model. Finally, both simulations and experiments are performed to demonstrate that the new biochip can enhance the detection efficiency and reduce the negative effects of the electric field on the bioparticles.
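
    The frequency dependence mentioned above enters through the Clausius-Mossotti factor. Below is a generic sketch (the standard textbook formula for a spherical particle, not the paper's specific parameter values) computing the real part of the CM factor across frequency; its sign indicates whether DEP is positive or negative.

        import numpy as np

        def cm_factor(freq, eps_p, sig_p, eps_m, sig_m):
            """Complex Clausius-Mossotti factor K = (ep* - em*) / (ep* + 2 em*)."""
            w = 2 * np.pi * freq
            ep = eps_p - 1j * sig_p / w      # complex permittivity of particle
            em = eps_m - 1j * sig_m / w      # complex permittivity of medium
            return (ep - em) / (ep + 2 * em)

        eps0 = 8.854e-12
        freqs = np.logspace(4, 8, 5)         # 10 kHz .. 100 MHz
        K = cm_factor(freqs, eps_p=60 * eps0, sig_p=0.01,
                      eps_m=78 * eps0, sig_m=1e-3)   # illustrative values
        for f, k in zip(freqs, K):
            print(f"{f:9.0f} Hz  Re[K] = {k.real:+.3f}")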

  4. Utility of WHOQOL-BREF in measuring quality of life in sickle cell disease

    National Research Council Canada - National Science Library

    Asnani, Monika R; Lipps, Garth E; Reid, Marvin E

    2009-01-01

    .... We have sought to study its utility in this disease population. 491 patients with sickle cell disease were administered the questionnaire including demographics, WHOQOL-Bref, Short Form-36 (SF-36...

  5. Markowitz portfolio optimization model employing fuzzy measure

    Science.gov (United States)

    Ramli, Suhailywati; Jaaman, Saiful Hafizah

    2017-04-01

    Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research has shaped the portfolio risk-return model and become one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying a fuzzy measure to determine risk and return. The original mean-variance model is taken as a benchmark and compared against fuzzy mean-variance models in which returns are modeled by specific types of fuzzy numbers. The model with the fuzzy approach gives better performance than the mean-variance approach. Numerical examples employing Malaysian share market data are included to illustrate these models.
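
    For reference, a minimal sketch of the crisp benchmark (not the fuzzy extension): the closed-form global minimum-variance portfolio on toy data, with fabricated return and covariance figures.

        import numpy as np

        # Toy annualized statistics for three assets (fabricated)
        mu = np.array([0.08, 0.12, 0.10])            # expected returns
        cov = np.array([[0.04, 0.01, 0.00],
                        [0.01, 0.09, 0.02],
                        [0.00, 0.02, 0.06]])         # covariance matrix

        # Global minimum-variance portfolio: w = C^-1 1 / (1' C^-1 1)
        ones = np.ones(3)
        w = np.linalg.solve(cov, ones)
        w /= ones @ w

        print("weights:", np.round(w, 3))
        print("expected return:", round(w @ mu, 4))
        print("portfolio std dev:", round(np.sqrt(w @ cov @ w), 4))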

  6. Estimating health state utility values from discrete choice experiments--a QALY space model approach.

    Science.gov (United States)

    Gu, Yuanyuan; Norman, Richard; Viney, Rosalie

    2014-09-01

    Using discrete choice experiments (DCEs) to estimate health state utility values has become an important alternative to the conventional methods of Time Trade-Off and Standard Gamble. Studies using DCEs have typically used the conditional logit to estimate the underlying utility function. The conditional logit is known for several limitations. In this paper, we propose two types of models based on the mixed logit: one using preference space and the other using quality-adjusted life year (QALY) space, a concept adapted from the willingness-to-pay literature. These methods are applied to a dataset collected using the EQ-5D. The results showcase the advantages of using QALY space and demonstrate that the preferred QALY space model provides lower estimates of the utility values than the conditional logit, with the divergence increasing with worsening health states. Copyright © 2014 John Wiley & Sons, Ltd.

  7. Modeling Water Utility Investments and Improving Regulatory Policies using Economic Optimisation in England and Wales

    Science.gov (United States)

    Padula, S.; Harou, J. J.

    2012-12-01

    Water utilities in England and Wales are regulated natural monopolies called 'water companies'. Water companies must obtain periodic regulatory approval for all investments (new supply infrastructure or demand management measures). Both water companies and their regulators use results from least economic cost capacity expansion optimisation models to develop or assess water supply investment plans. This presentation first describes the formulation of a flexible supply-demand planning capacity expansion model for water system planning. The model uses a mixed integer linear programming (MILP) formulation to choose the least-cost schedule of future supply schemes (reservoirs, desalination plants, etc.), demand management (DM) measures (leakage reduction, water efficiency and metering options) and bulk transfers. Decisions include what schemes to implement, when to do so, how to size schemes and how much to use each scheme during each year of an n-year planning horizon (typically 30 years). In addition to capital and operating (fixed and variable) costs, the estimated social and environmental costs of schemes are considered. Each proposed scheme is costed discretely at one or more capacities following regulatory guidelines. The model uses a node-link network structure: water demand nodes are connected to supply and demand management (DM) options (represented as nodes) or to other demand nodes (transfers). Yields from existing and proposed schemes are estimated separately using detailed water resource system simulation models evaluated over the historical period. The model simultaneously considers multiple demand scenarios to ensure demands are met at required reliability levels; use levels of each scheme are evaluated for each demand scenario and weighted by scenario likelihood so that operating costs are accurately evaluated. Multiple interdependency relationships between schemes (pre-requisites, mutual exclusivity, start dates, etc.) can be accounted for by
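
    A toy sketch of the least-cost capacity-expansion idea follows: binary build decisions covering a demand gap, with one interdependency constraint, using the PuLP MILP library. Scheme names, costs and yields are invented, and the real model adds multi-year scheduling, transfers, multiple weighted demand scenarios and operating costs not shown here.

        import pulp

        # Candidate schemes: (annualized cost, yield in Ml/d) -- fabricated
        schemes = {"reservoir": (12.0, 40.0),
                   "desalination": (20.0, 60.0),
                   "leakage_reduction": (5.0, 15.0),
                   "metering": (3.0, 8.0)}
        demand_gap = 70.0                      # Ml/d to be covered

        prob = pulp.LpProblem("capacity_expansion", pulp.LpMinimize)
        build = {s: pulp.LpVariable(f"build_{s}", cat="Binary") for s in schemes}

        # Objective: minimize total cost of selected schemes
        prob += pulp.lpSum(schemes[s][0] * build[s] for s in schemes)

        # Supply-demand balance: selected yields must cover the gap
        prob += pulp.lpSum(schemes[s][1] * build[s] for s in schemes) >= demand_gap

        # Example interdependency: metering requires leakage reduction first
        prob += build["metering"] <= build["leakage_reduction"]

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print({s: int(build[s].value()) for s in schemes},
              "cost =", pulp.value(prob.objective))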

  8. Comprehensive benefit of flood resources utilization through dynamic successive fuzzy evaluation model: A case study

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Taking the flood resources utilization in Baicheng, Jilin during 2002–2007 as the research background, and based on entropy weighting and multi-level & multi-objective fuzzy optimization theory, this research established a multi-level & semi-constructive index system and a dynamic successive evaluation model for comprehensive benefit evaluation of regional flood resources utilization. With the year 2002 as the base year, the analysis showed a close positive correlation between flood utilization volume and its benefits, and between the comprehensive evaluation value and its comparison increment. Within the six successive evaluation years, the comprehensive benefit of 2003 was the best, with a benefit evaluation increment of 82.8%, whereas 2004 was the worst, with an increment of only 18.2%. The sustainability and correctness of the evaluation were thus verified by six years of successive evaluation and increment comparison. The results showed that the economic, ecological and social benefits of flood utilization were remarkable, and that the comprehensive benefit could be improved by increasing flood utilization capacity, which would promote regional sustainable development as well. The established dynamic successive evaluation model provides a stable theoretical basis and technical support for further flood utilization.
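
    The entropy weighting step referenced above can be sketched generically: weights are derived from how much each indicator varies across evaluation years (more dispersion carries more information and earns a higher weight). Indicator values below are invented.

        import numpy as np

        # Rows: evaluation years; columns: benefit indicators (fabricated)
        X = np.array([[0.6, 0.3, 0.8],
                      [0.9, 0.5, 0.7],
                      [0.4, 0.8, 0.6],
                      [0.7, 0.6, 0.9]])

        P = X / X.sum(axis=0)                  # normalize each indicator column
        n = X.shape[0]
        entropy = -(P * np.log(P)).sum(axis=0) / np.log(n)
        weights = (1 - entropy) / (1 - entropy).sum()   # entropy weights

        print("indicator weights:", np.round(weights, 3))
        print("weighted scores per year:", np.round(X @ weights, 3))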

  9. Clinical inferences and decisions--III. Utility assessment and the Bayesian decision model.

    Science.gov (United States)

    Aspinall, P A; Hill, A R

    1984-01-01

    It is accepted that errors of misclassification, however small, can occur in clinical decisions, but it cannot be assumed that the importance associated with false positive errors is the same as that for false negatives. The relative importance of these two types of error is frequently implied by a decision maker in the different weighting factors or utilities he assigns to the alternative consequences of his decisions. Formal procedures are available by which it is possible to make explicit in numerical form the value or worth of the outcome of a decision process. The two principal methods for generating utilities associated with clinical decisions are described. The concept and application of utility is then expanded from a unidimensional to a multidimensional problem where, for example, one variable may be state of health and another monetary assets. When combined with the principles of subjective probability and test criterion selection outlined in Parts I and II of this series, the consequent use of utilities completes the framework upon which the general Bayesian model of clinical decision making is based. The five main stages in this general decision making model are described and applications of the model are illustrated with clinical examples from the field of ophthalmology. These include examples for unidimensional and multidimensional problems which are worked through in detail to illustrate both the principles and methodology involved in a rationalized normative model of clinical decision-making behaviour.
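
    The Bayesian decision rule at the heart of such models is compact enough to sketch: choose the action with the highest posterior expected utility. The probabilities and utilities below are invented for illustration and deliberately weight false positives and false negatives differently.

        import numpy as np

        # Posterior probabilities of disease states after testing (illustrative)
        p = np.array([0.15, 0.85])            # P(disease), P(no disease)

        # Utility of each (action, state) outcome on a 0-1 scale (illustrative)
        #                 disease  no disease
        U = np.array([[0.90, 0.70],           # action 0: treat
                      [0.20, 1.00]])          # action 1: do not treat

        expected_utility = U @ p              # E[U | action]
        best = int(np.argmax(expected_utility))
        print("expected utilities:", np.round(expected_utility, 3))
        print("chosen action:", ["treat", "do not treat"][best])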

  10. A Family Therapy Model For Preserving Independence in Older Persons: Utilization of the Family of Procreation.

    Science.gov (United States)

    Quinn, William H.; Keller, James F.

    1981-01-01

    Presents a family therapy model that utilizes the Bowen theory systems framework. The framework is adapted to the family of procreation, which takes on increased importance in the lives of the elderly. Family therapy with the aged can create more satisfying intergenerational relationships and preserve independence. (Author)

  11. Utilizing the PREPaRE Model When Multiple Classrooms Witness a Traumatic Event

    Science.gov (United States)

    Bernard, Lisa J.; Rittle, Carrie; Roberts, Kathy

    2011-01-01

    This article presents an account of how the Charleston County School District responded to an event by utilizing the PREPaRE model (Brock, et al., 2009). The acronym, PREPaRE, refers to a range of crisis response activities: P (prevent and prepare for psychological trauma), R (reaffirm physical health and perceptions of security and safety), E…

  12. An integrated utility-based model of conflict evaluation and resolution in the Stroop task.

    Science.gov (United States)

    Chuderski, Adam; Smolen, Tomasz

    2016-04-01

    Cognitive control allows humans to direct and coordinate their thoughts and actions in a flexible way, in order to reach internal goals regardless of interference and distraction. The hallmark test used to examine cognitive control is the Stroop task, which elicits both the weakly learned but goal-relevant and the strongly learned but goal-irrelevant response tendencies, and requires people to follow the former while ignoring the latter. After reviewing the existing computational models of cognitive control in the Stroop task, a novel, integrated utility-based model is proposed. The model uses 3 crucial control mechanisms: response utility reinforcement learning, utility-based conflict evaluation using the Festinger formula for assessing the conflict level, and top-down adaptation of response utility in service of conflict resolution. Their complex, dynamic interaction led to the replication of 18 experimental effects, the largest data set explained to date by a single Stroop model. The simulations cover the basic congruency effects (including the response latency distributions), performance dynamics and adaptation (including EEG indices of conflict), as well as the effects resulting from manipulations applied to stimulation and responding, which are yielded by the extant Stroop literature.

  13. Implications of Model Structure and Detail for Utility Planning: Scenario Case Studies Using the Resource Planning Model

    Energy Technology Data Exchange (ETDEWEB)

    Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barrows, Clayton [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lopez, Anthony [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hale, Elaine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dyson, Mark [National Renewable Energy Lab. (NREL), Golden, CO (United States); Eurek, Kelly [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-04-01

    In this report, we analyze the impacts of model configuration and detail in capacity expansion models, computational tools used by utility planners looking to find the least cost option for planning the system and by researchers or policy makers attempting to understand the effects of various policy implementations. The present analysis focuses on the importance of model configurations — particularly those related to capacity credit, dispatch modeling, and transmission modeling — to the construction of scenario futures. Our analysis is primarily directed toward advanced tools used for utility planning and is focused on those impacts that are most relevant to decisions with respect to future renewable capacity deployment. To serve this purpose, we develop and employ the NREL Resource Planning Model to conduct a case study analysis that explores 12 separate capacity expansion scenarios of the Western Interconnection through 2030.

  14. Measures of Quality in Business Process Modelling

    Directory of Open Access Journals (Sweden)

    Radek Hronza

    2015-06-01

    Full Text Available Business process modelling and analysis is undoubtedly one of the most important parts of Applied (Business) Informatics. The quality of business process models (diagrams) is crucial for any purpose in this area. The goal of a process analyst's work is to create generally understandable, explicit and error-free models. If a process is properly described, the created models can be used as an input into deep analysis and optimization. It can be assumed that properly designed business process models (similarly as in the case of correctly written algorithms) contain characteristics that can be mathematically described, and that it should therefore be possible to create a tool that helps process analysts design proper models. As part of this review, a systematic literature review was conducted in order to find and analyse measures of business process model design and quality. It was found that this area had already been the subject of research investigation in the past. Thirty-three suitable scientific publications and twenty-two quality measures were found. The analysed publications and existing quality measures do not reflect all important attributes of business process model clarity, simplicity and completeness. It would therefore be appropriate to add new measures of quality.

  15. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

    Full Text Available We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) We find that model color–magnitude diagrams cannot be reliably used to infer masses, as they do not accurately reproduce the colors of ultracool dwarfs of known mass. (2) Effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra by 100–300 K. (3) For the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from Lbol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology) for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy are needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.

  16. Developing a clinical utility framework to evaluate prediction models in radiogenomics

    Science.gov (United States)

    Wu, Yirong; Liu, Jie; Munoz del Rio, Alejandro; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.

    2015-03-01

    Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called "radiogenomics." Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data comes from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms-SNPs), and mammographic features in Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models built on different sets of predictive features: (1) Gail, (2) Gail+SNP, and (3) Gail+SNP+BI-RADS. Then, we generated ROC curves for three models. After we assigned utility values for each category of findings (true negative, false positive, false negative and true positive), we pursued optimal operating points on ROC curves to achieve maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar's test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under ROC curve (AUC) and MEU. SNPs improved sensitivity of the Gail model (0.276 vs. 0.147) and reduced specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457 and specificity to 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision framework comprising utility analysis and McNemar's test provides a novel framework to evaluate prediction models in the realm of radiogenomics.
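
    The utility step can be sketched as follows: given outcome utilities for true/false positives and negatives and a disease prevalence, sweep the ROC operating points and keep the one that maximizes expected utility. All numbers below are invented; the paper's utility assignments and ROC curves differ.

        import numpy as np

        # Illustrative ROC operating points: (sensitivity, specificity)
        roc = [(0.15, 0.95), (0.45, 0.87), (0.70, 0.70), (0.90, 0.45)]
        prev = 0.05                            # assumed disease prevalence

        # Utilities on an arbitrary scale: TP, FN, TN, FP outcomes
        u_tp, u_fn, u_tn, u_fp = 0.8, 0.0, 1.0, 0.95

        def expected_utility(sens, spec):
            return (prev * (sens * u_tp + (1 - sens) * u_fn)
                    + (1 - prev) * (spec * u_tn + (1 - spec) * u_fp))

        for pt in roc:
            print(pt, round(expected_utility(*pt), 4))
        best = max(roc, key=lambda pt: expected_utility(*pt))
        print("MEU operating point:", best)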

  17. Utilization of a silkworm model for understanding host-pathogen interactions

    Directory of Open Access Journals (Sweden)

    C Kaito

    2012-10-01

    Full Text Available Studies of the interactions between humans and pathogenic microorganisms require adequate representative animal infection models. Further, the availability of invertebrate models overcomes the ethical and financial issues of studying vertebrate materials. Insects have an innate immune system that is conserved in mammals. The recent utilization of silkworms as an animal infection model led to the identification of novel virulence genes of human pathogenic microorganisms and novel innate immune factors in the silkworm. The silkworm infection model is effective for identifying and evaluating novel factors involved in host-pathogen interactions.

  18. Utility of Social Modeling in Assessment of a State’s Propensity for Nuclear Proliferation

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Garill A.; Brothers, Alan J.; Whitney, Paul D.; Dalton, Angela C.; Olson, Jarrod; White, Amanda M.; Cooley, Scott K.; Youchak, Paul M.; Stafford, Samuel V.

    2011-06-01

    This report is the third and final report in a set of three documenting research for the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms, and Modeling program, which investigates how social modeling can be used to improve proliferation assessment for informing nuclear security, policy, safeguards, design of nuclear systems and research decisions. Social modeling has not been used to any significant extent in proliferation studies. This report focuses on the utility of social modeling as applied to the assessment of a State's propensity to develop a nuclear weapons program.

  19. Good reliability and validity for a new utility instrument measuring the birth experience, the Labor and Delivery Index

    NARCIS (Netherlands)

    Gärtner, F.R.; Miranda, E. de; Rijnders, M.E.; Freeman, L.M.; Middeldorp, J.M.; Bloemenkamp, K.W.M.; Stiggelbout, A.M.; Akker-van Marle, M.E. van den

    2015-01-01

    Objectives To validate the Labor and Delivery Index (LADY-X), a new delivery-specific utility measure. Study Design and Setting In a test–retest design, women were surveyed online, 6 to 8 weeks postpartum and again 1 to 2 weeks later. For reliability testing, we assessed the standard error of

  20. Hot water use of a utility building tested by measurements; Warmwaterverbruik utiliteitsbouw getoetst met metingen

    Energy Technology Data Exchange (ETDEWEB)

    Pieterse-Quirijns, E.J.; Beverloo, H.; Blokker, E.J.M. [KWR Watercycle Research Institute, Nieuwegein (Netherlands)

    2011-09-15

    In the Netherlands, several guidelines exist to design indoor water mains and hot water installations. They often lead to larger dimensions, with possible negative consequences for energy and hygiene. Improved values for the required design parameters can be derived from realistic daily water demand patterns. Simdeum is a simulation model for modelling the water use of various types of both residential and non-residential buildings. This paper shows that the simulated daily patterns of cold and hot water use of various standardised buildings correlate well with measured patterns on a per-second basis. Simdeum provides insight into the hot water use of several buildings. Thus, the simulated patterns form a solid basis for new design rules. [Translated from Dutch] For the dimensioning of water supply installations and the choice of hot water installations, various guidelines exist. These often lead to overdimensioning, with possible negative energy and hygiene consequences. Design parameters can better be derived from realistic daily water demand patterns. The simulation model Simdeum can model the water use of different dwelling types and different non-residential building typologies. This was compared with measurements on a per-second basis. Simdeum provides insight into the hot water use of different buildings. The simulated patterns therefore form a very reliable basis for new design guidelines.

  1. Turbulent transport measurements in a model of GT-combustor

    Science.gov (United States)

    Chikishev, L. M.; Gobyzov, O. A.; Sharaborin, D. K.; Lobasov, A. S.; Dulin, V. M.; Markovich, D. M.; Tsatiashvili, V. V.

    2016-10-01

    To reduce NOx formation, modern industrial power gas turbines utilize lean premixed combustion of natural gas. A uniform distribution of the local fuel/air ratio in the combustion chamber plays one of the key roles in lean combustion for preventing thermo-acoustic pulsations. The present paper reports on simultaneous Particle Image Velocimetry and acetone Planar Laser Induced Fluorescence measurements in a cold model of a GT-combustor to investigate mixing processes relevant to the organization of lean premixed combustion. Correlations between velocity and passive admixture fluctuations were measured to verify the gradient closure model, which is often used in Reynolds-Averaged Navier-Stokes (RANS) simulations of turbulent mixing.

  2. Measurement of Laser Weld Temperatures for 3D Model Input.

    Energy Technology Data Exchange (ETDEWEB)

    Dagel, Daryl; GROSSETETE, GRANT; Maccallum, Danny O.

    2016-10-01

    Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.

  3. Diffusion and Sedimentation Interaction Parameters for Measuring the Second Virial Coefficient and Their Utility as Predictors of Protein Aggregation

    OpenAIRE

    Saluja, Atul; Fesinmeyer, R. Matthew; Hogan, Sabine; Brems, David N.; Gokarn, Yatin R

    2010-01-01

    The concentration-dependence of the diffusion and sedimentation coefficients (kD and ks, respectively) of a protein can be used to determine the second virial coefficient (B2), a parameter valuable in predicting protein-protein interactions. Accurate measurement of B2 under physiologically and pharmaceutically relevant conditions, however, requires independent measurement of kD and ks via orthogonal techniques. We demonstrate this by utilizing sedimentation velocity (SV) and dynamic light sca...

  4. Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Jina

    2016-01-01

    Full Text Available The dynamic deployment technology of the virtual machine is one of the current cloud computing research focuses. Traditional methods mainly react after service performance has already degraded, and therefore lag. To solve this problem, a new prediction model of CPU utilization is constructed in this paper. The prediction model provides a reference for the VM dynamic deployment process, allowing deployment to finish before service performance degrades. This method not only ensures the quality of services but also improves server performance and resource utilization. The new prediction method of CPU utilization based on the ARIMA-BP neural network mainly includes four parts: preprocess the collected data, build the ARIMA-BP neural network prediction model, correct the nonlinear residuals of the time series with the BP prediction algorithm, and obtain the prediction results by analyzing the above data comprehensively.
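
    The hybrid scheme described, ARIMA for the linear component and a BP (back-propagation) network for the nonlinear residuals, can be sketched on synthetic data as below. It assumes statsmodels and scikit-learn are available, and the lag and order choices are arbitrary rather than the paper's.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # Synthetic CPU-utilization series: trend + daily-like cycle + noise
        t = np.arange(300)
        cpu = 40 + 0.02 * t + 10 * np.sin(2 * np.pi * t / 24) \
              + rng.normal(0, 2, t.size)

        # Step 1: linear component with ARIMA
        arima = ARIMA(cpu, order=(2, 1, 2)).fit()
        linear_forecast = arima.forecast(steps=1)[0]

        # Step 2: BP (multilayer perceptron) network on the ARIMA residuals
        resid = arima.resid
        lag = 5
        X = np.array([resid[i:i + lag] for i in range(len(resid) - lag)])
        y = resid[lag:]
        bp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                          random_state=0).fit(X, y)
        resid_forecast = bp.predict(resid[-lag:].reshape(1, -1))[0]

        # Step 3: combined hybrid forecast
        print("next-step CPU utilization forecast:",
              linear_forecast + resid_forecast)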

  5. A novel murine model of Fusarium solani keratitis utilizing fluorescent labeled fungi.

    Science.gov (United States)

    Zhang, Hongmin; Wang, Liya; Li, Zhijie; Liu, Susu; Xie, Yanting; He, Siyu; Deng, Xianming; Yang, Biao; Liu, Hui; Chen, Guoming; Zhao, Huiwen; Zhang, Junjie

    2013-05-01

    Fungal keratitis is a common disease that causes blindness. An effective animal model for fungal keratitis is essential for advancing research on this disease. Our objective is to develop a novel mouse model of Fusarium solani keratitis through the inoculation of fluorescent-labeled fungi into the cornea, to facilitate the accurate and early identification and screening of fungal infections. F. solani was used as the model fungus in this study. In the in vitro experiment, the effects of Calcofluor White (CFW) staining concentration and duration on the fluorescence intensity of F. solani were determined through the mean fluorescence intensity (MFI); the effects of CFW staining on the growth of F. solani were determined by the colony diameter. In the in vivo experiment, F. solani keratitis mice were induced and divided into CFW-unlabeled and CFW-labeled groups. The positive rate, corneal lesion score and several positive rate determination methods were measured. The MFIs of F. solani in the 30 μg/ml CFW-30 min, 90 μg/ml CFW-10 min and 90 μg/ml CFW-30 min groups were higher than that in the 10 μg/ml CFW-10 min group (P < 0.05). No significant differences (P > 0.05) were observed for the positive rate or the corneal lesion scores between the CFW-unlabeled and the CFW-labeled group. On days 1 and 2, the positive rates of the infected corneas in the scraping group were lower than those in the fluorescence microscopy group (P < 0.05). Thus, these experiments established a novel murine model of F. solani keratitis utilizing fluorescent-labeled fungi. This model facilitates the accurate identification and screening of fungal infections during the early stages of fungal keratitis and provides a novel and reliable technique to study fungal keratitis. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. THE ROLE OF TECHNICAL CONSUMPTION CALCULATION MODELS ON ACCOUNTING INFORMATION SYSTEMS OF PUBLIC UTILITIES SERVICES OPERATORS

    Directory of Open Access Journals (Sweden)

    GHEORGHE CLAUDIU FEIES

    2012-05-01

    Full Text Available A study of how the operators' management works reveals the influence of the specific activities of public utilities on their financial accounting systems. An asymmetry of these systems is also present, resulting from the organization and the specific services, which implies a close link between the financial accounting system and the specialized technical department. The research methodology consists in observing the specific activities of public utility operators and their influence on the information system, and in analysing the views presented in work published in journals. The paper analyses the impact of the technical computing models used by public utility community services on the financial statements and, therefore, on the information provided by the accounting information system to stakeholders.

  7. Computer Simulation Modeling: A Method for Predicting the Utilities of Alternative Computer-Aided Threat Evaluation Algorithms

    Science.gov (United States)

    1990-09-01

    [Scanned DTIC report documentation page; abstract text not recoverable. Title: Computer Simulation Modeling: A Method for Predicting the Utilities of Alternative Computer-Aided Threat Evaluation Algorithms (Technical Report 911).]

  8. Utilizing Precision Teaching To Measure Growth of Reading Comprehension Skills in Low Achieving Students.

    Science.gov (United States)

    Nitti, Joanne M.

    A practicum addressed the problem of reading comprehension skills in low achieving students by monitoring their progress utilizing precision teaching. Based on referrals from classroom teachers, guidance counselors, and parents, five students ranging in ability levels from kindergarten through grade 8 were accepted into the program for one or more…

  9. On the Path to SunShot - Utility Regulatory Business Model Reforms forAddressing the Financial Impacts of Distributed Solar on Utilities

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2016-05-01

    Net-energy metering (NEM) with volumetric retail electricity pricing has enabled rapid proliferation of distributed photovoltaics (DPV) in the United States. However, this transformation is raising concerns about the potential for higher electricity rates and cost-shifting to non-solar customers, reduced utility shareholder profitability, reduced utility earnings opportunities, and inefficient resource allocation. Although DPV deployment in most utility territories remains too low to produce significant impacts, these concerns have motivated real and proposed reforms to utility regulatory and business models, with profound implications for future DPV deployment. This report explores the challenges and opportunities associated with such reforms in the context of the U.S. Department of Energy’s SunShot Initiative. As such, the report focuses on a subset of a broader range of reforms underway in the electric utility sector. Drawing on original analysis and existing literature, we analyze the significance of DPV’s financial impacts on utilities and non-solar ratepayers under current NEM rules and rate designs, the projected effects of proposed NEM and rate reforms on DPV deployment, and alternative reforms that could address utility and ratepayer concerns while supporting continued DPV growth. We categorize reforms into one or more of four conceptual strategies. Understanding how specific reforms map onto these general strategies can help decision makers identify and prioritize options for addressing specific DPV concerns that balance stakeholder interests.

  10. Measurement and modeling of unsaturated hydraulic conductivity

    Science.gov (United States)

    Perkins, Kim S.; Elango, Lakshmanan

    2011-01-01

    The unsaturated zone plays an extremely important hydrologic role that influences water quality and quantity, ecosystem function and health, the connection between atmospheric and terrestrial processes, nutrient cycling, soil development, and natural hazards such as flooding and landslides. Unsaturated hydraulic conductivity is one of the main properties considered to govern flow; however, it is very difficult to measure accurately. Knowledge of the highly nonlinear relationship between unsaturated hydraulic conductivity (K) and volumetric water content is required for widely-used models of water flow and solute transport processes in the unsaturated zone. Measurement of the unsaturated hydraulic conductivity of sediments is costly and time consuming; therefore, use of models that estimate this property from more easily measured bulk-physical properties is common. In hydrologic studies, calculations based on property-transfer models informed by hydraulic property databases are often used in lieu of measured data from the site of interest. Reliance on database-informed predicted values with the use of neural networks has become increasingly common. Hydraulic properties predicted using databases may be adequate in some applications, but not others. This chapter will discuss, by way of examples, various techniques used to measure and model hydraulic conductivity as a function of water content, K(θ). The parameters that describe the K(θ) curve obtained by different methods are used directly in Richards' equation-based numerical models, which have some degree of sensitivity to those parameters. This chapter will explore the complications of using laboratory-measured or estimated properties for field-scale investigations to shed light on how adequately the processes are represented. Additionally, some more recent concepts for representing unsaturated-zone flow processes will be discussed.
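
    One widely used closed-form representation of the K(θ) relationship is the van Genuchten-Mualem model. The sketch below uses the standard formula with illustrative loam-like parameters (the chapter itself surveys several measurement and modeling techniques rather than this one specifically).

        import numpy as np

        def vg_mualem_K(theta, theta_r, theta_s, Ks, n, L=0.5):
            """van Genuchten-Mualem unsaturated hydraulic conductivity K(theta)."""
            m = 1.0 - 1.0 / n
            Se = (theta - theta_r) / (theta_s - theta_r)  # effective saturation
            Se = np.clip(Se, 1e-9, 1.0)
            return Ks * Se**L * (1.0 - (1.0 - Se**(1.0 / m))**m) ** 2

        # Illustrative loam-like parameters
        theta = np.linspace(0.08, 0.43, 6)
        K = vg_mualem_K(theta, theta_r=0.078, theta_s=0.43,
                        Ks=24.96, n=1.56)                  # Ks in cm/day
        for th, k in zip(theta, K):
            print(f"theta = {th:.3f}  K = {k:.4e} cm/day")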

  11. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence From Word Segmentation.

    Science.gov (United States)

    Phillips, Lawrence; Pearl, Lisa

    2015-11-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition.

  12. Psychometric Measurement Models and Artificial Neural Networks

    Science.gov (United States)

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in artificial neural networks capable of computing a principal component extraction has been observed. Despite this interest, the…

  13. Measurements and Information in Spin Foam Models

    CERN Document Server

    Garcia-Islas, J Manuel

    2012-01-01

    We present a problem relating measurements and information theory in spin foam models. In the three dimensional case of quantum gravity we can compute probabilities of spin network graphs and study the behaviour of the Shannon entropy associated to the corresponding information. We present a general definition, compute the Shannon entropy of some examples, and find some interesting inequalities.

  14. A Computer Simulation Modeling Approach to Estimating Utility in Several Air Force Specialties

    Science.gov (United States)

    1992-05-01

    Report AL-TR-1992-0006 (AD-A252 322): "A Computer Simulation Modeling Approach to Estimating Utility in Several Air Force Specialties", Brice M. Stone et al., May 1992. Only OCR fragments of the report documentation page are preserved in this record; no abstract text is recoverable.

  15. Measurement and Modelling of Scaling Minerals

    DEFF Research Database (Denmark)

    Villafafila Garcia, Ada

    2005-01-01

    … of scale formation found in many industrial processes, and especially in oilfield and geothermal operations. We want to contribute to the study of this problem by releasing a simple and accurate thermodynamic model capable of calculating the behaviour of scaling minerals, covering a wide range … of temperature and pressure. Reliable experimental solubility measurements under conditions similar to those found in reality will help the development of strong and consistent models. Chapter 1 is a short introduction to the problem of scale formation, the model chosen to study it, and the experiments performed … the thermodynamic model used in this Ph.D. project. A review of alternative activity coefficient models and earlier work on scale formation is provided. A guideline to the parameter estimation procedure and the number of parameters estimated in the present work are also described. The prediction of solid …

  16. Federal and State Structures to Support Financing Utility-Scale Solar Projects and the Business Models Designed to Utilize Them

    Energy Technology Data Exchange (ETDEWEB)

    Mendelsohn, M.; Kreycik, C.

    2012-04-01

    Utility-scale solar projects have grown rapidly in number and size over the last few years, driven in part by strong renewable portfolio standards (RPS) and federal incentives designed to stimulate investment in renewable energy technologies. This report provides an overview of such policies, as well as the project financial structures they enable, based on industry literature, publicly available data, and questionnaires conducted by the National Renewable Energy Laboratory (NREL).

  17. Modelling EuroQol health-related utility values for diabetic complications from CODE-2 data.

    Science.gov (United States)

    Bagust, Adrian; Beale, Sophie

    2005-03-01

    Recent research has employed different analytical techniques to estimate the impact of the various long-term complications of type 2 diabetes on health-related utility and health status. However, limited patient numbers or a lack of variety in patient experience has restricted the power of these studies to discriminate between separate complications and grades of severity. In this study, alternative statistical model forms were compared to investigate the influence of various factors on self-assessed health status and calculated utility scores, including the presence and severity of complications and the type of diabetes therapy. Responses to the EuroQol EQ-5D questionnaire from 4641 patients with type 2 diabetes in 5 European countries were analysed. Simple multiple regression analysis was used to model both visual analogue scale (VAS) scores and time trade-off index (TTO) scores. In addition, two more complex models were developed for the TTO analysis, using a structure suggested by the EuroQol calculation algorithm. Both the VAS and TTO models achieved greater explanatory power than in earlier studies. Relative weightings for individual complications differed between the VAS and TTO scales, reflecting the strong influence of loss of mobility and severe pain in the EuroQol algorithm. Insulin-based therapy was uniformly associated with a detrimental effect equivalent to an additional moderate complication. Evidence was found that TTO values are not responsive in cases where 3 or more complications are present, and therefore may underestimate the utility loss for the patients most adversely affected by complex chronic diseases such as diabetes.

  18. Laser heating of a cavity versus a plane surface for metal targets utilizing photothermal deflection measurements

    Science.gov (United States)

    Jeong, S. H.; Greif, R.; Russo, R. E.

    1996-08-01

    The effects of a cylindrical cavity in a metal surface on the energy coupling of a laser beam with the solid were investigated by using a photothermal deflection technique. The photothermal deflection of a probe beam over the cavity was measured while the bottom of the cavity was heated with a Nd:YAG laser with a wavelength of 1064 nm. Cavities in three different materials and with two different aspect ratios were used for the experiment. Temperature distributions in the solid and the surrounding air were computed numerically and used to calculate photothermal deflections for cavity heating and for plane-surface heating. Reflection of the heating laser beam inside the cavity increased the photothermal deflection amplitude significantly, with larger increases for materials with larger thermal diffusivity. The computed photothermal deflections agreed more closely with the experimental results when reflection of the heating laser beam inside the cavity was included in the numerical model. The overall energy coupling between a heating laser and a solid is enhanced by a cavity.

  19. Nonclassical measurement errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    Discrete choice models, and in particular logit-type models, play an important role in understanding and quantifying individual or household behaviour in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals of a household face. In this case an important policy parameter is the effect of income (reflecting the household budget) on the choice of travel mode. This paper deals with the consequences of measurement error in income (an explanatory variable) in discrete choice models. Since measurement error is likely to give misleading estimates of the income effect, it is of interest to investigate the magnitude of the estimation bias and, if possible, use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data …

  20. Extending the Utility of the Parabolic Approximation in Medical Ultrasound Using Wide-Angle Diffraction Modeling.

    Science.gov (United States)

    Soneson, Joshua E

    2017-04-01

    Wide-angle parabolic models are commonly used in geophysics and underwater acoustics but have seen little application in medical ultrasound. Here, a wide-angle model for continuous-wave high-intensity ultrasound beams is derived, which approximates the diffraction process more accurately than the commonly used Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation without increasing implementation complexity or computing time. A method for preventing the high spatial frequencies often present in source boundary conditions from corrupting the solution is presented. Simulations of shallowly focused axisymmetric beams using both the wide-angle and standard parabolic models are compared to assess the accuracy with which they model diffraction effects. The wide-angle model proposed here offers improved focusing accuracy and less error throughout the computational domain than the standard parabolic model, offering a facile method for extending the utility of existing KZK codes.
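
    For orientation, the generic construction from the geophysics and underwater-acoustics literature contrasts the two diffraction approximations as follows (this is the standard Padé device, stated here as background, not necessarily the exact operator derived in the paper):

    ```latex
    % One-way (forward) propagation of a time-harmonic beam along z:
    \[
      \frac{\partial u}{\partial z} = ik\sqrt{1+X}\,u,
      \qquad X \equiv \frac{1}{k^{2}}\nabla_{\perp}^{2}.
    \]
    % Standard parabolic (KZK-type) diffraction: Taylor expansion to O(X)
    \[
      \sqrt{1+X} \;\approx\; 1 + \tfrac{1}{2}X.
    \]
    % Wide-angle alternative: (1,1) Pade approximant, accurate to O(X^2)
    \[
      \sqrt{1+X} \;\approx\; \frac{1+\tfrac{3}{4}X}{1+\tfrac{1}{4}X}.
    \]
    ```

    Both forms agree to first order in X; the Padé form also matches the second-order term, which is why it remains accurate for steeper propagation angles at essentially the same computational cost.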

  1. Utilizing neural networks in magnetic media modeling and field computation: A review

    Directory of Open Access Journals (Sweden)

    Amr A. Adly

    2014-11-01

    Magnetic materials are crucial components of a wide range of products and devices. The complexity of such materials is usually defined by their permeability classification and the extent of their coupling to non-magnetic properties. Hence, developing models that can accurately simulate the complex nature of these materials becomes crucial to multi-dimensional field-media interactions and computations. In the past few decades, artificial neural networks (ANNs) have been utilized in many applications to perform miscellaneous tasks such as identification, approximation, optimization, classification and forecasting. The purpose of this review article is to give an account of the utilization of ANNs in modeling as well as in field computation involving complex magnetic materials. The most widely used ANN types in magnetics, the advantages of their use, detailed implementation methodologies and numerical examples are given in the paper.

  2. Utilizing neural networks in magnetic media modeling and field computation: A review.

    Science.gov (United States)

    Adly, Amr A; Abd-El-Hafiz, Salwa K

    2014-11-01

    Magnetic materials are crucial components of a wide range of products and devices. The complexity of such materials is usually defined by their permeability classification and the extent of their coupling to non-magnetic properties. Hence, developing models that can accurately simulate the complex nature of these materials becomes crucial to multi-dimensional field-media interactions and computations. In the past few decades, artificial neural networks (ANNs) have been utilized in many applications to perform miscellaneous tasks such as identification, approximation, optimization, classification and forecasting. The purpose of this review article is to give an account of the utilization of ANNs in modeling as well as in field computation involving complex magnetic materials. The most widely used ANN types in magnetics, the advantages of their use, detailed implementation methodologies and numerical examples are given in the paper.

  3. Prenatal care utilization in New York City: comparison of measures and assessment of their significance for urban health.

    Science.gov (United States)

    Perloff, J D; Jaffee, K D

    1997-01-01

    This paper considers the policy and programmatic consequences of shifting measurement of prenatal care utilization from the Kessner Index (KI) to the Adequacy of Prenatal Care Utilization Index (APNCUI). In gauging the adequacy of prenatal care utilization, the KI considers the timing of prenatal care initiation and the number of prenatal visits. The APNCUI also considers both timing of initiation and number of visits, but its approach to conceptualizing and measuring these two aspects of prenatal care utilization is more refined. We used birth certificates to calculate the KI and the APNCUI for 217,183 New York City (NYC) births in 1991-1992. We used cross-tabulations and bivariate odds ratios to compare the classifications resulting from the respective indexes. The APNCUI detected some important dimensions of the problem of inadequate prenatal care use that are not evident when using the KI. The proportion of births with inadequate use increases from 18% with the KI to 35% with the APNCUI. Groups of women at elevated risk for inadequate use are the same, but the KI significantly understates the risk for Hispanic women, teens, women who are less well educated, and those on WIC and Medicaid. The APNCUI yields a fuller picture of the degree to which some urban women are at risk for inadequate prenatal care use. Use of the APNCUI in quality assurance, monitoring, and research is recommended.
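
    For readers unfamiliar with the index, the APNCUI-style classification combines initiation timing with an observed-to-expected visit ratio. A rough sketch (the thresholds below follow the commonly cited Kotelchuck formulation and are stated as assumptions here, not values taken from this paper):

    ```python
    def apncu_category(initiation_month, observed_visits, expected_visits):
        """Classify prenatal care adequacy in the spirit of the APNCUI.

        Combines timing of initiation with the ratio of observed to
        expected visits. Expected visits depend on gestational age at
        initiation and at delivery; here the value is passed in directly.
        Thresholds follow the commonly cited Kotelchuck formulation.
        """
        ratio = observed_visits / expected_visits
        if initiation_month <= 4 and ratio >= 1.10:
            return "Adequate Plus"
        if initiation_month <= 4 and ratio >= 0.80:
            return "Adequate"
        if initiation_month <= 4 and ratio >= 0.50:
            return "Intermediate"
        # Late initiation or very few visits both map to Inadequate
        return "Inadequate"

    print(apncu_category(initiation_month=3, observed_visits=12, expected_visits=13))
    ```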

  4. Piloting Utility Modeling Applications (PUMA): Planning for Climate Change at the Portland Water Bureau

    Science.gov (United States)

    Heyn, K.; Campbell, E.

    2016-12-01

    The Portland Water Bureau has been studying the anticipated effects of climate change on its primary surface water source, the Bull Run Watershed, since the early 2000's. Early efforts by the bureau were almost exclusively reliant on outside expertise from climate modelers and researchers, particularly those at the Climate Impacts Group (CIG) at the University of Washington. Early work products from CIG formed the basis of the bureau's understanding of the most likely and consequential impacts to the watershed from continued GHG-caused warming. However, by mid-decade, as key supply and demand conditions for the bureau changed, it found it lacked the technical capacity and tools to conduct more refined and updated research to build on the outside analysis it had obtained. Beginning in 2010, through its participation in the Piloting Utility Modeling Applications (PUMA) project, the bureau identified and began working to address the holes in its technical and institutional capacity by embarking on a process to assess and select a hydrologic model while obtaining downscaled climate change data to utilize within it. Parallel to the development of these technical elements, the bureau made investments in qualified staff to lead the model selection, development and utilization, while working to establish productive, collegial and collaborative relationships with key climate research staff at the Oregon Climate Change Research Institute (OCCRI), the University of Washington and the University of Idaho. This presentation describes the learning process of a major metropolitan area drinking water utility as its approach to addressing the complex problem of climate change evolves, matures, and begins to influence broader aspects of the organization's planning efforts.

  5. Changes in fibrinogen availability and utilization in an animal model of traumatic coagulopathy

    DEFF Research Database (Denmark)

    Hagemo, Jostein S; Jørgensen, Jørgen; Ostrowski, Sisse R

    2013-01-01

    Impaired haemostasis following shock and tissue trauma is frequently detected in the trauma setting. These changes occur early, and are associated with increased mortality. The mechanism behind trauma-induced coagulopathy (TIC) is not clear. Several studies highlight the crucial role of fibrinogen...... in posttraumatic haemorrhage. This study explores the coagulation changes in a swine model of early TIC, with emphasis on fibrinogen levels and utilization of fibrinogen....

  6. Evaluating the Impact of the Healthy Beginnings System of Care Model on Pediatric Emergency Department Utilization.

    Science.gov (United States)

    Tan, Cheryl H; Gazmararian, Julie

    2017-03-01

    The aim of this study was to evaluate whether enrollment in the Healthy Beginnings System of Care (SOC) model is associated with a decrease in emergency department (ED) visits among children aged 6 months to 5.5 years. A retrospective, longitudinal study of ED utilization was conducted among children enrolled in the Healthy Beginnings SOC model between February 2011 and May 2013. Using medical records obtained from a children's hospital in Atlanta, the rate of ED visits per quarter was examined as the main outcome. A multilevel, multivariate Poisson model, with family- and child-level random effects, compared ED utilization rates before and after enrollment. Adjusted rate ratios and 95% confidence intervals were calculated after controlling for sociodemographic confounders. The effect of SOC enrollment on the rate of ED visits differed by income level of the primary parent. The rate of ED visits after enrollment was not significantly different than the rate of ED visits before enrollment for children whose primary parent had an annual income of less than $5000 (P = 0.298), $20,000 to $29,999 (P = 0.199), or $30,000 or more (P = 0.117). However, for the children whose primary parent's annual income was $5000 to $19,999, the rate of ED visits after enrollment was significantly higher than the rate of ED visits before enrollment (adjusted rate ratio, 1.48; 95% confidence interval, 1.17-1.87). Enrollment in the SOC model does not appear to decrease the rate of ED visits among enrolled children. Additional strategies, such as education sessions on ED utilization, are needed to reduce the rate of ED utilization among SOC-enrolled children.
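
    As a simplified, runnable stand-in for the multilevel Poisson model described (a population-averaged Poisson GEE clustered on family rather than a true random-effects model; all variable names and simulated effects are hypothetical):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 400
    # Hypothetical quarterly panel: post-enrollment flag, income bracket, family id
    df = pd.DataFrame({
        "post": rng.integers(0, 2, n),
        "income_5k_20k": rng.integers(0, 2, n),
        "family": rng.integers(0, 80, n),
    })
    # Simulate higher post-enrollment visit rates only in one income bracket
    lam = np.exp(-1.0 + 0.4 * df.post * df.income_5k_20k)
    df["ed_visits"] = rng.poisson(lam.to_numpy())

    X = sm.add_constant(df[["post", "income_5k_20k"]].assign(
        post_x_income=df.post * df.income_5k_20k))
    # Population-averaged Poisson with exchangeable correlation within family
    model = sm.GEE(df.ed_visits, X, groups=df.family,
                   family=sm.families.Poisson(),
                   cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(np.exp(model.params))  # exponentiated coefficients = rate ratios
    ```

    The exponentiated interaction coefficient plays the role of the paper's income-specific adjusted rate ratio for ED visits before versus after enrollment.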

  7. Computer software requirements specification for the world model light duty utility arm system

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, J.E.

    1996-02-01

    This Computer Software Requirements Specification defines the software requirements for the world model of the Light Duty Utility Arm (LDUA) System. It is intended to be used to guide the design of the application software, to be a basis for assessing the application software design, and to establish what is to be tested in the finished application software product. (The LDUA deploys end effectors into underground storage tanks by means of a robotic arm on the end of a telescoping mast.)

  8. Mathematical model of radon activity measurements

    Energy Technology Data Exchange (ETDEWEB)

    Paschuk, Sergei A.; Correa, Janine N.; Kappke, Jaqueline; Zambianchi, Pedro, E-mail: sergei@utfpr.edu.br, E-mail: janine_nicolosi@hotmail.com [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil); Denyak, Valeriy, E-mail: denyak@gmail.com [Instituto de Pesquisa Pele Pequeno Principe, Curitiba, PR (Brazil)

    2015-07-01

    The present work describes a mathematical model that quantifies the time-dependent amounts of ²²²Rn and ²²⁰Rn together, and their activities within an ionization chamber such as the AlphaGUARD, which is used to measure the activity concentration of Rn in soil gas. The differential equations take into account three main processes, namely: the injection of Rn into the cavity of the detector by the air pump, including the effect of the travel time Rn takes to reach the chamber; the release of Rn by the air exiting the chamber; and the radioactive decay of Rn within the chamber. The developed code quantifies the activities of the ²²²Rn and ²²⁰Rn isotopes separately. Following the standard methodology for measuring Rn activity in soil gas, the air pump is usually turned off over a period of time in order to avoid the influx of Rn into the chamber. Since ²²⁰Rn has a short half-life of approximately 56 s, the model shows that after 7 minutes the activity concentration of this isotope is null; consequently, the measured activity refers to ²²²Rn only. Furthermore, the model also addresses the activity of ²²⁰Rn and ²²²Rn progeny, which, being metals, represent a potential risk of ionization-chamber contamination that could increase the background of further measurements. Some preliminary comparison of experimental data and theoretical calculations is presented. The obtained transient and steady-state solutions could be used for planning Rn-in-soil-gas measurements, as well as for assessing the accuracy of the obtained results and evaluating the efficiency of the chosen measurement procedure. (author)
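
    A minimal numerical sketch of the balance the differential equations describe (pump injection, outflow and decay for a single isotope), with assumed pump, volume and source parameters; the half-lives are the standard values for the two isotopes:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    LAMBDA_222 = np.log(2) / (3.82 * 24 * 3600)   # 222Rn decay constant, 1/s
    LAMBDA_220 = np.log(2) / 55.6                 # 220Rn decay constant, 1/s

    def chamber(t, N, q, flow, volume, lam):
        """dN/dt = injection - outflow - decay, for one Rn isotope.
        q      : atoms/s entering with the pumped air (0 after pump-off)
        flow   : pump flow rate through the chamber, m^3/s
        volume : chamber volume, m^3
        """
        inj = q if t < 600 else 0.0               # pump switched off at t = 600 s
        return inj - (flow / volume) * N - lam * N

    t = np.linspace(0, 1800, 600)
    for lam, label in [(LAMBDA_222, "222Rn"), (LAMBDA_220, "220Rn")]:
        sol = solve_ivp(chamber, (0, 1800), [0.0],
                        args=(1e4, 1e-5, 5e-4, lam), t_eval=t)
        print(label, "activity at t = 1800 s:", lam * sol.y[0, -1], "Bq")
    ```

    With these (illustrative) numbers, the ²²⁰Rn activity collapses to essentially zero within minutes of pump-off, while ²²²Rn decays slowly, consistent with the paper's observation that the post-pump-off signal is due to ²²²Rn alone.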

  9. The utility of comparative models and the local model quality for protein crystal structure determination by Molecular Replacement

    Directory of Open Access Journals (Sweden)

    Pawlowski Marcin

    2012-11-01

    Background: Computational models of protein structures have proved useful as search models in Molecular Replacement (MR), a common method to solve the phase problem faced by macromolecular crystallography. The success of MR depends on the accuracy of a search model; unfortunately, this parameter remains unknown until the final structure of the target protein is determined. During the last few years, several Model Quality Assessment Programs (MQAPs) that predict the local accuracy of theoretical models have been developed. In this article, we analyze whether the application of MQAPs improves the utility of theoretical models in MR. Results: For our dataset of 615 search models, the real local accuracy of a model increases the MR success ratio by 101% compared to corresponding polyalanine templates. By contrast, when local model quality is not utilized in MR, the computational models solved only 4.5% more MR searches than polyalanine templates. For the same dataset of 615 models, a workflow combining MR with the predicted local accuracy of a model found 45% more correct solutions than polyalanine templates. To predict such accuracy, MetaMQAPclust, a "clustering MQAP", was used. Conclusions: Using comparative models only marginally increases the MR success ratio in comparison to polyalanine structures of templates. However, the situation changes dramatically once comparative models are used together with their predicted local accuracy. A new functionality was added to the GeneSilico Fold Prediction Metaserver in order to build models that are more useful for MR searches. Additionally, we have developed a simple method, AmIgoMR (Am I good for MR?), to predict whether an MR search with a template-based model for a given template is likely to find the correct solution.

  10. P Voltage Control of DFIG with Two-Mass-Shaft Turbine Model Under Utility Voltage Disturbance

    Directory of Open Access Journals (Sweden)

    Hengameh Kojooyan Jafari

    2016-06-01

    Doubly-fed induction generators (DFIGs), as variable-speed induction generators, are used instead of other electric machines in wind power plants connected to the grid through flexible controllers. One of the most important current issues in wind farms is the control of the output power delivered to the grid under utility disturbance. In this paper, a DFIG with an external rotor resistance and a power-converter model, represented as an external voltage source with adjustable phase and amplitude, is combined with an ordinary turbine using both a one-mass and a two-mass shaft model, and is regulated by a P voltage controller to control the output active power for typical high and low wind speeds under two utility-disturbance conditions: one in which the disturbance is too short to change the magnitude of the external rotor voltage source, and one in which the disturbance is long and the magnitude of the external rotor voltage decreases. Simulation results show that the P voltage controller can control the output active power under a 27% stator voltage drop for a typical low wind speed and an 11% stator voltage drop for a typical high wind speed during long disturbances, while 80% of the external rotor voltage magnitude drops under short utility disturbances.

  11. [Research practices on the conversion efficiency of the resources utilization model for castoff from Chinese materia medica industrialization].

    Science.gov (United States)

    Duan, Jin-Ao; Su, Shu-Lan; Guo, Sheng; Liu, Pei; Qian, Da-Wei; Jiang, Shu; Zhu, Hua-Xu; Tang, Yu-Ping; Wu, Qi-Nan

    2013-12-01

    The industrial chains and products formed in producing medicinal materials, prepared drugs in pieces, and deep-processed products from Chinese materia medica (CMM) resources have generated large social and economic benefits. However, large amounts of herb-medicine castoff, the "non-medicinal parts" and "rejected materials" inevitably produced during the production and processing of Chinese medicinal resources, together with the residues, waste water and waste gas generated during the manufacturing and deep processing of CMM products, lead to a waste of resources and to environmental pollution. Our previous research proposed "three utilization strategies" and "three types of resources models" for herb-medicine castoff, according to the different physicochemical properties of the resource constituents and the resource potential and utility value of the castoff. This article focuses on the conversion-efficiency resources model, and analyzes the ways, technologies, practices and applications of this model for herb-medicine castoff, based on recycling-economy theory and the resource chemistry of CMM. This work may help resolve the key problems that have long limited the industrialization of Chinese materia medica and promote the resource utilization of herb-medicine castoff.

  12. Expected Utility and Entropy-Based Decision-Making Model for Large Consumers in the Smart Grid

    Directory of Open Access Journals (Sweden)

    Bingtuan Gao

    2015-09-01

    In the smart grid, large consumers can procure electric energy from various power sources to meet their load demands. To maximize its profit, each large consumer needs to decide on an energy procurement strategy under risks such as price fluctuations in the spot market and power quality issues. In this paper, an electric energy procurement decision-making model is studied for large consumers who can obtain their electric energy from the spot market, from generation companies under bilateral contracts, from the options market, and from self-production facilities in the smart grid. Considering the effect of unqualified electric energy, the profit model of large consumers is formulated. In order to measure the risks from price fluctuations and power quality, expected utility and entropy are employed. Consequently, an expected-utility and entropy decision-making model is presented, which helps large consumers to minimize the expected cost of electricity procurement while properly limiting the volatility of this cost. Finally, a case study verifies the feasibility and effectiveness of the proposed model.
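
    A toy sketch of how an expected-utility term and a Shannon-entropy risk term can be combined to rank procurement strategies (the scenario values, the utility form, and the weighting below are illustrative assumptions; the paper's model is more detailed):

    ```python
    import numpy as np

    def score(profits, probs, risk_weight=0.2):
        """Rank a procurement strategy by expected utility minus an
        entropy-based risk penalty over discrete profit scenarios."""
        u = 1.0 - np.exp(-profits / 100.0)        # assumed CARA-style utility
        expected_utility = np.dot(probs, u)
        entropy = -np.sum(probs * np.log(probs))  # Shannon entropy of outcomes
        return expected_utility - risk_weight * entropy

    # Two hypothetical strategies: spot-heavy (volatile) vs contract-heavy (stable)
    spot     = score(np.array([40.0, 120.0, 200.0]), np.array([0.3, 0.4, 0.3]))
    contract = score(np.array([90.0, 100.0, 110.0]), np.array([0.2, 0.6, 0.2]))
    print("spot-heavy:", spot, "contract-heavy:", contract)
    ```

    The risk_weight parameter plays the role of the consumer's risk preference: increasing it shifts the ranking toward the more predictable contract-heavy mix even when its expected profit is lower.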

  13. Flavor release measurement from gum model system

    DEFF Research Database (Denmark)

    Ovejero-López, I.; Haahr, Anne-Mette; van den Berg, Frans W.J.

    2004-01-01

    Flavor release from a mint-flavored chewing gum model system was measured by atmospheric pressure chemical ionization mass spectroscopy (APCI-MS) and sensory time-intensity (TI). A data analysis method for handling the individual curves from both methods is presented. The APCI-MS data are ratio...... composition can be measured by both instrumental and sensory techniques, providing comparable information. The peppermint oil level (0.5-2% w/w) in the gum influenced both the retronasal concentration and the perceived peppermint flavor. The sweeteners' (sorbitol or xylitol) effect is less apparent. Sensory...

  14. FIM measurement properties and Rasch model details.

    Science.gov (United States)

    Wright, B D; Linacre, J M; Smith, R M; Heinemann, A W; Granger, C V

    1997-12-01

    To summarize, we take issue with the criticisms of Dickson & Köhler for two main reasons: 1. Rasch analysis provides a model from which to approach the analysis of the FIM, an ordinal scale, as an interval scale. The existence of examples of items or individuals which do not fit the model does not disprove the overall efficacy of the model; and 2. the principal components analysis of FIM motor items as presented by Dickson & Köhler tends to undermine rather than support their argument. Their own analyses produce a single major factor explaining between 58.5 and 67.1% of the variance, depending upon the sample, with secondary factors explaining much less variance. Finally, analysis of item response, or latent trait, is a powerful method for understanding the meaning of a measure. However, it presumes that item scores are accurate. Another concern is that Dickson & Köhler do not address the issue of reliability of scoring the FIM items on which they report, a critical point in comparing results. The Uniform Data System for Medical Rehabilitation (UDSMR) expends extensive effort in the training of clinicians of subscribing facilities to score items accurately. This is followed up with a credentialing process. Phase 1 involves the testing of individual clinicians who are submitting data to determine if they have achieved mastery over the use of the FIM instrument. Phase 2 involves examining the data for outlying values. When Dickson & Köhler investigate more carefully the application of the Rasch model to their FIM data, they will discover that the results presented in their paper support rather than contradict their application of the Rasch model! This paper is typical of supposed refutations of Rasch model applications. Dickson & Köhler will find that idiosyncrasies in their data and misunderstandings of the Rasch model are the only basis for a claim to have disproven the relevance of the model to FIM data. The Rasch model is a mathematical theorem (like…

  15. Business model innovation for Local Energy Management: a perspective from Swiss utilities

    Directory of Open Access Journals (Sweden)

    Emanuele Facchinetti

    2016-08-01

    The successful deployment of the energy transition relies on a deep reorganization of the energy market, and business model innovation is recognized as a key driver of this process. This work contributes to the topic by providing potential Local Energy Management (LEM) stakeholders and policy makers with a conceptual framework to guide LEM business model innovation. The main determinants characterizing LEM concepts and affecting business model innovation are identified through literature reviews of distributed generation typologies and of customer/investor preferences related to the new business opportunities emerging with the energy transition. The relation between the identified determinants and the LEM business-model solution space is then analyzed on the basis of semi-structured interviews with managers of Swiss utility companies. The collected managers' preferences serve as explorative indicators supporting the business model innovation process and provide policy makers with insights into the challenges and opportunities of Local Energy Management.

  16. Measurement of the thyroid's iodine absorption utilizing minimal /sup 131/I dose

    Energy Technology Data Exchange (ETDEWEB)

    Paz A, B.; Villegas A, J.; Delgado B, C. (Universidad Nacional San Agustin de Arequipa (Peru). Departamento de Bioquimica)

    1981-03-01

    We utilized a minimal dose of ¹³¹I, thus limiting the contact of the thyroid tissues with the isotopic material, to determine the absorption of ¹³¹I by the thyroid from 6 to 24 hours in 90 pupils of the locality of Arequipa. The average absorption rates at 6 and 24 hours were 24.15% and 35.42%, respectively, with standard deviations of 6.93% and 9.61%. No significant differences were found between adult reference results and our own results in any of the tests undertaken.

  17. Utility and limitations of measures of health inequities: a theoretical perspective.

    Science.gov (United States)

    Alonge, Olakunle; Peters, David H

    2015-01-01

    This paper examines common approaches for quantifying health inequities and assesses the extent to which they incorporate key theories necessary for explicating the definition of health inequity. The first theoretical analysis examined the distinction between inter-individual and inter-group health inequalities as measures of health inequities. The second analysis considered the notion of fairness in health inequalities from different philosophical perspectives. To understand the extent to which different measures of health inequities incorporate these theoretical explanations, four criteria were used to assess each measure: 1) Does the indicator demonstrate inter-group or inter-individual health inequalities or both; 2) Does it reflect health inequalities in relation to socioeconomic position; 3) Is it sensitive to the absolute transfer of health (outcomes, services, or both) or income/wealth between groups; 4) Could it be used to capture inequalities in relation to other population groupings (other than socioeconomic status)? The measures assessed include: before and after measures within only the disadvantaged population, range, Gini coefficient, Pseudo-Gini coefficient, index of dissimilarity, concentration index, slope and relative indices of inequality, and regression techniques. None of these measures satisfied all the four criteria, except the range. Whereas each measure quantifies a different perspective in health inequities, using a measure within only the disadvantaged population does not measure health inequities in a meaningful way, even using before and after changes. For a more complete assessment of how programs affect health inequities, it may be useful to use more than one measure.
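
    Two of the measures assessed admit compact standard formulas; for orientation, a sketch using the textbook definitions (not code from the paper):

    ```python
    import numpy as np

    def gini(health):
        """Gini coefficient over individual health values (inter-individual
        inequality; it ignores socioeconomic position, criterion 2)."""
        h = np.sort(np.asarray(health, dtype=float))
        n = h.size
        # Standard formula based on the ordered values
        return (2 * np.sum(np.arange(1, n + 1) * h) / (n * h.sum())) - (n + 1) / n

    def concentration_index(health, ses):
        """Concentration index: 2*cov(h, r)/mean(h), where r is the
        fractional rank of individuals ordered by socioeconomic status
        (captures the socioeconomic gradient, unlike the plain Gini)."""
        order = np.argsort(ses)
        h = np.asarray(health, dtype=float)[order]
        n = h.size
        r = (np.arange(1, n + 1) - 0.5) / n       # fractional SES rank
        return 2.0 * np.cov(h, r, bias=True)[0, 1] / h.mean()

    h = [3, 5, 6, 8, 9]          # some health outcome
    ses = [1, 2, 3, 4, 5]        # income rank (poorest to richest)
    print(gini(h), concentration_index(h, ses))
    ```

    The contrast between the two functions illustrates the paper's second criterion: the Gini is blind to who holds the worse health, whereas the concentration index changes sign if the health gradient is reversed across the SES ranking.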

  18. Measuring Visual Closeness of 3-D Models

    KAUST Repository

    Gollaz Morales, Jose Alejandro

    2012-09-01

    Measuring the visual closeness of 3-D models is an important issue for different problems, and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing normal difference as a parameter in a weighted metric in Metro's sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive, since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.
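
    A rough sketch of the 6-D correspondence idea described, with positions concatenated with weighted normals before the nearest-neighbour search (the weight value is an assumption; the paper's weighted metric may differ):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def visual_closeness(pts_a, nrm_a, pts_b, nrm_b, w=0.5):
        """Max/mean distance between two sampled surfaces using a 6-D
        search structure: xyz coordinates concatenated with unit normals
        scaled by a weight, so that correspondence favours points that
        are close in both position and orientation."""
        a6 = np.hstack([pts_a, w * nrm_a])
        b6 = np.hstack([pts_b, w * nrm_b])
        d, _ = cKDTree(b6).query(a6)      # nearest 6-D correspondent in B
        return d.max(), d.mean()

    rng = np.random.default_rng(1)
    p = rng.random((500, 3))
    n = rng.random((500, 3)); n /= np.linalg.norm(n, axis=1, keepdims=True)
    print(visual_closeness(p, n, p + 0.01, n))   # slightly displaced copy
    ```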

  19. Transonic Cascade Measurements to Support Analytical Modeling

    Science.gov (United States)

    2007-11-02

    Final report for AFOSR grant F49260-02-1-0284, "Transonic Cascade Measurements to Support Analytical Modeling", P. A. Durbin and J. K. Eaton (Stanford University), submitted to Dr. John Schmisseur, Air Force Office of Scientific Research. Only fragments of the report body are preserved in this record, e.g.: "…both spline and control points for subsequent wall shape definitions. An algebraic grid generator was used to generate the grid for the blade-wall…"

  20. Capacitor Voltages Measurement and Balancing in Flying Capacitor Multilevel Converters Utilizing a Single Voltage Sensor

    DEFF Research Database (Denmark)

    Farivar, Glen; Ghias, Amer M. Y. M.; Hredzak, Branislav

    2017-01-01

    This paper proposes a new method for measuring capacitor voltages in multilevel flying capacitor (FC) converters that requires only one voltage sensor per phase leg. The multiple dc voltage sensors traditionally used to measure the capacitor voltages are replaced with a single voltage sensor at the ac side of the phase leg. The proposed method is subsequently used to balance the capacitor voltages using only the measured ac voltage. The operation of the proposed measurement and balancing method is independent of the number of converter levels. Experimental results are presented for a five-level FC …

  1. The Utility of Remotely-Sensed Land Surface Temperature from Multiple Platforms For Testing Distributed Hydrologic Models over Complex Terrain

    Science.gov (United States)

    Xiang, T.; Vivoni, E. R.; Gochis, D. J.

    2011-12-01

    Land surface temperature (LST) is a key parameter in watershed energy and water budgets that is relatively unexplored as a validation metric for distributed hydrologic models. Ground-based or remotely-sensed LST datasets can provide insights into a model's ability to reproduce water and energy fluxes across a large range of terrain, vegetation, soil and meteorological conditions. As a result, spatiotemporal LST observations can serve as a strong constraint for distributed simulations and can augment other available in-situ data. LST fields are particularly useful in mountainous areas where temperature varies with terrain properties and time-variable surface conditions. In this study, we collect and process remotely-sensed fields from several satellite platforms - Landsat 5/7, MODIS and ASTER - to capture spatiotemporal LST dynamics at multiple resolutions and with frequent repeat visits. We focus our analysis of these fields on the Sierra Los Locos basin (~100 km²) in Sonora, Mexico, for a period encompassing the Soil Moisture Experiment in 2004 and the North American Monsoon Experiment (SMEX04-NAME). Satellite observations are verified using a limited set of ground data from manual sampling at 30 locations and continuous measurements at 2 sites. First, we utilize the remotely-sensed fields to understand the summer seasonal evolution of LST in the basin in response to the arrival of summer storms and the vigorous ecosystem greening organized along elevation bands. Then, we utilize the ground and remote-sensing datasets to test the distributed predictions of the TIN-based Real-time Integrated Basin Simulator (tRIBS) under conditions accounting for static and dynamic vegetation patterns. Basin-averaged and distributed comparisons are carried out for two different terrain products (INEGI aerial photogrammetry and ASTER stereo processing) used to derive the distributed model domain. Results from the comparisons are discussed in light of the utility of remotely-sensed LST

  2. The Joint Venture Model of Knowledge Utilization: a guide for change in nursing.

    Science.gov (United States)

    Edgar, Linda; Herbert, Rosemary; Lambert, Sylvie; MacDonald, Jo-Ann; Dubois, Sylvie; Latimer, Margot

    2006-05-01

    Knowledge utilization (KU) is an essential component of today's nursing practice and healthcare system. Despite advances in knowledge generation, the gap in knowledge transfer from research to practice continues. KU models have moved beyond factors affecting the individual nurse to a broader perspective that includes the practice environment and the socio-political context. This paper proposes one such theoretical model, the Joint Venture Model of Knowledge Utilization (JVMKU). Key components of the JVMKU that emerged from an extensive multidisciplinary review of the literature include leadership, emotional intelligence, person, message, empowered workplace and the socio-political environment. The model has a broad and practical application and is not specific to one type of KU or one population. This paper provides a description of the JVMKU, its development and its suggested uses at both local and organizational levels. Nurses in both leadership and point-of-care positions will recognize the concepts identified and will be able to apply this model for KU in their own workplace for assessment of areas requiring strengthening and support.

  3. Heat Loss Measurements in Buildings Utilizing a U-value Meter

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt

    … the best basis for upgrading the energy performance, it is important to measure the heat losses at different locations on a building facade, in order to optimize the energy performance. The author has invented a U-value meter, enabling measurements of heat transfer coefficients. The meter has been used …

  4. A Review of Acculturation Measures and Their Utility in Studies Promoting Latino Health

    Science.gov (United States)

    Wallace, Phyllis M.; Pomery, Elizabeth A.; Latimer, Amy E.; Martinez, Josefa L.; Salovey, Peter

    2010-01-01

    The authors reviewed the acculturation literature with the goal of identifying measures used to assess acculturation in Hispanic populations in the context of studies of health knowledge, attitudes, and behavior change. Twenty-six acculturation measures were identified and summarized. As the Hispanic population continues to grow in the United…

  5. The clinical utility of reticular basement membrane thickness measurements in asthmatic children

    NARCIS (Netherlands)

    van Mastrigt, Esther; Vanlaeken, Leonie; Heida, Fardou; Caudri, Daan; de Jongste, Johan C.; Timens, Wim; Rottier, Bart L.; de Krijger, Ronald R.; Pijnenburg, Marielle W.

    2015-01-01

    Objective: Reticular basement membrane (RBM) thickness is one of the pathological features of asthma and can be measured in endobronchial biopsies. We assessed the feasibility of endobronchial biopsies in a routine clinical setting and investigated the clinical value of RBM thickness measurements for …

  6. Information Utility: Quantifying the Total Psychometric Information Provided by a Measure

    Science.gov (United States)

    Markon, Kristian E.

    2013-01-01

    Although advances have improved our ability to describe the measurement precision of a test, it often remains challenging to summarize how well a test is performing overall. Reliability, for example, provides an overall summary of measurement precision, but it is sample-specific and might not reflect the potential usefulness of a test if the…

  8. Modeling the hydrologic and economic efficacy of stormwater utility credit programs for US single family residences.

    Science.gov (United States)

    Kertesz, Ruben; Green, Olivia Odom; Shuster, William D

    2014-01-01

    As regulatory pressure to reduce the environmental impact of urban stormwater intensifies, US municipalities increasingly seek a dedicated source of funding for stormwater programs, such as a stormwater utility. In rare instances, single family residences are eligible for utility discounts for installing green infrastructure. This study examined the hydrologic and economic efficacy of four such programs at the parcel scale: Cleveland (OH), Portland (OR), Fort Myers (FL), and Lynchburg (VA). Simulations were performed to model the reduction in stormwater runoff achieved by implementing bioretention on a typical residential property according to extant administrative rules. The EPA National Stormwater Calculator was used to perform pre- vs post-retrofit comparisons and to demonstrate its ease of use for utility planning in other cities. Although surface slope, soil type and infiltration rate, impervious area, and bioretention parameters differed across cities, our results suggest that modeled runoff volume was most sensitive to the percentage of total impervious area draining to the bioretention cell, with soil type the next most important factor. Findings also indicate a persistent gap between the percentage of annual runoff reduced and the percentage of fee reduced.

  9. Evaluating components of dental care utilization among adults with diabetes and matched controls via hurdle models

    Directory of Open Access Journals (Sweden)

    Chaudhari Monica

    2012-07-01

    Background: About one-third of adults with diabetes have severe oral complications. However, limited previous research has investigated dental care utilization associated with diabetes. This project had two purposes: to develop a methodology to estimate dental care utilization using claims data, and to use this methodology to compare utilization of dental care between adults with and without diabetes. Methods: Data included secondary enrollment and demographic data from Washington Dental Service (WDS) and Group Health Cooperative (GH), clinical data from GH, and dental-utilization data from WDS claims during 2002-2006. Dental and medical records from WDS and GH were linked for enrolees continuously and dually insured during the study. We employed hurdle models in a quasi-experimental setting to assess differences between adults with and without diabetes in 5-year cumulative utilization of dental services; propensity score matching adjusted for differences in baseline covariates between the two groups. Results: We found that adults with diabetes had lower odds of visiting a dentist (OR = 0.74, p < 0.001). Among those with a dental visit, diabetes patients had lower odds of receiving prophylaxes (OR = 0.77), fillings (OR = 0.80) and crowns (OR = 0.84) (p < 0.005 for all), and higher odds of receiving periodontal maintenance (OR = 1.24), non-surgical periodontal procedures (OR = 1.30), extractions (OR = 1.38) and removable prosthetics (OR = 1.36) (p < … for all). Conclusions: Patients with diabetes are less likely to use dental services. Those who do are less likely to use preventive care and more likely to receive periodontal care and tooth extractions. Future research should address the possible effectiveness of additional prevention in reducing subsequent severe oral disease in patients with diabetes.
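
    A minimal two-part sketch of the hurdle approach described — a logistic model for any dental visit, then a count model among users only. A strict hurdle would use a zero-truncated count distribution, and all variable names and simulated effects here are hypothetical:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000
    diabetes = rng.integers(0, 2, n)
    X = sm.add_constant(diabetes.astype(float))

    # Part 1: does the enrolee visit a dentist at all? (lower odds w/ diabetes)
    p_visit = 1 / (1 + np.exp(-(0.8 - 0.3 * diabetes)))
    any_visit = rng.random(n) < p_visit
    logit = sm.Logit(any_visit.astype(float), X).fit(disp=0)
    print(np.exp(logit.params[1]))   # odds ratio for diabetes

    # Part 2: number of services among those with at least one visit.
    # (Plain Poisson here; a strict hurdle would use a zero-truncated Poisson.)
    visits = rng.poisson(np.exp(1.2 + 0.2 * diabetes)) + 1
    mask = any_visit
    count_fit = sm.Poisson(visits[mask], X[mask]).fit(disp=0)
    print(np.exp(count_fit.params[1]))  # rate ratio for diabetes among users
    ```

    Splitting the model this way mirrors the paper's logic: the first stage captures whether diabetes lowers the chance of any dental contact, the second captures how the mix and volume of services differ once contact occurs.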

  10. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Nikhar; Tom, Nathan

    2017-09-01

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.
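
    A minimal sketch of the forecasting ingredient described, fitting an AR(p) model to a wave-excitation-force history by least squares and iterating it over the prediction horizon (synthetic signal; the report's Kalman-filter estimation stage is omitted):

    ```python
    import numpy as np

    def fit_ar(x, p):
        """Least-squares AR(p): x[t] = a1*x[t-1] + ... + ap*x[t-p]."""
        rows = [x[i:i + p][::-1] for i in range(len(x) - p)]
        A, y = np.array(rows), x[p:]
        return np.linalg.lstsq(A, y, rcond=None)[0]

    def forecast(x, coeffs, steps):
        """Iterate the fitted AR model forward over the MPC horizon."""
        hist = list(x[-len(coeffs):])
        out = []
        for _ in range(steps):
            nxt = np.dot(coeffs, hist[::-1][:len(coeffs)])  # most recent first
            out.append(nxt)
            hist.append(nxt)
        return np.array(out)

    t = np.arange(0, 60, 0.1)
    force = np.sin(0.8 * t) + 0.4 * np.sin(1.3 * t + 0.5)   # synthetic excitation
    a = fit_ar(force, p=12)
    print(forecast(force, a, steps=20))   # predicted force over the horizon
    ```

    In a model predictive control loop, this forecast would be refreshed at every control step, and its degradation with horizon length is exactly what the report's perfect-forecast comparison is designed to quantify.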

  11. Research on Water Utility Revenue Model and Compensation Policy under Uncertain Demand

    Directory of Open Access Journals (Sweden)

    Shou-Kui He

    2014-03-01

    With the diversification of both water utility investment and property-right structures, it is necessary to establish a scientific mechanism for compensating water conservancy benefits, in order to balance the interests of investors, water users, and the sectors that suffer losses. This paper analyzes the compensation policies that the water management authority imposes on water supply enterprises under uncertain demand, establishes a compensation model with risk preference, explains the implications of risk preference for the decision-making behaviour of water supply enterprises using a numerical analysis method, and provides a basis for the water management department to formulate reasonable water resource charge standards and compensation policies. Finally, the paper discusses how to implement water compensation policies according to the characteristics of rural water utilities.

  12. Energy Utilization Evaluation of Carbon Performance in Public Projects by FAHP and Cloud Model

    Directory of Open Access Journals (Sweden)

    Lin Li

    2016-07-01

    With the low-carbon economy advocated all over the world, how to use energy reasonably and efficiently in public projects has become a major issue. It raises several open questions: which method is more reasonable for evaluating the energy-utilization carbon performance of public projects when the evaluation information is fuzzy; whether an indicator system can be constructed; and which indicators have the greatest impact on carbon performance. This article aims to address these questions. We propose a new carbon-performance evaluation system for energy utilization based on the project phases (design, construction and operation). The Fuzzy Analytic Hierarchy Process (FAHP) is used to derive the indicator weights, and a cloud model is incorporated where the indicator values are fuzzy. Finally, we apply the indicator system to a case study of the Xiangjiang River project in China, which demonstrates the applicability and efficiency of the method.

  13. Estimating Utility

    DEFF Research Database (Denmark)

    Arndt, Channing; Simler, Kenneth R.

    2010-01-01

    an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially......, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones....

  14. Comparison of advanced Arctic Ocean model sea ice fields to satellite derived measurements

    OpenAIRE

    Dimitriou, David S.

    1998-01-01

    Numerical models have proven integral to the study of climate dynamics, and sea ice models are critical to the improvement of the general circulation models used to study the global climate. The object of this study is to evaluate a high-resolution coupled ice-ocean model by comparing it to derived measurements from SMMR and SSM/I satellite observations. Utilized for this study was the NASA Goddard Space Flight Center (GSFC) Sea Ice Concentration Dat…

  15. A fault tolerant model for multi-sensor measurement

    Directory of Open Access Journals (Sweden)

    Li Liang

    2015-06-01

    Multi-sensor systems are very powerful in complex environments. Cointegration theory and the vector error-correction model, statistical methods widely applied in economic analysis, are utilized to create a fitting model for the measurements of homogeneous sensors. An algorithm implements the model for error correction, in which the signal of any sensor can be estimated from those of the others. The model divides a signal series into two parts, a training part and an estimated part; by comparing the estimated part with the actual one, the proposed method can identify a sensor with possible faults and repair its signal. With a small amount of training data, the algorithm can find the right model parameters in real time. When applied to data analysis for aero-engine testing, the model works well. It is therefore not only an effective method for detecting sensor failures or abnormalities, but also a useful approach for correcting possible errors.
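
    A simplified stand-in for the scheme described: the paper uses cointegration and a vector error-correction model, while the sketch below keeps only the core idea of estimating one homogeneous sensor from the others on a training window and flagging large residuals:

    ```python
    import numpy as np

    def check_sensor(signals, idx, train_len, k=4.0):
        """Estimate sensor `idx` from the other homogeneous sensors via
        least squares on a training window, then flag samples in the
        remaining (estimated) part whose residual exceeds k sigma."""
        y = signals[:, idx]
        X = np.delete(signals, idx, axis=1)
        coef = np.linalg.lstsq(X[:train_len], y[:train_len], rcond=None)[0]
        est = X[train_len:] @ coef
        resid = y[train_len:] - est
        sigma = np.std(y[:train_len] - X[:train_len] @ coef)
        return est, np.abs(resid) > k * sigma   # estimate and fault flags

    rng = np.random.default_rng(2)
    base = np.cumsum(rng.normal(size=500))             # shared trend
    sensors = np.stack([base + rng.normal(0, .1, 500) for _ in range(4)], axis=1)
    sensors[450:, 0] += 3.0                            # inject a fault in sensor 0
    est, flags = check_sensor(sensors, idx=0, train_len=300)
    print("faulty samples detected:", flags.sum())
    ```

    The shared random-walk trend makes the four synthetic sensors cointegrated in spirit; a full VECM would additionally model the error-correction dynamics that this plain regression ignores.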

  16. A single-item measure of social identification: reliability, validity, and utility.

    Science.gov (United States)

    Postmes, Tom; Haslam, S Alexander; Jans, Lise

    2013-12-01

    This paper introduces a single-item social identification measure (SISI) that involves rating one's agreement with the statement 'I identify with my group (or category)' followed by a 7-point scale. Three studies provide evidence of the validity (convergent, divergent, and test-retest) of SISI with a broad range of social groups. Overall, the estimated reliability of SISI is good. To address the broader issue of single-item measure reliability, a meta-analysis of 16 widely used single-item measures is reported. The reliability of single-item scales ranges from low to reasonably high. Compared with this field, reliability of the SISI is high. In general, short measures struggle to achieve acceptable reliability because the constructs they assess are broad and heterogeneous. In the case of social identification, however, the construct appears to be sufficiently homogeneous to be adequately operationalized with a single item.

  17. In-silico ADME models: a general assessment of their utility in drug discovery applications.

    Science.gov (United States)

    Gleeson, M Paul; Hersey, Anne; Hannongbua, Supa

    2011-01-01

    ADME prediction is an extremely challenging area, as many of the properties we try to predict are the result of multiple physiological processes. In this review we consider how in-silico predictions of ADME processes can be used to help bias medicinal chemistry into more ideal areas of property space, minimizing the number of compounds that need to be synthesized to obtain the required biochemical/physico-chemical profile. While such models are not sufficiently accurate to act as a replacement for in-vivo or in-vitro methods, in-silico methods can nevertheless help us to understand the underlying physico-chemical dependencies of the different ADME properties, and thus can give us inspiration on how to optimize them. Many global in-silico ADME models (i.e., generated on large, diverse datasets) have been reported in the literature. In this paper we selectively review representatives from each distinct class and discuss their relative utility in drug discovery. For each ADME parameter, we limit our discussion to the most recent, most predictive or most insightful examples in the literature to highlight the current state of the art. In each case we briefly summarize the different types of models available for each parameter (i.e., simple rules, physico-chemical and 3D-based QSAR predictions), their overall accuracy and the underlying SAR. We also discuss the utility of the models as related to the lead generation and optimization phases of discovery research.

  18. CONCEPTUAL PAPER : Utilization of GPS Satellites for Precise Irradiation Measurement and Monitoring

    Indian Academy of Sciences (India)

    S. Vijayan

    2008-03-01

    Precise measurement of the irradiance over the earth under various circumstances, such as solar flares, coronal mass ejections, and over an 11-year solar cycle, leads to a better understanding of the Sun-earth relationship. To continuously monitor the irradiance over earth-space regions, several satellites at several positions are required. For such continuous, multi-satellite monitoring we can use GPS (Global Positioning System) satellites (and similar systems such as GLONASS, GALILEO, and future constellations) equipped with irradiance measuring and monitoring instruments. The GPS system consists of a constellation of 24 satellites; using all of them would yield 24 measurements of irradiance at the top of the atmosphere at any instant (or 12 measurements from the satellites pointing towards the Sun). Numerous irradiance observations could therefore be obtained for the whole globe every day, which would be very helpful for several applications, such as albedo calculation, Earth Radiation Budget calculation, and monitoring of the near-earth space atmosphere. Moreover, measuring irradiance both on the ground (using ground instruments) and in space at the same instant over the same place offers numerous advantages: for a single position we obtain the irradiance at the top of the atmosphere, the irradiance at the ground, and the difference in irradiance from the top of the atmosphere to the ground. Measuring irradiance above the atmosphere and on the ground at a precise location gives finer detail about the influence of solar irradiance on the earth, path loss, and the interaction of irradiance with the atmosphere.

  19. Academic Self-Concept: Modeling and Measuring for Science

    Science.gov (United States)

    Hardy, Graham

    2014-08-01

    In this study, the author developed a model to describe academic self-concept (ASC) in science and validated an instrument for its measurement. Unlike previous models of science ASC, which envisage science as a homogenous single global construct, this model took a multidimensional view by conceiving science self-concept as possessing distinctive facets including conceptual and procedural elements. In the first part of the study, data were collected from 1,483 students attending eight secondary schools in England, through the use of a newly devised Secondary Self-Concept Science Instrument, and structural equation modeling was employed to test and validate a model. In the second part of the study, the data were analysed within the new self-concept framework to examine learners' ASC profiles across the domains of science, with particular attention paid to age- and gender-related differences. The study found that the proposed science self-concept model exhibited robust measures of fit and construct validity, which were shown to be invariant across gender and age subgroups. The self-concept profiles were heterogeneous in nature, with the component relating to self-concept in physics being surprisingly positive in comparison to other aspects of science. This outcome is in stark contrast to data reported elsewhere and raises important issues about the nature of young learners' self-conceptions about science. The paper concludes with an analysis of the potential utility of the self-concept measurement instrument as a pedagogical device for science educators and learners of science.

  20. Process improvement and cost reduction utilizing a fully automated CD SEM for thin film head pole 2 resist measurements

    Science.gov (United States)

    Knutrud, Paul C.; Newcomb, Robert M.

    1996-05-01

    Thin film head (TFH) manufacturers are constantly striving to improve process control, eliminate scrap material and reduce the total cost of manufacturing their devices. Successful measurement and control of the Pole 2 Resist structure is a critical component of the TFH process which directly impacts disk drive performance, reliability and final product cost. Until recently, white light optical metrology systems have been the only option for measuring the Pole 2 structures. However, recent advances in TFH process technology have resulted in aspect ratios up to 10:1 which has limited the ability of the white light optical metrology systems. IVS has developed a unique metrology solution to image and measure these high aspect ratio structures utilizing the IVS-200TM CD SEM. This technology provides state of the art measurement performance for repeatability and stability which in turn has provided manufacturers with the ability to monitor the Pole 2 process and reap both technical and financial benefits.

  1. Assimilation of measurement data in hydrodynamic modeling

    Science.gov (United States)

    Karamuz, Emilia; Romanowicz, Renata J.

    2016-04-01

    This study focuses on developing methods to combine ground-based data from operational monitoring with data from satellite imaging to obtain a more accurate evaluation of flood inundation extents. The distributed flow model MIKE 11 was used to determine the flooding areas for a flood event with available satellite data. Model conditioning was based on the integrated use of data from remote measurement techniques and traditional data from gauging stations. Such conditioning of the model improves the quality of fit of the model results. The use of high-resolution satellite images (from IKONOS, QuickBird, etc.) and a LiDAR Digital Elevation Model (DEM) allows information on water levels to be extended to practically any chosen cross-section of the tested section of the river. This approach allows for a better assessment of inundation extent, particularly in areas with a scarce network of gauging stations. We apply approximate Bayesian analysis to integrate the information on flood extent originating from different sources. The approach described above was applied to the Middle River Vistula reach, from the Zawichost to Warsaw gauging stations. For this part of the river, detailed geometry of the river bed and floodplain data were available. Finally, three selected sub-sections were analyzed with the most suitable satellite images of the inundation area. ACKNOWLEDGEMENTS This research was supported by the Institute of Geophysics Polish Academy of Sciences through the Young Scientist Grant no. 3b/IGF PAN/2015.

  2. Balancing model complexity and measurements in hydrology

    Science.gov (United States)

    Van De Giesen, N.; Schoups, G.; Weijs, S. V.

    2012-12-01

    The Data Processing Inequality implies that hydrological modeling can only reduce, and never increase, the amount of information available in the original data used to formulate and calibrate hydrological models: I(X;Z(Y)) ≤ I(X;Y). Still, hydrologists around the world seem quite content building models for "their" watersheds to move our discipline forward. Hydrological models tend to have a hybrid character with respect to underlying physics. Most models make use of some well established physical principles, such as mass and energy balances. One could argue that such principles are based on many observations, and therefore add data. These physical principles, however, are applied to hydrological models that often contain concepts that have no direct counterpart in the observable physical universe, such as "buckets" or "reservoirs" that fill up and empty out over time. These not-so-physical concepts are more like the Artificial Neural Networks and Support Vector Machines of the Artificial Intelligence (AI) community. Within AI, one quickly came to the realization that by increasing model complexity, one could basically fit any dataset but that complexity should be controlled in order to be able to predict unseen events. The more data are available to train or calibrate the model, the more complex it can be. Many complexity control approaches exist in AI, with Solomonoff inductive inference being one of the first formal approaches, the Akaike Information Criterion the most popular, and Statistical Learning Theory arguably being the most comprehensive practical approach. In hydrology, complexity control has hardly been used so far. There are a number of reasons for that lack of interest, the more valid ones of which will be presented during the presentation. For starters, there are no readily available complexity measures for our models. Second, some unrealistic simplifications of the underlying complex physics tend to have a smoothing effect on possible model
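
    As a hedged illustration of the AIC-style complexity control invoked above (this is not code from the abstract, and the data are purely synthetic), the following Python sketch fits polynomials of increasing complexity and scores them with the Akaike Information Criterion:

        import numpy as np

        # Illustrative sketch: choosing model complexity with the Akaike
        # Information Criterion. For a least-squares fit with n points and
        # residual sum of squares RSS, AIC = n * ln(RSS / n) + 2k, where k
        # is the number of fitted parameters.
        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 50)
        y = 1.0 + 2.0 * x + rng.normal(0, 0.1, x.size)   # synthetic "observations"

        for k in range(1, 6):                  # polynomial of degree k-1, k parameters
            coeffs = np.polyfit(x, y, k - 1)
            rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
            aic = x.size * np.log(rss / x.size) + 2 * k
            print(f"{k} parameters: AIC = {aic:.1f}")
        # The 2-parameter linear model typically attains the lowest AIC;
        # higher-degree polynomials mostly fit noise and are penalized.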

  3. DTI measures track and predict motor function outcomes in stroke rehabilitation utilizing BCI technology.

    Science.gov (United States)

    Song, Jie; Nair, Veena A; Young, Brittany M; Walton, Leo M; Nigogosyan, Zack; Remsik, Alexander; Tyler, Mitchell E; Farrar-Edwards, Dorothy; Caldera, Kristin E; Sattin, Justin A; Williams, Justin C; Prabhakaran, Vivek

    2015-01-01

    Tracking and predicting motor outcomes is important in determining effective stroke rehabilitation strategies. Diffusion tensor imaging (DTI) allows for evaluation of the underlying structural integrity of brain white matter tracts and may serve as a potential biomarker for tracking and predicting motor recovery. In this study, we examined the longitudinal relationship between DTI measures of the posterior limb of the internal capsule (PLIC) and upper-limb motor outcomes in 13 stroke patients (median 20 months post-stroke) who completed up to 15 sessions of intervention using brain-computer interface (BCI) technology. Patients' upper-limb motor outcomes and PLIC DTI measures, including fractional anisotropy (FA), axial diffusivity (AD), radial diffusivity (RD), and mean diffusivity (MD), were assessed longitudinally at four time points: pre-, mid-, immediately post-, and 1 month post-intervention. DTI measures and ratios of each DTI measure comparing the ipsilesional and contralesional PLIC were correlated with patients' motor outcomes to examine the relationship between structural integrity of the PLIC and patients' motor recovery. We found that lower diffusivity and higher FA values of the ipsilesional PLIC were significantly correlated with better upper-limb motor function. Baseline DTI ratios were significantly correlated with motor outcomes measured immediately post and 1 month post BCI intervention. A few patients achieved improvements in motor recovery meeting the minimum clinically important difference (MCID). These findings suggest that upper-limb motor recovery in stroke patients receiving BCI interventions relates to the microstructural status of the PLIC. Lower diffusivity and higher FA measures of the ipsilesional PLIC contribute toward better motor recovery in the stroke-affected upper limb. DTI-derived measures may be a clinically useful biomarker in tracking and predicting motor recovery in stroke patients receiving BCI interventions.
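
    A minimal Python sketch of the ratio-and-correlate analysis described above; all FA values and motor scores below are invented for illustration, not the study's data:

        import numpy as np
        from scipy import stats

        # Hypothetical example: FA ratios (ipsilesional / contralesional PLIC)
        # correlated with an upper-limb motor score.
        fa_ipsi   = np.array([0.55, 0.48, 0.61, 0.52, 0.58, 0.45, 0.60])
        fa_contra = np.array([0.62, 0.60, 0.63, 0.61, 0.62, 0.59, 0.64])
        motor     = np.array([44,   30,   55,   38,   50,   25,   54])  # invented scores

        fa_ratio = fa_ipsi / fa_contra
        r, p = stats.pearsonr(fa_ratio, motor)
        print(f"FA ratio vs motor score: r = {r:.2f}, p = {p:.3f}")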

  4. Exposure to electromagnetic fields from smart utility meters in GB; part I) laboratory measurements.

    Science.gov (United States)

    Peyman, Azadeh; Addison, Darren; Mee, Terry; Goiceanu, Cristian; Maslanyj, Myron; Mann, Simon

    2017-05-01

    Laboratory measurements of electric fields have been carried out around examples of smart meter devices used in Great Britain. The aim was to quantify exposure of people to radiofrequency signals emitted from smart meter devices operating at 2.4 GHz, and then to compare this with international (ICNIRP) health-related guidelines and with exposures from other telecommunication sources such as mobile phones and Wi-Fi devices. The angular distribution of the electric fields from a sample of 39 smart meter devices was measured in a controlled laboratory environment. The angular direction where the power density was greatest was identified and the equivalent isotropically radiated power was determined in the same direction. Finally, measurements were carried out as a function of distance at the angles where maximum field strengths were recorded around each device. The maximum equivalent power density measured during transmission around smart meter devices at 0.5 m and beyond was 15 mW m⁻², with an estimated maximum duty factor of only 1%. One outlier device had a maximum power density of 91 mW m⁻². All power density measurements reported in this study were well below the 10 W m⁻² ICNIRP reference level for the general public. Bioelectromagnetics. 2017;38:280-294.
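
    The far-field relation linking the reported quantities can be sketched as follows; the EIRP value is an assumption chosen for illustration, and only the 1% duty factor comes from the record:

        import math

        # For a source of equivalent isotropically radiated power EIRP, the
        # power density at distance d is S = EIRP / (4 * pi * d^2); scaling by
        # the duty factor gives the time-averaged exposure.
        eirp = 0.1          # W, assumed smart-meter EIRP (illustrative)
        duty = 0.01         # 1% transmit duty factor, as estimated in the study
        for d in (0.5, 1.0, 2.0):                   # metres
            s_peak = eirp / (4 * math.pi * d**2)    # W/m^2 while transmitting
            print(f"d = {d} m: peak {s_peak*1e3:.1f} mW/m^2, "
                  f"time-averaged {s_peak*duty*1e3:.3f} mW/m^2")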

  5. The Clinical Utility of a Low Serum Ceruloplasmin Measurement in the Diagnosis of Wilson Disease.

    Science.gov (United States)

    Kelly, D; Crotty, G; O'Mullane, J; Stapleton, M; Sweeney, B; O'Sullivan, S S

    2016-01-01

    The first step in screening for potential Wilson disease is serum ceruloplasmin testing, whereby a level of less than 0.2 g/L is suggestive of the disease. We aimed to determine what proportion of an Irish population had a low ceruloplasmin level, whether low measurements were appropriately followed up and what the clinical outcomes were. We conducted a retrospective review of all serum ceruloplasmin measurements between August 2003 and October 2009 in a large tertiary referral centre in Southern Ireland. Clinical data, serum ceruloplasmin, liver function tests, urinary copper and liver biopsy reports were all recorded where available. 1573 patients had a serum ceruloplasmin measurement during the 7-year study period. 96 patients (6.1%) had a ceruloplasmin level below 0.2 g/L, suggestive of Wilson disease. There was only 1 new diagnosis. Only 27 patients (28.1%) had some form of confirmatory testing performed. In our centre's experience, the positive predictive value of a significantly low ceruloplasmin level is 11.1% (95% CI 2.91-30.3%). In practice a low serum ceruloplasmin measurement is often not followed by appropriate confirmatory testing. Measuring serum ceruloplasmin as a singular diagnostic test for Wilson disease, or as part of a battery of unselected liver screening tests, is inappropriate and low-yield.

  6. Spot size measurement of flash-radiography source utilizing the pinhole imaging method

    CERN Document Server

    Wang, Yi; Chen, Nan; Cheng, Jinming; Xie, Yutong; Liu, Yulong; Long, Quanhong

    2015-01-01

    The spot size of the x-ray source is a key parameter of a flash-radiography facility, and is usually quoted as an evaluation of the resolving power. The pinhole imaging technique is applied to measure the spot size of the Dragon-I linear induction accelerator, by which a two-dimensional spatial distribution of the source spot is obtained. Experimental measurements of the spot image are performed while the transportation and focusing of the electron beam are tuned by adjusting the currents of solenoids in the downstream section. The full-width-at-half-maximum spot size and the spot size defined from the spatial frequency at which the modulation transfer function falls to half its peak value are calculated and discussed.
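
    Both spot-size definitions can be illustrated with a short Python sketch on an assumed Gaussian source profile (synthetic data, not the Dragon-I measurements):

        import numpy as np

        # Two spot-size definitions: FWHM of the spatial profile, and the size
        # derived from the spatial frequency at which the modulation transfer
        # function (MTF) drops to half its peak.
        x = np.linspace(-10, 10, 2001)            # mm
        sigma = 1.5                               # assumed Gaussian spot width
        profile = np.exp(-x**2 / (2 * sigma**2))  # 1-D source distribution

        # FWHM directly from the profile
        above = x[profile >= 0.5 * profile.max()]
        fwhm = above.max() - above.min()

        # MTF = |FT(profile)|, normalized; find the frequency at half peak
        mtf = np.abs(np.fft.rfft(profile))
        mtf /= mtf[0]
        freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])   # cycles/mm
        f_half = freqs[np.argmin(np.abs(mtf - 0.5))]

        print(f"FWHM = {fwhm:.2f} mm, MTF half-peak frequency = {f_half:.3f} cy/mm")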

  7. Energy reduction in buildings in temperate and tropic regions utilizing a heat loss measuring device

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt

    2012-01-01

    to ACMV in the "warm countries" contribute to an enormous energy consumption and corresponding CO2 emission. In order to establish the best basis for energy renovation, it is important to have measures of the heat losses on a building façade, for optimizing the energy renovation. This paper will present … penetration through facades with the aim of reducing the costs of AC. The paper will introduce a common project between NUS (National University of Singapore), AAU (Aalborg University) and HT-Meter, the latter as the U-value Meter developer company. In the project we will measure the heat transfer in the unit W… of the project. Furthermore, this paper presents results from already conducted heat loss measurements in the temperate regions…

  8. Forecasting Paratransit Utility by Using Multinomial Logit Model: A Case Study

    Directory of Open Access Journals (Sweden)

    Waikhom Victory

    2016-10-01

    Paratransit plays an important role in the urban passenger transportation system in developing countries. Three cities, viz. Imphal East, Imphal West and Silchar in India, have been undertaken for the study. A household survey and a traffic survey were employed to collect data on paratransit users. Modelling techniques and tools were also used to forecast the utility of paratransit in the region. For this purpose, a Multinomial Logit Model (MNL) was used. A total of seven variables were considered in the model estimation, of which three are quantitative, i.e. trip length (km), travel cost (rupees) and travel time (minutes), and four are qualitative, i.e. reliability, comfort, road condition and convenience.
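
    A minimal Python sketch of such an MNL mode-choice calculation; the coefficients and attribute values below are invented for illustration and are not the study's estimates:

        import numpy as np

        # Utilities are linear in the attributes; choice probabilities follow
        # the logit (softmax) form P(m) = exp(V_m) / sum_k exp(V_k).
        beta = {"trip_km": -0.10, "cost_rs": -0.05, "time_min": -0.08, "comfort": 0.60}

        modes = {
            # trip_km, cost_rs, time_min, comfort (0/1 dummy) -- all invented
            "paratransit": (5.0, 20.0, 25.0, 0),
            "bus":         (5.0, 10.0, 40.0, 0),
            "car":         (5.0, 60.0, 15.0, 1),
        }

        v = {m: sum(b * a for b, a in zip(beta.values(), attrs))
             for m, attrs in modes.items()}
        expv = {m: np.exp(val) for m, val in v.items()}
        total = sum(expv.values())
        for m in modes:
            print(f"P({m}) = {expv[m] / total:.2f}")   # probabilities sum to 1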

  9. On the use of prior information in modelling metabolic utilization of energy in growing pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Jørgensen, Henry; Fernández, José Adalberto

    2011-01-01

    Construction of models that provide a realistic representation of metabolic utilization of energy in growing animals tends to be over-parameterized because data generated from individual metabolic studies are often sparse. In the Bayesian framework, prior information can enter the data analysis … through formal statements of probability, because model parameters are random variables and hence are assigned probability distributions (Gelman et al. 2004). The objective of the study was to introduce prior information in modelling metabolizable energy (ME) intake, protein (PD) and lipid deposition (LD) curves, resulting from a metabolism study on growing pigs of high genetic potential. A total of 17 crossbred pigs of three genders (barrows, boars and gilts) were used. Pigs were fed four diets based on barley, wheat and soybean meal supplemented with crystalline amino acids to meet Danish nutrient…

  10. Utilizing Soize's Approach to Identify Parameter and Model Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Bonney, Matthew S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Univ. of Wisconsin, Madison, WI (United States); Brake, Matthew Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    Quantifying uncertainty in model parameters is a challenging task for analysts. Soize has derived a method that is able to characterize both model and parameter uncertainty independently. The method, which assumes that some experimental data are available, is divided into seven steps. Monte Carlo analyses are performed to select the optimal dispersion variable to match the experimental data. In addition to the nominal approach, an alternative distribution can be used, with corrections that expand the scope of this method. This method is one of very few that can quantify uncertainty in the model form independently of the input parameters. Two examples are provided to illustrate the methodology, and example code is provided in the Appendix.
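
    A loose Python sketch of the dispersion-selection step, using a toy scalar model and synthetic "experimental" data (this is not the report's example code, and the matching criterion is an assumption):

        import numpy as np

        # Candidate dispersion values are tried; random model realizations are
        # drawn for each, and the value whose ensemble best matches the
        # experimental statistics is kept.
        rng = np.random.default_rng(2)
        f_nominal = 100.0                              # nominal model output (e.g. a frequency, Hz)
        f_experiments = rng.normal(100.0, 5.0, 20)     # synthetic measured values

        best = None
        for delta in (0.01, 0.02, 0.05, 0.10):         # candidate dispersion parameters
            ensemble = f_nominal * (1 + delta * rng.standard_normal(5000))
            mismatch = abs(ensemble.std() - f_experiments.std(ddof=1))
            if best is None or mismatch < best[1]:
                best = (delta, mismatch)
        print(f"selected dispersion: {best[0]}")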

  11. From Ambiguity Aversion to a Generalized Expected Utility. Modeling Preferences in a Quantum Probabilistic Framework

    CERN Document Server

    Aerts, Diederik

    2015-01-01

    Ambiguity and ambiguity aversion have been widely studied in decision theory and economics both at a theoretical and an experimental level. After Ellsberg's seminal studies challenging subjective expected utility theory (SEUT), several (mainly normative) approaches have been put forward to reproduce ambiguity aversion and Ellsberg-type preferences. However, Machina and other authors have pointed out some fundamental difficulties of these generalizations of SEUT to cope with some variants of Ellsberg's thought experiments, which has recently been experimentally confirmed. Starting from our quantum modeling approach to human cognition, we develop here a general probabilistic framework to model human decisions under uncertainty. We show that our quantum theoretical model faithfully represents different sets of data collected on both the Ellsberg and the Machina paradox situations, and is flexible enough to describe different subjective attitudes with respect to ambiguity. Our approach opens the way toward a quan...

  12. Measuring and modeling twilight's purple light

    Science.gov (United States)

    Lee, Raymond L.; Hernández-Andrés, Javier

    2003-01-01

    During many clear twilights, much of the solar sky is dominated by pastel purples. This purple light's red component has long been ascribed to transmission through and scattering by stratospheric dust and other aerosols. Clearly the vivid purples of post-volcanic twilights are related to increased stratospheric aerosol loading. Yet our time-series measurements of purple-light spectra, combined with radiative transfer modeling and satellite soundings, indicate that background stratospheric aerosols by themselves do not redden sunlight enough to cause the purple light's reds. Furthermore, scattering and extinction in both the troposphere and the stratosphere are needed to explain most purple lights.

  13. Clinical Utility of the Modified Stroop Task as a Treatment Outcome Measure: Questions Raised

    Science.gov (United States)

    Ball, Jillian R.; Mitchell, Philip B.; Touyz, Stephen W.; Griffiths, Rosalyn A.; Beumont, Pierre J. V.

    2004-01-01

    Data from an outpatient treatment trial for anorexia nervosa were examined to gain preliminary insights as to whether the modified Stroop colour-naming task might offer a useful measure of treatment outcome. It was hypothesised that interference for eating-, weight- and shape-related words on a modified version of the Stroop colour-naming task…

  14. THE GRONINGEN ACTIVITY RESTRICTION SCALE FOR MEASURING DISABILITY - ITS UTILITY IN INTERNATIONAL COMPARISONS

    NARCIS (Netherlands)

    SUURMEIJER, TPBM; DOEGLAS, DM; MOUM, T; BRIANCON, S; KROL, B; SANDERMAN, R; GUILLEMIN, F; BJELLE, A; VAMDENHEUVEL, WJA

    1994-01-01

    Objectives. The Groningen Activity Restriction Scale (GARS) is a non-disease-specific instrument to measure disability in activities of daily living (ADL) and instrumental activities of daily living (IADL). It was developed in studies of Dutch samples consisting of elderly or chronically ill people.

  15. UTILIZING THE PAKS METHOD FOR MEASURING ACROLEIN AND OTHER ALDEHYDES IN DEARS

    Science.gov (United States)

    Acrolein is a hazardous air pollutant of high priority due to its high irritation potency and other potential adverse health effects. However, a reliable method is currently unavailable for measuring airborne acrolein at typical environmental levels. In the Detroit Exposure and A...

  16. A single-item measure of social identification : Reliability, validity, and utility

    NARCIS (Netherlands)

    Postmes, Tom; Haslam, S. Alexander; Jans, Lise

    2013-01-01

    This paper introduces a single-item social identification measure (SISI) that involves rating one's agreement with the statement 'I identify with my group (or category)' followed by a 7-point scale. Three studies provide evidence of the validity (convergent, divergent, and test-retest) of SISI with a

  17. Measuring the Efficient Utilization of Medical Personnel at Navy Military Treatment Facilities

    Science.gov (United States)

    1990-06-01

    Managers of medical treatment facilities (MTFs) lack reliable performance measures that capture, in a…

  18. Analysis of biomarker utility using a PBPK/PD model for carbaryl

    Directory of Open Access Journals (Sweden)

    Martin Blake Phillips

    2014-11-01

    There are many types of biomarkers; the two common ones are biomarkers of exposure and biomarkers of effect. The utility of a biomarker for estimating exposures or predicting risks depends on the strength of the correlation between biomarker concentrations and exposure/effects. In the current study, a combined exposure and physiologically-based pharmacokinetic/pharmacodynamic (PBPK/PD) model of carbaryl was used to demonstrate the use of computational modeling for providing insight into the selection of biomarkers for different purposes. The Cumulative and Aggregate Risk Evaluation System (CARES) was used to generate exposure profiles, including magnitude and timing, for use as inputs to the PBPK/PD model. The PBPK/PD model was then used to predict blood concentrations of carbaryl and urine concentrations of its principal metabolite, 1-naphthol (1-N), as biomarkers of exposure. The PBPK/PD model also predicted acetylcholinesterase (AChE) inhibition in red blood cells (RBC) as a biomarker of effect. The correlations of these simulated biomarker concentrations with intake doses or brain AChE inhibition (as a surrogate of effects) were analyzed using a linear regression model. Results showed that 1-N in urine is a better biomarker of exposure than carbaryl in blood, and that 1-N in urine is correlated with the dose averaged over the last two days of the simulation. They also showed that RBC AChE inhibition is an appropriate biomarker of effect. This computational approach can be applied to a wide variety of chemicals to facilitate quantitative analysis of biomarker utility.

  19. Information as a Measure of Model Skill

    Science.gov (United States)

    Roulston, M. S.; Smith, L. A.

    2002-12-01

    Physicist Paul Davies has suggested that rather than the quest for laws that approximate ever more closely to "truth", science should be regarded as the quest for compressibility. The goodness of a model can be judged by the degree to which it allows us to compress data describing the real world. The "logarithmic scoring rule" is a method for evaluating probabilistic predictions of reality that turns this philosophical position into a practical means of model evaluation. This scoring rule measures the information deficit or "ignorance" of someone in possession of the prediction. A more applied viewpoint is that the goodness of a model is determined by its value to a user who must make decisions based upon its predictions. Any form of decision making under uncertainty can be reduced to a gambling scenario. Kelly showed that the value of a probabilistic prediction to a gambler pursuing the maximum return on their bets depends on their "ignorance", as determined from the logarithmic scoring rule, thus demonstrating a one-to-one correspondence between data compression and gambling returns. Thus information theory provides a way to think about model evaluation that is both philosophically satisfying and practically oriented. References: P.C.W. Davies, in "Complexity, Entropy and the Physics of Information", Proceedings of the Santa Fe Institute, Addison-Wesley, 1990; J. Kelly, Bell Sys. Tech. Journal, 35, 916-926, 1956.
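
    The scoring rule itself is one line of arithmetic; a hedged Python sketch with an invented three-outcome forecast:

        import numpy as np

        # The "ignorance" of a probabilistic forecast is -log2 of the
        # probability assigned to the outcome that actually occurred.
        forecast = np.array([0.1, 0.7, 0.2])   # probabilities for 3 possible outcomes
        observed = 1                           # index of the outcome that occurred

        ignorance = -np.log2(forecast[observed])
        print(f"Ignorance = {ignorance:.2f} bits")
        # Averaged over many forecasts, lower ignorance means better data
        # compression and, per Kelly, higher expected gambling returns.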

  20. Utilizing anisotropic Preisach-type models in the accurate simulation of magnetostriction

    Energy Technology Data Exchange (ETDEWEB)

    Adly, A.A. [Cairo Univ., Giza (Egypt). Electrical Power and Machines Dept.; Mayergoyz, I.D. [Univ. of Maryland, College Park, MD (United States). Electrical Engineering Dept.; Bergqvist, A. [Royal Inst. of Tech., Stockholm (Sweden). Dept. of Electrical Power Engineering

    1997-09-01

    Magnetostriction models are being widely used in the development of fine positioning and active vibration damping devices. This paper presents a new approach for simulating 1-D magnetostriction using 2-D anisotropic Preisach-type models. In this approach, identification of the model takes into account measured flux density versus field and strain versus field curves for different stress values. Consequently, a more accurate magnetostriction model may be obtained. Details of the identification procedure as well as experimental testing of the proposed model are given.

  1. On the Path to SunShot. Utility Regulatory and Business Model Reforms for Addressing the Financial Impacts of Distributed Solar on Utilities

    Energy Technology Data Exchange (ETDEWEB)

    Barbose, Galen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Miller, John [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sigrin, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States); Reiter, Emerson [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cory, Karlynn [National Renewable Energy Lab. (NREL), Golden, CO (United States); McLaren, Joyce [National Renewable Energy Lab. (NREL), Golden, CO (United States); Seel, Joachim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Darghouth, Naim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    Net-energy metering (NEM) has helped drive the rapid growth of distributed PV (DPV) but has raised concerns about electricity cost shifts, utility financial losses, and inefficient resource allocation. These concerns have motivated real and proposed reforms to utility regulatory and business models. This report explores the challenges and opportunities associated with such reforms in the context of the U.S. Department of Energy's SunShot Initiative. Most of the reforms to date address NEM concerns by reducing the benefits provided to DPV customers and thus constraining DPV deployment. Eliminating NEM nationwide, by compensating exports of PV electricity at wholesale rather than retail rates, could cut cumulative DPV deployment by 20% in 2050 compared with a continuation of current policies. This would slow the PV cost reductions that arise from larger scale and market certainty. It could also thwart achievement of the SunShot deployment goals even if the initiative's cost targets are achieved. This undesirable prospect is stimulating the development of alternative reform strategies that address concerns about distributed PV compensation without inordinately harming PV economics and growth. These alternatives fall into the categories of facilitating higher-value DPV deployment, broadening customer access to solar, and aligning utility profits and earnings with DPV. Specific strategies include utility ownership and financing of DPV, community solar, distribution network operators, services-driven utilities, performance-based incentives, enhanced utility system planning, pricing structures that incentivize high-value DPV configurations, and decoupling and other ratemaking reforms that reduce regulatory lag. These approaches represent near- and long-term solutions for preserving the legacy of the SunShot Initiative.

  2. Stakeholder Utility: Perspectives on School-Wide Data for Measurement, Feedback, and Evaluation

    Science.gov (United States)

    Upreti, Gita; Liaupsin, Carl; Koonce, Dan

    2010-01-01

    More than 10,000 schools in the United States have adopted the multi-tiered model of behavioral and academic supports known as school-wide positive behavior interventions and supports (PBIS). Schools and districts adopting, implementing, and sustaining PBIS are charged with collecting and disseminating data generated by and related to students,…

  3. Potential clinical utility of a fibre optic-coupled dosemeter for dose measurements in diagnostic radiology.

    Science.gov (United States)

    Jones, A Kyle; Hintenlang, David

    2008-01-01

    Many types of dosemeters have been investigated for absorbed dose measurements in diagnostic radiology, including ionisation chambers, metal-oxide semiconductor field-effect transistor dosemeters, thermoluminescent dosemeters, optically stimulated luminescence detectors, film and diodes. Each of the aforementioned dosemeters suffers from a critical limitation, either the need to interrogate, or read, the dosemeter to retrieve dose information or large size to achieve adequate sensitivity. This work presents an evaluation of a fibre optic-coupled dosemeter (FOCD) for use in diagnostic radiology dose measurement. This dosemeter is small, tissue-equivalent and capable of providing true real-time dose information. The FOCD has been evaluated for dose linearity, angular dependence, sensitivity and energy dependence at energies, beam qualities and beam quantities relevant to diagnostic radiology. The FOCD displayed excellent dose linearity and high sensitivity, while exhibiting minimal angular dependence of response. However, the dosemeter does exhibit positive energy dependence, and is subject to attenuation of response when bent.

  4. Utility of telomere length measurements for age determination of humpback whales

    Directory of Open Access Journals (Sweden)

    Morten Tange Olsen

    2014-12-01

    This study examines the applicability of telomere length measurements by quantitative PCR as a tool for minimally invasive age determination of free-ranging cetaceans. We analysed telomere length in skin samples from 28 North Atlantic humpback whales (Megaptera novaeangliae), ranging from 0 to 26 years of age. The results suggested a significant correlation between telomere length and age in humpback whales. However, telomere length was highly variable among individuals of similar age, suggesting that telomere length measured by quantitative PCR is an imprecise determinant of age in humpback whales. The observed variation in individual telomere length was found to be a function of both experimental and biological variability, with the latter perhaps reflecting patterns of inheritance, resource allocation trade-offs, and stochasticity of the marine environment.

  5. Eddy current nondestructive testing device for measuring variable characteristics of a sample utilizing Walsh functions

    Science.gov (United States)

    Libby, Hugo L.; Hildebrand, Bernard P.

    1978-01-01

    An eddy current testing device for measuring variable characteristics of a sample generates a signal which varies with variations in such characteristics. A signal expander samples at least a portion of this generated signal and expands the sampled signal on a selected basis of square waves or Walsh functions to produce a plurality of signal components representative of the sampled signal. A network combines these components to provide a display of at least one of the characteristics of the sample.
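
    A small Python sketch of a square-wave (Walsh/Hadamard) expansion of a sampled signal, in the spirit of the device described above; the sample values are invented, and Hadamard (rather than sequency) ordering is assumed:

        import numpy as np
        from scipy.linalg import hadamard

        # Rows of the Hadamard matrix are +/-1 square waves; projecting the
        # sampled signal onto them and dividing by n gives the coefficients.
        n = 8                                  # number of samples (power of two)
        H = hadamard(n)                        # n x n matrix of +/-1 square waves
        signal = np.array([1.0, 1.2, 0.8, 1.1, 3.0, 3.2, 2.9, 3.1])

        coeffs = H @ signal / n                # Walsh-Hadamard coefficients
        reconstructed = H.T @ coeffs           # H is symmetric, H @ H.T = n * I
        print(np.round(coeffs, 3))
        print(np.allclose(reconstructed, signal))   # exact reconstruction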

  6. The Utilization of Agro-climatic Resources and Prevention Measures for Meteorological Disasters in Fushun

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    Based on the meteorological data for Fushun, Qingyuan and Xinbing from 1961 to 2008, the status quo of the major agro-climatic resources in Fushun was analyzed. The abundant rainfall, sufficient sunshine and rich thermal resources are suitable for the development of modern agricultural production. Specific measures for the effective use of climate resources were put forward according to the geographical location and climatic characteristics of Fushun. The advantages of agro-climate resources were exerted for large edible…

  7. Measuring midkine: the utility of midkine as a biomarker in cancer and other diseases.

    Science.gov (United States)

    Jones, D R

    2014-06-01

    Midkine (MK) is a pleiotropic growth factor prominently expressed during embryogenesis but down-regulated to negligible levels in healthy adults. Many published studies have demonstrated striking MK overexpression compared with healthy controls in various pathologies, including ischaemia, inflammation, autoimmunity and, most notably, in many cancers. MK expression is detectable in biopsies of diseased, but not healthy, tissues. Significantly, because it is a soluble cytokine, elevated MK is readily apparent in the blood and other body fluids such as urine and CSF, making MK a relatively convenient, accessible, non-invasive and inexpensive biomarker for population screening and early disease detection. The first diagnostic tests that quantify MK are just now receiving regulatory clearance and entering the clinic. This review examines the current state of knowledge pertaining to MK as a biomarker and highlights promising indications and clinical settings where measuring MK could make a difference to patient treatment. I also raise outstanding questions about reported variants of MK as well as MK's bio-distribution in vivo. Answering these questions in future studies will enhance our understanding of the significance of measured MK levels in both patients and healthy subjects, and may reveal further opportunities for measuring MK to diagnose disease. MK has already proven to be a biomarker that can significantly improve detection, management and treatment of cancer, and there is significant promise for developing further MK-based diagnostics in the future.

  8. Optimizing steam flood performance utilizing a new and highly accurate two phase steam measurement system

    Energy Technology Data Exchange (ETDEWEB)

    Huff, B. D.; Warren, P. B. [CalResources LLC (Canada); Whorff, F. [ITT Barton (Canada)

    1995-11-01

    The development of a two-phase steam measurement system was documented. The system consists of a 'V'-cone differential pressure device and a vortex meter velocity device in series, through which the steam flows. Temperature and pressure sensors are electronically interfaced with a data logging system. The design was described as being very simple and rugged and, consequently, well suited to monitoring in the field. Steam quality measurements were made in the Kern River Field and the Coalinga Field thermal projects using a surface steam separator. In steam flood operations, steam cost is very high, hence appropriate distribution of the steam can result in significant cost reduction. This technology allows the measurement of steam flow and quality at any point in the steam distribution system. The metering system's orifice meter was found to have a total average error of 45%, with 25% of that attributable to the 'cold leg' problem. Installation of the metering system was expected to result in a steam use reduction of 8%, without any impact on production. Steam re-distribution could result in a potential oil production increase of 10%. 12 refs., 8 tabs., 9 figs.

  9. Discussing Landscape Compositional Scenarios Generated with Maximization of Non-Expected Utility Decision Models Based on Weighted Entropies

    Directory of Open Access Journals (Sweden)

    José Pinto Casquilho

    2017-02-01

    The search for hypothetical optimal solutions of landscape composition is a major issue in landscape planning, and it can be outlined in a two-dimensional decision space involving economic value and landscape diversity, the latter being considered a potential safeguard to the provision of services and externalities not accounted for in the economic value. In this paper, we use decision models with different utility valuations combined with weighted entropies respectively incorporating rarity factors associated with the Gini-Simpson and Shannon measures. A small example of this framework is provided and discussed for landscape compositional scenarios in the region of Nisa, Portugal. The optimal solutions relative to the different cases considered are assessed in the two-dimensional decision space using a benchmark indicator. The results indicate that the likely best combination is achieved by the solution using Shannon weighted entropy and a square root utility function, corresponding to a risk-averse behavior associated with the precautionary principle linked to safeguarding landscape diversity, anchoring for ecosystem services provision and other externalities. Further developments are suggested, mainly those relative to the hypothesis that the decision models outlined here could be used to revisit the stability-complexity debate in the field of ecological studies.
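
    A hedged Python sketch of a rarity-weighted, Shannon-type entropy of a landscape composition; the proportions and the simple rarity weighting w_i = 1 - p_i are assumptions for illustration, not the paper's exact formulation:

        import numpy as np

        # Weighted entropy H_w = -sum_i w_i * p_i * log(p_i), where each
        # land-cover proportion p_i carries a weight w_i (here a rarity factor).
        p = np.array([0.50, 0.30, 0.15, 0.05])   # landscape composition (sums to 1)
        w = 1.0 - p                              # assumed rarity weighting: rarer classes weigh more

        H_shannon  = -np.sum(p * np.log(p))          # unweighted Shannon entropy
        H_weighted = -np.sum(w * p * np.log(p))      # rarity-weighted entropy
        print(f"Shannon: {H_shannon:.3f}, rarity-weighted: {H_weighted:.3f}")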

  10. The episodic random utility model unifies time trade-off and discrete choice approaches in health state valuation

    Directory of Open Access Journals (Sweden)

    Craig Benjamin M

    2009-01-01

    Background: To present an episodic random utility model that unifies time trade-off and discrete choice approaches in health state valuation. Methods: First, we introduce two alternative random utility models (RUMs) for health preferences: the episodic RUM and the more common instant RUM. For the interpretation of time trade-off (TTO) responses, we show that the episodic model implies a coefficient estimator, and the instant model implies a mean slope estimator. Secondly, we demonstrate these estimators and the differences between the estimates for 42 health states using TTO responses from the seminal Measurement and Valuation in Health (MVH) study conducted in the United Kingdom. Mean slopes are estimated with and without Dolan's transformation of worse-than-death (WTD) responses. Finally, we demonstrate an exploded probit estimator, an extension of the coefficient estimator for discrete choice data that accommodates both TTO and rank responses. Results: By construction, mean slopes are less than or equal to coefficients, because slopes are fractions and, therefore, magnify downward errors in WTD responses. The Dolan transformation of WTD responses causes mean slopes to increase in similarity to coefficient estimates, yet they are not equivalent (i.e., absolute mean difference = 0.179). Unlike mean slopes, coefficient estimates demonstrate strong concordance with rank-based predictions (Lin's rho = 0.91). Combining TTO and rank responses under the exploded probit model improves the identification of health state values, decreasing the average width of confidence intervals from 0.057 to 0.041 compared to TTO-only results. Conclusion: The episodic RUM expands upon the theoretical framework underlying health state valuation and contributes to health econometrics by motivating the selection of coefficient and exploded probit estimators for the analysis of TTO and rank responses. In future MVH surveys, sample size requirements may be reduced through
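
    The two TTO estimators can be contrasted in a few lines of Python; the trade-off responses below are invented, and the regression-through-the-origin reading of the coefficient estimator is an assumption for illustration:

        import numpy as np

        # Each respondent trades t years in the health state for x years in
        # full health; the state's value can be read as an average of ratios
        # (mean slope) or as a least-squares slope through the origin (coefficient).
        t = np.array([10.0, 5.0, 10.0, 2.0])   # years offered in the health state
        x = np.array([7.0, 3.0, 6.0, 1.0])     # equivalent years in full health

        mean_slope  = np.mean(x / t)                 # instant-RUM reading
        coefficient = np.sum(x * t) / np.sum(t * t)  # episodic-RUM reading
        print(f"mean slope = {mean_slope:.3f}, coefficient = {coefficient:.3f}")
        # With heterogeneous t the mean slope magnifies downward errors,
        # so mean slope <= coefficient, as the abstract notes.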

  11. Measuring Model-Based High School Science Instruction: Development and Application of a Student Survey

    Science.gov (United States)

    Fulmer, Gavin W.; Liang, Ling L.

    2013-02-01

    This study tested a student survey to detect differences in instruction between teachers in a modeling-based science program and comparison group teachers. The Instructional Activities Survey measured teachers' frequency of modeling, inquiry, and lecture instruction. Factor analysis and Rasch modeling identified three subscales, Modeling and Reflecting, Communicating and Relating, and Investigative Inquiry. As predicted, treatment group teachers engaged in modeling and inquiry instruction more than comparison teachers, with effect sizes between 0.55 and 1.25. This study demonstrates the utility of student report data in measuring teachers' classroom practices and in evaluating outcomes of a professional development program.

  12. Development of an innovative spacer grid model utilizing computational fluid dynamics within a subchannel analysis tool

    Science.gov (United States)

    Avramova, Maria

    In the past few decades the need for improved nuclear reactor safety analyses has led to a rapid development of advanced methods for multidimensional thermal-hydraulic analyses. These methods have become progressively more complex in order to account for the many physical phenomena anticipated during steady state and transient Light Water Reactor (LWR) conditions. The advanced thermal-hydraulic subchannel code COBRA-TF (Thurgood, M. J. et al., 1983) is used worldwide for best-estimate evaluations of the nuclear reactor safety margins. In the framework of a joint research project between the Pennsylvania State University (PSU) and AREVA NP GmbH, the theoretical models and numerics of COBRA-TF have been improved. Under the name F-COBRA-TF, the code has been subjected to an extensive verification and validation program and has been applied to variety of LWR steady state and transient simulations. To enable F-COBRA-TF for industrial applications, including safety margins evaluations and design analyses, the code spacer grid models were revised and substantially improved. The state-of-the-art in the modeling of the spacer grid effects on the flow thermal-hydraulic performance in rod bundles employs numerical experiments performed by computational fluid dynamics (CFD) calculations. Because of the involved computational cost, the CFD codes cannot be yet used for full bundle predictions, but their capabilities can be utilized for development of more advanced and sophisticated models for subchannel-level analyses. A subchannel code, equipped with improved physical models, can be then a powerful tool for LWR safety and design evaluations. The unique contributions of this PhD research are seen as development, implementation, and qualification of an innovative spacer grid model by utilizing CFD results within a framework of a subchannel analysis code. Usually, the spacer grid models are mostly related to modeling of the entrainment and deposition phenomena and the heat

  13. A Comparative Study of Systems of Utility Model Patent Search Report in Mainland China and Taiwan Region

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    I. An overview In the patent systems of the civil law countries, the utility model patent, also known as the "petty invention", is granted to protect petty inventions that are not highly inventive, but very useful. Provisions concerning the utility model patent can be found, for example, in the patent systems of Germany and Japan [1]. After the patent system was launched in mainland China on 1 April 1985, the utility model patent was well received by the industry thanks to the adoption of the "preliminary examinat…

  14. Diffusion and sedimentation interaction parameters for measuring the second virial coefficient and their utility as predictors of protein aggregation.

    Science.gov (United States)

    Saluja, Atul; Fesinmeyer, R Matthew; Hogan, Sabine; Brems, David N; Gokarn, Yatin R

    2010-10-20

    The concentration-dependence of the diffusion and sedimentation coefficients (k_D and k_s, respectively) of a protein can be used to determine the second virial coefficient (B₂), a parameter valuable in predicting protein-protein interactions. Accurate measurement of B₂ under physiologically and pharmaceutically relevant conditions, however, requires independent measurement of k_D and k_s via orthogonal techniques. We demonstrate this by utilizing sedimentation velocity (SV) and dynamic light scattering (DLS) to analyze solutions of hen-egg white lysozyme (HEWL) and a monoclonal antibody (mAb1) in different salt solutions. The accuracy of the SV-DLS method was established by comparing measured and literature B₂ values for HEWL. In contrast to the assumptions necessary for determining k_D and k_s via SV alone, k_D and k_s were of comparable magnitudes, and solution conditions were noted for both HEWL and mAb1 under which 1) k_D and k_s assumed opposite signs, and 2) k_D ≥ k_s. Further, we demonstrate the utility of k_D and k_s as qualitative predictors of protein aggregation through agitation and accelerated stability studies. Aggregation of mAb1 correlated well with B₂, k_D, and k_s, thus establishing the potential for k_D to serve as a high-throughput predictor of protein aggregation.
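
    A sketch of how the two slopes are extracted from the orthogonal measurements; the data are synthetic, and the linearized forms D(c) = D0(1 + k_D c) and s(c) = s0(1 - k_s c) are the standard first-order expansions, not values from the paper:

        import numpy as np

        # DLS gives D(c); SV gives s(c); each interaction parameter is the
        # fitted slope normalized by the fitted intercept (infinite dilution value).
        c = np.array([1.0, 2.0, 5.0, 10.0, 20.0])     # g/L
        D = 1.1e-10 * (1 + 0.012 * c)                 # m^2/s, synthetic DLS data
        s = 6.2e-13 * (1 - 0.008 * c)                 # s, synthetic SV data

        bD, aD = np.polyfit(c, D, 1)                  # D ~= aD + bD * c
        bs, as_ = np.polyfit(c, s, 1)
        kD = bD / aD                                  # L/g
        ks = -bs / as_
        print(f"kD = {kD*1e3:.1f} mL/g, ks = {ks*1e3:.1f} mL/g")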

  15. Simulation of Current Measurement Using Magnetic Sensor Arrays and Its Error Model

    Institute of Scientific and Technical Information of China (English)

    WANG Jing; YAO Jian-jun; WANG Jian-hua

    2004-01-01

    Magnetic sensor arrays are proposed to measure electric current in a non-contact way. In order to achieve higher accuracy, signal processing techniques for magnetic sensor arrays are utilized. Simulation techniques are necessary to study the factors influencing the accuracy of current measurement. This paper presents a simulation method to estimate the impact of the sensing area and position of sensors on the accuracy of current measurement. Several error models are built to support the computer-aided design of magnetic sensor arrays.
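
    The underlying idea can be sketched in Python via Ampère's law for a long straight conductor; the geometry, sensor count and noise level are assumptions for illustration:

        import numpy as np

        # Sensors on a circle of radius r around the conductor each read the
        # tangential field B = mu0 * I / (2 * pi * r); averaging the array
        # readings (a discrete Ampere's-law loop) recovers the current I.
        mu0 = 4e-7 * np.pi
        I_true, r, n_sensors = 100.0, 0.05, 8

        rng = np.random.default_rng(1)
        B_readings = mu0 * I_true / (2 * np.pi * r) * (1 + rng.normal(0, 0.01, n_sensors))

        I_est = np.mean(B_readings) * 2 * np.pi * r / mu0
        print(f"estimated current: {I_est:.2f} A (true {I_true} A)")
        # Off-center conductors or external fields perturb individual sensors;
        # error models of the kind described above quantify such effects.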

  16. Modeling and Analysis Compute Environments, Utilizing Virtualization Technology in the Climate and Earth Systems Science domain

    Science.gov (United States)

    Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.

    2010-12-01

    Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis and compute environments that are "archivable", transferable and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the costs of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others who are interested in performing an academic exercise but don't yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess the feasibility, performance overhead costs, reusability, and transferability. We present a list of the pros and cons as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.

  17. INVESTIGATION OF QUANTIFICATION OF FLOOD CONTROL AND WATER UTILIZATION EFFECT OF RAINFALL INFILTRATION FACILITY BY USING WATER BALANCE ANALYSIS MODEL

    OpenAIRE

    文, 勇起; BUN, Yuki

    2013-01-01

    In recent years, much flood damage and drought attributed to urbanization has occurred. At present, infiltration facilities are suggested as a solution to these problems. Based on this background, the purpose of this study is the investigation and quantification of the flood control and water utilization effects of rainfall infiltration facilities by using a water balance analysis model. Key words: flood control, water utilization, rainfall infiltration facility

  18. Measurement variability and sincerity of effort: clinical utility of isokinetic strength coefficient of variation scores.

    Science.gov (United States)

    Birmingham, T B; Kramer, J F; Speechley, M; Chesworth, B M; MacDermid, J

    1998-06-01

    Although the use of measures of strength variability as a means of judging sincerity of effort is becoming common practice, the accuracy of doing so has been questioned. Coefficient of variation (CV) cut-off points, indicating the upper limit of variability for repeated maximal efforts, are routinely used to identify workers providing submaximal efforts during various strength tests. However, the stability of the CV itself has not been considered when comparing an individual's observed CV score to these cut-off points. The purpose of the present study was to examine the day-to-day variability of the CV calculated from maximal isokinetic knee extension efforts, and to describe how this measurement error affects the accuracy of the CV as a distinguishing criterion between maximal and submaximal efforts. Thirty-one healthy males (mean age 25 +/- 4.5 years) completed three maximal and three submaximal isokinetic knee extension efforts on two separate occasions. Although submaximal CVs were significantly greater than maximal CVs (15.6 versus 3.7%; p < 0.01), there was considerable overlap between submaximal and maximal CV frequency distributions. More importantly, an individual observed CV could vary +/- 3.1% as a result of day-to-day variation or measurement error. This range in possible CV scores should be considered when comparing an individual's score to proposed cut-off points. Since individual CVs vary considerably from day-to-day, and since precise cut-off values distinguishing between maximal and submaximal conditions cannot be identified, CV scores must be interpreted cautiously, and the potential errors in relying extensively on this approach to identifying insincere efforts should be recognised.
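
    The CV computation at the heart of the test is simple; a Python sketch with invented torque values:

        import numpy as np

        # Coefficient of variation of repeated efforts, as a percentage:
        # CV = 100 * sample standard deviation / mean.
        def cv_percent(efforts):
            efforts = np.asarray(efforts, dtype=float)
            return 100.0 * efforts.std(ddof=1) / efforts.mean()

        maximal    = [182.0, 188.0, 179.0]   # Nm, three repeated maximal efforts (invented)
        submaximal = [120.0, 155.0, 98.0]    # Nm, three intentionally submaximal efforts (invented)

        print(f"maximal CV    = {cv_percent(maximal):.1f}%")
        print(f"submaximal CV = {cv_percent(submaximal):.1f}%")
        # A fixed cut-off flags submaximal effort, but the study above shows an
        # individual's CV can itself vary by +/- 3.1% from day to day.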

  19. Measuring equity in utilization of emergency obstetric care at Wolisso Hospital in Oromiya, Ethiopia: a cross sectional study

    OpenAIRE

    Wilunda, Calistus; Putoto, Giovanni; Manenti, Fabio; Castiglioni, Maria; Azzimonti, Gaetano; Edessa, Wagari; Atzori, Andrea; Merialdi, Mario; Betrán, Ana Pilar; Vogel, Joshua; Criel, Bart

    2013-01-01

    Introduction Improving equity in access to services for the treatment of complications that arise during pregnancy and childbirth, namely Emergency Obstetric Care (EmOC), is fundamental if maternal and neonatal mortality are to be reduced. Consequently, there is a growing need to monitor equity in access to EmOC. The objective of this study was to develop a simple questionnaire to measure equity in utilization of EmOC at Wolisso Hospital, Ethiopia and compare the wealth status of EmOC users w...

  1. Noiseless Quantum Measurement and Squeezing of Microwave Fields Utilizing Mechanical Vibrations.

    Science.gov (United States)

    Ockeloen-Korppi, C F; Damskägg, E; Pirkkalainen, J-M; Heikkilä, T T; Massel, F; Sillanpää, M A

    2017-03-10

    A process which strongly amplifies both quadrature amplitudes of an oscillatory signal necessarily adds noise. Alternatively, if the information in one quadrature is lost in phase-sensitive amplification, it is possible to completely reconstruct the other quadrature. Here we demonstrate such a nearly perfect phase-sensitive measurement using a cavity optomechanical scheme, characterized by an extremely small added noise of less than 0.2 quanta. The device also strongly squeezes microwave radiation, by 8 dB below vacuum. A source of bright squeezed microwaves opens up applications in the manipulation of quantum systems, and noiseless amplification can be used even at modest cryogenic temperatures.

  2. Measurement of ERP Utilization Level of Enterprises: The Sample of Province Aydın

    Directory of Open Access Journals (Sweden)

    Özel Sebetci

    2014-06-01

    The aim of this study is to measure the ERP usage level of enterprises in Aydın. Data were obtained from 83 enterprises in Aydın via questionnaires. Data analysis showed that the enterprises mostly had a high level of computer integration and production technology. However, it was found that these enterprises did not use ERP systems at a high level. Correlations between enterprise size (by number of employees), company revenue in 2012 and ERP usage levels were established by chi-square tests. Correlation analysis showed that there was a significant and positive correlation between ERP characteristics and the strategic advantages of ERP.

  3. Measurement of the dynamic viscosity of hybrid engine oil - CuO-MWCNT nanofluid, development of a practical viscosity correlation and utilizing the artificial neural network

    Science.gov (United States)

    Aghaei, Alireza; Khorasanizadeh, Hossein; Sheikhzadeh, Ghanbar Ali

    2017-07-01

    The main objectives of this study have been measurement of the dynamic viscosity of the CuO-MWCNTs/SAE 5w-50 hybrid nanofluid, utilization of artificial neural networks (ANN) and development of a new viscosity model. The new nanofluid has been prepared by a two-stage procedure with volume fractions of 0.05, 0.1, 0.25, 0.5, 0.75 and 1%. Then, utilizing a Brookfield viscometer, its dynamic viscosity has been measured for temperatures of 5, 15, 25, 35, 45 and 55 °C. The experimental results demonstrate that the viscosity increases with increasing nanoparticle volume fraction and decreases with increasing temperature. Based on the experimental data, the maximum and minimum nanofluid viscosity enhancements, when the volume fraction increases from 0.05 to 1%, are 35.52% and 12.92% at constant temperatures of 55 and 15 °C, respectively. The higher viscosity of engine oil at higher temperatures is an advantage, so this result is important. The nanofluid viscosity magnitudes measured at various shear rates show that this hybrid nanofluid is Newtonian. An ANN model has been employed to predict the viscosity of the CuO-MWCNTs/SAE 5w-50 hybrid nanofluid, and the results showed that the ANN can estimate the viscosity efficiently and accurately. Eventually, for viscosity estimation, a new temperature- and volume-fraction-based third-degree polynomial empirical model has been developed. The comparison shows that this model is in good agreement with the experimental data.
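
    One way to realize such a correlation is an ordinary least-squares fit of a cubic polynomial in T and φ; the functional form and every data point below are assumptions for illustration, not the authors' published correlation:

        import numpy as np

        # Fit mu(T, phi) with terms up to third degree via linear least squares.
        T   = np.array([5, 15, 25, 35, 45, 55, 5, 15, 25, 35, 45, 55], dtype=float)  # deg C
        phi = np.array([0.05]*6 + [1.0]*6)                                           # vol %
        mu  = np.array([410, 250, 160, 105, 72, 50, 480, 300, 195, 130, 92, 68.0])   # mPa s (invented)

        # Design matrix with polynomial terms in T and phi
        A = np.column_stack([np.ones_like(T), T, phi, T*phi, T**2, phi**2, T**3, phi**3])
        coef, *_ = np.linalg.lstsq(A, mu, rcond=None)
        mu_hat = A @ coef
        print(f"max fit error: {np.max(np.abs(mu_hat - mu)):.1f} mPa s")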

  4. Baseline comparison of three health utility measures and the feeling thermometer among participants in the action to control cardiovascular risk in diabetes trial

    Directory of Open Access Journals (Sweden)

    Raisch Dennis W

    2012-07-01

    Full Text Available Abstract Background Health utility (HU) measures are used as overall measures of quality of life and to determine quality-adjusted life years (QALYs) in economic analyses. We compared baseline values of three HUs, the Short Form 6 Dimensions (SF-6D) and the Health Utilities Index Mark II and Mark III (HUI2 and HUI3), and the feeling thermometer (FT) among type 2 diabetes participants in the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial. We assessed relationships between HU and FT values and patient demographics and clinical variables. Methods ACCORD was a randomized clinical trial to test whether intensive control of glucose, blood pressure and lipids can reduce the risk of major cardiovascular disease (CVD) events in type 2 diabetes patients at high risk of CVD. The health-related quality of life (HRQOL) sub-study includes 2,053 randomly selected participants. Intraclass correlations (ICCs) and agreement between measures by quartile were used to evaluate relationships between the HUs and the FT. Multivariable regression models specified relationships between patient variables and each HU and the FT. Results The ICCs were 0.245 for FT/SF-6D, 0.313 for HUI3/SF-6D, 0.437 for HUI2/SF-6D, 0.338 for FT/HUI2, 0.337 for FT/HUI3 and 0.751 for HUI2/HUI3. Conclusions The agreement between the different HUs was poor except for the two HUI measures; therefore HU values derived from different measures may not be comparable. The FT had low agreement with the HUs. The relationships between HUs and demographic and clinical measures demonstrate how severity of diabetes and other clinical and demographic factors are associated with HU and FT measures. Trial registration ClinicalTrials.gov Identifier: NCT00000620

  5. Method for Non-Intrusively Identifying a Contained Material Utilizing Uncollided Nuclear Transmission Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, John L.; Stephens, Alan G.; Grover, Blaine S.

    1999-02-26

    An improved nuclear diagnostic method identifies a contained target material by measuring on-axis, mono-energetic uncollided particle radiation transmitted through a target material for two penetrating radiation beam energies, and applying specially developed algorithms to estimate a ratio of macroscopic neutron cross-sections for the uncollided particle radiation at the two energies, where the penetrating radiation is a neutron beam, or a ratio of linear attenuation coefficients for the uncollided particle radiation at the two energies, where the penetrating radiation is a gamma-ray beam. Alternatively, the measurements are used to derive a minimization formula based on the macroscopic neutron cross-sections for the uncollided particle radiation at the two neutron beam energies, or the linear attenuation coefficients for the uncollided particle radiation at the two gamma-ray beam energies. A candidate target material database, including known macroscopic neutron cross-sections or linear attenuation coefficients for target materials at the selected neutron or gamma-ray beam energies, is used to approximate the estimated ratio or to solve the minimization formula, such that the identity of the contained target material is discovered.
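
    The core of the method is that, for a narrow (uncollided) beam, the Beer-Lambert attenuation law makes the unknown material thickness cancel in the ratio of log-attenuations at the two energies. A minimal sketch of that ratio-plus-database-lookup logic follows; all cross-section values are hypothetical, not taken from any real database.

```python
# Sketch of the two-energy uncollided-transmission idea (values hypothetical).
# For a narrow beam, I = I0 * exp(-Sigma * x), so the unknown thickness x
# cancels in the ratio of log-attenuations at the two beam energies:
#   ln(I0_1/I_1) / ln(I0_2/I_2) = Sigma(E1) / Sigma(E2)
import math

def log_attenuation_ratio(I0_1, I_1, I0_2, I_2):
    return math.log(I0_1 / I_1) / math.log(I0_2 / I_2)

# Hypothetical candidate database: macroscopic cross-sections (1/cm) at E1, E2
candidates = {
    "water":   (0.60, 0.35),
    "ethanol": (0.55, 0.30),
    "nitrate": (0.42, 0.28),
}

def identify(I0_1, I_1, I0_2, I_2):
    measured = log_attenuation_ratio(I0_1, I_1, I0_2, I_2)
    # Pick the material whose cross-section ratio best matches (minimization)
    return min(candidates,
               key=lambda m: abs(candidates[m][0] / candidates[m][1] - measured))

print(identify(1000.0, 165.0, 1000.0, 350.0))  # ratio ~1.72 -> "water"
```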

  6. Development of a deep inspiration breath-hold system for radiotherapy utilizing a laser distance measurer.

    Science.gov (United States)

    Jensen, Christer Andre; Skottner, Nils; Frengen, Jomar; Lund, Jo-Åsmund

    2017-01-01

    Deep inspiration breath-hold (DIBH) is a technique for treating left-sided breast cancer (LSBC). In modern radiotherapy, one of the main aims is to exclude the heart from the beam aperture with an individualized beam design for LSBC. A deep inhalation raises the chest wall while the lung volume increases, which pushes the heart away from the breast to be treated. There are a few commercial DIBH systems, both invasive and noninvasive. We present an alternative noninvasive DIBH system based upon an industrial laser distance measurer. This system can be installed in a treatment room at low cost; it is very easy to use and requires a limited amount of training for the personnel and the patient. The system is capable of measuring the position of the chest wall with high frequency and precision in real time. The patient views their breathing curve through video glasses and receives instructions during the treatment session. The system is well tolerated by test subjects owing to its noninvasiveness. © 2016 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  7. Religious social capital: Its measurement and utility in the study of the social determinants of health

    Science.gov (United States)

    Maselko, Joanna; Hughes, Cayce; Cheney, Rose

    2014-01-01

    As a social determinant of health, religiosity remains poorly understood, despite the prevalence of religious activity and the prominence of religious institutions in most societies. This paper introduces a working measure of religious social capital and presents preliminary associations with neighborhood social capital and urban stressors. Religious social capital is defined as the social resources available to individuals and groups through their social connections with a religious community. Domains covered include group membership, social integration, values/norms, bonding/bridging trust, and social support. Cross-sectional data come from a convenience sample of 104 community-dwelling adults residing in a single urban neighborhood in a large US city, who also provided information on neighborhood social capital and experiences of urban stressors. Results suggest that religious social capital is a valid construct that can be reliably measured. All indicators of religious social capital were higher among those who frequently attended religious services, with the exception of bridging trust (trust of people from different religious groups). A weak inverse association was also observed between religious and neighborhood social capital levels. Levels of religious social capital were correlated with higher levels of reported urban stressors, while neighborhood social capital was correlated with lower urban stressor levels. A significant percentage of the sample was unaffiliated with a religious tradition, and these individuals were more likely to be male, young and more highly educated. Social capital is a promising construct to help elucidate the influence of religion on population health. PMID:21802182

  8. Smith-Purcell experiment utilizing a field-emitter array cathode measurements of radiation

    CERN Document Server

    Ishizuka, H; Yokoo, K; Shimawaki, H; Hosono, A

    2001-01-01

    Smith-Purcell (SP) radiation at wavelengths of 350-750 nm was produced in a tabletop experiment using a field-emitter array (FEA) cathode. The electron gun was 5 cm long, and a 25 mm × 25 mm holographic replica grating was placed behind the slit provided in the anode. A regulated DC power supply accelerated electron beams in excess of 10 μA up to 45 keV, while a small Van de Graaff generator accelerated smaller currents to higher energies. The grating had a 0.556 μm period, 30° blaze and a 0.2 μm thick aluminum coating. Spectral characteristics of the radiation were measured both manually and automatically; in the latter case, the spectrometer was driven by a stepping motor to scan the wavelength, and AD-converted signals from a photomultiplier tube were processed by a personal computer. The measurement, made at 80° relative to the electron beam, showed good agreement with theoretical wavelengths of the SP radiation. Diffraction orders were -2 and -3 for beam energies higher than 45 keV, -3 to -5 ...

  9. Lack of utility of measuring serum bilirubin concentration in distinguishing perforation status of pediatric appendicitis.

    Science.gov (United States)

    Bonadio, William; Bruno, Santina; Attaway, David; Dharmar, Logesh; Tam, Derek; Homel, Peter

    2017-06-01

    Pediatric appendicitis is a common, potentially serious condition. Determining perforation status is crucial to planning effective management. The objective was to determine the efficacy of serum total bilirubin concentration [STBC] in distinguishing perforation status in children with appendicitis. Retrospective review of 257 cases of appendicitis in children who received an abdominal CT scan and measurement of STBC. There were 109 with perforation vs 148 without perforation. Although elevated STBC was significantly more common in those with [36%] vs without perforation [22%], the mean difference in elevated values between groups [0.1 mg/dL] was clinically insignificant. Higher degrees of hyperbilirubinemia [>2 mg/dL] were rarely encountered [5%]. Predictive values for elevated STBC in distinguishing perforation outcome were imprecise [sensitivity 38.5%, specificity 78.4%, PPV 56.8%, NPV 63.4%]. ROC curve analysis of multiple clinical and other laboratory factors for predicting perforation status was unenhanced by adding the STBC variable. Specific analysis of those with perforated appendicitis and a percutaneously drained intra-abdominal abscess that was culture-positive for Escherichia coli showed an identical rate of STBC elevation compared to all with perforation. Routine measurement of STBC does not accurately distinguish perforation status in children with appendicitis, nor discern the infecting organism in those with perforation and intra-abdominal abscess. Copyright © 2017 Elsevier Inc. All rights reserved.
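
    The four predictive values quoted above follow directly from a 2 × 2 table. A minimal sketch follows; the counts are back-calculated from the abstract's percentages rather than taken from the paper.

```python
# Minimal sketch: screening metrics from a 2x2 table.
# Counts are back-calculated from the quoted percentages, not the study's data.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# 109 perforated (42 with elevated STBC), 148 non-perforated (32 elevated)
print(diagnostic_metrics(tp=42, fp=32, fn=67, tn=116))
# -> sensitivity ~0.385, specificity ~0.784, ppv ~0.568, npv ~0.634
```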

  10. Heart rate dynamics in patients with stable angina pectoris and utility of fractal and complexity measures

    Science.gov (United States)

    Makikallio, T. H.; Ristimae, T.; Airaksinen, K. E.; Peng, C. K.; Goldberger, A. L.; Huikuri, H. V.

    1998-01-01

    Dynamic analysis techniques may uncover abnormalities in heart rate (HR) behavior that are not easily detectable with conventional statistical measures. However, the applicability of these new methods for detecting possible abnormalities in HR behavior in various cardiovascular disorders is not well established. Conventional measures of HR variability were compared with short-term (<11 beats, alpha1) and longer-term (>11 beats, alpha2) fractal correlation properties and with the approximate entropy of RR interval data in 38 patients with stable angina pectoris without previous myocardial infarction or cardiac medication at the time of the study and 38 age-matched healthy controls. The short- and long-term fractal scaling exponents (alpha1, alpha2) were significantly higher in the coronary patients than in the healthy controls (e.g., 1.34 +/- 0.15 vs 1.11 +/- 0.12 for alpha1, p < 0.05). These findings indicate that patients with stable angina pectoris have altered fractal properties and reduced complexity in their RR interval dynamics relative to age-matched healthy subjects. Dynamic analysis may complement traditional analyses in detecting altered HR behavior in patients with stable angina pectoris.
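
    The scaling exponents alpha1 and alpha2 come from detrended fluctuation analysis (DFA). Below is a hedged sketch of the standard algorithm run on synthetic RR data, with box-size ranges chosen to mirror the <11 / >11 beat split; it is not the authors' implementation.

```python
# Hedged sketch of detrended fluctuation analysis (DFA) for RR-interval data.
import numpy as np

def dfa_fluctuation(rr, n):
    """RMS fluctuation F(n) of the integrated, per-box detrended series."""
    y = np.cumsum(rr - np.mean(rr))                    # integrated series
    f2 = []
    for k in range(len(y) // n):
        seg = y[k * n:(k + 1) * n]
        t = np.arange(n)
        trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
        f2.append(np.mean((seg - trend) ** 2))
    return np.sqrt(np.mean(f2))

def scaling_exponent(rr, box_sizes):
    logn = np.log10(list(box_sizes))
    logf = np.log10([dfa_fluctuation(rr, n) for n in box_sizes])
    return np.polyfit(logn, logf, 1)[0]                # slope = alpha

rr = np.random.normal(0.8, 0.05, 2000)                 # placeholder RR intervals (s)
alpha1 = scaling_exponent(rr, range(4, 12))            # short-term, <= 11 beats
alpha2 = scaling_exponent(rr, range(12, 65, 4))        # longer-term, > 11 beats
print(alpha1, alpha2)                                  # ~0.5 for white noise
```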

  11. Modeling and design of light powered biomimicry micropump utilizing transporter proteins

    Science.gov (United States)

    Liu, Jin; Sze, Tsun-Kay Jackie; Dutta, Prashanta

    2014-11-01

    The creation of compact micropumps to provide steady flow has been an ongoing challenge in the field of microfluidics. We present a mathematical model for a micropump utilizing bacteriorhodopsin and sugar transporter proteins. This micropump utilizes transporter proteins as a method to drive fluid flow by converting light energy into chemical potential. The fluid flow through a microchannel is simulated using the Nernst-Planck, Navier-Stokes, and continuity equations. Numerical results show that the micropump is capable of generating usable pressure. Design parameters influencing the performance of the micropump are investigated, including membrane fraction, lipid proton permeability, illumination, and channel height. The results show that there is a substantial membrane-fraction region in which fluid flow is maximized. The use of lipids with low membrane proton permeability allows illumination to be used as a method to turn the pump on and off. This capability allows the micropump to be activated and shut off remotely without bulky support equipment. This modeling work provides new insights on mechanisms potentially useful for fluidic pumping in self-sustained biomimetic microfluidic pumps. This work is supported in part by the National Science Foundation Grant CBET-1250107.

  12. Optimization of adenosine 5'-triphosphate extraction for the measurement of acidogenic biomass utilizing whey wastewater.

    Science.gov (United States)

    Lee, Changsoo; Kim, Jaai; Hwang, Seokhwan

    2006-08-01

    A set of experiments was carried out to maximize adenosine 5'-triphosphate (ATP) extraction efficiency from an acidogenic culture using whey wastewater. Extracted ATP concentrations increased linearly as microbial concentration decreased. More than 50% more ATP was extracted from the sample of 39 mg volatile suspended solids (VSS)/l than from the sample of 2.8 g VSS/l; the ATP concentrations of the corresponding samples were 0.74 ± 0.06 and 0.49 ± 0.05 mg/l, respectively. For low VSS concentrations ranging from 39 to 92 mg/l, the extracted ATP concentration did not vary significantly, at 0.73 ± 0.01 mg ATP/l. Response surface methodology with a central composite in cube design was used to locate the optimum for maximal ATP extraction with respect to the boiling and bead beating treatments. The designed intervals were from 0 to 15 min for boiling and from 0 to 3 min for bead beating. The extracted ATP concentration ranged from 0.01 to 0.74 mg/l within the design boundary. The fitted partial cubic model, where η is the ATP concentration and x1 and x2 are the boiling and bead beating times, respectively, is: η = 0.629 + 0.035x1 − 0.818x2 − 0.002x1x2 − 0.003x1² + 0.254x2² + 0.002x1²x2. This model successfully approximates the response of ATP concentration with respect to the boiling and bead beating times. The condition for maximal ATP extraction was 5.6 min of boiling without bead beating; the maximal ATP concentration from the model was 0.74 mg/l, identical to the experimental value at the optimum condition.
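
    The reconstructed surface can be checked numerically: along the no-bead-beating axis it peaks at about 5.8 min of boiling and 0.73 mg/l, matching the reported optimum to within coefficient rounding. A short sketch follows; the coefficients are transcribed as printed above, so treat the exact sub/superscripts as an assumption.

```python
# Check of the abstract's partial cubic response-surface model.
# x1 = boiling time (min), x2 = bead beating time (min); coefficients as printed.
import numpy as np

def atp(x1, x2):
    return (0.629 + 0.035*x1 - 0.818*x2 - 0.002*x1*x2
            - 0.003*x1**2 + 0.254*x2**2 + 0.002*x1**2*x2)

x1 = np.linspace(0, 15, 1501)
profile = atp(x1, 0.0)                          # no bead beating (x2 = 0)
i = np.argmax(profile)
print(round(x1[i], 1), round(profile[i], 2))    # -> 5.8 min, 0.73 mg/l
```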

  13. Evapotranspiration Modeling and Measurements at Ecosystem Level

    Science.gov (United States)

    Sirca, C.; Snyder, R. L.; Mereu, S.; Kovács-Láng, E.; Ónodi, G.; Spano, D.

    2012-12-01

    In recent years, the availability of reference evapotranspiration (ETo) data has greatly increased. ETo, in conjunction with coefficients accounting for the difference between the vegetation and the reference surface, provides an estimate of the actual evapotranspiration (ETa). The coefficient approach was applied in the past mainly to crops, due to the lack of experimental data and the difficulty of accounting for terrain and vegetation variability in natural ecosystems. Moreover, assessing ETa over large spatial scales by measurement is often time-consuming and requires several measurement points with relatively expensive and sophisticated instrumentation and techniques (e.g. eddy covariance). The Ecosystem Water Program (ECOWAT) was recently developed to help estimate ETa of ecosystems by accounting for microclimate, vegetation type, plant density, and water stress. ETa of natural and semi-natural ecosystems has several applications, e.g. water status assessment, fire danger estimation, and ecosystem management practices. In this work, results obtained using ECOWAT to assess ETa of a forest ecosystem located in Hungary are reported. The site is part of the EU-FP7 INCREASE project, which aims to study the effects of climate change on European shrubland ecosystems. At the site, a climate manipulation experiment was set up with a warming and a drought treatment (besides the control); each treatment was replicated three times. We show how the ECOWAT model performed when the predicted actual evapotranspiration is compared with actual evapotranspiration obtained from the Surface Renewal method and with soil moisture measurements. ECOWAT was able to capture the differences in the water balance at treatment level, confirming its potential as a tool for water status assessment. For the Surface Renewal method, high-frequency temperature data were collected to estimate the sensible heat flux (H'). The net radiation (Rn) and soil heat flux density (G) were also measured.

  14. Measurement of thermal conductivity and diffusivity in situ: Literature survey and theoretical modelling of measurements

    Energy Technology Data Exchange (ETDEWEB)

    Kukkonen, I.; Suppala, I. [Geological Survey of Finland, Espoo (Finland)

    1999-01-01

    In situ measurements of the thermal conductivity and diffusivity of bedrock were investigated with the aid of a literature survey and theoretical simulations of a measurement system. According to the surveyed literature, in situ methods can be divided into 'active' drill hole methods and 'passive' indirect methods utilizing other drill hole measurements together with cutting samples and petrophysical relationships. The most common active drill hole method is a cylindrical heat-producing probe whose temperature is registered as a function of time. The temperature response can be calculated and interpreted with the aid of analytical solutions of the cylindrical heat conduction equation, particularly the solution for an infinite, perfectly conducting cylindrical probe in a homogeneous medium and the solution for a line source of heat in a medium. Using both forward and inverse modelling, a theoretical measurement system was analysed with the aim of finding the basic parameters for the construction of a practical measurement system. The results indicate that thermal conductivity can be estimated relatively well with borehole measurements, whereas thermal diffusivity is much more sensitive to various disturbing factors, such as thermal contact resistance and variations in probe parameters. In addition, three-dimensional conduction effects were investigated to find out the magnitude of the axial 'leak' of heat in long-duration experiments. The radius of influence of a drill hole measurement depends mainly on the duration of the experiment. Assuming typical conductivity and diffusivity values of crystalline rocks, the measurement yields information from within less than a metre of the drill hole when the experiment lasts about 24 hours. We propose the following factors to be taken as basic parameters in the construction of a practical measurement system: probe length 1.5-2 m, heating power 5-20 W m⁻¹, temperature recording with 5-7 sensors placed along the probe, and

  15. Grid-connection of large offshore windfarms utilizing VSC-HVDC: Modeling and grid impact

    DEFF Research Database (Denmark)

    Xue, Yijing; Akhmatov, Vladislav

    2009-01-01

    Utilization of Voltage Source Converter (VSC) – High Voltage Direct Current (HVDC) systems for grid-connection of large offshore windfarms becomes relevant as installed power capacities as well as distances to the connection points of the on-land transmission systems increase. At the same time, the grid code requirements of the Transmission System Operators (TSO), including ancillary system services and Low-Voltage Fault-Ride-Through (LVFRT) capability of large offshore windfarms, become more demanding. This paper presents a general-level model of, and a LVFRT solution for, a VSC-HVDC system for grid-connection of large offshore windfarms. The VSC-HVDC model is implemented using a general approach of independent control of active and reactive power in normal operation situations. The on-land VSC inverter, which is also called a grid-side inverter, provides voltage support to the transmission system.

  16. Utility-Scale Lithium-Ion Storage Cost Projections for Use in Capacity Expansion Models

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley J.; Marcy, Cara; Krishnan, Venkat K.; Margolis, Robert

    2016-11-21

    This work presents U.S. utility-scale battery storage cost projections for use in capacity expansion models. We create battery cost projections based on a survey of literature cost projections of battery packs and balance of system costs, with a focus on lithium-ion batteries. Low, mid, and high cost trajectories are created for the overnight capital costs and the operating and maintenance costs. We then demonstrate the impact of these cost projections in the Regional Energy Deployment System (ReEDS) capacity expansion model. We find that under reference scenario conditions, lower battery costs can lead to increased penetration of variable renewable energy, with solar photovoltaics (PV) seeing the largest increase. We also find that additional storage can reduce renewable energy curtailment, although that comes at the expense of additional storage losses.

  17. Utilization of building information modeling in infrastructure’s design and construction

    Science.gov (United States)

    Zak, Josef; Macadam, Helen

    2017-09-01

    Building Information Modeling (BIM) is a concept that has gained its place in the design, construction and maintenance of buildings in the Czech Republic in recent years. This paper describes the usage, applications, and potential benefits and disadvantages connected with the implementation of BIM principles in the preparation and construction of infrastructure projects. Part of the paper describes the status of BIM implementation in the Czech Republic, and there is a review of several virtual design and construction practices in the Czech Republic. Examples of best practice are presented from current infrastructure projects. The paper further summarizes experiences with new technologies gained from the application of BIM-related workflows. The focus is on BIM model utilization for machine control systems on site, quality assurance, quality management and construction management.

  18. Measuring equity in utilization of emergency obstetric care at Wolisso Hospital in Oromiya, Ethiopia: a cross sectional study.

    Science.gov (United States)

    Wilunda, Calistus; Putoto, Giovanni; Manenti, Fabio; Castiglioni, Maria; Azzimonti, Gaetano; Edessa, Wagari; Atzori, Andrea; Merialdi, Mario; Betrán, Ana Pilar; Vogel, Joshua; Criel, Bart

    2013-04-22

    Improving equity in access to services for the treatment of complications that arise during pregnancy and childbirth, namely Emergency Obstetric Care (EmOC), is fundamental if maternal and neonatal mortality are to be reduced. Consequently, there is a growing need to monitor equity in access to EmOC. The objective of this study was to develop a simple questionnaire to measure equity in utilization of EmOC at Wolisso Hospital, Ethiopia, and to compare the wealth status of EmOC users with women in the general population. Women in the Ethiopia 2005 Demographic and Health Survey (DHS) constituted our reference population. We cross-tabulated DHS wealth variables against wealth quintiles. Five variables that differentiated well across quintiles were selected to create a questionnaire that was administered to women at discharge from the maternity ward from January to August 2010. This was used to identify inequities in utilization of EmOC by comparison with the reference population. 760 women were surveyed. An a posteriori comparison of these 2010 data to the 2011 DHS dataset indicated that women using EmOC were wealthier and more likely to be urban dwellers. On a scale from 0 (poorest) to 15 (wealthiest), 31% of women in the 2011 DHS sample scored less than 1, compared with 0.7% in the study population. 70% of women accessing EmOC belonged to the richest quintile, with only 4% belonging to the poorest two quintiles. Transportation costs seem to play an important role. We found inequity in utilization of EmOC in favour of the wealthiest. Assessing and monitoring equitable utilization of maternity services is feasible using this simple tool.

  19. Utility of a human-mouse xenograft model and in vivo near-infrared fluorescent imaging for studying wound healing.

    Science.gov (United States)

    Shanmugam, Victoria K; Tassi, Elena; Schmidt, Marcel O; McNish, Sean; Baker, Stephen; Attinger, Christopher; Wang, Hong; Shara, Nawar; Wellstein, Anton

    2015-12-01

    To study the complex cellular interactions involved in wound healing, it is essential to have an animal model that adequately mimics the human wound microenvironment. Currently available murine models are limited because wound contraction introduces bias into wound surface area measurements. The purpose of this study was to demonstrate the utility of a human-mouse xenograft model for studying human wound healing. Normal human skin was harvested from elective abdominoplasty surgery, xenografted onto athymic nude (nu/nu) mice, and allowed to engraft for 3 months. The graft was then wounded using a 2-mm punch biopsy. Wounds were harvested on sequential days to allow tissue-based markers of wound healing to be followed sequentially. On the day of wound harvest, mice were injected with XenoLight RediJect cyclooxygenase-2 (COX-2) probe and imaged according to package instructions. Immunohistochemistry confirms that this human-mouse xenograft model is effective for studying human wound healing in vivo. Additionally, in vivo fluorescent imaging for inducible COX-2 demonstrated upregulation from baseline to day 4 (P = 0.03) with a return to baseline levels by day 10, paralleling the reepithelialisation of the wound. This human-mouse xenograft model, combined with in vivo fluorescent imaging, provides a useful mechanism for studying molecular pathways of human wound healing.

  20. UTILITY OF SHORT-TERM BASEMENT SCREENING RADON MEASUREMENTS TO PREDICT YEAR-LONG RESIDENTIAL RADON CONCENTRATIONS ON UPPER FLOORS.

    Science.gov (United States)

    Barros, Nirmalla; Steck, Daniel J; William Field, R

    2016-11-01

    This study investigated temporal and spatial variability between basement radon concentrations (measured for ~7 d using electret ion chambers) and basement and upper-floor radon concentrations (measured for 1 y using alpha track detectors) in 158 residences in Iowa, USA. The utility of short-term measurements to approximate a person's residential radon exposure, and the effect of housing/occupant factors on predictive ability, were evaluated. About 60% of basement short-term, 60% of basement year-long and 30% of upper-floor year-long radon measurements were equal to or above the United States Environmental Protection Agency's radon action level of 148 Bq m⁻³. The predictive value of a positive short-term test was 44%, given that the year-long living-space concentration was equal to or above this action level. Findings from this study indicate that cumulative radon-related exposure was more closely approximated by upper-floor year-long measurements than by short-term or year-long measurements in the basement. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Surplus thermal energy model of greenhouses and coefficient analysis for effective utilization

    Directory of Open Access Journals (Sweden)

    Seung-Hwan Yang

    2016-03-01

    Full Text Available If a greenhouse in the temperate and subtropical regions is maintained in a closed condition, the indoor temperature commonly exceeds that required for optimal plant growth, even in the cold season. This study considered this excess energy as surplus thermal energy (STE, which can be recovered, stored and used when heating is necessary. To use the STE economically and effectively, the amount of STE must be estimated before designing a utilization system. Therefore, this study proposed an STE model using energy balance equations for the three steps of the STE generation process. The coefficients in the model were determined by the results of previous research and experiments using the test greenhouse. The proposed STE model produced monthly errors of 17.9%, 10.4% and 7.4% for December, January and February, respectively. Furthermore, the effects of the coefficients on the model accuracy were revealed by the estimation error assessment and linear regression analysis through fixing dynamic coefficients. A sensitivity analysis of the model coefficients indicated that the coefficients have to be determined carefully. This study also provides effective ways to increase the amount of STE.

  2. Effects of atmospheric variability on energy utilization and conservation. [Space heating energy demand modeling; Program HEATLOAD

    Energy Technology Data Exchange (ETDEWEB)

    Reiter, E.R.; Johnson, G.R.; Somervell, W.L. Jr.; Sparling, E.W.; Dreiseitly, E.; Macdonald, B.C.; McGuirk, J.P.; Starr, A.M.

    1976-11-01

    Research conducted between 1 July 1975 and 31 October 1976 is reported. A 'physical-adaptive' model of the space-conditioning demand for energy and its response to changes in weather regimes was developed. This model includes parameters pertaining to engineering factors of building construction, to weather-related factors, and to socio-economic factors. Preliminary testing of several components of the model on the city of Greeley, Colorado, yielded most encouraging results. Other components, especially those pertaining to socio-economic factors, are still under development. Expansion of model applications to different types of structures and larger regions is presently underway. A CRT-display model for energy demand within the conterminous United States has also passed preliminary tests. A major effort was expended to obtain disaggregated data on energy use from utility companies throughout the United States. The study of atmospheric variability revealed that the 22- to 26-day vacillation in the potential and kinetic energy modes of the Northern Hemisphere is related to the behavior of the planetary long-waves, and that the midwinter dip in zonal available potential energy is reflected in the development of blocking highs. Attempts to classify weather patterns over the eastern and central United States have proceeded satisfactorily to the point where testing of our method for longer time periods appears desirable.

  3. Surplus thermal energy model of greenhouses and coefficient analysis for effective utilization

    Energy Technology Data Exchange (ETDEWEB)

    Yang, S.H.; Son, J.E.; Lee, S.D.; Cho, S.I.; Ashtiani-Araghi, A.; Rhee, J.Y.

    2016-11-01

    If a greenhouse in the temperate and subtropical regions is maintained in a closed condition, the indoor temperature commonly exceeds that required for optimal plant growth, even in the cold season. This study considered this excess energy as surplus thermal energy (STE), which can be recovered, stored and used when heating is necessary. To use the STE economically and effectively, the amount of STE must be estimated before designing a utilization system. Therefore, this study proposed an STE model using energy balance equations for the three steps of the STE generation process. The coefficients in the model were determined by the results of previous research and experiments using the test greenhouse. The proposed STE model produced monthly errors of 17.9%, 10.4% and 7.4% for December, January and February, respectively. Furthermore, the effects of the coefficients on the model accuracy were revealed by the estimation error assessment and linear regression analysis through fixing dynamic coefficients. A sensitivity analysis of the model coefficients indicated that the coefficients have to be determined carefully. This study also provides effective ways to increase the amount of STE. (Author)

  4. Brain in flames – animal models of psychosis: utility and limitations

    Directory of Open Access Journals (Sweden)

    Mattei D

    2015-05-01

    Full Text Available Daniele Mattei,1 Regina Schweibold,1,2 Susanne A Wolf1 1Department of Cellular Neuroscience, Max-Delbrueck-Center for Molecular Medicine, Berlin, Germany; 2Department of Neurosurgery, Helios Clinics, Berlin, Germany Abstract: The neurodevelopmental hypothesis of schizophrenia posits that schizophrenia is a psychopathological condition resulting from aberrations in neurodevelopmental processes caused by a combination of environmental and genetic factors which proceed long before the onset of clinical symptoms. Many studies discuss an immunological component in the onset and progression of schizophrenia. We here review studies utilizing animal models of schizophrenia with manipulations of genetic, pharmacologic, and immunological origin. We focus on the immunological component to bridge the studies in terms of evaluation and treatment options of negative, positive, and cognitive symptoms. Throughout the review we link certain aspects of each model to the situation in human schizophrenic patients. In conclusion we suggest a combination of existing models to better represent the human situation. Moreover, we emphasize that animal models represent defined single or multiple symptoms or hallmarks of a given disease. Keywords: inflammation, schizophrenia, microglia, animal models 

  5. On the utility of land surface models for agricultural drought monitoring

    Directory of Open Access Journals (Sweden)

    W. T. Crow

    2012-09-01

    Full Text Available The lagged rank cross-correlation between model-derived root-zone soil moisture estimates and remotely sensed vegetation indices (VI) is examined between January 2000 and December 2010 to quantify the skill of various soil moisture models for agricultural drought monitoring. The examined modeling strategies range from a simple antecedent precipitation index to modern land surface models (LSMs) based on complex water and energy balance formulations. A quasi-global evaluation of lagged VI/soil moisture cross-correlation suggests that, when globally averaged across the entire annual cycle, soil moisture estimates obtained from complex LSMs provide little added skill (<5% in relative terms) in anticipating variations in vegetation condition relative to a simplified water accounting procedure based solely on observed precipitation. However, larger amounts of added skill (5-15% in relative terms) can be identified when focusing exclusively on the extra-tropical growing season and/or utilizing soil moisture values acquired by averaging across a multi-model ensemble.
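
    The skill metric here is a rank correlation computed at a series of lags. A minimal sketch of that computation on synthetic series follows; it is not the study's data or exact windowing.

```python
# Sketch of a lagged rank cross-correlation between soil moisture and a
# vegetation index (series below are synthetic placeholders).
import numpy as np
from scipy.stats import spearmanr

def lagged_rank_xcorr(soil_moisture, vi, max_lag_steps):
    """Spearman correlation of VI against soil moisture lagged by 0..max_lag."""
    out = {}
    for lag in range(max_lag_steps + 1):
        if lag == 0:
            r, _ = spearmanr(soil_moisture, vi)
        else:
            r, _ = spearmanr(soil_moisture[:-lag], vi[lag:])  # SM leads VI
        out[lag] = r
    return out

rng = np.random.default_rng(0)
sm = rng.normal(size=200)
vi = np.roll(sm, 2) + 0.5 * rng.normal(size=200)  # VI responds ~2 steps later
print(lagged_rank_xcorr(sm, vi, max_lag_steps=4)) # peak correlation at lag 2
```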

  6. Measuring dark matter by modeling interacting galaxies

    CERN Document Server

    Petsch, H P; Theis, Ch

    2009-01-01

    The dark matter content of galaxies is usually determined from galaxies in dynamical equilibrium, mainly from rotationally supported galactic components. Such determinations restrict measurements to special regions in galaxies, e.g. the galactic plane(s), whereas other regions are not probed at all. Interacting galaxies offer an alternative, because extended tidal tails often probe outer or off-plane regions of galaxies. However, these systems are neither in dynamical equilibrium nor simple, because they are composed of two or more galaxies, by this increasing the associated parameter space.We present our genetic algorithm based modeling tool which allows to investigate the extended parameter space of interacting galaxies. From these studies, we derive the dynamical history of (well observed) galaxies. Among other parameters we constrain the dark matter content of the involved galaxies. We demonstrate the applicability of this strategy with examples ranging from stellar streams around theMilkyWay to extended ...

  7. Full utilization of silt density index (SDI) measurements for seawater pre-treatment

    KAUST Repository

    Wei, Chunhai

    2012-07-01

    In order to clarify the fouling mechanism during silt density index (SDI) measurements of seawater in the seawater reverse osmosis (SWRO) desalination process, 11 runs were conducted under constant-pressure (207 kPa) dead-end filtration mode according to the standard protocol for SDI measurement, in which two kinds of 0.45 μm membranes of different material and seawater samples from the Mediterranean, including raw seawater and seawater pre-treated by coagulation followed by sand filtration (CSF) and coagulation followed by microfiltration (CMF) technologies, were tested. Fouling mechanisms based on the constant-pressure filtration equation were fully analyzed. For all runs, only t/(V/A) vs. t showed very good linearity (correlation coefficient R² > 0.99) from the first moment of the filtration, indicating that standard blocking rather than cake filtration was the dominant fouling mechanism during the entire filtration process. The very low concentration of suspended solids rejected by MF of 0.45 μm in seawater was the main reason why a cake layer was not formed. High turbidity removal during filtration indicated that organic colloids retained on and/or adsorbed in membrane pores governed the filtration process (i.e., standard blocking) due to the important contribution of organic substances to seawater turbidity in this study. Therefore the standard blocking coefficient k_s, i.e., the slope of t/(V/A) vs. t, could be used as a good fouling index for seawater because it showed good linearity with feed seawater turbidity. The correlation of SDI with k_s and feed seawater quality indicated that SDI could be reliably used for seawater with low fouling potential (SDI15min < 5) like the pre-treated seawater in this study. From both k_s and SDI, the order of fouling potential was raw seawater > seawater pre-treated by CSF > seawater pre-treated by CMF, indicating the better performance of CMF than CSF. © 2012 Elsevier B.V.
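
    Both indices are simple to compute from a filtration record. A sketch with hypothetical numbers follows, using the SDI protocol's fixed-volume timings and a linear fit of t/(V/A) against t for k_s.

```python
# Sketch: the two indices discussed above from a constant-pressure filtration
# record (all times/volumes below are hypothetical).
import numpy as np

def sdi(t_initial_s, t_final_s, elapsed_min=15):
    """SDI = (1 - ti/tf) * 100 / T, with ti, tf the times to filter a fixed
    sample volume at the start and after T minutes of filtration."""
    return (1.0 - t_initial_s / t_final_s) * 100.0 / elapsed_min

def standard_blocking_slope(t_s, volume_m3, area_m2):
    """k_s as the slope of t/(V/A) against t (standard blocking model)."""
    x = np.asarray(t_s, float)
    y = x / (np.asarray(volume_m3, float) / area_m2)
    return np.polyfit(x, y, 1)[0]

print(sdi(t_initial_s=30.0, t_final_s=45.0))      # -> ~2.2, i.e. SDI15 < 5
t = np.array([60, 120, 180, 240, 300.0])          # s
V = np.array([0.9, 1.7, 2.4, 3.0, 3.5]) * 1e-3    # m^3 through a 1.38e-3 m^2 disc
print(standard_blocking_slope(t, V, area_m2=1.38e-3))
```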

  8. Rates of nutrient utilization in man measured by combined respiratory gas analysis and stable isotopic labelling: effect of food intake.

    Science.gov (United States)

    Garlick, P J; McNurlan, M A; McHardy, K C; Calder, A G; Milne, E; Fearns, L M; Broom, J

    1987-05-01

    Rates of oxygen consumption and carbon dioxide production have been measured in healthy adults during 4 h of fasting followed by 4 h of hourly small meals. Both rates rose to new steady values during feeding, and the respiratory quotient (RQ) increased from 0.792 to 0.924. The RQ was consistent in repeat studies on any individual (coefficient of variation: 2.5 per cent), and differences between individuals were significant in the fasted but not the fed state. Simultaneous measurements were made of the rate of protein oxidation by primed constant infusion of (1-13C)leucine for 8 h. Rates were calculated from the enrichment of plasma alpha-ketoisocaproate and the production of 13CO2 in the breath, taking account of the incomplete recovery of 13CO2 and the changes in baseline enrichment resulting from natural 13C in the food. Leucine oxidation increased by 87 per cent during the feeding period. Rates of nutrient utilization were calculated from respiratory gas exchange and rates of protein oxidation. These showed that fat was predominant in the fasted state, contributing 61 per cent of total energy expenditure, compared with 27 per cent for carbohydrate and 11 per cent for protein. On feeding there was a switch to carbohydrate as the main fuel (62 per cent), with smaller contributions from fat (20 per cent) and protein (18 per cent). During feeding total utilization of each nutrient exceeded its intake from the diet, indicating storage in the body. Dietary carbohydrate was stored without conversion to fat. It is concluded that this method is useful for studying the control of nutrient utilization by food intake.
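
    The conversion from gas exchange and protein oxidation to nutrient utilization rates is conventionally done with stoichiometric equations. Below is a hedged sketch using one common set of factors (after Frayn, 1983); these are illustrative and not necessarily the exact factors used in the paper.

```python
# Hedged sketch: substrate utilization from indirect calorimetry plus a
# protein-oxidation estimate. Factors after Frayn (1983); inputs hypothetical.
def substrate_oxidation(vo2_l_min, vco2_l_min, urinary_n_g_min):
    n = urinary_n_g_min                       # urinary nitrogen excretion, g/min
    cho_g_min = 4.55 * vco2_l_min - 3.21 * vo2_l_min - 2.87 * n
    fat_g_min = 1.67 * vo2_l_min - 1.67 * vco2_l_min - 1.92 * n
    protein_g_min = 6.25 * n                  # 6.25 g protein per g nitrogen
    rq = vco2_l_min / vo2_l_min               # respiratory quotient
    return {"RQ": rq, "CHO g/min": cho_g_min, "fat g/min": fat_g_min,
            "protein g/min": protein_g_min}

# Fasted-state example (hypothetical gas rates; RQ near the abstract's 0.79)
print(substrate_oxidation(vo2_l_min=0.25, vco2_l_min=0.198,
                          urinary_n_g_min=0.008))
```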

  9. Utilizing Land:Water Isopleths for Storm Surge Model Development in Coastal Louisiana

    Science.gov (United States)

    Siverd, C. G.; Hagen, S. C.; Bilskie, M. V.; Braud, D.; Peele, H.; Twilley, R.

    2016-12-01

    In the Mississippi River Delta (MRD), Land:Water (L:W) isopleths (Gagliano et al., 1970, 1971) can be used to better understand coastal flood risk from hurricanes than simple estimates of land loss (Twilley et al., 2016). The major goal of this study is to develop a methodology that utilizes L:W isopleths to simplify a detailed present-day storm surge model of coastal Louisiana. A secondary goal is to represent marsh fragmentation via L:W isopleths for modeling (for example) storm surge. Isopleths of L:W were derived for the year 2010 and include 1%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% and 99% (1% being mostly water and 99% being mostly land). Thirty-six models were developed via permutations of two isopleths selected, without repetition, from between 1% and 99%. Each selected pair of isopleths yields three polygons representing "open water/transition", "marsh", and "land". The ADvanced CIRCulation (ADCIRC) code (Luettich and Westerink, 2006) was used to perform storm surge simulations. Hydrologic basins, specifically Hydrologic Unit Code 12 (HUC12) basins, were used to quantify the water surface elevation, depth, volume, area and retention time across south Louisiana for each storm simulation and to provide a basin-by-basin comparison of the detailed vs. simplified model results. This methodology aids in identifying the simplified model that most closely resembles the detailed model. It can also be used to develop comparable storm surge models for historical eras prior to the advent of modern remote sensing technology, for the purpose of storm surge analysis through time.

  10. [Isolation of a methane-utilizing bacterium and its application in measuring methane gas].

    Science.gov (United States)

    Zhao, Gengui; Zheng, Jun; Wen, Guangming; Yang, Suping; Dong, Chuan

    2008-03-01

    A bacterial strain, ME16, was isolated from Jinyang Lake in Taiyuan, Shanxi Province, China. Gas chromatography analysis showed that the strain could use methane as its sole carbon and energy source. Based on 16S rDNA sequence analysis, the strain was identified as Pseudomonas aeruginosa. The effects of inoculum size, temperature, methane content and initial pH of the media on cell growth were studied. In addition, using an electrochemical method with PVA-H3BO3-immobilized cells of ME16, we examined the response time of the dissolved-oxygen signal to methane gas and the relationship between the consumption of dissolved oxygen and different methane gas contents. The optimal conditions for cell growth were 2% inoculum size, 25% methane content, 30 °C and pH 6.0. The response time was within 100 s after adding the immobilized cells to the reaction system. The linear range of measured methane content was from 0 to 16%, with a correlation coefficient of 0.9954. Hence, this strain has potential application in the development of a methane biosensor.

  11. Urinary Hypoxanthine as a Measure of Increased ATP Utilization in Late Preterm Infants

    Science.gov (United States)

    Holden, Megan S.; Hopper, Andrew; Slater, Laurel; Asmerom, Yayesh; Esiaba, Ijeoma; Boskovic, Danilo S.; Angeles, Danilyn M.

    2015-01-01

    Objective To examine the effect of neonatal morbidity on ATP breakdown in late preterm infants. Study Design Urinary hypoxanthine concentration, a marker of ATP breakdown, was measured from 82 late preterm infants on days of life (DOL) 3 to 6 using high-performance liquid chromatography. Infants were grouped according to the following diagnoses: poor nippling alone (n = 8), poor nippling plus hyperbilirubinemia (n = 21), poor nippling plus early respiratory disease (n = 26), and respiratory disease alone (n = 27). Results Neonates with respiratory disease alone had significantly higher urinary hypoxanthine over DOL 3 to 6 when compared with neonates with poor nippling (P = .020), poor nippling plus hyperbilirubinemia (P < .001), and poor nippling plus early respiratory disease (P = .017). Neonates with poor nippling who received respiratory support for 2 to 3 days had significantly higher hypoxanthine compared with infants who received respiratory support for 1 day (P = .017) or no days (P = .007). Conclusions These findings suggest that respiratory disorders significantly increase ATP degradation in late premature infants. PMID:26413195

  12. Diurnal Plasma Cortisol Measurements Utility in Differentiating Various Etiologies of Endogenous Cushing Syndrome.

    Science.gov (United States)

    Tirosh, A; Lodish, M B; Papadakis, G Z; Lyssikatos, C; Belyavskaya, E; Stratakis, C A

    2016-09-01

    Cortisol diurnal variation may be abnormal among patients with endogenous Cushing syndrome (CS). The study objective was to compare the plasma cortisol AM/PM ratios between different etiologies of CS. This is a retrospective cohort study, conducted at a clinical research center. Adult patients with CS who underwent adrenalectomy or trans-sphenoidal surgery (n=105) were divided into those with a pathologically confirmed diagnosis of Cushing disease (CD, n=21) and those with primary adrenal CS, including unilateral adrenal adenoma (n=28), adrenocortical hyperplasia (n=45), and primary pigmented nodular adrenocortical disease (PPNAD, n=11). Diurnal plasma cortisol measurements were obtained at 11:30 PM and midnight and at 7:30 and 8:00 AM. The ratios between the mean morning levels and mean late-night levels were calculated. The mean plasma cortisol AM/PM ratio was lower among CD patients compared to those with primary adrenal CS (1.4±0.6 vs. 2.3±1.5); a value above 15 pg/ml excluded CD with 85.0% specificity and a negative predictive value (NPV) of 90.9%. Among patients with primary adrenal CS, an AM/PM cortisol ratio ≥1.2 had a specificity and NPV of 100% for ruling out a diagnosis of PPNAD. Plasma cortisol AM/PM ratios are lower among patients with CD compared with primary adrenal CS, and may aid in the differential diagnosis of endogenous hypercortisolemia. © Georg Thieme Verlag KG Stuttgart · New York.

  13. Fabry-Perot Temperature Sensor for Quasi-Distributed Measurement Utilizing OTDR

    Institute of Scientific and Technical Information of China (English)

    Ping Xu; Fu-Fei Pang; Na Chen; Zhen-Yi Chen; Ting-Yun Wang

    2008-01-01

    A quasi-distributed Fabry-Perot fiber-optic temperature sensor array using the optical time domain reflectometry (OTDR) technique is presented. The F-P sensor is formed by two face-to-face single-mode optical fibers whose end surfaces have been polished. Due to the low reflectivity of the fiber surfaces, the sensor behaves as a low-reflectivity (low-finesse) Fabry-Perot interferometer (FPI). The working principle is analyzed using the two-beam optical interference approximation. To measure temperature, a temperature-sensitive material is filled into the cavity. The slight changes in reflected intensity induced by the refractive index of the material were captured by OTDR. The length of the cavity is obtained by monitoring the interference spectrum, which is used to set the sensor's static characteristics within the quasi-linear range. Based on this design, a three-point sensor array was fabricated and characterized. The experimental results show that as the temperature increases from -30 °C to 80 °C, the reflectivity increases in a good linear manner, with a sensitivity of approximately 0.074 dB/°C. Owing to the low transmission loss, more sensors can be integrated.
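
    In the two-beam approximation the reflected fraction is set by the Fresnel reflectivities of the two fiber end faces and the round-trip phase through the cavity. A sketch of that relation follows; all parameters are hypothetical and the conventions are simplified.

```python
# Two-beam approximation sketch for a low-reflectivity fiber Fabry-Perot
# cavity: reflected fraction vs. the index of the thermo-sensitive filling.
import numpy as np

def fresnel_r(n1, n2):
    """Intensity reflectivity of a normal-incidence dielectric interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def fpi_reflectance(n_cavity, length_um, wavelength_um=1.55, n_fiber=1.468):
    r1 = r2 = fresnel_r(n_fiber, n_cavity)          # two identical interfaces
    phase = 4 * np.pi * n_cavity * length_um / wavelength_um
    return r1 + r2 + 2 * np.sqrt(r1 * r2) * np.cos(phase)

# A temperature-dependent cavity index shifts both the phase and reflectance
for n in (1.430, 1.432, 1.434):
    print(n, fpi_reflectance(n, length_um=50.0))
```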

  14. The utility of endometrial thickness measurement in asymptomatic postmenopausal women with endometrial fluid.

    Science.gov (United States)

    Seckin, B; Ozgu-Erdinc, A S; Dogan, M; Turker, M; Cicek, M N

    2016-01-01

    The aim of this study was to assess the clinical usefulness of sonographic endometrium thickness measurement in asymptomatic postmenopausal women with endometrial fluid collection. Fifty-two asymptomatic postmenopausal women with endometrial fluid, who underwent endometrial sampling were evaluated. Histopathological findings revealed that 25 (48.1%) women had insufficient tissue, 20 (38.4%) had atrophic endometrium and 7 (13.5%) had endometrial polyps. No case of malignancy was found. There was no statistically significant difference between the various histopathological categories (insufficient tissue, atrophic endometrium and polyp) with regard to the mean single-layer endometrial thickness (1.54 ± 0.87, 2.04 ± 1.76 and 1.79 ± 0.69 mm, respectively, p = 0.436). Out of 44 patients with endometrial thickness of less than 3 mm, 38 (86.4%) had atrophic changes or insufficient tissue and 6 (13.6%) had endometrial polyps. In conclusion, if the endometrial thickness is 3 mm or less, endometrial sampling is not necessary in asymptomatic postmenopausal women with endometrial fluid.

  15. Utility of Accelerometers to Measure Physical Activity in Children Attending an Obesity Treatment Intervention

    Directory of Open Access Journals (Sweden)

    Wendy Robertson

    2011-01-01

    Full Text Available Objectives. To investigate the use of accelerometers to monitor change in physical activity in a childhood obesity treatment intervention. Methods. 28 children aged 7-13 taking part in "Families for Health" were asked to wear an accelerometer (Actigraph) for 7 days, and to complete an accompanying activity diary, at baseline, 3 months and 9 months. Interviews with 12 parents asked about the research measurements. Results. Over 90% of children provided 4 days of accelerometer data, and around half of the children provided 7 days. Adequately completed diaries were collected from 60% of children. Children partake in a wide range of physical activities which uniaxial monitors may undermonitor (cycling, non-motorised scootering) or overmonitor (trampolining). Two different cutoffs (4 METs or 3200 counts·min⁻¹) for minutes spent in moderate and vigorous physical activity (MVPA) yielded very different results, although they led to the same conclusion regarding a lack of change in MVPA after the intervention. Some children were unwilling to wear accelerometers at school and during sport because they felt the devices put them at risk of stigma and bullying. Conclusion. Accelerometers are acceptable to a majority of children, although their use at school is problematic for some, but they may underestimate children's physical activity.
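
    Applying a counts-based cutoff is a one-line reduction over the epoch record. A minimal sketch with simulated counts follows, using the 3200 counts·min⁻¹ threshold from the abstract; the MET-based cutoff would additionally need a calibration equation.

```python
# Minimal sketch: minutes of moderate-to-vigorous activity (MVPA) from
# epoch-level accelerometer counts (simulated data, 3200 counts/min cutoff).
import numpy as np

def mvpa_minutes(counts_per_min, cutoff=3200):
    counts = np.asarray(counts_per_min)
    return int(np.sum(counts >= cutoff))

day = np.random.default_rng(1).integers(0, 6000, size=14 * 60)  # 14 waking hours
print(mvpa_minutes(day))        # minutes at or above 3200 counts/min
```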

  16. Utility of dried blood spots for measurement of cholesterol and triglycerides in a surveillance study.

    Science.gov (United States)

    Lakshmy, Ramakrishnan; Gupta, Ruby; Prabhakaran, Dorairaj; Snehi, Uma; Reddy, K Srinath

    2010-03-01

    Developing countries are facing a rise in noncommunicable diseases (NCD), which is a cause for concern. The World Health Organization has recommended a stepwise approach for NCD risk factor surveillance. Screening for risk factors in remote populations is difficult due to lack of resources and technical expertise, including standardized laboratory facilities. The collection of samples on filter paper for the assessment of risk factors circumvents the need for blood processing, storage, and shipment at ultralow temperatures. Samples were collected on 3-mm Whatman filter paper from one industry (National Thermal Power Corporation) located in the periphery of Delhi as part of a surveillance carried out in industries from different parts of India. Total cholesterol was measured in serum and dried blood by the cholesterol oxidase/p-aminophenazone method and triglycerides by the glycerophosphate oxidase-peroxidase/aminophenazone method. Values obtained by the two methods were compared using Pearson correlation, and Bland-Altman plots were prepared to assess bias. The correlation coefficient "r" was 0.78 for cholesterol and 0.94 for triglycerides between dried blood spots and serum. Bland-Altman plots suggest that differences in values obtained by the two methods were within two standard deviations for most of the samples. Blood samples dried on filter paper can be a successful option for population screening in remote areas, provided preanalytical variations arising due to the method of blood spot preparation and storage are well controlled. (c) 2010 Diabetes Technology Society.
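
    Bland-Altman limits of agreement are the mean difference ± 1.96 standard deviations of the paired differences. A short sketch on hypothetical paired values:

```python
# Sketch of the Bland-Altman comparison used above: mean bias and 95% limits
# of agreement between dried-blood-spot and serum values (data hypothetical).
import numpy as np

def bland_altman(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

serum = np.array([180, 210, 165, 240, 195.0])   # total cholesterol, mg/dl
dbs   = np.array([172, 215, 160, 230, 200.0])   # dried-blood-spot values
bias, loa = bland_altman(dbs, serum)
print(bias, loa)                                 # bias and limits of agreement
```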

  17. Influence of prepelleting inclusion of whole corn on performance, nutrient utilization, digestive tract measurements, and cecal microbiota of young broilers.

    Science.gov (United States)

    Singh, Y; Ravindran, V; Wester, T J; Molan, A L; Ravindran, G

    2014-12-01

    The objective of the present study was to examine the effects of prepelleting inclusion of graded levels of whole corn on performance, digestive tract measurements, nutrient utilization, and cecal microbiota in broiler starters. Five diets, containing 600 g/kg of ground corn or 150, 300, 450, and 600 g/kg of whole corn replacing (wt/wt) ground corn, were formulated and cold-pelleted at 65°C. Each diet was offered ad libitum to 6 replicates (8 birds per replicate cage) from d 1 to 21 posthatch. The proportion of coarse particles (>1 mm) increased with increasing prepelleting inclusion of whole corn. Pellet quality, measured as pellet durability index, increased (quadratic effect, P < 0.05) with whole corn inclusion, whereas feed intake decreased (linear effect, P < 0.05). Feed per gain showed a quadratic response (P < 0.05), paralleling feed intake. ©2014 Poultry Science Association Inc.

  18. Analytic model utilizing the complex ABCD method for range dependency of a monostatic coherent lidar

    DEFF Research Database (Denmark)

    Olesen, Anders Sig; Pedersen, Anders Tegtmeier; Hanson, Steen Grüner;

    2014-01-01

    In this work, we present an analytic model for analyzing the range and frequency dependency of a monostatic coherent lidar measuring velocities of a diffuse target. The model of the signal power spectrum includes both the contribution from the optical system and the contribution from the time dependencies of the optical field. A specific coherent Doppler wind lidar system measuring wind velocity in the atmosphere is considered, in which a Gaussian field is transmitted through a simple telescope consisting of a lens and an aperture. The effects of the aperture size, the beam waist ...
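
    The complex ABCD method such a model rests on propagates a Gaussian beam's complex q-parameter through a matrix for each optical element, with a Gaussian (soft) aperture entering as an imaginary lens. A hedged sketch of that bookkeeping follows, using one common convention; all parameters are hypothetical and this is not the authors' model.

```python
# Sketch: complex ABCD propagation of a Gaussian beam through a lens plus a
# Gaussian (soft) aperture, then free space (parameters hypothetical).
import numpy as np

lam = 1.565e-6                              # wavelength, m (assumed)

def q_from(w, R=np.inf):
    """Complex q-parameter from beam radius w and wavefront curvature R."""
    return 1.0 / (1.0 / R - 1j * lam / (np.pi * w**2))

def through(q, M):
    """q' = (Aq + B) / (Cq + D) for ABCD matrix M."""
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

def beam_radius(q):
    return np.sqrt(-lam / (np.pi * np.imag(1.0 / q)))

lens = ((1.0, 0.0), (-1.0 / 80.0, 1.0))     # thin lens focused near 80 m range
aperture = ((1.0, 0.0), (-1j * lam / (np.pi * 0.03**2), 1.0))  # w_a = 30 mm
def free(d): return ((1.0, d), (0.0, 1.0))  # free-space propagation

q = q_from(w=0.025)                         # collimated 25 mm beam at telescope
for M in (lens, aperture, free(80.0)):
    q = through(q, M)
print("beam radius at 80 m:", beam_radius(q), "m")
```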

  19. Measuring adiposity in patients: the utility of body mass index (BMI, percent body fat, and leptin.

    Directory of Open Access Journals (Sweden)

    Nirav R Shah

    Full Text Available BACKGROUND: Obesity is a serious disease that is associated with an increased risk of diabetes, hypertension, heart disease, stroke, and cancer, among other diseases. The United States Centers for Disease Control and Prevention (CDC) estimates a 20% obesity rate in the 50 states, with 12 states having rates of over 30%. Currently, the body mass index (BMI) is most commonly used to determine adiposity. However, BMI is an inaccurate obesity classification method that underestimates the epidemic and contributes to failed treatment. In this study, we examine the effectiveness of precise biomarkers and dual-energy x-ray absorptiometry (DXA) to help diagnose and treat obesity. METHODOLOGY/PRINCIPAL FINDINGS: A cross-sectional study of adults with BMI, DXA, fasting leptin and insulin results, measured from 1998-2009. Of the participants, 63% were female, 37% were male, and 75% were white, with a mean age of 51.4 (SD = 14.2). Mean BMI was 27.3 (SD = 5.9) and mean percent body fat was 31.3% (SD = 9.3). BMI characterized 26% of the subjects as obese, while DXA indicated that 64% of them were obese. 39% of the subjects were classified as non-obese by BMI but were found to be obese by DXA. BMI misclassified 25% of men and 48% of women. Meanwhile, a strong relationship was demonstrated between increased leptin and increased body fat. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate the prevalence of false-negative BMIs, increased misclassification in women of advancing age, and the reliability of gender-specific revised BMI cutoffs. BMI underestimates obesity prevalence, especially in women with high leptin levels (>30 ng/mL). Clinicians can use leptin-revised levels to enhance the accuracy of BMI estimates of percentage body fat when DXA is unavailable.
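
    The misclassification being described is a disagreement between two binary classifiers. A minimal sketch follows, comparing BMI ≥ 30 kg/m² against commonly used DXA body-fat cutoffs of 25% for men and 30% for women; the records and cutoffs here are illustrative assumptions, not the study's data.

```python
# Sketch: counting "false-negative BMIs" (obese by DXA, non-obese by BMI).
def obese_by_bmi(weight_kg, height_m, cutoff=30.0):
    return weight_kg / height_m**2 >= cutoff

def obese_by_dxa(percent_fat, female, cutoffs=(25.0, 30.0)):
    # Commonly used body-fat thresholds: 25% for men, 30% for women (assumed)
    return percent_fat >= (cutoffs[1] if female else cutoffs[0])

subjects = [  # (weight kg, height m, % body fat by DXA, female?) - hypothetical
    (78, 1.65, 38.0, True),
    (95, 1.80, 27.0, False),
    (70, 1.70, 33.5, True),
]
missed = sum(1 for w, h, f, fem in subjects
             if obese_by_dxa(f, fem) and not obese_by_bmi(w, h))
print(f"false-negative BMIs: {missed}/{len(subjects)}")
```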

  20. Dispersion modeling of accidental releases of toxic gases - Comparison of the models and their utility for the fire brigades.

    Science.gov (United States)

    Stenzel, S.; Baumann-Stanzer, K.

    2009-04-01

    In the case of an accidental release of hazardous gases into the atmosphere, emergency responders need a reliable and fast tool to assess the possible consequences and apply the optimal countermeasures. For hazard prediction and simulation of hazard zones, a number of air dispersion models are available. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI), and automated graphical output for displaying the results; they are easy to use and can operate quickly and effectively during stress situations. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios") and preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. Models can also be coupled directly to automatic meteorological stations, in order to avoid uncertainties in the model output due to insufficient or incorrect meteorological data. Another key problem in coping with accidental toxic releases is the relatively wide spectrum of regulations and threshold values, such as IDLH, ERPG, AEGL and MAK, and the different criteria for their application. Since particular emergency responders and organizations require different regulations and values for their purposes, it is quite difficult to predict the individual hazard areas. A number of research studies and investigations address the problem; in any case, the final decision is up to the authorities. The research project RETOMOD (reference scenarios calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Vienna fire brigade, OMV Refining & Marketing GmbH and

  1. Flexible simulation framework to couple processes in complex 3D models for subsurface utilization assessment

    Science.gov (United States)

    Kempka, Thomas; Nakaten, Benjamin; De Lucia, Marco; Nakaten, Natalie; Otto, Christopher; Pohl, Maik; Tillner, Elena; Kühn, Michael

    2016-04-01

    Utilization of the geological subsurface for production and storage of hydrocarbons, chemical energy and heat, as well as for waste disposal, requires the quantification and mitigation of environmental impacts as well as the improvement of georesource utilization in terms of efficiency and sustainability. The development of tools for coupled process simulations is essential to tackle these challenges, since reliable assessments are only feasible through integrative numerical computations. Coupled processes at reservoir to regional scale determine the behaviour of reservoirs, faults and caprocks, generally demanding that complex 3D geological models be considered alongside available monitoring and experimental data in coupled numerical simulations. We have been developing a flexible numerical simulation framework that provides efficient workflows for integrating the required data and software packages to carry out coupled process simulations considering, e.g., multiphase fluid flow, geomechanics, geochemistry and heat. Simulation results are stored in structured data formats to allow for integrated 3D visualization and result interpretation as well as data archiving and its provision to collaborators. The main benefits of the flexible simulation framework are the integration of geological and grid data from any third-party software package as well as data export to generic 3D visualization tools and archiving formats. The coupling of the required process simulators in time and space is feasible, and different spatial dimensions can be integrated in the coupled simulations, e.g., 0D batch with 3D dynamic simulations. User interaction is established via high-level programming languages, while computational efficiency is achieved by using low-level programming languages. We present three case studies on the assessment of geological subsurface utilization based on different process coupling approaches and numerical simulations.
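
    To make the coupling idea concrete, the sketch below shows a minimal sequential (operator-splitting) loop that alternates a 3D dynamic flow step with a 0D batch chemistry step on a shared time axis. The FlowSimulator and BatchChemistry wrappers are hypothetical stand-ins for the third-party packages such a framework would drive, not components of the framework described above.

    import numpy as np

    class FlowSimulator:
        """Hypothetical wrapper around a 3D multiphase flow code."""
        def advance(self, dt, pressure):
            # A real wrapper would call the external simulator here.
            return pressure  # updated 3D pressure field

    class BatchChemistry:
        """Hypothetical wrapper around a 0D geochemical batch code."""
        def react(self, dt, conc):
            # A real wrapper would equilibrate each grid cell here.
            return conc

    def run_coupled(flow, chem, pressure, conc, t_end, dt):
        """Alternate flow and chemistry steps until t_end (operator splitting)."""
        t = 0.0
        while t < t_end:
            pressure = flow.advance(dt, pressure)  # 3D dynamic step
            conc = chem.react(dt, conc)            # 0D batch step per cell
            t += dt
        return pressure, conc

    p0 = np.full((10, 10, 5), 2.0e7)   # initial pressure field, Pa
    c0 = np.zeros((10, 10, 5))         # initial tracer concentration
    p, c = run_coupled(FlowSimulator(), BatchChemistry(), p0, c0, 1.0e5, 1.0e3)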

  2. Measurement and modeling of oil slick transport

    Science.gov (United States)

    Jones, Cathleen E.; Dagestad, Knut-Frode; Breivik, Øyvind; Holt, Benjamin; Röhrs, Johannes; Christensen, Kai Håkon; Espeseth, Martine; Brekke, Camilla; Skrunes, Stine

    2016-10-01

    Transport characteristics of oil slicks are reported from a controlled release experiment conducted in the North Sea in June 2015, during which mineral oil emulsions of different volumetric oil fractions and a look-alike biogenic oil were released and allowed to develop naturally. The experiment used the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) to track slick location, size, and shape for ˜8 h following release. Wind conditions during the exercise were at the high end of the range considered suitable for radar-based slick detection, but the slicks were easily detectable in all images acquired by the low noise, L-band imaging radar. The measurements are used to constrain the entrainment length and representative droplet radii for oil elements in simulations generated using the OpenOil advanced oil drift model. Simultaneously released drifters provide near-surface current estimates for the single biogenic release and one emulsion release, and are used to test model sensitivity to upper ocean currents and mixing. Results of the modeling reveal a distinct difference between the transport of the biogenic oil and the mineral oil emulsion, in particular in the vertical direction, with faster and deeper entrainment of significantly smaller droplets of the biogenic oil. The difference in depth profiles for the two types of oils is substantial, with most of the biogenic oil residing below depths of 10 m, compared to the majority of the emulsion remaining above 10 m depth. This difference was key to fitting the observed evolution of the two different types of slicks.
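
    The strong size dependence of the vertical separation can be illustrated with Stokes' law for the terminal rise velocity of a small droplet; this is a generic sketch, and the density and viscosity values below are assumptions rather than values from the experiment.

    def stokes_rise_velocity(radius, rho_water=1025.0, rho_oil=900.0,
                             mu=1.3e-3, g=9.81):
        """Terminal rise velocity (m/s) of a small oil droplet via Stokes' law.

        Valid only at low droplet Reynolds number; fluid properties here
        are illustrative assumptions.
        """
        return 2.0 * g * radius**2 * (rho_water - rho_oil) / (9.0 * mu)

    # The r^2 dependence is what separates the two slick types vertically:
    # smaller droplets rise far more slowly and so remain entrained deeper.
    for r in (5e-6, 5e-5, 5e-4):  # 5 um, 50 um and 0.5 mm radii
        print(f"r = {r:.0e} m -> w = {stokes_rise_velocity(r):.2e} m/s")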

  3. The comparison of environmental effects on Michelson and Fabry-Perot interferometers utilized for displacement measurement.

    Science.gov (United States)

    Wang, Yung-Cheng; Shyu, Lih-Horng; Chang, Chung-Ping

    2010-01-01

    The optical structure of general commercial interferometers, e.g., Michelson interferometers, is based on a non-common optical path. Such interferometers suffer from environmental effects because different phase changes are induced in the different optical paths; consequently, the measurement precision is significantly influenced by tiny variations in environmental conditions. Fabry-Perot interferometers, which feature common optical paths, are insensitive to environmental disturbances. This is advantageous for precision displacement measurements under ordinary environmental conditions. To verify and analyze this influence, displacement measurements with the two types of interferometers, i.e., a self-fabricated Fabry-Perot interferometer and a commercial Michelson interferometer, have been performed and compared under various environmental disturbance scenarios. Under several test conditions, the self-fabricated Fabry-Perot interferometer was markedly less sensitive to environmental disturbances than the commercial Michelson interferometer. Experimental results have shown that errors induced by environmental disturbances in a Fabry-Perot interferometer are one fifth of those in a Michelson interferometer. This proves that an interferometer with a common optical path structure is much more independent of environmental disturbances than one with a non-common optical path structure, which makes the common-path structure beneficial for interferometers used for precision displacement measurement in ordinary environments.
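
    The sensitivity difference can be summarized with the basic fringe-counting relation; in LaTeX notation (symbols assumed here for illustration, not taken from the paper):

    \[ d = \frac{N\lambda_0}{2n}, \qquad \delta d \approx L_d\,\delta n, \]

    where $d$ is the inferred displacement, $N$ the fringe count, $\lambda_0$ the vacuum wavelength, $n$ the refractive index of air, and $L_d$ the uncompensated (dead) path difference between the two arms. In a common-path design $L_d$ is nearly zero, so environmental changes in $n$ largely cancel instead of appearing as displacement error.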

  4. Quantitative utilization of prior biological knowledge in the Bayesian network modeling of gene expression data

    Directory of Open Access Journals (Sweden)

    Gao Shouguo

    2011-08-01

    Full Text Available Abstract Background The Bayesian Network (BN) is a powerful approach to reconstructing genetic regulatory networks from gene expression data. However, expression data by itself suffers from high noise and lack of power. Incorporating prior biological knowledge can improve the performance. As each type of prior knowledge on its own may be incomplete or limited by quality issues, integrating multiple sources of prior knowledge to utilize their consensus is desirable. Results We introduce a new method to incorporate quantitative information from multiple sources of prior knowledge. It first uses a naïve Bayesian classifier to assess the likelihood of functional linkage between gene pairs based on prior knowledge. In this study we included co-citation in PubMed and semantic similarity in Gene Ontology annotation. A candidate network edge reservoir is then created in which the copy number of each edge is proportional to the estimated likelihood of linkage between the two corresponding genes. In network simulation, the Markov Chain Monte Carlo sampling algorithm is adopted, sampling from this reservoir at each iteration to generate new candidate networks. We evaluated the new algorithm using both simulated and real gene expression data, including data from a yeast cell cycle and a mouse pancreas development/growth study. Incorporating prior knowledge led to a ~2-fold increase in the number of known transcription regulations recovered, without significant change in the false positive rate. In contrast, without the prior knowledge, BN modeling is not always better than random selection, demonstrating the necessity of supplementing gene expression data with additional information in network modeling. Conclusion Our new development provides a statistical means to utilize the quantitative information in prior biological knowledge in the BN modeling of gene expression data, which significantly improves performance.
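
    A minimal sketch of the edge-reservoir construction follows; the gene names, likelihood values and scaling constant are illustrative assumptions, and the naïve Bayesian combination that would produce the likelihoods is not reproduced here.

    import random

    def build_edge_reservoir(edge_likelihood, copies_per_unit=100):
        """Reservoir in which each candidate edge appears with a copy number
        proportional to its prior likelihood of functional linkage."""
        reservoir = []
        for edge, p in edge_likelihood.items():
            reservoir.extend([edge] * max(1, round(p * copies_per_unit)))
        return reservoir

    likelihoods = {("geneA", "geneB"): 0.90,   # strong prior support
                   ("geneA", "geneC"): 0.30,
                   ("geneB", "geneC"): 0.05}   # weak prior support
    reservoir = build_edge_reservoir(likelihoods)

    # An MCMC move proposes the next candidate network by sampling here,
    # so edges with stronger prior support are proposed more often.
    print(random.choice(reservoir))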

  5. Assessment of energy utilization and leakages in buildings with building information model energy

    Directory of Open Access Journals (Sweden)

    Egwunatum I. Samuel

    2017-03-01

    Full Text Available Given the ability of building information models (BIM) to serve as a multidisciplinary data repository, this study attempts to explore and exploit the sustainability value of BIM in delivering buildings that require less energy for operations, emit less carbon dioxide, and provide conducive living environments for occupants. This objective was attained by a critical and extensive literature review that covers the following: (1) building energy consumption, (2) building energy performance and analysis, and (3) BIM and energy assessment. Literature cited in this paper shows that linking an energy analysis tool with a BIM model has helped project design teams to predict and optimize energy consumption by conducting building energy performance analysis utilizing key performance indicators such as average thermal transmittance, resulting heat demand, lighting power, solar heat gains, and ventilation heat losses. An in-depth analysis of a completed BIM-integrated construction project, the Arboleda Project in the Dominican Republic, was conducted to validate the aforementioned findings. Results show that the BIM-based energy analysis helped the design team attain the world's first positive-energy building. This study concludes that linking an energy analysis tool with a BIM model helps to expedite the energy analysis process, provide more detailed and accurate results, and deliver energy-efficient buildings. This study further recommends that the adoption of level 2 BIM and BIM integration in energy optimization analysis be demanded by building regulatory agencies for all projects regardless of procurement method (i.e., government funded or otherwise) or size.

  6. Development and validation of a preference-based measure derived from the Cambridge Pulmonary Hypertension Outcome Review (CAMPHOR) for use in cost-utility analyses

    Directory of Open Access Journals (Sweden)

    Meads David M

    2008-08-01

    Full Text Available Abstract Background Pulmonary hypertension is a severe and incurable disease with poor prognosis. A suite of new disease-specific measures – the Cambridge Pulmonary Hypertension Outcome Review (CAMPHOR) – was recently developed for use in this condition. The purpose of this study was to develop and validate a preference-based measure from the CAMPHOR that could be used in cost-utility analyses. Methods Items were selected that covered the major issues addressed by the CAMPHOR QoL scale (activities, travelling, dependence and communication). These were used to create 36 health states that were valued by 249 people representative of the UK adult population, using the time trade-off (TTO) technique. Data from the TTO interviews were analysed using both aggregate- and individual-level modelling. Finally, the original CAMPHOR validation data were used to validate the new preference-based model. Results The predicted health state values ranged from 0.962 to 0.136. The mean-level model selected for analysing the data had good explanatory power (0.936), did not systematically over- or underestimate the observed mean health state values, and showed no evidence of autocorrelation in the prediction errors. The value of less than 1 reflects a background level of ill health in state 1111, as judged by the respondents. Scores derived from the new measure had excellent test-retest reliability (0.85) and construct validity. The CAMPHOR utility score appears better able to distinguish between WHO functional classes (II and III) than the EQ-5D and SF-6D. Conclusion The tariff derived in this study can be used to classify an individual into a health state based on their responses to the CAMPHOR. The results of this study widen the evidence base for conducting economic evaluations of interventions designed to improve QoL for patients with PH.
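
    For reference, the standard time trade-off valuation underlying such health-state values can be written in LaTeX notation as (a generic sketch; the study's exact protocol may differ in detail):

    \[ u(h) = \frac{x}{t}, \]

    where a respondent judges $x$ years in full health equivalent to $t$ years in health state $h$, so $u = 1$ denotes full health and $u = 0$ denotes dead; the reported range of 0.962 to 0.136 sits within this scale.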

  7. Utility of Parental Mediation Model on Youth’s Problematic Online Gaming

    OpenAIRE

    Benrazavi, R; Teimouri, M; Griffiths, MD

    2015-01-01

    The Parental Mediation Model (PMM) was initially designed to regulate children’s attitudes towards traditional media. In the present era of prevalent online media, there is a need for similar regulatory measures. Spending long hours on social media and playing online games increase the risks of exposure to the negative outcomes of online gaming. This paper initially applied the PMM developed by European Kids Online to (i) test the reliability and validity of this model and (ii) ide...

  8. Mechanistic modeling study on process optimization and precursor utilization with atmospheric spatial atomic layer deposition

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Zhang; He, Wenjie; Duan, Chenlong [State Key Laboratory of Digital Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Chen, Rong, E-mail: rongchen@mail.hust.edu.cn [State Key Laboratory of Digital Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Shan, Bin [State Key Laboratory of Material Processing and Die & Mould Technology, School of Materials Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China)

    2016-01-15

    Spatial atomic layer deposition (SALD) is a promising technology that aims to combine the excellent uniformity and conformality of temporal atomic layer deposition (ALD) with an industrially scalable and continuous process. In this manuscript, a combined experimental and numerical model of an atmospheric SALD system is presented. To establish the connection between the process parameters and the growth efficiency, a quantitative model of reactant isolation, throughput, and precursor utilization is developed based on the separation gas flow rate, carrier gas flow rate, and precursor mass fraction. The simulation results based on this model show an inverse relation between precursor usage and carrier gas flow rate. With constant carrier gas flow, precursor usage is a monotonic function of precursor mass fraction. The precursor concentration, regardless of gas velocity, is the determining factor for the minimal residence time. The narrow gap between the precursor-injecting heads and the substrate surface in typical SALD systems leads to a low Péclet number. In this situation, diffusion rather than convection plays the leading role in precursor transport across the small gap. The fluid kinetics from the numerical model are independent of the specific structure, which is instructive for SALD geometry design as well as process optimization.
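
    The diffusion-versus-convection argument rests on the Péclet number, Pe = uL/D; the following minimal sketch uses assumed, illustrative gap-scale values rather than the paper's parameters.

    def peclet_number(velocity, gap, diffusivity):
        """Pe = u * L / D for precursor transport across the injector-substrate gap."""
        return velocity * gap / diffusivity

    # Illustrative values only: a ~1 mm gap, a modest gas velocity and a typical
    # gas-phase diffusivity give Pe < 1, i.e., diffusion-dominated transport.
    print(peclet_number(velocity=0.005, gap=1.0e-3, diffusivity=1.0e-5))  # -> 0.5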

  9. Model of sustainable utilization of organic solid waste in Cundinamarca, Colombia

    Directory of Open Access Journals (Sweden)

    Solanyi Castañeda Torres

    2017-05-01

    Full Text Available Introduction: This article proposes a model for the utilization of organic solid waste in the department of Cundinamarca, responding to the need for a tool to support decision-making in the planning and management of organic solid waste. Objective: To develop a first approximation of a conceptual, technical and mathematical optimization model to support decision-making and minimize environmental impacts. Materials and methods: A descriptive, quasi-experimental study design was applied, since fundamental characteristics of the homogeneous phenomenon under study are presented. The calculation of the model for the department's plants is based on three axes (environmental, economic and social) that are present in the general optimization equation. Results: A model for harnessing organic solid waste through the biological treatment techniques of aerobic composting and vermiculture is obtained, optimizing the system with respect to the greenhouse-gas emissions avoided and the reduction of the overall cost of final disposal of organic solid waste in sanitary landfills. Based on the economic principle of utility, which determines the environmental feasibility and sustainability of the department's organic waste processing plants, organic fertilizers such as compost and humus capture carbon and nitrogen, reducing tons of CO2.
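
    One plausible reading of such a three-axis objective, in LaTeX notation (the symbols and weighted-sum form are assumptions for illustration, not the paper's actual equation):

    \[ \min\, Z = w_{env}\,E(x) + w_{econ}\,C(x) - w_{soc}\,S(x), \]

    where $x$ is the allocation of waste between aerobic composting and vermiculture, $E$ the net greenhouse-gas emissions, $C$ the total cost of final disposal, $S$ a social-benefit term, and the $w$ weights express the relative priority of each axis.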

  10. Generation, validation, and utilization of a three-dimensional pharmacophore model for EP3 antagonists.

    Science.gov (United States)

    Mishra, Rama K; Singh, Jasbir

    2010-08-23

    The studies reported here aim to identify the important structural features that characterize human EP3 antagonists. Based on the knowledge of the low-energy conformation of the endogenous ligand, the initial hit analogs were prepared. Subsequently, a ligand-based lead optimization approach using pharmacophore model generation was utilized. A 5-point pharmacophore was constructed using the HypoGen module of Catalyst with a training set of 19 compounds whose IC50 data span four log orders. Following pharmacophore customization using a linear structure-activity regression equation, a six-feature three-dimensional predictive pharmacophore model, P6, was built, which resulted in improved predictive power. The P6 model was validated using a test set of 11 compounds, providing a correlation coefficient (R²) of 0.90 for predicted versus experimental EP3 IC50 values. This pharmacophore model has been expanded to include diverse chemotypes, and the predictive ability of the customized pharmacophore has been tested.

  11. Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic

    Science.gov (United States)

    Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat

    2017-03-01

    The problem in investing in financial assets is to choose a portfolio weighting that maximizes expected return while minimizing risk. This paper discusses the modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic. It is assumed that asset returns follow a certain distribution and that portfolio risk is measured by Value-at-Risk (VaR). The Mean-VaR portfolio optimization is carried out using a matrix-algebra approach together with the Lagrange-multiplier method and the Kuhn-Tucker conditions. The result of the modeling is a weighting-vector equation that depends on the mean return vector of the assets, the identity vector, the covariance matrix of asset returns, and a risk-tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. Based on the return data of these five stocks, the weight-composition vector and the efficient surface of the portfolio are obtained. The weight compositions and efficient-surface charts can be used as a guide for investors in investment decisions.
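
    A minimal numerical sketch of the matrix-algebra route is shown below for the closely related mean-variance case, where the Lagrange-multiplier step has a closed form; the risk-tolerance parameter tau and the three-asset data are invented for illustration, and the paper's Mean-VaR formulation differs in detail.

    import numpy as np

    def optimal_weights(mu, cov, tau):
        """Fully invested portfolio maximizing tau * mu'w - 0.5 * w'Cov w
        subject to 1'w = 1, solved with a Lagrange multiplier."""
        ones = np.ones(len(mu))
        cov_inv = np.linalg.inv(cov)
        # Stationarity: tau*mu - Cov w - gamma*1 = 0  =>  w = Cov^-1 (tau*mu - gamma*1)
        gamma = (tau * ones @ cov_inv @ mu - 1.0) / (ones @ cov_inv @ ones)
        return cov_inv @ (tau * mu - gamma * ones)

    mu = np.array([0.10, 0.07, 0.04])            # invented mean returns
    cov = np.array([[0.040, 0.006, 0.002],       # invented covariance matrix
                    [0.006, 0.025, 0.004],
                    [0.002, 0.004, 0.010]])
    w = optimal_weights(mu, cov, tau=0.1)
    print(w, w.sum())  # weights sum to 1 by construction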

  12. Measurement of a model of implementation for health care: toward a testable theory

    Directory of Open Access Journals (Sweden)

    Cook Joan M

    2012-07-01

    Full Text Available Abstract Background Greenhalgh et al. used a considerable evidence base to develop a comprehensive model of implementation of innovations in healthcare organizations [1]. However, these authors did not fully operationalize their model, making it difficult to test formally. The present paper represents a first step in operationalizing Greenhalgh et al.’s model by providing background, rationale, working definitions, and measurement of key constructs. Methods A systematic review of the literature was conducted for key words representing 53 separate sub-constructs from six of the model’s broad constructs. Using an iterative process, we reviewed existing measures and utilized or adapted items. Where no one measure was deemed appropriate, we developed other items to measure the constructs through consensus. Results The review and iterative process of team consensus identified three types of data that can be used to operationalize the constructs in the model: survey items, interview questions, and administrative data. Specific examples of each of these are reported. Conclusion Despite limitations, the mixed-methods approach to measurement using survey items, interview questions, and administrative data can facilitate research on implementation by providing investigators with a measurement tool that captures most of the constructs identified by the Greenhalgh model. These measures are currently being used to collect data concerning the implementation of two evidence-based psychotherapies disseminated nationally within the Department of Veterans Affairs. Testing of psychometric properties and subsequent refinement should enhance the utility of the measures.

  13. Integrating utilization-focused evaluation with business process modeling for clinical research improvement.

    Science.gov (United States)

    Kagan, Jonathan M; Rosas, Scott; Trochim, William M K

    2010-10-01

    New discoveries in basic science are creating extraordinary opportunities to design novel biomedical preventions and therapeutics for human disease. But the clinical evaluation of these new interventions is, in many instances, being hindered by a variety of legal, regulatory, policy and operational factors, few of which enhance research quality, the safety of study participants or research ethics. With the goal of helping increase the efficiency and effectiveness of clinical research, we have examined how the integration of utilization-focused evaluation with elements of business process modeling can reveal opportunities for systematic improvements in clinical research. Using data from the NIH global HIV/AIDS clinical trials networks, we analyzed the absolute and relative times required to traverse defined phases associated with specific activities within the clinical protocol lifecycle. Using simple median durations and Kaplan-Meier survival analysis, we show how such time-based analyses can provide a rationale for the prioritization of research process analysis and re-engineering, as well as a means for statistically assessing the impact of policy modifications, resource utilization, re-engineered processes and best practices. Successfully applied, this approach can help researchers be more efficient in capitalizing on new science to speed the development of improved interventions for human disease.
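
    A minimal sketch of the time-based analysis is given below using the lifelines package; the phase durations and censoring flags are fabricated for illustration and are not the networks' data.

    from lifelines import KaplanMeierFitter

    durations = [120, 210, 340, 95, 400, 180, 260]  # days spent in one protocol phase
    completed = [1, 1, 0, 1, 0, 1, 1]               # 0 = still in phase (censored)

    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=completed)
    print(kmf.median_survival_time_)  # median time to traverse the phase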

  14. Estimation of a Valuation Function for a Diabetes Mellitus-Specific Preference-Based Measure of Health: The Diabetes Utility Index®

    OpenAIRE

    Murali Sundaram; Smith, Michael J.; Revicki, Dennis A.; Lesley-Ann Miller; Suresh Madhavan; Gerry Hobbs

    2010-01-01

    Background: Preference-based measures of health (PBMH) provide 'preference' or 'utility' weights that enable the calculation of QALYs for the economic evaluations of interventions. The Diabetes Utility Index (DUI) was developed as a brief, self-administered, diabetes mellitus-specific PBMH that can efficiently estimate patient-derived health state utilities. Objective: To describe the development of the valuation function for the DUI, and to report the validation results of the valuation func...

  15. Model Predictive Control for Integrating Traffic Control Measures

    NARCIS (Netherlands)

    Hegyi, A.

    2004-01-01

    Dynamic traffic control measures, such as ramp metering and dynamic speed limits, can be used to better utilize the available road capacity. Due to the increasing traffic volumes and the increasing number of traffic jams the interaction between the control measures has increased such that local cont

  17. Individual differences in self-concept among smokers attempting to quit: Validation and predictive utility of measures of the smoker self-concept and abstainer self-concept.

    Science.gov (United States)

    Shadel, W G; Mermelstein, R

    1996-09-01

    We tested a theoretical model of individual differences in smoking cessation using a social-cognitive conception of the self-concept. We developed and validated measures of the smoker self-concept and the abstainer self-concept. Each scale was shown to have good internal reliability and construct validity and was distinct from other important predictive measures used in smoking research (e.g., Fagerstrom Tolerance Questionnaire, smoking rate, motivation, self-efficacy). Importantly, we demonstrated the predictive validity of the self-concept scales. The interaction of baseline measures of the smoker self-concept and abstainer self-concept predicted smoking status three months after treatment; subjects were most likely to be abstinent if they began treatment with a strong abstainer self-concept and a weak smoker self-concept. This interaction held over and above baseline smoking rate, Fagerstrom Tolerance scores, and measures of motivation and self-efficacy to quit. The utility of social-cognitive individual difference models and potential patient-treatment matching interventions are discussed.

  18. Structural modelling of thrust zones utilizing photogrammetry: Western Champsaur basin, SE France

    Science.gov (United States)

    Totake, Yukitsugu; Butler, Rob; Bond, Clare

    2016-04-01

    Recent advances in photogrammetric technologies allow geoscientists to easily obtain high-resolution 3D geospatial data across multiple scales, from rock specimen to landscape. Although the resolution and accuracy of photogrammetry models depend on various factors (quality of photography, number of overlapping photo images, distance to targets, etc.), modern photogrammetry techniques can provide data resolution comparable to laser scanning technologies for modelling various geological objects. Other advantages of photogrammetry techniques, such as high portability and low infrastructure costs, make them easy to incorporate into conventional geological surveys. Photogrammetry techniques therefore have great potential to enhance the performance of geological surveys. We present a workflow for building basin-scale 3D structural models utilizing ground-based photogrammetry along with field observations. The workflow is applied to model thrust zones in Eocene-Oligocene turbidite sequences called the Champsaur Sandstone (Grès du Champsaur), filling an Alpine foredeep basin, the Western Champsaur basin, in southeastern France. The study area is located ca. 20 km northeast of Gap and extends approximately 10 km from east to west and 6 km from north to south. During a 2-week fieldwork campaign, over 9400 photographs were taken at 133 locations with a handheld digital camera from the ground and were georeferenced with a handheld GPS. Photo images were processed within the software PhotoScan to build a 3D photogrammetric model. The constructed photogrammetry model was then imported into the software Move to map faults and geological layers along with georeferenced field data, so that geological cross sections and 3D surfaces could be produced. The workflow succeeded in producing detailed topography and textures of the landscape at ~1 m resolution and enabled characterization of the thrust systems in the study area at bed-scale resolution. Three-dimensionally characterized architectures of thrust zones at high

  19. Modeling menopause: The utility of rodents in translational behavioral endocrinology research.

    Science.gov (United States)

    Koebele, Stephanie V; Bimonte-Nelson, Heather A

    2016-05-01

    The human menopause transition and aging are each associated with an increase in a variety of health risk factors including, but not limited to, cardiovascular disease, osteoporosis, cancer, diabetes, stroke, sexual dysfunction, affective disorders, sleep disturbances, and cognitive decline. It is challenging to systematically evaluate the biological underpinnings associated with the menopause transition in the human population. For this reason, rodent models have been invaluable tools for studying the impact of gonadal hormone fluctuations and eventual decline on a variety of body systems. While it is essential to keep in mind that some of the mechanisms associated with aging and the transition into a reproductively senescent state can differ when translating from one species to another, animal models provide researchers with opportunities to gain a fundamental understanding of the key elements underlying reproduction and aging processes, paving the way to explore novel pathways for intervention associated with known health risks. Here, we discuss the utility of several rodent models used in the laboratory for translational menopause research, examining the benefits and drawbacks in helping us to better understand aging and the menopause transition in women. The rodent models discussed are ovary-intact, ovariectomy, and 4-vinylcyclohexene diepoxide for the menopause transition. We then describe how these models may be implemented in the laboratory, particularly in the context of cognition. Ultimately, we aim to use these animal models to elucidate novel perspectives and interventions for maintaining a high quality of life in women, and to potentially prevent or postpone the onset of negative health consequences associated with these significant life changes during aging. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Utility of population pharmacokinetic modeling in the assessment of therapeutic protein-drug interactions.

    Science.gov (United States)

    Chow, Andrew T; Earp, Justin C; Gupta, Manish; Hanley, William; Hu, Chuanpu; Wang, Diane D; Zajic, Stefan; Zhu, Min

    2014-05-01

    Assessment of pharmacokinetic (PK) based drug-drug interactions (DDI) is essential for ensuring patient safety and drug efficacy. With the substantial increase in therapeutic proteins (TP) entering the market and drug development, evaluation of TP-drug interactions (TPDI) has become increasingly important. Unlike for small-molecule (e.g., chemical-based) drugs, conducting TPDI studies often presents logistical challenges; population PK (PPK) modeling may be a viable approach to dealing with these issues. A working group was formed with members from the pharmaceutical industry and the FDA to assess the utility of PPK-based TPDI assessment, including study designs, data analysis methods, and implementation strategy. This paper summarizes key issues for consideration as well as a proposed strategy, with a focus on (1) the PPK approach for exploratory assessment; (2) the PPK approach for confirmatory assessment; (3) the importance of data quality; (4) implementation strategy; and (5) potential regulatory implications. Advantages and limitations of the approach are also discussed.

  1. Mathematical model of a utility firm. Final technical report, Part I

    Energy Technology Data Exchange (ETDEWEB)

    1983-08-21

    Utility companies are in the predicament of having to make forecasts, and draw up plans for the future, in an increasingly fluid and volatile socio-economic environment. The project reported here aims to contribute to an understanding of the economic and behavioral processes that take place within a firm and outside it. Three main topics are treated. The first is the representation of the characteristics of the members of an organization, to the extent that these characteristics seem pertinent to the processes of interest. The second is the appropriate management of the processes of change by an organization. The third deals with the competitive striving towards an economic equilibrium among the members of society at large, on the theory that this process might be modeled in a way similar to the intra-organizational ones. This volume covers mainly the first topic.

  2. In-House Communication Support System Based on the Information Propagation Model Utilizing Social Networks

    Science.gov (United States)

    Takeuchi, Susumu; Teranishi, Yuuichi; Harumoto, Kaname; Shimojo, Shinji

    Almost all companies now utilize computer networks to support speedier and more effective in-house information-sharing and communication. However, existing systems are designed to support communication only within the same department. Therefore, in our research, we propose an in-house communication support system based on the “Information Propagation Model (IPM).” The IPM is proposed to realize word-of-mouth communication in a social network and to support information-sharing on the network. By applying the system in a real company, we found that information could be exchanged between different and unrelated departments, and that such exchanges of information could help to build new relationships between users who are far apart in the social network.
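
    A word-of-mouth spread of this kind can be sketched as an independent-cascade process on a graph; the IPM's actual propagation rules are not reproduced here, and the example network and probability are illustrative.

    import random
    import networkx as nx

    def propagate(graph, seeds, p=0.2, rng=random.Random(42)):
        """Spread information from seed users; each newly informed user gets
        one chance to inform each neighbor with probability p."""
        informed, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for user in frontier:
                for nb in graph.neighbors(user):
                    if nb not in informed and rng.random() < p:
                        informed.add(nb)
                        nxt.append(nb)
            frontier = nxt
        return informed

    g = nx.karate_club_graph()  # stand-in for an intra-company social network
    print(sorted(propagate(g, seeds=[0])))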

  3. Modeling and optimization of processes for clean and efficient pulverized coal combustion in utility boilers

    Directory of Open Access Journals (Sweden)

    Belošević Srđan V.

    2016-01-01

    Full Text Available Pulverized coal-fired power plants should provide higher efficiency of energy conversion, flexibility in terms of boiler loads and fuel characteristics, and reduced emission of pollutants such as nitrogen oxides. Modification of the combustion process is a cost-effective technology for NOx control. Mathematical modeling is regularly used for the optimization of complex processes, such as turbulent reactive flow in coal-fired furnaces. The NOx emission reduction by combustion modifications in the 350 MWe Kostolac B boiler furnace, tangentially fired by pulverized Serbian lignite, is investigated in this paper. Numerical experiments were performed with an in-house developed three-dimensional differential comprehensive combustion code incorporating a fuel- and thermal-NO formation/destruction reaction model. The code was developed to be easily used by engineering staff for process analysis in boiler units. A broad range of operating conditions was examined, such as fuel and preheated air distribution over the burners and tiers, operation mode of the burners, grinding fineness and quality of coal, boiler loads, cold air ingress, recirculation of flue gases, water-wall ash deposition, and the combined effect of different parameters. The predictions show that a NOx emission reduction of up to 30% can be achieved by proper combustion organization in the case-study furnace, with flame position control. The impact of the combustion modifications on boiler operation was evaluated by boiler thermal calculations, suggesting that the facility should be controlled within narrow limits of the operating parameters. Such a complex approach to pollutant control enables the evaluation of alternative solutions for achieving efficient and low-emission operation of utility boiler units. [Project of the Ministry of Science of the Republic of Serbia, No. TR-33018: Increase in energy and ecology efficiency of processes in pulverized coal-fired furnace and optimization of utility steam boiler air preheater by using in

  4. Utilization of isolated marine mussel cells as an in vitro model to assess xenobiotic-induced genotoxicity.

    Science.gov (United States)

    Zhang, Y F; Chen, S Y; Qu, M J; Adeleye, A O; Di, Y N

    2017-10-01

    Freshly isolated cells are used as an ideal experimental model in in vitro toxicology analysis, especially for the detection of genotoxic effects induced by diverse xenobiotics. In the present study, heavy metals (Zn, Cu, Cd, Pb) and PCBs were selected as representative xenobiotics to verify the ability of an in vitro model to assess genotoxic effects in cells of marine mussels (Mytilus galloprovincialis). DNA damage and chromosome aberration were assessed in freshly isolated cells from haemolymph, gill and digestive gland by single cell gel electrophoresis and the micronucleus assay, respectively. Gill cells were the most sensitive to Zn exposure among the three cell types, indicating tissue-specific genotoxicity. Significantly higher DNA aberrations were induced by Cu in haemocytes compared to Cd and Pb, indicating chemical-specific genotoxicity. An additive effect was detected after combined heavy metal and PCB exposure, suggesting an interaction of the selected xenobiotics. To our knowledge, this is the first attempt to study the complex effects of organic and/or inorganic contaminants using freshly isolated cells from marine mussels. Genetic responses were shown to occur and to be maintained in vitro in response to short-term xenobiotic-induced stresses. The utilization of the in vitro model could provide a rapid tool to investigate comprehensive toxic effects in marine invertebrates and monitor environmental health. Copyright © 2017. Published by Elsevier Ltd.

  5. Decision Support for Test Trench Location Selection with 3D Semantic Subsurface Utility Models

    NARCIS (Netherlands)

    Racz, Paulina; Syfuss, Lars; Schultz, Carl; van Buiten, Marinus; olde Scholtenhuis, Léon Luc; Vahdatikhaki, Faridaddin; Doree, Andries G.; Lin, Ken-Yu; El-Gohary, Nora; Tang, Pingbo

    Subsurface utility construction work often involves repositioning of, and working between, existing buried networks. As the number of utilities in modern cities grows, excavation work becomes more prone to incidents. To prevent such incidents, excavation workers request existing 2D utility maps,

  6. A Quantitative Human Spacecraft Design Evaluation Model for Assessing Crew Accommodation and Utilization

    Science.gov (United States)

    Fanchiang, Christine

    Crew performance, including both accommodation and utilization factors, is an integral part of every human spaceflight mission, from commercial space tourism to the demanding journey to Mars and beyond. Spacecraft were historically built by engineers and technologists trying to adapt the vehicle to cutting-edge rocketry with the assumption that the astronauts could be trained and would adapt to the design. By and large, that is still the current state of the art. It is recognized, however, that poor human-machine design integration can lead to catastrophic and deadly mishaps. The premise of this work relies on the idea that if an accurate predictive model exists to forecast crew performance issues as a result of spacecraft design and operations, it can help designers and managers make better decisions throughout the design process, and ensure that the crewmembers are well-integrated with the system from the very start. The result should be a high-quality, user-friendly spacecraft that optimizes the utilization of the crew while keeping them alive, healthy, and happy during the course of the mission. Therefore, the goal of this work was to develop an integrative framework to quantitatively evaluate a spacecraft design from the crew performance perspective. The approach presented here is done at a very fundamental level, starting with identifying and defining basic terminology, and then building up important axioms of human spaceflight that lay the foundation for how such a framework can be developed. With the framework established, a methodology for characterizing the outcome using a mathematical model was developed by pulling from existing metrics and data collected on human performance in space. Representative test scenarios were run to show what information could be garnered and how it could be applied as a useful, understandable metric for future spacecraft design. While the model is the primary tangible product from this research, the more interesting outcome of

  7. Modelling of landfill gas adsorption with bottom ash for utilization of renewable energy

    Energy Technology Data Exchange (ETDEWEB)

    Miao, Chen

    2011-10-06

    Energy crisis, environmental pollution and climate change are serious challenges to people worldwide. In the 21st century, humanity is turning to research on new renewable-energy technologies so as to slow down global warming and develop society in an environmentally sustainable way. Landfill gas, produced by biodegradable municipal solid waste in landfills, is a renewable energy source. In this work, landfill gas utilization for energy generation is introduced. Landfill gas can be used to produce hydrogen by steam reforming reactions, and the fuel cell system includes a steam reformer. A sewage plant in Cologne, Germany, has successfully run a phosphoric acid fuel cell power station on biogas for more than 50,000 hours. Landfill gas thus may be used as fuel for electricity generation via fuel cell systems. To explain the feasibility of landfill gas utilization via fuel cells, the thermodynamics of landfill gas steam reforming are discussed by simulations. In practice, methane-rich gas can be obtained by landfill gas purification and upgrading. This work investigates a new method for upgrading: landfill gas adsorption with bottom ash, studied experimentally. Bottom ash is a by-product of municipal solid waste incineration, and some of its physical and chemical properties are analysed in this work. The landfill gas adsorption experimental data show that bottom ash can be used as a potential adsorbent for landfill gas adsorption to remove CO2. In addition, the alkalinity of the bottom ash eluate can be reduced in these adsorption processes. Therefore, the interactions between landfill gas and bottom ash can be explained by series reactions accordingly. Furthermore, a conceptual model involving landfill gas adsorption with bottom ash is developed. In this thesis, the parameters of the landfill gas adsorption equilibrium equations are obtained by fitting experimental data; alternatively, these functions can be deduced with a theoretical approach
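
    A minimal sketch of the equilibrium-fitting step follows; a Langmuir isotherm form is assumed here, and the data points are invented placeholders rather than the thesis's measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(p, q_max, b):
        """Adsorbed amount q as a function of CO2 partial pressure p."""
        return q_max * b * p / (1.0 + b * p)

    p = np.array([0.05, 0.1, 0.2, 0.4, 0.8])  # bar (invented)
    q = np.array([0.8, 1.4, 2.1, 2.8, 3.3])   # mol CO2 / kg bottom ash (invented)
    (q_max, b), _ = curve_fit(langmuir, p, q, p0=(4.0, 5.0))
    print(q_max, b)  # fitted capacity and affinity parameters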

  8. Measurement and modeling of advanced coal conversion processes

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, P.R.; Serio, M.A.; Hamblen, D.G. (Advanced Fuel Research, Inc., East Hartford, CT (USA)); Smoot, L.D.; Brewster, B.S. (Brigham Young Univ., Provo, UT (USA))

    1990-01-01

    The overall objective of this program is the development of predictive capability for the design, scale-up, simulation, control and feedstock evaluation in advanced coal conversion devices. This technology is important to reduce the technical and economic risks inherent in utilizing coal, a feedstock whose variable and often unexpected behavior presents a significant challenge. This program will merge significant advances made at Advanced Fuel Research, Inc. (AFR) in measuring and quantitatively describing the mechanisms of coal conversion behavior with technology being developed at Brigham Young University (BYU) in comprehensive computer codes for mechanistic modeling of entrained-bed gasification. Additional capabilities in predicting pollutant formation will be implemented and the technology will be expanded to fixed-bed reactors. The foundation for describing coal-specific conversion behavior is AFR's Functional Group (FG) and Devolatilization, Vaporization, and Crosslinking (DVC) models, developed under previous and on-going METC-sponsored programs. These models have demonstrated the capability to describe the time-dependent evolution of individual gas species, and the amount and characteristics of tar and char. The combined FG-DVC model will be integrated with BYU's comprehensive two-dimensional reactor model, PCGC-2, which is currently the most widely used reactor simulation for combustion or gasification. Success in this program will be a major step in improving predictive capabilities for coal conversion processes, including demonstrated accuracy and reliability and a generalized "first principles" treatment of coals based on readily obtained composition data. The progress during the fifteenth quarter of the program is presented. 56 refs., 41 figs., 5 tabs.

  9. Utilization and cost of a new model of care for managing acute knee injuries: the Calgary acute knee injury clinic

    Directory of Open Access Journals (Sweden)

    Lau Breda HF

    2012-12-01

    Full Text Available Abstract Background Musculoskeletal disorders (MSDs) affect a large proportion of the Canadian population and present a huge problem that continues to strain primary healthcare resources. Currently, the Canadian healthcare system depicts a clinical care pathway for MSDs that is inefficient and ineffective. Therefore, a new inter-disciplinary team-based model of care for managing acute knee injuries was developed in Calgary, Alberta, Canada: the Calgary Acute Knee Injury Clinic (C-AKIC). The goal of this paper is to evaluate and report on the appropriateness, efficiency, and effectiveness of the C-AKIC through healthcare utilization and costs associated with acute knee injuries. Methods This quasi-experimental study measured and evaluated cost and utilization associated with specific healthcare services for patients presenting with acute knee injuries. The goal was to compare patients receiving care from two clinical care pathways: the existing pathway (i.e., the comparison group) and a new model, the C-AKIC (i.e., the experimental group). This was accomplished through the use of a Healthcare Access and Patient Satisfaction Questionnaire (HAPSQ). Results Data from 138 questionnaires were analyzed in the experimental group and 136 in the comparison group. A post-hoc analysis determined that both groups were statistically similar in socio-demographic characteristics. With respect to utilization, patients receiving care through the C-AKIC used significantly fewer resources. Overall, patients receiving care through the C-AKIC incurred 37% of the cost of patients with knee injuries in the comparison group, incurring significantly lower costs. The total aggregate average cost for the C-AKIC group was $2,549.59 compared to $6,954.33 for the comparison group (p Conclusions The Calgary Acute Knee Injury Clinic was able to manage and treat knee-injured patients for less cost than the existing state of healthcare delivery. The

  10. Measurements of the Backstreaming Proton Ions in the Self-Magnetic Pinch (SMP) Diode Utilizing the Copper Activation Technique

    Science.gov (United States)

    Mazarakis, Michael; Cuneo, Michael; Fournier, Sean; Johnston, Mark; Kiefer, Mark; Leckbee, Joshua; Simpson, Sean; Renk, Timothy; Webb, Timothy; Bennett, Nichelle

    2016-10-01

    The results presented here were obtained with an SMP diode mounted at the front high voltage end of the 8-10-MV RITS Self-Magnetically Insulated Transmission Line (MITL) voltage adder. Our experiments had two objectives: first, to measure the contribution of the back-streaming proton currents emitted from the anode target, and second, to evaluate the energy of those ions and hence the actual Anode-Cathode (A-K) gap voltage. The accelerating voltage quoted in the literature is estimated utilizing para-potential flow theories. Thus, it is interesting to have another independent measurement of the A-K voltage. We have measured the back-streaming protons emitted from the anode and propagating through a hollow cathode tip for various diode configurations and different techniques of target cleaning treatment, namely, heating at very high temperatures with DC and pulsed current, with RF plasma cleaning, and with both plasma cleaning and heating. We have also evaluated the A-K gap voltage by energy filtering techniques. Sandia is operated by Sandia Corporation, a subsidiary of Lockheed Martin Company, for the US DOE NNSA under Contract No. DE-AC04-94AL85000.

  11. Deriving welfare measures from discrete choice experiments: inconsistency between current methods and random utility and welfare theory.

    Science.gov (United States)

    Lancsar, Emily; Savage, Elizabeth

    2004-09-01

    Discrete choice experiments (DCEs) are being used increasingly in health economics to elicit preferences for products and programs. The results of such experiments have been used to calculate measures of welfare or more specifically, respondents' 'willingness to pay' (WTP) for products and programs and their 'marginal willingness to pay' (MWTP) for the attributes that make up such products and programs. In this note we show that the methods currently used to derive measures of welfare from DCEs in the health economics literature are not consistent with random utility theory (RUT), or with microeconomic welfare theory more generally. The inconsistency with welfare theory is an important limitation on the use of such WTP estimates in cost-benefit analyses. We describe an alternative method of deriving measures of welfare (compensating variation) from DCEs that is consistent with RUT and is derived using welfare theory. We demonstrate its use in an empirical application to derive the WTP for asthma medication and compare it to the results elicited from the method currently used in the health economics literature.
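
    For a logit-based DCE, the welfare measure consistent with RUT is the well-known "logsum" form of compensating variation; in LaTeX notation (notation assumed here, not taken from the paper):

    \[ E[CV] = \frac{1}{\alpha}\left[\ln\sum_{j} e^{V_j^{1}} - \ln\sum_{j} e^{V_j^{0}}\right], \]

    where $\alpha$ is the marginal utility of income and $V_j^{0}$, $V_j^{1}$ are the systematic utilities of the alternatives before and after the change, in contrast to the simple ratio-of-coefficients MWTP calculations criticized above.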

  12. A physical model for measuring thermally-induced block displacements

    Science.gov (United States)

    Bakun-Mazor, Dagan; Feldhiem, Aviran; Keissar, Yuval; Hatzor, Yossef H.

    2016-04-01

    A new model for thermally-induced block displacement in discontinuous rock slopes has recently been suggested. The model consists of a discrete block that is separated from the rock mass by a tension crack and rests on an inclined plane. The tension crack is filled with a wedge block or rock fragments. Irreversible block sliding is assumed to develop in response to climatic thermal fluctuations and the consequent contraction and expansion of the sliding block material. While a tentative analytical solution for this model is already available, we explore here the possibility of obtaining such a permanent, thermally-induced rock block displacement under fully controlled laboratory conditions, and the sensitivity of the mechanism to geometry, mechanical properties, and temperature fluctuations. A large-scale concrete physical model (50×150×60 cm³) is being examined in a Climate-Controlled Room (CCR). The CCR permits accurate control of ambient temperature from 5 to 45 degrees Celsius. The permanent plastic displacement is measured using four displacement transducers and a high-resolution (29-megapixel) visual-range camera. A series of thermocouples measure the heating front inside the sliding block, so thermal diffusivity is evaluated from the measured thermal gradient and heat flow. In order to select the appropriate concrete mixture, the mechanical and thermo-physical properties of concrete samples are determined in the lab. The friction angle and shear stiffness of the sliding interface are determined utilizing a hydraulic, servo-controlled direct shear apparatus. Uniaxial compression tests are performed to determine the uniaxial compressive strength, Young's modulus and Poisson's ratio of the intact block material using a stiff triaxial load frame. Thermal conductivity and the linear thermal expansion coefficient are determined experimentally using a self-constructed measuring system. Because this experiment is still in progress, preliminary

  13. The utility of the Health Plan Employer Data and Information Set (HEDIS) asthma measure to predict asthma-related outcomes.

    Science.gov (United States)

    Berger, William E; Legorreta, Antonio P; Blaiss, Michael S; Schneider, Eric C; Luskin, Allan T; Stempel, David A; Suissa, Samy; Goodman, David C; Stoloff, Stuart W; Chapman, Jean A; Sullivan, Sean D; Vollmer, Bill; Weiss, Kevin B

    2004-12-01

    The Health Plan Employer Data and Information Set (HEDIS) measures are used extensively to measure quality of care. To evaluate selected aspects of the HEDIS measure of appropriate use of asthma medications. Claims data were analyzed for commercial health plan members who met HEDIS criteria for persistent asthma in 1999. The use of asthma medications was evaluated in the subsequent year with stratification by controller medication and a measure of adherence (days' supply). Multivariate logistic regressions were used to evaluate the association among long-term controller therapy for persistent asthma, adherence to therapy, and asthma-related hospitalizations or emergency department (ED) visits, controlling for demographic, preindex utilization, and other confounding characteristics. Of the 49,637 persistent asthma patients, approximately 35.7% were using 1 class of long-term controller medications, 18.4% were using more than 1 class, and 45.9% were not using such medication. More than 25% of the persistent asthma patients did not use any asthma medication in the subsequent year. Patients with low adherence to controller medication had a significantly higher risk (odds ratio [OR], 1.72; 95% confidence interval [CI], 1.42-2.08) of ED visit or hospitalization relative to patients not using any controllers compared with persons with moderate (OR, 0.84; 95% CI, 0.57-1.23) or high (OR, 0.70; 95% CI, 0.34-1.44) adherence. Patients receiving a high days' supply of inhaled corticosteroids had the lowest risk of ED visit or hospitalization (OR, 0.37; 95% CI, 0.05-2.69). Our findings suggest that refinements to the HEDIS measure method for identifying patients with persistent asthma may be needed.

  14. Effectiveness and Utility of a Case-Based Model for Delivering Engineering Ethics Professional Development Units

    Directory of Open Access Journals (Sweden)

    Heidi Ann Hahn

    2015-04-01

    Full Text Available This article describes an action research project conducted at Los Alamos National Laboratory (LANL) to resolve a problem with the ability of licensed and/or certified engineers to obtain the ethics-related professional development units or hours (PDUs or PDHs) needed to maintain their credentials. Because of the recurring requirement and the static nature of the information, an initial in-depth training followed by annually updated refresher training was proposed. A case model approach, with online delivery, was selected as the optimal pedagogical model for the refresher training. In the first two years, the only data collected were throughput and information retention. Response rates indicated that the approach was effective in helping licensed professional engineers obtain the needed PDUs. The rates of correct responses suggested that knowledge transfer regarding ethical reasoning had occurred in the initial training and had been retained through the refresher. In FY13, after completing the refresher, learners received a survey asking their opinion of the effectiveness and utility of the course, as well as their impressions of the case-study format vs. the typical presentation format. Results indicate that the courses have been favorably received and that the case-study method supports most of the pedagogical needs of adult learners as well as, if not better than, presentation-based instruction. Future plans for improvement are focused on identifying and evaluating methods for enriching online delivery of the engineering ethics cases.

  15. Impact of the health services utilization and improvement model (HUIM) on self-efficacy and satisfaction among a Head Start population.

    Science.gov (United States)

    Tataw, David B; Bazargan-Hejazi, Shahrzad

    2010-01-01

    The aim of this paper is to evaluate and report the impact of the Health Services Utilization Improvement Model (HUIM) on utilization and satisfaction with care, as well as knowledge regarding prevention, detection, and treatment of asthma, diabetes, tuberculosis, and child injury among low-income health services consumers. HUIM outcome data show that the coupling of parental education and ecological factors (service linkage and provider orientation) impacts the health services utilization experience of low-income consumers, as evidenced by improved self-efficacy (knowledge and voice) and satisfaction with care from a child's regular provider. Participation in HUIM activities also improved low-income consumers' knowledge of disease identification, self-care, and prevention.

  16. Mars Colony in situ resource utilization: An integrated architecture and economics model

    Science.gov (United States)

    Shishko, Robert; Fradet, René; Do, Sydney; Saydam, Serkan; Tapia-Cortez, Carlos; Dempster, Andrew G.; Coulton, Jeff

    2017-09-01

    This paper reports on our effort to develop an ensemble of specialized models to explore the commercial potential of mining water/ice on Mars in support of a Mars Colony. This ensemble starts with a formal systems architecting framework to describe a Mars Colony and capture its artifacts' parameters and technical attributes. The resulting database is then linked to a variety of "downstream" analytic models. In particular, we integrated an extraction process (i.e., "mining") model, a simulation of the colony's environmental control and life support infrastructure known as HabNet, and a risk-based economics model. The mining model focuses on the technologies associated with in situ resource extraction, processing, storage and handling, and delivery. This model computes the production rate as a function of the systems' technical parameters and the local Mars environment. HabNet simulates the fundamental sustainability relationships associated with establishing and maintaining the colony's population. The economics model brings together market information, investment and operating costs, along with measures of market uncertainty and Monte Carlo techniques, with the objective of determining the profitability of commercial water/ice in situ mining operations. All told, over 50 market and technical parameters can be varied in order to address "what-if" questions, including colony location.
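
    A toy illustration of the risk-based economics step: Monte Carlo sampling of uncertain market and production parameters to estimate the probability that a commercial water-mining venture clears its up-front investment. Every distribution and dollar figure below is an invented placeholder, not a value from the paper:

        import numpy as np

        rng = np.random.default_rng(42)
        n_trials = 10_000
        years = np.arange(1, 16)                     # assumed 15-year operating horizon

        capex = 2.0e9                                # up-front investment, $
        opex = rng.normal(1.5e8, 3.0e7, n_trials)    # annual operating cost, $
        price = rng.lognormal(np.log(4000.0), 0.5, n_trials)  # $/kg water, uncertain market
        production = rng.normal(5.0e4, 5.0e3, n_trials)       # kg water delivered per year
        discount = 0.10

        cash_flow = production * price - opex        # net revenue per year, $
        npv = -capex + (cash_flow[:, None] / (1 + discount) ** years).sum(axis=1)
        print(f"P(NPV > 0) = {(npv > 0).mean():.2f}")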

  17. Development and verification of a model for estimating the screening utility in the detection of PCBs in transformer oil.

    Science.gov (United States)

    Terakado, Shingo; Glass, Thomas R; Sasaki, Kazuhiro; Ohmura, Naoya

    2014-01-01

    A simple new model for estimating the screening performance (false positive and false negative rates) of a given test for a specific sample population is presented. The model is shown to give good results on a test population, and is used to estimate the performance on a sampled population. Using the model developed in conjunction with regulatory requirements and the relative costs of the confirmatory and screening tests allows evaluation of the screening test's utility in terms of cost savings. Testers can use the methods developed to estimate the utility of a screening program using available screening tests with their own sample populations.
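
    The trade-off the model quantifies can be sketched in a few lines: given a screen's sensitivity and specificity and the prevalence of positives in the sampled population, compare the expected cost per sample of screen-then-confirm testing with confirmatory testing alone, along with the false-negative rate that must be tolerated. All rates and costs below are illustrative, not values from the study:

        # Illustrative screening-utility calculation (invented numbers).
        prevalence = 0.05          # fraction of PCB-contaminated oils
        sensitivity = 0.98         # screen's true-positive rate
        specificity = 0.90         # screen's true-negative rate
        c_screen, c_confirm = 10.0, 200.0   # cost per test

        # Screen everything; run the confirmatory test only on screen-positives.
        p_screen_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
        cost_with_screen = c_screen + p_screen_pos * c_confirm
        false_negatives_per_1000 = 1000 * prevalence * (1 - sensitivity)

        print(f"cost/sample with screening: {cost_with_screen:.2f}")
        print(f"cost/sample confirm-only:   {c_confirm:.2f}")
        print(f"missed positives per 1000:  {false_negatives_per_1000:.1f}")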

  18. Estimating urban roadside emissions with an atmospheric dispersion model based on in-field measurements.

    Science.gov (United States)

    Pu, Yichao; Yang, Chao

    2014-09-01

    Urban vehicle emission models have been utilized to calculate pollutant concentrations at both microscopic and macroscopic levels based on vehicle emission rates that few studies have been able to validate. The objective of our research is to estimate urban roadside emissions and calibrate the estimates with in-field measurement data. We calculated vehicle emissions based on localized emission rates and used an atmospheric dispersion model to estimate roadside emissions. A non-linear regression model was applied to calibrate the localized emission rates against the in-field measurement data. With the calibrated emission rates, urban roadside emissions can be estimated with high accuracy.
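
    A minimal sketch of the calibration idea: fit a roadside emission rate to measured concentrations through a simple ground-level Gaussian plume, a stand-in for the paper's dispersion model (the spread parameterisation and all data below are invented):

        import numpy as np
        from scipy.optimize import curve_fit

        U = 2.0  # wind speed, m/s (assumed)

        def plume(x, q):
            """Centreline ground-level concentration at downwind distance x."""
            sigma_y, sigma_z = 0.08 * x, 0.06 * x      # crude near-field spreads
            return q / (np.pi * U * sigma_y * sigma_z)

        x_obs = np.array([20.0, 50.0, 100.0, 200.0])   # distance from road, m
        c_obs = np.array([310.0, 52.0, 13.0, 3.4])     # measured concentration, ug/m3

        (q_hat,), _ = curve_fit(plume, x_obs, c_obs, p0=[1.0])
        print(f"calibrated emission rate: {q_hat:.0f} ug/s (per unit source)")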

  19. Knowledge Translation for Research Utilization: Design of a Knowledge Translation Model at Tehran University of Medical Sciences

    Science.gov (United States)

    Majdzadeh, Reza; Sadighi, Jila; Nejat, Saharnaz; Mahani, Ali Shahidzade; Gholami, Jaleh

    2008-01-01

    Introduction: The present study aimed to generate a model that would provide a conceptual framework for linking disparate components of knowledge translation. A theoretical model of such would enable the organization and evaluation of attempts to analyze current conditions and to design interventions on the transfer and utilization of research…

  20. A Model for Measuring Puffery Effects.

    Science.gov (United States)

    Vanden Bergh, Bruce G.; Reid, Leonard N.

    The purpose of this paper is to describe and discuss a conceptual model for experimentally investigating the effects of advertising puffery. The various sections contain a discussion of puffery as a legal concept, a description and discussion of the proposed model, research support for the model, and implications for future research on puffery.…

  1. Factor Structure, Reliability and Measurement Invariance of the Alberta Context Tool and the Conceptual Research Utilization Scale, for German Residential Long Term Care

    Science.gov (United States)

    Hoben, Matthias; Estabrooks, Carole A.; Squires, Janet E.; Behrens, Johann

    2016-01-01

    We translated the Canadian residential long term care versions of the Alberta Context Tool (ACT) and the Conceptual Research Utilization (CRU) Scale into German, to study the association between organizational context factors and research utilization in German nursing homes. The rigorous translation process was based on best practice guidelines for tool translation, and we previously published methods and results of this process in two papers. Both instruments are self-report questionnaires used with care providers working in nursing homes. The aim of this study was to assess the factor structure, reliability, and measurement invariance (MI) between care provider groups responding to these instruments. In a stratified random sample of 38 nursing homes in one German region (Metropolregion Rhein-Neckar), we collected questionnaires from 273 care aides, 196 regulated nurses, 152 allied health providers, 6 quality improvement specialists, 129 clinical leaders, and 65 nursing students. The factor structure was assessed using confirmatory factor models. The first model included all 10 ACT concepts. We also decided a priori to run two separate models for the scale-based and the count-based ACT concepts as suggested by the instrument developers. The fourth model included the five CRU Scale items. Reliability scores were calculated based on the parameters of the best-fitting factor models. Multiple-group confirmatory factor models were used to assess MI between provider groups. Rather than the hypothesized ten-factor structure of the ACT, confirmatory factor models suggested 13 factors. The one-factor solution of the CRU Scale was confirmed. The reliability was acceptable (>0.7 in the entire sample and in all provider groups) for 10 of 13 ACT concepts, and high (0.90–0.96) for the CRU Scale. We could demonstrate partial strong MI for both ACT models and partial strict MI for the CRU Scale. Our results suggest that the scores of the German ACT and the CRU Scale for nursing

  2. Compressor Part I: Measurement and Design Modeling

    Directory of Open Access Journals (Sweden)

    Thomas W. Bein

    1999-01-01

    The method used to design the 125-ton compressor is first reviewed, and some related performance curves are predicted based on a quasi-3D method. In addition to an overall performance measurement, a series of instruments was installed on the compressor to identify where the measured performance differs from the predicted performance. The measurement techniques for providing the diagnostic flow parameters are also described briefly. Part II of this paper provides predictions of flow details in the areas of the compressor where there were differences between the measured and predicted performance.

  3. Multiscale analysis of surface soil moisture dynamics in a mesoscale catchment utilizing an integrated ecohydrological model

    Science.gov (United States)

    Korres, W.; Reichenau, T. G.; Schneider, K.

    2012-12-01

    Soil moisture is one of the fundamental variables in hydrology, meteorology and agriculture, influencing the partitioning of solar energy into latent and sensible heat flux as well as the partitioning of precipitation into runoff and percolation. Numerous studies have shown that in addition to natural factors (rainfall, soil, topography etc.) agricultural management is one of the key drivers for spatio-temporal patterns of soil moisture in agricultural landscapes. Interactions between plant growth, soil hydrology and soil nitrogen transformation processes are modeled by using a dynamically coupled modeling approach. The process-based ecohydrological model components of the integrated decision support system DANUBIA are used to identify the important processes and feedbacks determining soil moisture patterns in agroecosystems. Integrative validation of plant growth and surface soil moisture dynamics serves as a basis for a spatially distributed modeling analysis of surface soil moisture patterns in the northern part of the Rur catchment (1100 sq km), Western Germany. An extensive three-year dataset (2007-2009) of surface soil moisture, plant (LAI, organ-specific biomass and N) and soil (texture, N, C) measurements was collected. Plant measurements were carried out biweekly for winter wheat, maize, and sugar beet during the growing season. Soil moisture was measured with three FDR soil moisture stations. Meteorological data were measured with an eddy flux station. The results of the model validation showed very good agreement between the modeled plant parameters (biomass, green LAI) and the measured parameters, with values of Willmott's index of agreement between 0.84 and 0.98. The modeled surface soil moisture (0-20 cm) also showed very favorable agreement with the measurements for winter wheat and sugar beet, with an RMSE between 1.68 and 3.45 Vol.-%. For maize, the RMSE was less favorable, particularly in the 1.5 months prior to harvest. The modeled soil
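
    Willmott's index of agreement used in the validation above is straightforward to compute; a small sketch with invented soil moisture series:

        import numpy as np

        def willmott_d(obs, mod):
            """Willmott's index of agreement; 1.0 means perfect match."""
            obs, mod = np.asarray(obs, float), np.asarray(mod, float)
            num = np.sum((mod - obs) ** 2)
            den = np.sum((np.abs(mod - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
            return 1.0 - num / den

        observed = np.array([24.1, 25.3, 22.8, 20.5, 19.9, 21.7])   # Vol.-%
        modelled = np.array([23.5, 24.8, 23.9, 21.0, 19.2, 22.4])   # Vol.-%
        rmse = float(np.sqrt(np.mean((modelled - observed) ** 2)))
        print(f"d = {willmott_d(observed, modelled):.2f}, RMSE = {rmse:.2f} Vol.-%")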

  4. Resource planning for gas utilities: Using a model to analyze pivotal issues

    Energy Technology Data Exchange (ETDEWEB)

    Busch, J.F.; Comnes, G.A.

    1995-11-01

    With the advent of wellhead price decontrols that began in the late 1970s and the development of open access pipelines in the 1980s and 90s, gas local distribution companies (LDCs) now have increased responsibility for their gas supplies and face an increasingly complex array of supply and capacity choices. Heretofore this responsibility had been shared with the interstate pipelines that provide bundled firm gas supplies. Moreover, gas supply and deliverability (capacity) options have multiplied as the pipeline network becomes increasingly interconnected and as new storage projects are developed. There is now a fully functioning financial market for commodity price hedging instruments, and a secondary market (called capacity release) now exists on interstate pipelines. As a result of these changes in the natural gas industry, interest in resource planning and computer modeling tools for LDCs is increasing. Although in some ways the planning time horizon has become shorter for the gas LDC, the responsibility conferred on the LDC and the complexity of the planning problem have increased. We examine current gas resource planning issues in the wake of the Federal Energy Regulatory Commission's (FERC) Order 636. Our goal is twofold: (1) to illustrate the types of resource planning methods and models used in the industry and (2) to illustrate some of the key tradeoffs among types of resources, reliability, and system costs. To assist us, we utilize a commercially available dispatch and resource planning model and examine four types of resource planning problems: the evaluation of new storage resources, the evaluation of buyback contracts, the computation of avoided costs, and the optimal tradeoff between reliability and system costs. To make the illustration of methods meaningful yet tractable, we developed a prototype LDC and used it for the majority of our analysis.

  5. On the utilization of hydrological modelling for road drainage design under climate and land use change.

    Science.gov (United States)

    Kalantari, Zahra; Briel, Annemarie; Lyon, Steve W; Olofsson, Bo; Folkeson, Lennart

    2014-03-15

    Road drainage structures are often designed using methods that do not consider process-based representations of a landscape's hydrological response. This may create inadequately sized structures, as coupled land cover and climate changes can lead to an amplified hydrological response. This study aims to quantify potential increases in runoff in response to future extreme rain events in a 61 km² catchment (40% forested) in southwest Sweden using a physically-based hydrological modelling approach. We simulate peak discharge and water level (stage) at two types of pipe bridges and one culvert, all of which are commonly used at Swedish road/stream intersections, under combined forest clear-cutting and future climate scenarios for 2050 and 2100. The frequency of changes in peak flow and water level varies with time (seasonality) and storm size. These changes indicate that the magnitude of peak flow and the runoff response are highly correlated with season rather than storm size. In all scenarios considered, the dimensions of the current culvert are insufficient to handle the increase in water level estimated using a physically-based modelling approach. It also appears that the water level at the pipe bridges changes differently depending on the size and timing of the storm events. The findings of the present study and the approach put forward should be considered when planning investigations on and maintenance for areas at risk of high water flows. In addition, the research highlights the utility of physically-based hydrological models to identify the appropriateness of road drainage structure dimensioning.

  6. The utility of behavioral economics in expanding the free-feed model of obesity.

    Science.gov (United States)

    Rasmussen, Erin B; Robertson, Stephen H; Rodriguez, Luis R

    2016-06-01

    Animal models of obesity are numerous and diverse in terms of identifying specific neural and peripheral mechanisms related to obesity; however, they are limited when it comes to behavior. The standard behavioral measure of food intake in most animal models occurs in a free-feeding environment. While easy and cost-effective for the researcher, the free-feeding environment omits some of the most important features of obesity-related food consumption, namely properties of food availability such as effort and delay to obtaining food. Behavioral economics expands the behavioral measures of obesity animal models by identifying such behavioral mechanisms. First, economic demand analysis allows researchers to understand the role of effort in food procurement and how physiological and neural mechanisms are related. Second, studies on delay discounting contribute to a growing literature that shows that sensitivity to delayed food- and food-related outcomes is likely a fundamental process of obesity. Together, these data expand the animal model in a manner that better characterizes how environmental factors influence food consumption.

  7. Experimental and Numerical Analysis of Triaxially Braided Composites Utilizing a Modified Subcell Modeling Approach

    Science.gov (United States)

    Cater, Christopher; Xiao, Xinran; Goldberg, Robert K.; Kohlman, Lee W.

    2015-01-01

    A combined experimental and analytical approach was performed for characterizing and modeling triaxially braided composites with a modified subcell modeling strategy. Tensile coupon tests were conducted on a [0°/60°/-60°] braided composite at angles of 0°, 30°, 45°, 60° and 90° relative to the axial tow of the braid. It was found that measured coupon strength varied significantly with the angle of the applied load, and each coupon direction exhibited unique final failures. The subcell modeling approach implemented in the finite element software LS-DYNA was used to simulate the various tensile coupon test angles. The modeling approach was successful in predicting both the coupon strength and reported failure mode for the 0°, 30° and 60° loading directions. The model over-predicted the strength in the 90° direction; however, the experimental results show a strong influence of free edge effects on damage initiation and failure. In the absence of these local free edge effects, the subcell modeling approach showed promise as a viable and computationally efficient analysis tool for triaxially braided composite structures. Future work will focus on validation of the approach for predicting the impact response of the braided composite against flat panel impact tests.

  8. The potential of the Child Health Utility 9D Index as an outcome measure for child dental health

    Science.gov (United States)

    2014-01-01

    Background The Child Health Utility 9D (CHU9D) is a relatively new generic child health-related quality of life (HRQoL) measure—designed to be completed by children—which enables the calculation of utility values. The aim is to investigate the use of the CHU9D Index as an outcome measure for child dental health in New Zealand. Method A survey was conducted of children aged between 6 and 9 years attending for routine dental examinations in community clinics in Dunedin (New Zealand) in 2012. The CHU9D was used, along with the Child Perceptions Questionnaire (CPQ), a validated oral health-related quality of life (OHRQoL) measure. Socio-demographic characteristics (sex, age, ethnicity and household deprivation) were recorded. Dental therapists undertook routine clinical examinations, with charting recorded for each child for decayed, missing and filled deciduous teeth (dmft) at the d3 level. Results One hundred and forty 6-to-9-year-olds (50.7% female) took part in the study (93.3% participation rate). The mean d3mft was 2.4 (SD = 2.6; range 0 to 9). Both the CHU9D and the CPQ detected differences in the impact of dental caries, with scores in the expected direction: children who presented with caries had higher CPQ scores (indicating poorer OHRQoL) than those who were free of apparent caries. Children with no apparent caries had a higher mean CHU9D score than those with caries (indicating better HRQoL). The difference for the CPQ was statistically significant, but for the CHU9D the difference was not significant. When the two indices were compared, there was a significant difference in mean CHU9D scores by the prevalence of CPQ and subscale impacts, with children experiencing no impacts having mean CHU9D scores closer to 1.0 (representing perfect health). Conclusion The CHU9D may be useful in dental research. Further exploration in samples with different caries experience is required. The use of the CHU9D in child oral health studies will enable the calculation of

  9. Measurement-based load modeling: Theory and application

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The load model is one of the most important elements in power system operation and control. However, owing to its complexity, load modeling remains an open and very difficult problem. Summarizing our work on measurement-based load modeling in China over more than twenty years, this paper systematically introduces the mathematical theory and applications of load modeling. The flow chart and algorithms for measurement-based load modeling are presented. A composite load model structure with 13 parameters is also proposed. Analysis results based on trajectory sensitivity theory indicate the importance of the load model parameters for identification. Case studies show the accuracy of the presented measurement-based load model. The load model thus built has been validated by field measurements all over China. Future working directions on measurement-based load modeling are also discussed in the paper.
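
    The identification idea can be sketched on a much simpler static ZIP load model (three parameters rather than the paper's thirteen): fit the model's power-voltage response to measured samples by least squares. All data below are invented:

        import numpy as np
        from scipy.optimize import curve_fit

        def zip_load(v, p_z, p_i, p_p):
            """Active power vs. per-unit voltage: constant-Z, -I and -P parts."""
            return p_z * v**2 + p_i * v + p_p

        v_meas = np.array([0.95, 0.97, 1.00, 1.02, 1.05])   # recorded voltage, p.u.
        p_meas = np.array([0.93, 0.96, 1.00, 1.03, 1.08])   # recorded power, p.u.

        (pz, pi_, pp), _ = curve_fit(zip_load, v_meas, p_meas, p0=[0.4, 0.3, 0.3])
        print(f"ZIP fractions: Z={pz:.2f}, I={pi_:.2f}, P={pp:.2f}")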

  10. Methane emissions measurements of natural gas components using a utility terrain vehicle and portable methane quantification system

    Science.gov (United States)

    Johnson, Derek; Heltzel, Robert

    2016-11-01

    Greenhouse Gas (GHG) emissions are a growing problem in the United States (US). Methane (CH4) is a potent GHG produced by several stages of the natural gas sector. Current scrutiny focuses on the natural gas boom associated with unconventional shale gas; however, focus should still be given to conventional wells and outdated equipment. In an attempt to quantify these emissions, researchers modified an off-road utility terrain vehicle (UTV) to include a Full Flow Sampling system (FFS) for methane quantification. GHG emissions were measured from non-producing and remote low throughput natural gas components in the Marcellus region. Site audits were conducted at eleven locations and leaks were identified and quantified at seven locations including at a low throughput conventional gas and oil well, two out-of-service gathering compressors, a conventional natural gas well, a coalbed methane well, and two conventional and operating gathering compressors. No leaks were detected at the four remaining sites, all of which were coal bed methane wells. The total methane emissions rate from all sources measured was 5.3 ± 0.23 kg/hr, at a minimum.

  11. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers, resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error of simulation when comparing the results to a large amount of real-world ope...
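
    A toy discrete event simulation in the same spirit, far simpler than the ATLAS tool: events arrive at a Poisson rate into one buffer, a fixed pool of processing units drains them, and peak buffer occupancy is tracked. All parameters are illustrative:

        import heapq
        import random

        random.seed(1)
        rate, service, units, horizon = 1000.0, 0.004, 5, 10.0   # Hz, s, count, s

        queue = busy = peak = 0
        events = [(random.expovariate(rate), "arrival")]
        while events:
            t, kind = heapq.heappop(events)
            if t > horizon:
                break
            if kind == "arrival":
                queue += 1
                heapq.heappush(events, (t + random.expovariate(rate), "arrival"))
            else:                       # a processing unit finished
                busy -= 1
            while queue and busy < units:   # dispatch waiting events to free units
                queue -= 1
                busy += 1
                heapq.heappush(events, (t + service, "done"))
            peak = max(peak, queue)
        print("peak buffer occupancy:", peak)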

  12. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this paper, we introduce a discrete event-based simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers; resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error in simulation when comparing the results to a large amount of real-world ...

  13. Lectures on dynamical models for quantum measurements

    NARCIS (Netherlands)

    Nieuwenhuizen, T.M.; Perarnau-llobet, M.; Balian, R.

    2014-01-01

    In textbooks, ideal quantum measurements are described in terms of the tested system only by the collapse postulate and Born's rule. This level of description offers a rather flexible position for the interpretation of quantum mechanics. Here we analyse an ideal measurement as a process of interaction…

  15. Migration Flows: Measurement, Analysis and Modeling

    NARCIS (Netherlands)

    Willekens, F.J.; White, Michael J.

    2016-01-01

    This chapter is an introduction to the study of migration flows. It starts with a review of major definition and measurement issues. Comparative studies of migration are particularly difficult because different countries define migration differently and measurement methods are not harmonized.

  16. Refining Change Measure with the Rasch Model

    Science.gov (United States)

    Zaporozhets, Olga; Fox, Christine M.; Beltyukova, Svetlana A.; Laux, John M.; Piazza, Nick J.; Salyers, Kathleen

    2015-01-01

    The purpose of this study was to develop a linear measure of change using University of Rhode Island Change Assessment items representing Prochaska and DiClemente's theory. The resulting Toledo Measure of Change is short, easy to use, and provides reliable scores for identifying individuals' stage of change and progression within that stage.

  19. CO2-mitigation measures through reduction of fossil fuel burning in power utilities. Which road to go?

    Energy Technology Data Exchange (ETDEWEB)

    Kaupp, A. [Energetica International Inc., Suva (Fiji)

    1996-12-31

    Five conditions, at minimum, should be examined in the comparative analysis of CO2-mitigation options for the power sector. Under the continuing constraint of scarce financial resources for any private or public investment in the power sector, the following combination of requirements characterises a successful CO2-mitigation project: (1) Financial attractiveness for private or public investors. (2) Low, or even negative, long-range marginal costs per ton of 'CO2 saved'. (3) High impact on CO2-mitigation, which indicates a large market potential for the measure. (4) A relatively small number of individual investments required to achieve the impact; in other words, logistical difficulties in project implementation are minimised. (5) Projects that are 'socially fair' and have minimal negative impact on any segment of society. This paper deals with options to reduce carbonaceous fuel burning in the power sector. Part I explains how projects should be selected and classified. Part II describes the technical options. Since reduction of carbonaceous fuel burning may be achieved through Demand Side Management (DSM) and Supply Side Management (SSM), both are treated. Within the context of this paper, SSM does not mean expanding power supply as demand grows; it means generating and distributing power as economically and efficiently as possible. In too many instances DSM has degenerated into efficient lighting programs and utility-managed incentive and rebate programs. To what extent this is a desirable situation for utilities in developing countries, which face totally different problems than their counterparts in highly industrialised countries, remains to be seen. Which road to go is the topic of this paper.

  20. Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.

    2014-12-01

    As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that, with an increase in throughput, protein quantification estimation from the native measured peptides has become a computational task. A limitation of existing computationally-driven protein quantification methods is that most ignore protein variation, such as alternate splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses, such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that are outside the dominant pattern, or the existence of multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study from mouse plasma samples and demonstrate that BP-Quant achieves similar accuracy as the current state-of-the-art methods at proteoform identification with significantly better specificity. BP-Quant is available as MATLAB® and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.

  1. An econometric analysis of changes in arable land utilization using multinomial logit model in Pinggu district, Beijing, China.

    Science.gov (United States)

    Xu, Yueqing; McNamara, Paul; Wu, Yanfang; Dong, Yue

    2013-10-15

    Arable land in China has been decreasing as a result of rapid population growth and economic development as well as urban expansion, especially in developed regions around cities where quality farmland quickly disappears. This paper analyzed changes in arable land utilization during 1993-2008 in the Pinggu district, Beijing, China, developed a multinomial logit (MNL) model to determine the spatial driving factors influencing arable land-use change, and simulated arable land transition probabilities. Land-use maps, as well as socio-economic and geographical data, were used in the study. The results indicated that arable land decreased significantly between 1993 and 2008. Lost arable land shifted into orchard, forestland, settlement, and transportation land. Significant differences existed in arable land transitions among different landform areas. Slope, elevation, population density, urbanization rate, distance to settlements, and distance to roadways were strong drivers influencing arable land transition to other uses. The MNL model proved effective for predicting transition probabilities from arable land to other land-use types, and can thus be used for scenario analysis to develop land-use policies and land-management measures in this metropolitan area.
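
    A minimal sketch of fitting an MNL transition model with scikit-learn, on synthetic cells carrying three of the drivers named above (the data, and therefore the probabilities, are purely illustrative):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 2000
        X = np.column_stack([
            rng.uniform(0, 25, n),     # slope, degrees
            rng.uniform(0, 800, n),    # elevation, m
            rng.uniform(0, 5, n),      # distance to roadway, km
        ])
        y = rng.choice(["arable", "orchard", "forest", "settlement"], n)

        # The default lbfgs solver fits a true multinomial (softmax) model.
        mnl = LogisticRegression(max_iter=1000).fit(X, y)
        cell = [[10.0, 300.0, 1.2]]    # one cell: slope, elevation, road distance
        probs = mnl.predict_proba(cell)[0].round(3)
        print(dict(zip(mnl.classes_, probs)))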

  2. Utilizing observations of vegetation patterns to infer ecosystem parameters and test model predictions

    Science.gov (United States)

    Penny, G.; Daniels, K. E.; Thompson, S. E.

    2012-12-01

    Periodic vegetation patterns arise globally in arid and semi-arid environments, and are believed to indicate competing positive and negative feedbacks between resource availability and plant uptake at different length scales. The patterns have become the object of two separate research themes, one focusing on observation of ecosystem properties and vegetation morphology, and another focusing on the development of theoretical models and descriptions of pattern behavior. Given the growing body of work in both directions, there is a compelling need to unify both strands of research by bringing together observations of large-scale pattern morphology with predictions made by various models. Previous attempts have employed spectral analysis on pattern images and inverse modeling on one-dimensional transects of pattern images, yet have not made a concerted effort to rigorously confront predictions with observational data in two dimensions. This study takes the first steps towards unification, utilizing high-resolution landscape-scale images of vegetation patterns over multiple years at five different locations, including Niger, Central Mexico, Baja California, Texas, and Australia. Initial analyses of the observed patterns reveal considerable departures from the idealized morphologies predicted by models. Pattern wavelengths, while clustered around a local average, vary through space and are frequently altered by pattern defects such as missing or broken bands. While often locally homogeneous, pattern orientation also varies through space, allowing the correlations between landscape features and changes in local pattern morphology to be explored. Stationarity of the pattern can then be examined by comparing temporal changes in morphology with local climatic fluctuations. Ultimately, by identifying homogeneous regions of coherent pattern, inversion approaches can be applied to infer model parameters and build links between observable pattern and landscape features and the
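
    A small sketch of the spectral analysis mentioned above: estimate the dominant wavelength of a banded pattern from the 2-D power spectrum of an image, here a synthetic field with 16 m bands plus noise:

        import numpy as np

        nx, dx = 256, 1.0                                  # pixels, metres per pixel
        x = np.arange(nx) * dx
        rng = np.random.default_rng(0)
        pattern = (np.sin(2 * np.pi * x / 16.0)[None, :]   # bands along x, 16 m period
                   + 0.3 * rng.standard_normal((nx, nx)))

        power = np.abs(np.fft.fft2(pattern - pattern.mean())) ** 2
        fx = np.fft.fftfreq(nx, d=dx)
        kx, ky = np.meshgrid(fx, fx)
        k = np.hypot(kx, ky).ravel()
        p = np.where(k > 0, power.ravel(), 0.0)            # mask the zero-frequency peak
        print(f"dominant wavelength ~ {1.0 / k[np.argmax(p)]:.1f} m")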

  3. Parameterization of a rainfall-runoff model based on the utility of the forecasts for a specific stakeholder

    Science.gov (United States)

    Cappelletti, Matteo; Toth, Elena

    2016-04-01

    The work presents the application of a new method for the calibration of a hydrological rainfall-runoff model, based on the use of utility functions. The utility function is defined on the basis of the specific purpose of the desired predictions, according to the needs of the stakeholders that will use them: in the present case, the purpose is the identification of future streamflow occurrences that will surpass an assigned threshold runoff, thus helping the stakeholder in decisions concerning the issuance of flood watches and warnings in the operation of a flood forecasting system. The chosen utility function is based on both the absolute error of the model and the values of the observed streamflow. In an application to a mid-sized mountain watershed in Tuscany (Italy), the model response obtained with the utility-function parameterization was also compared against traditional mono- and multi-objective calibration approaches. The results, evaluated also using skill scores based on false and missed alarms as well as on the probability of detection and frequency of hits of the threshold runoff (widely adopted when assessing the value of both meteorological and hydrological forecasts in real-world flood warning systems), demonstrate that the proposed approach may allow an improvement of the model performances, compared with traditional mono-objective and multi-objective calibration procedures, with respect to the actual utility of the forecasts for a specific stakeholder.
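
    A stylised sketch of the calibration idea: weight absolute errors more heavily whenever the observed flow exceeds a warning threshold, so the optimiser favours parameters that reproduce threshold exceedances. The one-parameter "model" below, a scaled copy of the observations, is purely illustrative:

        import numpy as np
        from scipy.optimize import minimize_scalar

        q_obs = np.array([5.0, 8.0, 30.0, 120.0, 60.0, 12.0])   # observed flow, m3/s
        threshold = 50.0                                        # warning threshold

        def objective(a):
            q_mod = a * q_obs                                   # toy rainfall-runoff surrogate
            weight = np.where(q_obs > threshold, 5.0, 1.0)      # flood errors cost more
            return np.sum(weight * np.abs(q_mod - q_obs))

        best = minimize_scalar(objective, bounds=(0.5, 1.5), method="bounded")
        print(f"calibrated parameter: {best.x:.3f}")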

  4. Internal efficiency of nutrient utilization: what is it and how to measure it during vegetative plant growth?

    Science.gov (United States)

    Santa-María, Guillermo E; Moriconi, Jorge I; Oliferuk, Sonia

    2015-06-01

    Efficient use of the resources required by plants to sustain crop production is considered an important objective in agriculture. In this context, the idea of developing crops with an enhanced ability to utilize mineral nutrients already taken up by roots has been proposed. In recent years powerful tools that allow the association of phenotypic variation with high-resolution genetic maps of crop plants have also emerged. To take advantage of these tools, accurate methods are needed to estimate the internal efficiency of nutrient utilization (ENU) at the whole-plant level, which requires using suitable conceptual and experimental approaches. Here we highlight some inconsistencies in the definitions of ENU commonly used for ENU 'phenotyping' at the vegetative stage and suggest that it would be convenient to adopt a dynamic definition. The idea that ENU should provide information about the relationship between carbon and mineral nutrient economies mainly during the period under which growth is actually affected by low internal nutrient concentration is here advocated as a guide for the selection of adequate operational ENU formulae for the vegetative stage. The desirability of using experimental approaches that allow removal of the influence of nutrient acquisition efficiency on ENU estimations is highlighted. It is proposed that the use of simulation models could help refine the conclusions obtained through these experimental procedures. Some potential limitations in breeding for high ENU are also considered.

  5. Theoretical modeling of a self-referenced dual mode SPR sensor utilizing indium tin oxide film

    Science.gov (United States)

    Srivastava, Sachin K.; Verma, Roli; Gupta, Banshi D.

    2016-06-01

    A prism-based dual mode SPR sensor was theoretically modeled to work as a self-referenced sensor in the spectral interrogation scheme. Self-referenced sensing was achieved by sandwiching an indium tin oxide thin film between the prism base and the metal layer. The proposed sensor possesses two plasmon modes similar to long- and short-range SPRs, which we refer to as LRSPR and SRSPR by analogy. However, these modes do not possess the usual long-range character because of the losses introduced by the imaginary part of the indium tin oxide (ITO) dielectric function. One of the two plasmon modes responds to changes in analyte refractive index while the other remains fixed. The influence of various design parameters on the performance of the sensor was evaluated. The performance of the proposed sensor was compared, via control simulations, with established dual mode geometries utilizing silicon dioxide (SiO2), Teflon AF-1600 and Cytop. The design parameters of the established geometries were optimized to obtain self-referenced sensing operation. Trade-offs between the resonance spectral width, minimum reflectivity, shift in resonance wavelength and angle of incidence were examined for optimal design. The present study will be useful in the fabrication of self-referenced sensors where the ambient conditions are not quite stable.

  6. Biomimetic peptide-based models of [FeFe]-hydrogenases: utilization of phosphine-containing peptides.

    Science.gov (United States)

    Roy, Souvik; Nguyen, Thuy-Ai D; Gan, Lu; Jones, Anne K

    2015-09-07

    Two synthetic strategies for incorporating diiron analogues of [FeFe]-hydrogenases into short peptides via phosphine functional groups are described. First, utilizing the amine side chain of lysine as an anchor, phosphine carboxylic acids can be coupled via amide formation to resin-bound peptides. Second, artificial, phosphine-containing amino acids can be directly incorporated into peptides via solution phase peptide synthesis. The second approach is demonstrated using three amino acids each with a different phosphine substituent (diphenyl, diisopropyl, and diethyl phosphine). In total, five distinct monophosphine-substituted, diiron model complexes were prepared by reaction of the phosphine-peptides with diiron hexacarbonyl precursors, either (μ-pdt)Fe2(CO)6 or (μ-bdt)Fe2(CO)6 (pdt = propane-1,3-dithiolate, bdt = benzene-1,2-dithiolate). Formation of the complexes was confirmed by UV/Vis, FTIR and (31)P NMR spectroscopy. Electrocatalysis by these complexes is reported in the presence of acetic acid in mixed aqueous-organic solutions. Addition of water results in enhancement of the catalytic rates.

  7. Experienced Practitioners’ Beliefs Utilized to Create a Successful Massage Therapist Conceptual Model: a Qualitative Investigation

    Science.gov (United States)

    Kennedy, Anne B.; Munk, Niki

    2017-01-01

    Background The massage therapy profession in the United States has grown exponentially, with 35% of the profession’s practitioners in practice for three years or less. Investigating personal and social factors with regard to the massage therapy profession could help to identify constructs needed to be successful in the field. Purpose This data-gathering exercise explores massage therapists’ perceptions of what makes a successful massage therapist, in order to provide guidance for future research. Success is defined as supporting oneself and one’s practice solely through massage therapy and related, revenue-generating field activity. Participants and Setting Ten successful massage therapy practitioners from around the United States, each with a minimum of five years of experience. Research Design Semistructured qualitative interviews were used in an analytic induction framework; index cards with preidentified concepts printed on them were utilized to enhance conversation. An iterative process of interview coding and analysis was used to determine themes and subthemes. Results Based on the participants’ input, the categories in which therapists needed to be successful were organized into four main themes: effectively establish therapeutic relationships, develop massage therapy business acumen, seek valuable learning environments and opportunities, and cultivate strong social ties and networks. The four themes operate within specific contexts (e.g., regulation and licensing requirements in the therapists’ state), which may also influence the success of the massage therapist. Conclusions The model needs to be tested to explore which constructs explain variability in success and attrition rate. Limitations and future research implications are discussed. PMID:28690704

  8. A Cold Model Aerodynamical Test of Air-Staged Combustion in a Tangential Firing Utility Boiler

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hui-juan; HUI Shi-en; ZHOU Qu-lan

    2007-01-01

    The purpose of this paper is to present the flow field in the 300 MW tangential firing utility boiler that uses the Low NOx Concentric Firing System (LNCFS). The method of cold isothermal simulation ensures geometric and boundary-condition similarity, and the condition of self-modeling is met. The experimental results show that as the secondary air deflection angle increases, the mixing of primary and secondary air becomes slower, the average turbulence magnitude of the main combustion zone decreases, and the relative diameter of the tangential firing circle enlarges. When the velocity pressure ratio of the secondary air to the primary air (p2/p1) increases, the mixing of the secondary air and the primary air becomes stronger, the average turbulence magnitude of the main combustion zone increases, and the relative diameter of the tangential firing circle becomes larger. Because the over fire air (OFA), laid out near the wall, has a powerful penetration, the relative diameter of the tangential firing circle on the section of the OFA is very small, but the average turbulence magnitude is great. When the velocity pressure ratio of the OFA to the primary air (pOFA/p1) increases, the relative diameter of the tangential firing circle on the section of the OFA grows only slightly, the average turbulence magnitude becomes larger, and the penetration of the OFA becomes stronger.

  9. Research utilization in the building industry: decision model and preliminary assessment

    Energy Technology Data Exchange (ETDEWEB)

    Watts, R.L.; Johnson, D.R.; Smith, S.A.; Westergard, E.J.

    1985-10-01

    The Research Utilization Program was conceived as a far-reaching means for managing the interactions of the private sector and the federal research sector as they deal with energy conservation in buildings. The program emphasizes a private-public partnership in planning a research agenda and in applying the results of ongoing and completed research. The results of this task support the hypothesis that the transfer of R&D results to the buildings industry can be accomplished more efficiently and quickly by a systematic approach to technology transfer. This systematic approach involves targeting decision makers, assessing research and information needs, properly formatting information, and then transmitting the information through trusted channels. The purpose of this report is to introduce elements of a market-oriented knowledge base, which would be useful to the Building Systems Division, the Office of Buildings and Community Systems and their associated laboratories in managing a private-public research partnership on a rational, systematic basis. This report presents conceptual models and data bases that can be used in formulating a technology transfer strategy and in planning technology transfer programs.

  10. A Measure of Learning Model Complexity by VC Dimension

    Institute of Scientific and Technical Information of China (English)

    WANG Wen-jian; ZHANG Li-xia; XU Zong-ben

    2002-01-01

    When developing models there is always a trade-off between model complexity and model fit. In this paper, a measure of learning model complexity based on VC dimension is presented, and some relevant mathematical theory surrounding the derivation and use of this metric is summarized. The measure allows modelers to control the amount of error that is returned from a modeling system and to state upper bounds on the amount of error that the modeling system will return on all future, as yet unseen and uncollected data sets. It is possible for modelers to use the VC theory to determine which type of model more accurately represents a system.
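
    The flavour of such a measure can be shown with the classic Vapnik confidence term, which bounds the expected risk by the empirical risk plus a function of VC dimension h and sample size n (a standard textbook bound, not necessarily the paper's exact metric):

        import math

        def vc_confidence(h, n, delta=0.05):
            """Confidence term holding with probability 1 - delta."""
            return math.sqrt((h * (math.log(2 * n / h) + 1) + math.log(4 / delta)) / n)

        train_error, h, n = 0.08, 25, 10_000
        print(f"guaranteed risk <= {train_error + vc_confidence(h, n):.3f}")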

  11. Optimal urban water conservation strategies considering embedded energy: coupling end-use and utility water-energy models.

    Science.gov (United States)

    Escriva-Bou, A.; Lund, J. R.; Pulido-Velazquez, M.; Spang, E. S.; Loge, F. J.

    2014-12-01

    Although most freshwater resources are used in agriculture, a greater amount of energy is consumed per unit of water supply for urban areas. Therefore, efforts to reduce the carbon footprint of water in cities, including the energy embedded within household uses, can be an order of magnitude larger than for other water uses. This characteristic of urban water systems creates a promising opportunity to reduce global greenhouse gas emissions, particularly given rapidly growing urbanization worldwide. Based on a previous Water-Energy-CO2 emissions model for household water end uses, this research introduces a probabilistic two-stage optimization model considering technical and behavioral decision variables to obtain the most economical strategies to minimize household water and water-related energy bills given both water and energy price shocks. Results show that adoption rates of measures that reduce energy-intensive appliance use increase significantly, resulting in an overall 20% growth in indoor water conservation if household dwellers include the energy cost of their water use. To analyze the consequences at utility scale, we developed an hourly water-energy model based on data from East Bay Municipal Utility District in California, including residential consumption, finding that water end uses account for roughly 90% of total water-related energy, while the 10% managed by the utility is worth over $12 million annually. Once the entire end-use + utility model was completed, several demand-side management conservation strategies were simulated for the city of San Ramon. In this smaller water district, roughly 5% of total EBMUD water use, we found that the optimal household strategies can reduce total GHG emissions by 4% and the utility's energy cost by over $70,000/yr. Especially interesting from the utility perspective could be the "smoothing" of water use peaks by avoiding daytime irrigation, which among other benefits might reduce utility energy costs by 0.5% according to our

  12. 2-Deoxyglucose incorporation into rat brain glycogen during measurement of local cerebral glucose utilization by the 2-deoxyglucose method

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, T.; Kaufman, E.E.; Sokoloff, L.

    1984-10-01

    The incorporation of 14C into glycogen in rat brain has been measured under the same conditions that exist during the measurement of local cerebral glucose utilization by the autoradiographic 2-(14C)deoxyglucose method. The results demonstrate that approximately 2% of the total 14C in brain 45 min after the pulse of 2-(14C)deoxyglucose is contained in the glycogen portion, and, in fact, incorporated into alpha-1-4 and alpha-1-6 deoxyglucosyl linkages. When the brain is removed by dissection, as is routinely done in the course of the procedure of the 2-(14C)deoxyglucose method to preserve the structure of the brain for autoradiography, the portion of total brain 14C contained in glycogen falls to less than 1%, presumably because of postmortem glycogenolysis which restores much of the label to deoxyglucose-phosphates. In any case, the incorporation of the 14C into glycogen is of no consequence to the validity of the autoradiographic deoxyglucose method, not because of its small magnitude, but because 2-(14C)deoxyglucose is incorporated into glycogen via (14C)deoxyglucose-6-phosphate, and the label in glycogen represents, therefore, an additional "trapped" product of deoxyglucose phosphorylation by hexokinase. With the autoradiographic 2-(14C)deoxyglucose method, in which only total 14C concentration in the brain tissue is measured by quantitative autoradiography, it is essential that all the labeled products derived directly or indirectly from (14C)deoxyglucose phosphorylation by hexokinase be retained in the tissue; their chemical identity is of no significance.

  13. Information support model and its impact on utility, satisfaction and loyalty of users

    Directory of Open Access Journals (Sweden)

    Sead Šadić

    2016-11-01

    Full Text Available In today’s modern age, information systems are of vital importance for the successful performance of any organization. The most important role of any information system is its information support. This paper develops an information support model and presents the results of a survey examining the effects of such a model. The survey was performed among the employees of Brčko District Government and comprised three phases. The first phase assesses the influence of the quality of information support, and of the information itself, on the use of information support in decision making. The second phase examines the impact of information support in decision making on the perceived availability of, and user satisfaction with, information support. The third phase examines the effects of perceived usefulness and information support satisfaction on user loyalty. The model is presented using six hypotheses, which were tested by means of a multivariate regression analysis. The model shows that the quality of information support and of information is of vital importance in the decision-making process. Perceived usefulness and customer satisfaction are essential for the continued usage of information support. The model is universal and, if slightly modified, can be applied wherever the satisfaction of clients and users of a service is measured.

  14. Optimal parametric modelling of measured short waves

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    The spectral analysis of measured short waves can efficiently be carried out by the fast Fourier transform technique. Even though many present techniques can be used for the simulation of time series waves, these may not provide accurate...

  15. Power Utility Maximization in an Exponential Lévy Model Without a Risk-free Asset

    Institute of Scientific and Technical Information of China (English)

    Qing Zhou

    2005-01-01

    We consider the problem of maximizing the expected power utility from terminal wealth in a market where logarithmic securities prices follow a Lévy process. By Girsanov's theorem, we give explicit solutions for the power utility of undiscounted terminal wealth in terms of the Lévy-Khintchine triplet.

  16. Modelling and Measurements of MAST Neutron Emission

    OpenAIRE

    Klimek, Iwona

    2016-01-01

    Measurements of neutron emission from a fusion plasma can provide a wealth of information on the underlying temporal, spatial and energy distributions of reacting ions and how they are affected by a wide range of magneto-hydro-dynamic (MHD) instabilities. This thesis focuses on the interpretation of the experimental measurements recorded by neutron flux monitors with and without spectroscopic capabilities installed on the Mega Ampere Spherical Tokamak (MAST). In particular, the temporally and...

  17. Measuring and modelling the structure of chocolate

    Science.gov (United States)

    Le Révérend, Benjamin J. D.; Fryer, Peter J.; Smart, Ian; Bakalis, Serafim

    2015-01-01

    The cocoa butter present in chocolate exists as six different polymorphs. To achieve the desired crystal form (βV), traditional chocolate manufacturers use relatively slow cooling. A model was developed to predict the temperature of chocolate products during processing as well as the crystal structure of cocoa butter throughout the process. A set of ordinary differential equations describes the kinetics of fat crystallisation. The parameters were obtained by fitting the model to a set of DSC curves. The heat transfer equations were coupled to the kinetic model and solved using commercially available CFD software. A single-crystal XRD method with a novel subtraction approach was developed to quantify the cocoa butter structure in chocolate directly, and the results were compared with those predicted by the model. The model was proven to predict phase change temperature during processing accurately (±1°C). Furthermore, it was possible to correctly predict phase changes and polymorphous transitions. The good agreement between the model and experimental data on the model geometry allows a better design and control of industrial processes.
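
    A generic stand-in for such kinetics: a single first-order ODE for the crystallised fraction X, driven by a temperature-dependent rate along an imposed cooling profile (the paper fits a full ODE set to DSC data; all values below are invented):

        import numpy as np
        from scipy.integrate import solve_ivp

        def cooling(t):
            return 45.0 - 0.2 * t                    # degC, linear cooling ramp

        def dXdt(t, X):
            T = cooling(t)
            k = 0.02 * max(28.0 - T, 0.0)            # rate grows with undercooling below 28 degC
            return k * (1.0 - X)                     # first-order approach to full crystallisation

        sol = solve_ivp(dXdt, (0.0, 150.0), [0.0], dense_output=True)
        for t in (30, 60, 90, 120):
            print(f"t={t:4d} s  T={cooling(t):5.1f} degC  X={sol.sol(t)[0]:.2f}")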

  18. Utilizing chromophoric dissolved organic matter measurements to derive export and reactivity of dissolved organic carbon exported to the Arctic Ocean: A case study of the Yukon River, Alaska

    Science.gov (United States)

    Spencer, R.G.M.; Aiken, G.R.; Butler, K.D.; Dornblaser, M.M.; Striegl, R.G.; Hernes, P.J.

    2009-01-01

    The quality and quantity of dissolved organic matter (DOM) exported by Arctic rivers is known to vary with hydrology, and this exported material plays a fundamental role in the biogeochemical cycling of carbon at high latitudes. We highlight the potential of optical measurements to examine DOM quality across the hydrograph in Arctic rivers. Furthermore, we establish chromophoric DOM (CDOM) relationships to dissolved organic carbon (DOC) and lignin phenols in the Yukon River and model DOC and lignin loads from CDOM measurements, the former in excellent agreement with long-term DOC monitoring data. Intensive sampling across the historically under-sampled spring flush period highlights the importance of this time for total export of DOC and particularly lignin. Calculated riverine DOC loads to the Arctic Ocean show an increase from previous estimates, especially when new higher discharge data are incorporated. Increased DOC loads indicate decreased residence times for terrigenous DOM in the Arctic Ocean, with important implications for the reactivity and export of this material to the Atlantic Ocean. Citation: Spencer, R. G. M., G. R. Aiken, K. D. Butler, M. M. Dornblaser, R. G. Striegl, and P. J. Hernes (2009), Utilizing chromophoric dissolved organic matter measurements to derive export and reactivity of dissolved organic carbon exported to the Arctic Ocean: A case study of the Yukon River, Alaska, Geophys. Res. Lett., 36, L06401, doi:10.1029/2008GL036831.
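
    The proxy idea reduces to a calibration plus a load computation; a sketch with invented numbers (not Yukon River values):

        import numpy as np

        a350 = np.array([8.0, 15.0, 32.0, 51.0, 24.0])   # CDOM absorption at 350 nm, 1/m
        doc = np.array([3.1, 4.9, 9.6, 14.8, 7.4])       # DOC, mg C/L

        slope, intercept = np.polyfit(a350, doc, 1)      # simple linear calibration

        q = 12000.0                  # discharge, m3/s
        a350_now = 40.0              # current CDOM measurement, 1/m
        doc_now = slope * a350_now + intercept           # mg C/L (= g C/m3)
        load = doc_now * q * 86400.0 / 1e6               # tonnes C per day
        print(f"DOC ~ {doc_now:.1f} mg/L, load ~ {load:.0f} t C/day")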

  19. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  20. Heart Rate Variability Measures and Models

    CERN Document Server

    Teich, M C; Jost, B M; Vibe-Rheymer, K; Heneghan, C; Teich, Malvin C.; Lowen, Steven B.; Jost, Bradley M.; Vibe-Rheymer, Karin; Heneghan, Conor

    2001-01-01

    We focus on various measures of the fluctuations of the sequence of intervals between beats of the human heart, and how such fluctuations can be used to assess the presence or likelihood of cardiovascular disease. We examine sixteen such measures and their suitability for correctly classifying heartbeat records of various lengths as normal or revealing the presence of cardiac dysfunction, particularly congestive heart failure. Using receiver-operating-characteristic analysis we demonstrate that scale-dependent measures prove substantially superior to scale-independent ones. The wavelet-transform standard deviation at a scale near 32 heartbeat intervals, and its spectral counterpart near 1/32 cycles/interval, turn out to provide reliable results using heartbeat records just minutes long. We further establish for all subjects that the human heartbeat has an underlying stochastic origin rather than arising from a chaotic attractor. Finally, we develop a mathematical point process that emulates the human heartbeat...
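    The scale-32 wavelet statistic is straightforward to reproduce in outline. A sketch, assuming the PyWavelets package and a Daubechies wavelet (the authors' exact wavelet and estimator may differ):

        # Sketch: wavelet-transform standard deviation of RR intervals near scale 32
        # (decomposition level 5, since 2^5 = 32). Surrogate data stand in for a
        # real heartbeat record.
        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        rr = 0.8 + 0.05 * rng.standard_normal(4096)   # surrogate RR intervals (s)

        coeffs = pywt.wavedec(rr, "db3", level=5)     # [cA5, cD5, cD4, ..., cD1]
        sigma_wav = np.std(coeffs[1])                 # detail coefficients at scale ~32
        print(f"wavelet std near scale 32: {sigma_wav:.4f}")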

  1. Derivation of a new parametric impulse response matrix utilized for nodal wind load identification by response measurement.

    Science.gov (United States)

    Kazemi Amiri, A; Bucher, C

    2015-05-26

    This paper provides new formulations to derive the impulse response matrix, which is then used in the problem of load identification with application to wind-induced vibration. The applied loads are inversely identified from the measured structural responses by solving the associated discrete ill-posed problem. To this end, based on an existing parametric structural model, the impulse response functions of acceleration, velocity and displacement have been computed. Time discretization of the convolution integral has been implemented according to an existing and a newly proposed procedure, which differ in the numerical integration methods. The former was evaluated based on a constant rectangular approximation of the sampled data and impulse response function in a number of steps corresponding to the sampling rate, while the latter interpolates the sampled data in an arbitrary number of sub-steps and then integrates over the sub-steps and steps. The identification procedure was implemented for a simulation example as well as an experimental laboratory case. The ill-conditioning of the impulse response matrix made it necessary to use Tikhonov regularization to recover the applied force from the noise-polluted measured response. The optimal regularization parameter has been obtained by the L-curve and GCV methods. The simulation results show good agreement between the identified and measured force. In the experiments, identification results based on the measured displacement as well as acceleration are provided. Furthermore, it is shown that the accuracy of the experimentally identified load depends on the sensitivity of the measurement instruments over different frequency ranges.
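    The core inverse step can be sketched in a few lines, with an assumed impulse response, a rectangular-rule convolution matrix (the simpler of the two discretizations discussed) and a fixed regularization parameter in place of the L-curve/GCV selection:

        # Sketch: Tikhonov-regularized force identification from a noisy response.
        # Impulse response, load and lambda are illustrative assumptions.
        import numpy as np
        from scipy.linalg import toeplitz

        n, dt = 200, 0.01
        t = np.arange(n) * dt
        h = np.exp(-2.0 * t) * np.sin(10.0 * t) * dt   # assumed impulse response samples
        H = toeplitz(h, np.zeros(n))                   # lower-triangular convolution matrix

        f_true = ((t > 0.5) & (t < 1.2)).astype(float) # assumed applied load
        y = H @ f_true + 1e-4 * np.random.default_rng(1).standard_normal(n)

        lam = 1e-3                                     # fixed here; chosen by L-curve/GCV in the paper
        f_hat = np.linalg.solve(H.T @ H + lam**2 * np.eye(n), H.T @ y)
        print(round(float(f_hat[80]), 2))              # should be near 1 inside the loaded window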

  2. Coherent acceptability measures in multiperiod models

    NARCIS (Netherlands)

    Roorda, Berend; Schumacher, Hans; Engwerda, Jacob

    2005-01-01

    The framework of coherent risk measures has been introduced by Artzner et al. (1999; Math. Finance 9, 203–228) in a single-period setting. Here, we investigate a similar framework in a multiperiod context. We add an axiom of dynamic consistency to the standard coherence axioms, and obtain a representation theorem.

  3. Coherent acceptability measures in multiperiod models

    NARCIS (Netherlands)

    Roorda, Berend; Engwerda, Jacob; Schumacher, Hans

    2004-01-01

    The framework of coherent risk measures has been introduced by Artzner et al. (1999) in a single-period setting. Here we investigate a similar framework in a multiperiod context. We add an axiom of dynamic consistency to the standard coherence axioms, and obtain a representation theorem.

  4. Measuring equilibrium models: a multivariate approach

    Directory of Open Access Journals (Sweden)

    Nadji RAHMANIA

    2011-04-01

    Full Text Available This paper presents a multivariate methodology for obtaining measures of unobserved macroeconomic variables. The procedure used is the multivariate Hodrick-Prescott filter, which depends on smoothing parameters. The choice of these parameters is crucial. Our approach is based on consistent estimators of these parameters, depending only on the observed data.
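    The paper's multivariate filter with consistently estimated smoothing parameters is not part of standard libraries, but the univariate Hodrick-Prescott decomposition it generalizes is, and makes the role of the smoothing parameter concrete:

        # Reference sketch: univariate Hodrick-Prescott trend/cycle decomposition
        # (statsmodels), with the conventional quarterly lambda of 1600. The
        # series is synthetic.
        import numpy as np
        from statsmodels.tsa.filters.hp_filter import hpfilter

        rng = np.random.default_rng(2)
        y = np.cumsum(0.5 + rng.standard_normal(120))  # synthetic quarterly series
        cycle, trend = hpfilter(y, lamb=1600)
        print(trend[:4].round(2), cycle[:4].round(2))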

  5. Multivariate linear models and repeated measurements revisited

    DEFF Research Database (Denmark)

    Dalgaard, Peter

    2009-01-01

    Methods for generalized analysis of variance based on multivariate normal theory have been known for many years. In a repeated measurements context, it is most often of interest to consider transformed responses, typically within-subject contrasts or averages. Efficiency considerations lead...

  6. Prevention of radiation-induced salivary gland dysfunction utilizing a CDK inhibitor in a mouse model.

    Directory of Open Access Journals (Sweden)

    Katie L Martin

    Full Text Available BACKGROUND: Treatment of head and neck cancer with radiation often results in damage to surrounding normal tissues such as salivary glands. Permanent loss of function in the salivary glands often leads patients to discontinue treatment due to incapacitating side effects. It has previously been shown that IGF-1 suppresses radiation-induced apoptosis and enhances G2/M arrest, leading to preservation of salivary gland function. In an effort to recapitulate the effects of IGF-1, as well as increase the likelihood of translating these findings to the clinic, the small molecule therapeutic Roscovitine is being tested. Roscovitine is a cyclin-dependent kinase inhibitor that acts to transiently inhibit cell cycle progression and allow for DNA repair in damaged tissues. METHODOLOGY/PRINCIPAL FINDINGS: Treatment with Roscovitine prior to irradiation induced a significant increase in the percentage of cells in the G2/M phase, as demonstrated by flow cytometry. In contrast, mice treated with radiation exhibited no differences in the percentage of cells in G2/M when compared to unirradiated controls. Similar to previous studies utilizing IGF-1, pretreatment with Roscovitine leads to a significant up-regulation of p21 expression and a significant decrease in the number of PCNA-positive cells. Radiation treatment leads to a significant increase in activated caspase-3 positive salivary acinar cells, which is suppressed by pretreatment with Roscovitine. Administration of Roscovitine prior to targeted head and neck irradiation preserves normal tissue function in mouse parotid salivary glands, both acutely and chronically, as measured by salivary output. CONCLUSIONS/SIGNIFICANCE: These studies suggest that induction of transient G2/M cell cycle arrest by Roscovitine allows for suppression of apoptosis, thus preserving normal salivary function following targeted head and neck irradiation. This could have an important clinical impact by preventing the negative side effects of radiation

  7. Prevention of Radiation-Induced Salivary Gland Dysfunction Utilizing a CDK Inhibitor in a Mouse Model

    Science.gov (United States)

    Martin, Katie L.; Hill, Grace A.; Klein, Rob R.; Arnett, Deborah G.; Burd, Randy; Limesand, Kirsten H.

    2012-01-01

    Background Treatment of head and neck cancer with radiation often results in damage to surrounding normal tissues such as salivary glands. Permanent loss of function in the salivary glands often leads patients to discontinue treatment due to incapacitating side effects. It has previously been shown that IGF-1 suppresses radiation-induced apoptosis and enhances G2/M arrest, leading to preservation of salivary gland function. In an effort to recapitulate the effects of IGF-1, as well as increase the likelihood of translating these findings to the clinic, the small molecule therapeutic Roscovitine is being tested. Roscovitine is a cyclin-dependent kinase inhibitor that acts to transiently inhibit cell cycle progression and allow for DNA repair in damaged tissues. Methodology/Principal Findings Treatment with Roscovitine prior to irradiation induced a significant increase in the percentage of cells in the G2/M phase, as demonstrated by flow cytometry. In contrast, mice treated with radiation exhibited no differences in the percentage of cells in G2/M when compared to unirradiated controls. Similar to previous studies utilizing IGF-1, pretreatment with Roscovitine leads to a significant up-regulation of p21 expression and a significant decrease in the number of PCNA-positive cells. Radiation treatment leads to a significant increase in activated caspase-3 positive salivary acinar cells, which is suppressed by pretreatment with Roscovitine. Administration of Roscovitine prior to targeted head and neck irradiation preserves normal tissue function in mouse parotid salivary glands, both acutely and chronically, as measured by salivary output. Conclusions/Significance These studies suggest that induction of transient G2/M cell cycle arrest by Roscovitine allows for suppression of apoptosis, thus preserving normal salivary function following targeted head and neck irradiation. This could have an important clinical impact by preventing the negative side effects of radiation

  8. Effects of Achieving Target Measures in Rheumatoid Arthritis on Functional Status, Quality of Life, and Resource Utilization: Analysis of Clinical Practice Data

    Science.gov (United States)

    Joo, Seongjung; Kawabata, Hugh; Al, Maiwenn J.; Allison, Paul D.; Rutten‐van Mölken, Maureen P. M. H.; Frits, Michelle L.; Iannaccone, Christine K.; Shadick, Nancy A.; Weinblatt, Michael E.

    2016-01-01

    Objective To evaluate associations between achieving guideline-recommended targets of disease activity, defined by the Disease Activity Score in 28 joints using C-reactive protein level (DAS28-CRP) <2.6, Simplified Disease Activity Index (SDAI) ≤3.3, and Clinical Disease Activity Index (CDAI) ≤2.8, and outcomes of functional status, quality of life, and resource utilization. Methods To control for intraclass correlation and estimate effects of independent variables on outcomes of the modified Health Assessment Questionnaire (M-HAQ), the EuroQol 5-domain (EQ-5D; a quality-of-life measure), hospitalization, and durable medical equipment (DME) use, we employed mixed models for continuous outcomes and generalized estimating equations for binary outcomes. Results Among 1,297 subjects, achievement (versus nonachievement) of recommended disease targets was associated with enhanced physical functioning and lower health resource utilization. After controlling for baseline covariates, achievement of disease targets (versus LDA) was associated with significantly enhanced physical functioning based on SDAI ≤3.3 (ΔM-HAQ -0.047; P = 0.0100) and CDAI ≤2.8 (-0.073; P = 0.0003) but not DAS28-CRP <2.6 (-0.022; P = 0.1735). Target attainment was associated with significantly improved EQ-5D (0.022-0.096; P < 0.0030 versus LDA, MDA, or SDA). Patients achieving guideline-recommended disease targets were 36-45% less likely to be hospitalized (P < 0.0500) and 23-45% less likely to utilize DME (P < 0.0100). Conclusion Attaining recommended target disease-activity measures was associated with enhanced physical functioning and health-related quality of life. Some health outcomes were similar in subjects attaining guideline targets versus LDA. Achieving LDA is a worthy clinical objective in some patients. PMID:26238974

  9. Overland flow : interfacing models with measurements

    NARCIS (Netherlands)

    Loon, van E.E.

    2002-01-01

    Index words: overland flow, catchment scale, system identification, ensemble simulations. This study presents new techniques to identify scale-dependent overland flow models and use these for ensemble-based predictions. The techniques are developed on the basis of overland flow, rain, discharge, soil

  10. Space Weather: Measurements, Models and Predictions

    Science.gov (United States)

    2014-03-21

    and record high levels of cosmic ray flux. There were broad-ranging terrestrial responses to this inactivity of the Sun. BC was involved in the... techniques for converting from one coordinate system (e.g., the invariant coordinate system used for the model) to another (e.g., the latitude-radius

  11. Feasibility and utility of applications of the common data model to multiple, disparate observational health databases.

    Science.gov (United States)

    Voss, Erica A; Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B

    2015-05-01

    To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations excluded were due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria using the protocol, and identified differences in patient characteristics and coding practices across databases. Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  12. Establishing an infrared measurement and modelling capability

    CSIR Research Space (South Africa)

    Willers, CJ

    2011-04-01

    Full Text Available is supplemented with a considerable body of self-study material, tutorial assignments and laboratory demonstrations. A series of six significant experiments was used to demon- strate, in a practical manner, some future test scenarios and to reinforce... The fourth experiment investigated the applicability of the three imaging cameras? spectral ranges for temperature measurement of objects in the open sunlight. A secondary objective was to determine the true temperature and emissivity of the test targets...

  13. Numerical model based on amperometric measurements

    OpenAIRE

    Daungruthai Jarukanont; Imelda Bonifas Arredondo; Ricardo Femat; Garcia, Martin E.

    2015-01-01

    Chromaffin cells release catecholamines by exocytosis, a process that includes vesicle docking, priming and fusion. Although all these steps have been intensively studied, some aspects of their mechanisms, particularly those regarding vesicle transport to the active sites situated at the membrane, are still unclear. In this work, we show that it is possible to extract information on vesicle motion in chromaffin cells from the combination of Langevin simulations and amperometric measurements. ...
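    A minimal sketch of the Langevin side of such a combination, assuming overdamped dynamics with a harmonic pull toward the active site (coefficients are hypothetical placeholders):

        # Sketch: overdamped Langevin dynamics of a vesicle near the membrane.
        import numpy as np

        rng = np.random.default_rng(3)
        D, k, dt, n = 0.01, 0.5, 1e-3, 5000    # um^2/s, 1/s, s, number of steps
        x = np.empty(n)
        x[0] = 1.0                             # start 1 um from the active site

        for i in range(1, n):
            drift = -k * x[i - 1] * dt         # deterministic pull toward the site
            x[i] = x[i - 1] + drift + np.sqrt(2 * D * dt) * rng.standard_normal()

        print(f"final distance: {x[-1]:.3f} um")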

  14. Measurement Models for Reasoned Action Theory

    OpenAIRE

    Hennessy, Michael; Bleakley, Amy; Fishbein, Martin

    2012-01-01

    Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are ...

  15. A 3D modeling and measurement system for cultural heritage preservation

    Science.gov (United States)

    Du, Guoguang; Zhou, Mingquan; Ren, Pu; Shui, Wuyang; Zhou, Pengbo; Wu, Zhongke

    2015-07-01

    Cultural heritage reflects the human production, life styles and environmental conditions of various historical periods, and serves as a major carrier of national history and culture. In order to better protect and utilize these cultural heritages, a system for three-dimensional (3D) reconstruction and statistical measurement is proposed in this paper. The system addresses the problems of cultural heritage data storage, measurement and analysis. Firstly, for high-precision modeling and measurement, range data registration and integration algorithms are used to achieve high-precision 3D reconstruction. Secondly, a multi-view stereo reconstruction method is used to solve the problem of rapid reconstruction, through procedures such as pre-processing of the original image data, camera calibration and point cloud modeling. Finally, an underlying database of artifact measurements is established by computing measurements on the 3D model's surface. These measurements include the Euclidean distance between points on the surface, the geodesic distance between points, the normal and curvature at each point, the superficial area of a region, the volume of part of a model, and others. They provide a basis for information mining of cultural heritage. The system has been applied to 3D modeling and data measurement of the Terracotta Warriors relics, Tibetan architecture and other relics.
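    Two of the listed measurements are easy to make concrete. A sketch for a toy triangulated surface (real meshes are far denser; geodesic distance and curvature need more machinery than shown here):

        # Sketch: Euclidean distance between surface points and area of a
        # triangulated region (summed triangle areas via the cross product).
        import numpy as np

        verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]], float)
        tris = np.array([[0, 1, 2], [1, 3, 2]])

        def euclidean(p, q):
            return np.linalg.norm(verts[p] - verts[q])

        def region_area(triangles):
            a, b, c = (verts[triangles[:, i]] for i in range(3))
            return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

        print(round(euclidean(0, 3), 3), round(region_area(tris), 3))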

  16. Evaluation Capacity Building in the Context of Military Psychological Health: Utilizing Preskill and Boyle's Multidisciplinary Model

    Science.gov (United States)

    Hilton, Lara; Libretto, Salvatore

    2017-01-01

    The need for evaluation capacity building (ECB) in military psychological health is apparent in light of the proliferation of newly developed, yet untested programs coupled with the lack of internal evaluation expertise. This study addresses these deficiencies by utilizing Preskill and Boyle's multidisciplinary ECB model within a post-traumatic…

  17. Fire rehabilitation decisions at landscape scales: utilizing state-and-transition models developed through disturbance response grouping of ecological sites

    Science.gov (United States)

    Recognizing the utility of ecological sites and the associated state-and-transition model (STM) for decision support, the Bureau of Land Management in Nevada partnered with Nevada NRCS and the University of Nevada, Reno (UNR) in 2009 with the goal of creating a team that could (1) expedite developme...

  18. Measuring Quality Satisfaction with Servqual Model

    Directory of Open Access Journals (Sweden)

    Dan Păuna

    2012-05-01

    Full Text Available The orientation to customer satisfaction is not a recent phenomenon; many very successful businesspeople from the beginning of the 20th century, such as Sir Henry Royce, a name synonymous with Rolls-Royce vehicles, stated the first principle regarding customer satisfaction: "Our interest in the Rolls-Royce cars does not end at the moment when the owner pays for and takes delivery of the car. Our interest in the car never wanes. Our ambition is that every purchaser of the Rolls-Royce car shall continue to be more than satisfied" (Rolls-Royce). The following paper deals with the important qualities of the concept of measuring the gap between expected customer service satisfaction and perceived services, as a routine customer feedback process, by means of a relatively new model, the Servqual model.

  19. Mesoscopic analysis of the utilization of hardening model for a description of softening behavior based on disturbed state concept theory

    Institute of Scientific and Technical Information of China (English)

    Jian-ye ZHENG; An-li WU

    2008-01-01

    Mesoscopic characteristics of a clayey soil specimen subjected to macroscopic loading are examined using a medical-use computerized tomography (CT) instrument. Disturbed state concept (DSC) theory is based on the utilization of the hardening model. DSC indirectly describes material behavior by claiming that the actual response of the material is expressed in terms of the relative intact (RI) response and the fully adjusted (FA) response. The occurrence of mesoscopic structural changes in the material has similarities with the occurrence of the macroscopic response of the material under loading. In general, the relative changing value of a softening material is three to five times more than that of a hardening material. Whether special zones exist or not in a specimen cross section does not affect the following conclusion: hardening material and softening material show mechanical differences, with CT statistical index values changing prominently, and the change is related to the superposing of a disturbance factor. A new disturbance factor evolution function is proposed. Thus, mesoscopic statistical indices are introduced to describe macroscopic behavior through the new evolution function. An application of the new evolution function proves the effectiveness of combining macroscopic and mesoscopic experimental measurement methods.

  20. Evaluating neurology CME in two educational methods using Patton's utilization focused model.

    Science.gov (United States)

    Vakani, Farhan; Ahmad, Amina; Sonawalla, Aziz; Sheerani, Mughis

    2013-01-01

    Generally in continuing medical education (CME), most time is consumed in the planning and preparation of the event. This planning and preparation, however, needs recognition through an evaluative process. The purpose of this study was to evaluate a neurology CME delivered by two educational methods, lecture versus task-based learning (TBL), using Patton's utilization-focused model. This was an observational, cross-sectional inquiry. The questionnaire evaluated educational elements such as learning objectives met, content covered, presentations at the level of understanding, level of interaction, knowledge gained, time management, queries responded to, organisation, quality of learning material and overall grading of the educational event. General practitioners were the key participants in this evaluation and consisted of 60 self-selected physicians distributed equally between the TBL and lecture groups. Patton's utilization-focused model was used to produce findings for effective decision making. The data were analysed using the Mann-Whitney U test to determine which learning method satisfied the most participants. A total of 58 evaluations were returned, 29 from the TBL group and 29 from the lecture group. The analysis of the elements showed higher mean ranks for the TBL method, ranging between 32.2 and 38.4, versus the lecture (20.6-26.8). Most of the elements assessed were statistically significant (p < 0.05), except time management (p = 0.22). However, elements such as 'objectives of the activity met' (p = 0.07), 'overall grading of the event' (p = 0.06) and 'presentations at the level of understanding' (p = 0.06) were borderline. Of the 29 respondents in the TBL group, 75% rated all the elements of the program above very good. In the lecture group, 22 (75%) respondents out of 29 rated almost half of the elements above very good. The majority of respondents in the TBL group rated all program elements as exceptional compared to the lecture group, in which only half of the

  1. Comparison of the Beckmann model with bidirectional reflectance measurements.

    Science.gov (United States)

    Smith, T. F.; Hering, R. G.

    1973-01-01

    The Beckmann model is compared with recently reported bidirectional reflectance measurements. Comparisons revealed that monochromatic specular and bidirectional reflectance measurements are not adequately described by corresponding results evaluated from the model using mechanically acquired surface roughness parameters (rms height and rms slope). Significant improvement between measurements and predictions of the model is observed when optically acquired surface roughness parameters are used. Specular reflectance measurements for normal to intermediate polar angles of incidence are adequately represented by the model provided values of optical roughness multiplied by cosine of polar angle of incidence are less than 27 times average optical rms slope.
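    For reference, in the standard Beckmann treatment the specular reflectance of a rough surface is attenuated relative to a smooth one as rho_s(theta) = rho_s0(theta) * exp(-[4*pi*(sigma/lambda)*cos(theta)]^2), where sigma is the rms surface height, lambda the wavelength and theta the polar angle of incidence. This form is quoted from the general literature rather than from the paper itself, but it shows why the product of optical roughness (sigma/lambda) and cos(theta) is the quantity against which the abstract states its validity threshold.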

  2. Measurements and modeling of physical properties for oil and biomaterial refining

    OpenAIRE

    Zaitseva, Anna

    2014-01-01

    The aim of this thesis was to investigate a set of binary systems for developing the thermodynamic models needed by the oil and biofuel industries. Extensive experimental work was performed to supply the necessary vapor-liquid equilibria (VLE) and excess enthalpy data for the selected systems. Binary systems of C4 hydrocarbons with alkenes, alcohols and ketones were measured due to their importance in the production of fuel additives. A static total pressure apparatus was utilized for these iso...

  3. Compact Models and Measurement Techniques for High-Speed Interconnects

    CERN Document Server

    Sharma, Rohit

    2012-01-01

    Compact Models and Measurement Techniques for High-Speed Interconnects provides detailed analysis of issues related to high-speed interconnects from the perspective of modeling approaches and measurement techniques. Particular focus is placed on the unified approach (the variational method combined with the transverse transmission line technique) to develop efficient compact models for planar interconnects. This book gives a qualitative summary of the various reported modeling techniques and approaches and will help researchers and graduate students gain deeper insights into interconnect models in particular and interconnects in general. Time domain and frequency domain measurement techniques and simulation methodology are also explained in this book.

  4. Utilization of remote sensing data on meteorological and vegetation characteristics for modeling water and heat regimes of large agricultural region

    Science.gov (United States)

    Muzylev, Eugene; Startseva, Zoya; Uspensky, Alexander; Volkova, Elena

    2016-04-01

    downloaded from LP DAAC website for the same vegetation seasons. The SEVIRI data have been used to retrieve P (every three hours and daily), Tls, E, Ta (at daytime and nighttime), LAI, and B (daily). All named technologies have been adapted to the territory of interest. To verify the accuracy of the AVHRR- and MODIS-based LST estimates (Ts.eff, Ta and Tls), the error statistics of their derivation have been investigated for various samples through comparison with in-situ measurements during all the considered vegetation seasons. When developing the method to derive LST from the SEVIRI data, its validation has been carried out through comparison of the Tls retrievals with independent collocated Tls estimates generated at LSA SAF (Lisbon, Portugal). A later check of the SEVIRI-derived Tls and Ta estimates has been performed by comparing them with ground-based observation data. The correctness of the LAI and B estimates has been confirmed by comparing the time behavior of satellite- and ground-based LAI and B during each vegetation season. An important part of the study is to improve the developed Multi Threshold Method (MTM) intended for assessing daily and monthly rainfall from AVHRR and SEVIRI data, to check the correctness of the calculations carried out for the considered territory, and to develop procedures for utilizing the obtained satellite-derived precipitation estimates in the SVAT model. The MTM allows automatic pixel-by-pixel classification of AVHRR- and SEVIRI-measured data for cloud detection, identification of cloud types, allocation of precipitation zones, and determination of instantaneous maximum precipitation intensities in the pixel range, around the clock and throughout the year, independently of land surface type. Measurement data from 5 AVHRR and 11 SEVIRI channels, as well as their differences, are used in the MTM as predictors. Calibration and verification of the MTM have been carried out using observation data on daily precipitation at agricultural meteorological stations of the

  5. Responses of Nitrogen Utilization and Apparent Nitrogen Loss to Different Control Measures in the Wheat and Maize Rotation System

    Science.gov (United States)

    Peng, Zhengping; Liu, Yanan; Li, Yingchun; Abawi, Yahya; Wang, Yanqun; Men, Mingxin; An-Vo, Duc-Anh

    2017-01-01

    Nitrogen (N) is an essential macronutrient for plant growth, and excessive application rates can decrease crop yield and increase N loss into the environment. Field experiments were carried out to understand the effects of N fertilizers on N utilization, crop yield and net income in a wheat and maize rotation system of the North China Plain (NCP). Compared to the farmers' N rate (FN), a 21-24% reduction in N rate relative to FN (RN) improved the yield of wheat and maize by 451 kg ha-1, improved N uptake by 17 kg ha-1 and increased net income by 1671 CNY ha-1, while apparent N loss was reduced by 156 kg ha-1. The controlled-release fertilizer at a 20% reduction of RN (CRF80%), a 20% reduction of RN together with dicyandiamide (RN80%+DCD) and a 20% reduction of RN with added nano-carbon (RN80%+NC) all resulted in improved crop yield and decreased apparent N losses compared to RN. Compared with RN80%+NC, the total crop yield under RN80%+DCD improved by 1185 kg ha-1, N uptake was enhanced by 9 kg ha-1 and net income increased by 3929 CNY ha-1, while apparent N loss was similar. Therefore, a 37-39% overall decrease in N rate compared to the farmers' rate, plus the nitrification inhibitor DCD, was an effective N control measure that increased crop yields, enhanced N efficiencies and improved economic benefits, while mitigating apparent N loss. There is considerable scope for improved N use efficiency in the intensive wheat-maize rotation of the NCP. PMID:28228772

  6. Utilization of similitude relationships to obtain a compact transformer model useful for technical and economical networks studies

    Energy Technology Data Exchange (ETDEWEB)

    Pierrat, L. [EDF-CNRS, Div. Technique Generale, Grenoble Cedex (France); Resende, M.J.; Santana, J. [IST-Seccao Maquinas Electricas e Elec. Potencia, Lisboa Codex (Portugal)

    1995-12-01

    Nowadays, utilities concentrate their efforts on the rationalized utilization of resources, in particular financial ones. In the domain of electrical resources, investments are made in expensive, long-lived equipment, which means that specific long-life characteristics must be taken into account, some of them deterministic and some carrying a high degree of uncertainty. A significant problem within this trend is the renewal of MV/LV distribution transformers: the optimal choice of their rated power and renewal moment depends upon the evolution of consumption rates and operating conditions. This paper proposes similitude relationships to define the typical parameters present in thermal and, consequently, life expectancy models of distribution transformers. 8 refs, 6 figs, 2 tabs

  7. Measurement Models for Reasoned Action Theory.

    Science.gov (United States)

    Hennessy, Michael; Bleakley, Amy; Fishbein, Martin

    2012-03-01

    Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are effect indicators that reflect the operation of a latent variable scale. We identify the issues when effect and causal indicators are present in a single analysis and conclude that both types of indicators can be incorporated in the analysis of data based on the reasoned action approach.

  8. A practical model for the train-set utilization: The case of Beijing-Tianjin passenger dedicated line in China.

    Science.gov (United States)

    Zhou, Yu; Zhou, Leishan; Wang, Yun; Li, Xiaomeng; Yang, Zhuo

    2017-01-01

    As a sustainable transportation mode, high-speed railway (HSR) has become an efficient way to meet huge travel demand. However, due to high acquisition and maintenance costs, it is impossible to build enough infrastructure and purchase enough train-sets. Great efforts are required to improve the transport capability of HSR. The utilization efficiency of train-sets (the carrying tools of HSR) is one of the most important factors in the transport capacity of HSR. In order to enhance the utilization efficiency of train-sets, this paper proposes a train-set circulation optimization model to minimize the total connection time. An innovative two-stage approach, comprising segment generation and segment combination, was designed to solve this model. In order to verify the feasibility of the proposed approach, an experiment was carried out on the Beijing-Tianjin passenger dedicated line, to fulfill a train diagram of 174 trips. The model results showed that, compared with the traditional Ant Colony Algorithm (ACA), the utilization efficiency of train-sets can be increased from 43.4% (ACA) to 46.9% (two-stage), and one train-set can be saved while fulfilling the same transportation tasks. The approach proposed in this study is faster and more stable than traditional ones; by using it, HSR staff can draw up the train-set circulation plan more quickly, and the utilization efficiency of the HSR system is also improved.
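    The flavor of the connection problem can be seen in a toy greedy chaining of trips (the paper's two-stage segment generation/combination method is more sophisticated; trip times and the turnaround rule below are illustrative):

        # Sketch: chain trips onto train-sets so each next departure respects a
        # minimum turnaround, preferring the tightest feasible connection.
        trips = [(480, 520), (530, 575), (540, 585), (600, 645), (660, 700)]  # (dep, arr), minutes
        min_turn = 15                      # assumed minimum turnaround time

        trains = []                        # each train-set is the list of trips it serves
        for dep, arr in sorted(trips):
            best = None
            for ts in trains:
                if ts[-1][1] + min_turn <= dep and (best is None or ts[-1][1] > best[-1][1]):
                    best = ts              # feasible train-set with the tightest connection
            if best is None:
                trains.append([(dep, arr)])
            else:
                best.append((dep, arr))
        print(len(trains), "train-sets:", trains)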

  9. Laser alignment measurement model with double beam

    Science.gov (United States)

    Mo, Changtao; Zhang, Lili; Hou, Xianglin; Wang, Ming; Lv, Jia; Du, Xin; He, Ping

    2012-10-01

    The double LD-double PSD scheme employs a symmetric structure, with a laser and a PSD receiver on each axis. Using the double LD-double PSD arrangement, a rectangular coordinate system is set up from the relationship between the coordinates of two arbitrary points, and the parameter formulas are then deduced using solid geometry. With the data acquisition system and the data processing model of a laser alignment meter with double laser beams and two detectors, and based on the installation parameters entered into the computer, the state parameters between the two shafts can be obtained by calculation and correction. The correction data for the four supports under the chassis of the adjusted apparatus, moving in the horizontal and vertical planes, can be calculated by the computer. This instructs the operator in moving the apparatus to align the shafts.

  10. Measurement and Modelling of Scaling Minerals

    DEFF Research Database (Denmark)

    Villafafila Garcia, Ada

    2005-01-01

    of scale formation found in many industrial processes, and especially in oilfield and geothermal operations. We want to contribute to the study of this problem by releasing a simple and accurate thermodynamic model capable of calculating the behaviour of scaling minerals over a wide range of conditions. The solid-liquid equilibrium of sulphate scaling minerals (SrSO4, BaSO4, CaSO4 and CaSO4•2H2O) at temperatures up to 300ºC and pressures up to 1000 bar is described in chapter 4. Results for the binary systems (M2+, SO42-)-H2O; the ternary systems (Na+, M2+, SO42-)-H2O and (Na+, M2+, Cl-)-H2O; and the quaternary systems (Na+, M2+)(Cl...

  11. A developmental examination of the psychometric properties and predictive utility of a revised psychological self-concept measure for preschool-age children.

    Science.gov (United States)

    Jia, Rongfang; Lang, Sarah N; Schoppe-Sullivan, Sarah J

    2016-02-01

    Accurate assessment of psychological self-concept in early childhood relies on the development of psychometrically sound instruments. From a developmental perspective, the current study revised an existing measure of young children's psychological self-concepts, the Child Self-View Questionnaire (CSVQ; Eder, 1990), and examined its psychometric properties using a sample of preschool-age children assessed at approximately 4 years old with a follow-up at age 5 (N = 111). The item compositions of lower-order dimensions were revised, leading to improved internal consistency. Factor analysis revealed 3 latent psychological self-concept factors (i.e., sociability, control, and assurance) from the lower-order dimensions. Measurement invariance by gender was supported for sociability and assurance, but not for control. Test-retest reliability was supported by stability of the psychological self-concept measurement model during the preschool years, although some evidence of increasing differentiation was obtained. Validity of children's scores on the 3 latent psychological self-concept factors was tested by investigating their concurrent associations with teacher-reported behavioral adjustment on the Social Competence and Behavior Evaluation Scale-Short Form (SCBE-SF; LaFreniere & Dumas, 1996). Children who perceived themselves as higher in sociability at 5 years old displayed less internalizing behavior and more social competence; boys who perceived themselves as higher in control at age 4 exhibited lower externalizing behavior; children higher in assurance had greater social competence at age 4, but displayed more externalizing behavior at age 5. Implications relevant to the utility of the revised psychological self-concept measure are discussed.

  12. Evaluating the cost utility of racecadotril for the treatment of acute watery diarrhea in children: the RAWD model

    Directory of Open Access Journals (Sweden)

    Rautenberg TA

    2012-04-01

    Full Text Available Background: The safety and efficacy of racecadotril to treat acute watery diarrhea (AWD) in children is well established; however, its cost effectiveness for infants and children in Europe has not yet been determined. Objective: To evaluate the cost utility of racecadotril adjuvant with oral rehydration solution (ORS) compared to ORS alone for the treatment of AWD in children younger than 5 years old. The analysis is performed from a United Kingdom National Health Service (NHS) perspective. Methods: A decision tree model has been developed in Microsoft® Excel. The model is populated with the best available evidence. Deterministic and probabilistic sensitivity analyses (PSA) have been performed. Health effects are measured as quality-adjusted life years (QALYs) and the model output is cost (2011 GBP) per QALY. The uncertainty in the primary outcome is explored by probabilistic analysis using 1000 iterations of a Monte Carlo simulation. Results: Deterministic analysis results in a total incremental cost of –£379 in favor of racecadotril and a total incremental QALY gain in favor of racecadotril of +0.0008. The observed cost savings with racecadotril arise from the reduction in primary care reconsultation and secondary referral. The difference in QALYs is largely attributable to the timely resolution of symptoms in the racecadotril arm. Racecadotril remains dominant when base case parameters are varied. Monte Carlo simulation and PSA confirm that racecadotril is the dominant treatment strategy and is almost certainly cost effective, under the central assumptions of the model, at a
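    The probabilistic part of such a model reduces to repeated sampling of incremental costs and effects. A sketch using the abstract's point estimates as means (the spreads and normal distributions are assumptions, not the RAWD model's calibrated inputs):

        # Sketch: Monte Carlo probabilistic sensitivity analysis of incremental
        # cost and QALYs; reports how often the intervention dominates.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 1000
        d_cost = rng.normal(-379.0, 120.0, n)     # incremental cost (GBP), mean from abstract
        d_qaly = rng.normal(0.0008, 0.0004, n)    # incremental QALYs, mean from abstract

        dominant = np.mean((d_cost < 0) & (d_qaly > 0))
        print(f"probability the intervention dominates: {dominant:.0%}")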

  13. Utilizing ARC EMCS Seedling Cassettes as Highly Versatile Miniature Growth Chambers for Model Organism Experiments

    Science.gov (United States)

    Freeman, John L.; Steele, Marianne K.; Sun, Gwo-Shing; Heathcote, David; Reinsch, S.; DeSimone, Julia C.; Myers, Zachary A.

    2014-01-01

    The aim of our ground testing was to demonstrate the capability of safely putting specific model organisms into dehydrated stasis, and of later rehydrating and successfully growing them inside flight-proven ARC EMCS seedling cassettes. The ARC EMCS seedling cassettes were originally developed to support seedling growth during space flight. The seeds are attached to a solid substrate, launched dry, and then rehydrated in a small volume of media on orbit to initiate the experiment. We hypothesized that the same seedling cassettes should be capable of acting as culture chambers for a wide range of organisms with minimal or no modification. The ability to safely preserve live organisms in a dehydrated state allows on-orbit experiments to be conducted at the best time for crew operations and, more importantly, provides a tightly controlled, physiologically relevant growth experiment with specific environmental parameters. Thus, we performed a series of ground tests that involved growing the organisms, preparing them for dehydration on gridded polyether sulfone (PES) membranes, dry storage at ambient temperatures for varying periods of time, followed by rehydration. Inside the culture cassettes, the PES membranes were mounted above blotters containing dehydrated growth media. These were mounted on stainless steel bases and sealed with plastic covers that have permeable membrane-covered ports for gas exchange. The results demonstrated acceptable normal growth of C. elegans (nematodes), E. coli (bacteria), S. cerevisiae (yeast), Polytrichum (moss) spores and protonemata, C. thalictroides (fern), D. discoideum (amoeba), and H. dujardini (tardigrades). All organisms showed acceptable growth and rehydration in both petri dishes and culture cassettes initially, and after various lengths of dehydration time. At the end of on-orbit ISS European Modular Cultivation System experiments, the cassettes could be frozen at ultra-low temperatures, refrigerated, or chemically

  14. Complete genome sequence, metabolic model construction and phenotypic characterization of Geobacillus LC300, an extremely thermophilic, fast growing, xylose-utilizing bacterium.

    Science.gov (United States)

    Cordova, Lauren T; Long, Christopher P; Venkataramanan, Keerthi P; Antoniewicz, Maciek R

    2015-11-01

    We have isolated a new extremely thermophilic, fast-growing Geobacillus strain that can efficiently utilize xylose, glucose, mannose and galactose for cell growth. When grown aerobically at 72°C, Geobacillus LC300 has a growth rate of 2.15 h-1 on glucose and 1.52 h-1 on xylose (doubling time less than 30 min). The corresponding specific glucose and xylose utilization rates are 5.55 g/g/h and 5.24 g/g/h, respectively. As such, Geobacillus LC300 grows three times faster than E. coli on glucose and xylose, and has a specific xylose utilization rate that is three times higher than the best metabolically engineered organism to date. To gain more insight into the metabolism of Geobacillus LC300, its genome was sequenced using PacBio's RS II single-molecule real-time (SMRT) sequencing platform and annotated using the RAST server. Based on the genome annotation and the measured biomass composition, a core metabolic network model was constructed. To further demonstrate the biotechnological potential of this organism, Geobacillus LC300 was grown to high cell densities in a fed-batch culture, where cells maintained a high xylose utilization rate under low dissolved-oxygen concentrations. All of these characteristics make Geobacillus LC300 an attractive host for future metabolic engineering and biotechnology applications.
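    As a quick arithmetic check on the quoted figures, doubling time follows from the exponential-growth relation t_d = ln(2)/mu: ln(2)/2.15 h-1 ≈ 0.32 h ≈ 19 min on glucose, and ln(2)/1.52 h-1 ≈ 0.46 h ≈ 27 min on xylose, both under 30 minutes as stated.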

  15. Modeling, Measuring, and Compensating Color Weak Vision.

    Science.gov (United States)

    Oshima, Satoshi; Mochizuki, Rika; Lenz, Reiner; Chao, Jinhui

    2016-06-01

    We use methods from Riemann geometry to investigate transformations between the color spaces of color-normal and color-weak observers. The two main applications are the simulation of the perception of a color weak observer for a color-normal observer, and the compensation of color images in a way that a color-weak observer has approximately the same perception as a color-normal observer. The metrics in the color spaces of interest are characterized with the help of ellipsoids defined by the just-noticeable-differences between the colors which are measured with the help of color-matching experiments. The constructed mappings are the isometries of Riemann spaces that preserve the perceived color differences for both observers. Among the two approaches to build such an isometry, we introduce normal coordinates in Riemann spaces as a tool to construct a global color-weak compensation map. Compared with the previously used methods, this method is free from approximation errors due to local linearizations, and it avoids the problem of shifting locations of the origin of the local coordinate system. We analyze the variations of the Riemann metrics for different observers obtained from new color-matching experiments and describe three variations of the basic method. The performance of the methods is evaluated with the help of semantic differential tests.

  16. Modelling, Measuring and Compensating Color Weak Vision.

    Science.gov (United States)

    Oshima, Satoshi; Mochizuki, Rika; Lenz, Reiner; Chao, Jinhui

    2016-03-08

    We use methods from Riemann geometry to investigate transformations between the color spaces of color-normal and color-weak observers. The two main applications are the simulation of the perception of a color-weak observer for a color-normal observer, and the compensation of color images in a way that a color-weak observer has approximately the same perception as a color-normal observer. The metrics in the color spaces of interest are characterized with the help of ellipsoids defined by the just-noticeable differences between colors, which are measured with the help of color-matching experiments. The constructed mappings are isometries of Riemann spaces that preserve the perceived color differences for both observers. Among the two approaches to build such an isometry, we introduce normal coordinates in Riemann spaces as a tool to construct a global color-weak compensation map. Compared to previously used methods, this method is free from approximation errors due to local linearizations, and it avoids the problem of shifting locations of the origin of the local coordinate system. We analyse the variations of the Riemann metrics for different observers obtained from new color-matching experiments and describe three variations of the basic method. The performance of the methods is evaluated with the help of semantic differential (SD) tests.

  17. Modeling, Measuring, and Compensating Color Weak Vision

    Science.gov (United States)

    Oshima, Satoshi; Mochizuki, Rika; Lenz, Reiner; Chao, Jinhui

    2016-06-01

    We use methods from Riemann geometry to investigate transformations between the color spaces of color-normal and color-weak observers. The two main applications are the simulation of the perception of a color-weak observer for a color-normal observer, and the compensation of color images in a way that a color-weak observer has approximately the same perception as a color-normal observer. The metrics in the color spaces of interest are characterized with the help of ellipsoids defined by the just-noticeable differences between colors, which are measured with the help of color-matching experiments. The constructed mappings are isometries of Riemann spaces that preserve the perceived color differences for both observers. Among the two approaches to build such an isometry, we introduce normal coordinates in Riemann spaces as a tool to construct a global color-weak compensation map. Compared to previously used methods, this method is free from approximation errors due to local linearizations, and it avoids the problem of shifting locations of the origin of the local coordinate system. We analyse the variations of the Riemann metrics for different observers obtained from new color-matching experiments and describe three variations of the basic method. The performance of the methods is evaluated with the help of semantic differential (SD) tests.

  18. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and measurement uncertainty difficult. Therefore, error compensation is not standardized, in contrast to other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body, and they require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and integrates it into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.
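    A toy version of the per-axis compensation idea, with assumed linear length-error coefficients folded into a feature (distance) measurement:

        # Sketch: correct probed points with per-axis linear length errors before
        # computing a feature. Coefficients are hypothetical; the paper estimates
        # them and sends unexplained variability to the uncertainty budget.
        import numpy as np

        err_coef = np.array([5e-6, -3e-6, 8e-6])   # assumed relative error per axis

        def compensate(points):
            return points * (1.0 - err_coef)       # length error proportional to position

        probed = np.array([[100.0, 50.0, 20.0], [300.0, 50.0, 20.0]])  # mm
        p = compensate(probed)
        print("corrected length (mm):", np.linalg.norm(p[1] - p[0]))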

  19. Modeling and Measurements of CMUTs with Square Anisotropic Plates

    DEFF Research Database (Denmark)

    la Cour, Mette Funding; Christiansen, Thomas Lehrmann; Dahl-Petersen, Christian;

    2013-01-01

    The conventional method of modeling CMUTs uses the isotropic plate equation to calculate the deflection, leading to deviations from FEM simulations including anisotropic effects of around 10% in center deflection. In this paper, the deflection is found for square plates using the full anisotropic plate equation and the Galerkin method. Utilizing the symmetry of the silicon crystal, a compact and accurate expression for the deflection can be obtained. The deviation from FEM in center deflection is

  20. Fifth generation lithospheric magnetic field model from CHAMP satellite measurements

    OpenAIRE

    Maus, S.; Hermann Lühr; Martin Rother; Hemant, K.; Balasis, G.; Patricia Ritter; Claudia Stolle

    2007-01-01

    Six years of low-orbit CHAMP satellite magnetic measurements have provided an exceptionally high-quality data resource for lithospheric magnetic field modeling and interpretation. Here we describe the fifth-generation satellite-only magnetic field model MF5. The model extends to spherical harmonic degree 100. As a result of careful data selection, extensive corrections, filtering, and line leveling, the model has low noise levels, even if evaluated at the Earth's surface. The model is particu...

  1. Effective UV radiation from model calculations and measurements

    Science.gov (United States)

    Feister, Uwe; Grewe, Rolf

    1994-01-01

    Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer as well as turbidity were used as input to the model calculation. The performance of the model was tested against spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model, and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties are also addressed.

  2. Review of the Reported Measures of Clinical Validity and Clinical Utility as Arguments for the Implementation of Pharmacogenetic Testing: A Case Study of Statin-Induced Muscle Toxicity

    Directory of Open Access Journals (Sweden)

    Marleen E. Jansen

    2017-08-01

    Full Text Available Advances from pharmacogenetics (PGx) have not been implemented into health care to the expected extent. One gap that will be addressed in this study is the lack of reporting on the clinical validity and clinical utility of PGx tests. A systematic review of current reporting in the scientific literature was conducted on publications addressing PGx in the context of statins and muscle toxicity. Eighty-nine publications were included, and information was selected on reported measures of effect, arguments, and accompanying conclusions. Most authors report associations to quantify the relationship between a genetic variation and an outcome, such as an adverse drug response. Conclusions on the implementation of a PGx test are generally based on these associations, without explicit mention of other measures relevant to evaluating the test's clinical validity and clinical utility. To gain insight into the clinical impact and select useful tests, additional outcomes are needed to estimate clinical validity and utility, such as cost-effectiveness.

  3. Information and complexity measures for hydrologic model evaluation

    Science.gov (United States)

    Hydrological models are commonly evaluated through residual-based performance measures such as the root-mean-square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...
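    For concreteness, the two residual-based criteria the record names are one-liners (synthetic series shown; the study's point is precisely what such numbers do not capture about patterns):

        # Sketch: root-mean-square error and Nash-Sutcliffe efficiency.
        import numpy as np

        obs = np.array([2.0, 3.5, 5.0, 4.2, 3.1])
        sim = np.array([2.2, 3.1, 5.4, 4.0, 3.3])

        rmse = np.sqrt(np.mean((sim - obs) ** 2))
        nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
        print(f"RMSE={rmse:.3f}, NSE={nse:.3f}")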

  4. The "proactive" model of learning: Integrative framework for model-free and model-based reinforcement learning utilizing the associative learning-based proactive brain concept.

    Science.gov (United States)

    Zsuga, Judit; Biro, Klara; Papp, Csaba; Tajti, Gabor; Gesztelyi, Rudolf

    2016-02-01

    Reinforcement learning (RL) is a powerful concept underlying forms of associative learning governed by the use of a scalar reward signal, with learning taking place if expectations are violated. RL may be assessed using model-based and model-free approaches. Model-based reinforcement learning involves the amygdala, the hippocampus, and the orbitofrontal cortex (OFC). The model-free system involves the pedunculopontine-tegmental nucleus (PPTgN), the ventral tegmental area (VTA) and the ventral striatum (VS). Based on the functional connectivity of the VS, model-free and model-based RL systems center on the VS, which computes value by integrating model-free signals (received as reward prediction errors) and model-based reward-related input. Using the concept of the reinforcement learning agent, we propose that the VS serves as the value function component of the RL agent. Regarding the model utilized for model-based computations, we turn to the proactive brain concept, which offers a ubiquitous function for the default network based on its great functional overlap with contextual associative areas. By means of the default network, the brain continuously organizes its environment into context frames, enabling the formulation of analogy-based associations that are turned into predictions of what to expect. The OFC integrates reward-related information into context frames upon computing reward expectation, by compiling stimulus-reward and context-reward information offered by the amygdala and hippocampus, respectively. Furthermore, we suggest that the integration of model-based reward expectations into the value signal is further supported by the efferents of the OFC that reach structures canonical for model-free learning (e.g., the PPTgN, VTA, and VS).
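    The scalar-reward, expectation-violation learning described here is captured by the classic temporal-difference update. A minimal tabular sketch (states, rewards and rates are illustrative, not a claim about the neural implementation):

        # Sketch: TD(0) value learning driven by a reward prediction error; the
        # value update is nonzero only when expectations are violated.
        import numpy as np

        V = np.zeros(3)                  # values of states 0..2
        alpha, gamma = 0.1, 0.9
        episode = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, None)]   # (state, reward, next state)

        for _ in range(200):
            for s, r, s_next in episode:
                target = r + (gamma * V[s_next] if s_next is not None else 0.0)
                delta = target - V[s]    # reward prediction error
                V[s] += alpha * delta
        print(V.round(3))                # approaches [0.81, 0.9, 1.0]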

  5. Development of 3D statistical mandible models for cephalometric measurements

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Goo; Yi, Won Jin; Hwang, Soon Jung; Choi, Soon Chul; Lee, Sam Sun; Heo, Min Suk; Huh, Kyung Hoe; Kim, Tae Il [School of Dentistry, Seoul National University, Seoul (Korea, Republic of)]; Hong, Helen; Yoo, Ji Hyun [Division of Multimedia Engineering, Seoul Women's University, Seoul (Korea, Republic of)]

    2012-09-15

    The aim of this study was to provide sex-matched three-dimensional (3D) statistical shape models of the mandible, which would provide cephalometric parameters for 3D treatment planning and cephalometric measurements in orthognathic surgery. The subjects used to create the 3D shape models of the mandible included 23 males and 23 females. The mandibles were segmented semi-automatically from 3D facial CT images. Each individual mandible shape was reconstructed as a 3D surface model, which was parameterized to establish correspondence between different individual surfaces. Principal component analysis (PCA) applied to all mandible shapes produced a mean model and characteristic modes of variation. The cephalometric parameters were measured directly from the mean models to evaluate the 3D shape models, and the means of the measured parameters were compared with those from conventional studies. The male and female 3D statistical mean models were each developed from 23 individual mandibles. The male and female characteristic modes of variation produced by PCA captured the large variability present among the individual mandibles, and the cephalometric measurements from the developed models were very close to those reported in conventional studies. We describe the construction of the 3D mandibular shape models and present the application of the 3D mandibular template to cephalometric measurements. Optimal reference models determined from the variations produced by PCA could be used for craniofacial patients with various types of skeletal shape.
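
    A minimal NumPy sketch of the PCA shape-model construction described above, assuming the surfaces have already been brought into point-to-point correspondence (array layout and names are illustrative, not taken from the paper):

        import numpy as np

        def build_shape_model(shapes):
            """PCA statistical shape model from corresponded surfaces.

            shapes: (n_subjects, n_points, 3) array of surface coordinates.
            Returns the mean shape, the modes of variation, and the
            standard deviation of the data along each mode.
            """
            n, p, _ = shapes.shape
            X = shapes.reshape(n, -1)                  # one row vector per subject
            mean = X.mean(axis=0)
            _, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
            return mean.reshape(p, 3), Vt, S / np.sqrt(n - 1)

        # Hypothetical usage: a shape b standard deviations along mode k.
        # mean_shape, modes, sd = build_shape_model(shapes)
        # varied = mean_shape + b * sd[k] * modes[k].reshape(-1, 3)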

  6. Gibbs measures and phase transitions in one-dimensional models

    OpenAIRE

    Mallak, Saed

    2000-01-01

    Ankara: Department of Mathematics and the Institute of Engineering and Sciences of Bilkent University, 2000. Thesis (Ph.D.) -- Bilkent University, 2000. Includes bibliographical references (leaves 63-64). In this thesis we study the problem of limit Gibbs measures in one-dimensional models. We investigate uniqueness conditions for the limit Gibbs measures for one-dimensional models. We construct a one-dimensional model disproving a uniqueness conjecture formulated before for...

  7. Modelled and measured energy exchange at a snow surface

    Science.gov (United States)

    Halberstam, I.

    1979-01-01

    Results of a model developed at JPL for the energy interchange between the atmosphere and the snow are compared with measurements made over a snowfield during a warm period in March 1978. Both the model and the measurements show that the turbulent fluxes are considerably smaller than the radiative fluxes, especially during the day. The computation of turbulent fluxes, in both the model and the data, is apparently deficient because of problems inherent in the stable atmosphere.
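
    For context, turbulent sensible and latent heat fluxes in such models are commonly computed from bulk-transfer formulas of the form (a generic parameterization; the abstract does not specify the JPL model's exact scheme)

        H = \rho\, c_p\, C_H\, U\, (T_s - T_a), \qquad LE = \rho\, L_v\, C_E\, U\, (q_s - q_a)

    where the exchange coefficients C_H and C_E depend on atmospheric stability; under the strongly stable stratification typical over snow they become small and poorly constrained, consistent with the difficulty noted above.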

  8. Accuracy and Validation of Measured and Modeled Data for Distributed PV Interconnection and Control

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Emma; Kiliccote, Sila; Arnold, Daniel; von Meier, Alexandra; Arghandeh, R.

    2015-07-27

    The distribution grid is changing to become an active resource with complex modeling needs. Within the next ten years, the new active distribution grid will contain a complex mix of load, generation, storage, and automated resources, all operating on different time scales with different objectives, and all requiring detailed analysis. Electrical analysis tools used to perform capacity and stability studies have served transmission system planning for many years; in these tools, the distribution grid was treated as a load, and its details and physical components were not modeled. The growing number of measured data sources can be utilized not only for better modeling but also for control of distributed energy resources (DER). The utilization of these sources and advanced modeling tools will require data management and knowledgeable users. Each of these measurement and modeling devices has accuracy constraints, which will ultimately define how well the grid can be planned and controlled. This paper discusses the importance of measured-data accuracy for inverter control, interconnection, and planning tools, and proposes ranges of control accuracy needed to satisfy all concerns based on the present grid infrastructure.

  9. A Bayesian model for repeated measures zero-inflated count data with application to outpatient psychiatric service use

    Science.gov (United States)

    Neelon, Brian H.; O’Malley, A. James; Normand, Sharon-Lise T.

    2009-01-01

    In applications involving count data, it is common to encounter an excess number of zeros. In the study of outpatient service utilization, for example, the number of utilization days will take on integer values, with many subjects having no utilization (zero values). Mixed-distribution models, such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB), are often used to fit such data. A more general class of mixture models, called hurdle models, can be used to model zero-deflation as well as zero-inflation. Several authors have proposed frequentist approaches to fitting zero-inflated models for repeated measures. We describe a practical Bayesian approach which incorporates prior information, has optimal small-sample properties, and allows for tractable inference. The approach can be easily implemented using standard Bayesian software. A study of psychiatric outpatient service use illustrates the methods. PMID:21339863
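
    For reference, the ZIP mixture named above puts extra mass at zero by mixing a point mass with a Poisson(\lambda) count (a textbook statement of the model, not quoted from the paper):

        P(Y = 0) = \pi + (1 - \pi)\, e^{-\lambda}, \qquad P(Y = y) = (1 - \pi)\, \frac{e^{-\lambda} \lambda^{y}}{y!}, \quad y = 1, 2, \ldots

    where \pi is the probability of a structural zero (e.g., a subject who never uses services). A hurdle model instead pairs a separate zero/non-zero model with a zero-truncated count distribution, which is what allows zero-deflation as well as zero-inflation.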

  10. Modeling Change Over Time: Conceptualization, Measurement, Analysis, and Interpretation

    Science.gov (United States)

    2009-11-12

    [Report documentation form residue; the surviving fragments give a reporting period ending 29-11-2008 and pointers to the Multilevel Modeling Portal (www.ats.ucla.edu/stat/mlm/) and the Center for Multilevel Modeling (http://multilevel.ioe.ac.uk/index.html).]

  11. Stochastic magnetic measurement model for relative position and orientation estimation

    NARCIS (Netherlands)

    Schepers, H.M.; Veltink, P.H.

    2010-01-01

    This study presents a stochastic magnetic measurement model that can be used to estimate relative position and orientation. The model predicts the magnetic field generated by a single source coil at the location of the sensor. The model was used in a fusion filter that predicts the change of position…
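
    At sensor distances large compared with the coil radius, such a source-coil field is commonly approximated by the magnetic dipole model (a standard approximation; the abstract does not state the exact field model used):

        \mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi\,|\mathbf{r}|^{3}} \left( 3\,(\mathbf{m}\cdot\hat{\mathbf{r}})\,\hat{\mathbf{r}} - \mathbf{m} \right)

    where \mathbf{m} is the coil's magnetic moment and \mathbf{r} the sensor position relative to the coil; because \mathbf{B} depends on both the distance |\mathbf{r}| and the direction \hat{\mathbf{r}}, a field measurement constrains relative position and orientation jointly.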

  13. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between…
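
    For concreteness, the normal ogive model mentioned above links a binary item response Y_{ij} to the latent predictor \theta_i through the standard normal CDF \Phi (a textbook statement of the model):

        P(Y_{ij} = 1 \mid \theta_i) = \Phi(a_j\,\theta_i - b_j)

    with discrimination a_j and difficulty b_j for item j; treating \theta_i as latent, rather than plugging in an observed sum score, is what lets the hierarchical regression account for the measurement error.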

  14. Academic Self-Concept: Modeling and Measuring for Science

    Science.gov (United States)

    Hardy, Graham

    2014-01-01

    In this study, the author developed a model to describe academic self-concept (ASC) in science and validated an instrument for its measurement. Unlike previous models of science ASC, which envisaged science as a homogeneous, single global construct, this model took a multidimensional view by conceiving science self-concept as possessing distinctive…

  15. Creating a Test Validated Structural Dynamic Finite Element Model of the Multi-Utility Technology Test Bed Aircraft

    Science.gov (United States)

    Pak, Chan-Gi; Truong, Samson S.

    2014-01-01

    Small modeling errors in a finite element model eventually induce errors in the structural flexibility and mass, which propagate into unpredictable errors in the unsteady aerodynamics and the control law design. One of the primary objectives of the Multi Utility Technology Test Bed (X-56A) aircraft is the flight demonstration of active flutter suppression; this study therefore identifies the primary and secondary modes for structural model tuning based on a flutter analysis of the X-56A. A ground-vibration-test-validated structural dynamic finite element model of the X-56A is created by improving the baseline model with a model tuning tool, and two different weight configurations of the X-56A are improved in a single optimization run.

  16. Estimation of a valuation function for a diabetes mellitus-specific preference-based measure of health: the Diabetes Utility Index.

    Science.gov (United States)

    Sundaram, Murali; Smith, Michael J; Revicki, Dennis A; Miller, Lesley-Ann; Madhavan, Suresh; Hobbs, Gerry

    2010-01-01

    Preference-based measures of health (PBMH) provide 'preference' or 'utility' weights that enable the calculation of QALYs for economic evaluations of interventions. The Diabetes Utility Index (DUI) was developed as a brief, self-administered, diabetes mellitus-specific PBMH that can efficiently estimate patient-derived health-state utilities. The objectives were to describe the development of the valuation function for the DUI and to report its validation results. Multi-Attribute Utility Theory (MAUT) was used as the framework to develop the valuation function. Twenty of the 768 possible DUI health states, classified as anchor states, single-attribute-level states (including corner states), and marker states, were selected and described for preference-elicitation interviews. Visual analogue scale and standard gamble (SG) exercises were used to measure preferences for the 20 health states, defined by combinations of DUI attributes and severity levels, from individuals with diabetes recruited from primary care and community settings in and around Morgantown, WV, USA. Data collected in the interviews were used to develop a valuation function that calculates utilities for the DUI health states as well as attribute-level utilities, and a validation survey of the valuation function was conducted in collaboration with the West Virginia University (WVU) Diabetes Institute. A total of 100 individuals with diabetes were interviewed and their preferences for the various DUI health states measured. From the interview data, a DUI valuation function was developed on a scale where 1.00 = perfect health (PH) and 0.00 = the all-worst 'pits' state, and then adjusted to yield utilities on the conventional scale where 1.00 = PH and 0.00 = dead. A total of 396 patients with diabetes who received care at WVU clinics completed a DUI mail validation survey (response rate = 33%). Clinical data consisting of International Classification of Diseases, 9th edition, diagnosis…
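
    The abstract does not give the functional form, but MAUT valuation functions are typically additive or multiplicative over attributes; the additive form, shown purely as an illustration, is

        U(x) = \sum_{i=1}^{n} k_i\, u_i(x_i), \qquad \sum_{i=1}^{n} k_i = 1

    where u_i is the single-attribute utility for attribute i (estimable from the anchor, corner, and marker states) and k_i its weight; the resulting utilities are then rescaled linearly so that dead = 0.00 and perfect health = 1.00, as described above.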

  17. Elastic Model Transitions: a Hybrid Approach Utilizing Quadratic Inequality Constrained Least Squares (LSQI) and Direct Shape Mapping (DSM)

    Science.gov (United States)

    Jurenko, Robert J.; Bush, T. Jason; Ottander, John A.

    2014-01-01

    A method for transitioning linear time-invariant (LTI) models in time-varying simulation is proposed that utilizes both quadratic inequality constrained least squares (LSQI) and Direct Shape Mapping (DSM) algorithms to determine physical displacements. The approach is applicable to simulating the elastic behavior of launch vehicles and other structures that utilize multiple LTI finite element model (FEM)-derived mode sets propagated through time. The time-invariant nature of the elastic data for discrete segments of the launch vehicle trajectory presents the problem of how to transition between models while preserving motion across the transition; in addition, energy may vary between flex models when a truncated mode set is used. The LSQI-DSM algorithm can accommodate significant changes in energy between FEM models and carries elastic motion across FEM model transitions. Compared with previous approaches, the LSQI-DSM algorithm shows improvements ranging from a significant reduction to a complete removal of transients across FEM model transitions, while maintaining elastic motion from the prior state.
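
    In generic form, a least-squares problem with a quadratic inequality constraint (the LSQI problem class invoked above; the matrices here are illustrative, not taken from the paper) reads

        \min_{x}\; \|Ax - b\|_2 \quad \text{subject to} \quad \|Cx - d\|_2 \le \alpha

    One plausible reading of the abstract is that the residual term carries elastic motion across the FEM transition while the constraint bounds the change in energy, with the DSM step then mapping the resulting displacements onto the new mode set.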

  18. Advancing the extended parallel process model through the inclusion of response cost measures.

    Science.gov (United States)

    Rintamaki, Lance S; Yang, Z Janet

    2014-01-01

    This study advances the Extended Parallel Process Model through the inclusion of response cost measures, which capture the drawbacks associated with a proposed response to a health threat. A sample of 502 college students completed a questionnaire on perceptions regarding sexually transmitted infections and condom use after reading information from the Centers for Disease Control and Prevention on the health risks of sexually transmitted infections and the utility of latex condoms in preventing their transmission. The questionnaire included standard Extended Parallel Process Model assessments of perceived threat and efficacy, as well as questions pertaining to the response costs associated with condom use. Results from hierarchical ordinary least squares regression demonstrated how the addition of response cost measures improved the predictive power of the Extended Parallel Process Model, supporting the inclusion of this variable in the model.

  19. The utility of using customized heterochromatic flicker photometry (cHFP) to measure macular pigment in patients with age-related macular degeneration

    OpenAIRE

    Stringham, J. M.; Hammond, B. R.; Nolan, John; Wooten, B. R.; Mammen, A.; Smollen, W.

    2008-01-01

    The purpose of this study was to assess the utility and validity of using customized heterochromatic flicker photometry (cHFP) to measure macular pigment optical density (MPOD) in patients with intermediate stages of age-related macular degeneration (AMD). The measurement procedure was optimized to accommodate individual differences in temporal vision related to age, disease, or other factors. The validity criteria were based on the similarity of the spectral absorption curves to ex vivo curves…

  20. Utilizing Statistical Semantic Similarity Techniques for Ontology Mapping——with Applications to AEC Standard Models

    Institute of Scientific and Technical Information of China (English)

    Pan Jiayi; Chin-Pang Jack Cheng; Gloria T. Lau; Kincho H. Law

    2008-01-01

    The objective of this paper is to introduce three semi-automated approaches to ontology mapping using relatedness analysis techniques. In the architecture, engineering, and construction (AEC) industry, a number of ontological standards exist to describe the semantics of building models. Although the standards share similar scopes of interest, comparing and mapping concepts among standards is challenging because of their differences in terminology and perspective. Ontology mapping is therefore necessary to achieve information interoperability, which allows two or more information sources to exchange data and to re-use the data for further purposes. The attribute-based, corpus-based, and name-based approaches presented in this paper adopt statistical relatedness analysis techniques to discover related concepts in heterogeneous ontologies. A pilot study is conducted on the IFC and CIS/2 ontologies to evaluate the approaches. Preliminary results show that the attribute-based approach outperforms the other two approaches in terms of precision and F-measure.
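
    As a rough sketch of how such relatedness scores and the reported evaluation metrics can be computed (the vector representation, names, and threshold below are illustrative; the paper's exact measures are not given in the abstract):

        import numpy as np

        def cosine_similarity(u, v):
            """Cosine of the angle between two term-frequency vectors."""
            return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

        def precision_recall_f(true_pairs, found_pairs):
            """Evaluate proposed concept mappings against a gold standard."""
            tp = len(true_pairs & found_pairs)          # correctly found mappings
            precision = tp / len(found_pairs) if found_pairs else 0.0
            recall = tp / len(true_pairs) if true_pairs else 0.0
            f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
            return precision, recall, f

        # Hypothetical usage: two concepts represented as term-count vectors
        # over a shared vocabulary, mapped when similarity exceeds a threshold.
        ifc_beam = np.array([3.0, 1.0, 0.0, 2.0])
        cis2_beam = np.array([2.0, 1.0, 1.0, 2.0])
        print(cosine_similarity(ifc_beam, cis2_beam))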