WorldWideScience

Sample records for model utilized measures

  1. [Home health resource utilization measures using a case-mix adjustor model].

    Science.gov (United States)

    You, Sun-Ju; Chang, Hyun-Sook

    2005-08-01

    The purpose of this study was to measure home health resource utilization using a Case-Mix Adjustor Model developed in the U.S. The subjects were 484 patients who had received more than 4 home health care visits during a 60-day episode at 31 home health care institutions. Data on the 484 patients were merged onto 60-day payment segments. Based on the results, the researcher classified home health resource groups (HHRGs); the subjects were classified into 34 HHRGs in Korea. Home health resource utilization increased with clinical severity, from Minimum (C0) to moderate service utilization; the highest-utilization group consumed 5.82 times the resources of the lowest (97,000 won, in group C2F3S1). Resource utilization in home health care has become an issue of concern due to rising costs. The results suggest the need for more analytical attention to utilization and expenditures in home care using a Case-Mix Adjustor Model.

  2. Measurement of utility.

    Science.gov (United States)

    Thavorncharoensap, Montarat

    2014-05-01

    The Quality-Adjusted Life Year (QALY) is the most widely recommended health outcome measure for use in economic evaluations. The QALY gives a value to the effect of a given health intervention in terms of both quantity and quality of life. QALYs are calculated by multiplying the duration of time spent in a given health state, in years, by the quality-of-life weight for that state, known as utility. Utility can range from 0 (the worst health state, equivalent to death) to 1 (the best health state, full health). This paper provides an overview of the various methods that can be used to measure utility and outlines the recommended protocol for measuring utility, as described in the Guidelines for Health Technology Assessment in Thailand (second edition). The recommendations are as follows: wherever possible, primary data collection using the EQ-5D-3L in patients, with Thai value sets generated from the general public, should be used. Where the EQ-5D-3L is considered inappropriate, other methods such as the standard gamble (SG), time trade-off (TTO), visual analogue scale (VAS), Health Utilities Index (HUI), SF-6D, or Quality of Well-Being (QWB) scale can be used. However, justification and full details on the chosen instrument should always be provided.
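
    The QALY arithmetic this abstract describes is simple enough to sketch directly (the patient figures below are hypothetical):

```python
def qaly(durations_years, utilities):
    """Quality-adjusted life years: time spent in each health state,
    weighted by that state's utility (0 = death, 1 = full health)."""
    return sum(t * u for t, u in zip(durations_years, utilities))

# Hypothetical patient: 2 years at utility 0.9, then 3 years at utility 0.6.
print(qaly([2.0, 3.0], [0.9, 0.6]))  # 1.8 + 1.8 = 3.6 QALYs
```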

  3. Deriving the expected utility of a predictive model when the utilities are uncertain.

    Science.gov (United States)

    Cooper, Gregory F; Visweswaran, Shyam

    2005-01-01

    Predictive models are often constructed from clinical databases with the goal of eventually helping make better clinical decisions. Evaluating models using decision theory is therefore natural. When constructing a model using statistical and machine learning methods, however, we are often uncertain about precisely how the model will be used. Thus, decision-independent measures of classification performance, such as the area under an ROC curve, are popular. As a complementary method of evaluation, we investigate techniques for deriving the expected utility of a model under uncertainty about the model's utilities. We demonstrate an example of the application of this approach to the evaluation of two models that diagnose coronary artery disease.
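
    The idea of evaluating a model's expected utility while the utilities themselves are uncertain can be sketched with Monte Carlo averaging; the outcome probabilities and utility ranges below are hypothetical, and this is not the authors' derivation:

```python
import random

def expected_utility(outcome_probs, utilities):
    # Decision-theoretic expected utility over classification outcomes.
    return sum(outcome_probs[k] * utilities[k] for k in outcome_probs)

def mean_eu_under_uncertainty(outcome_probs, utility_sampler, n=10_000, seed=0):
    """Average the expected utility over draws from a distribution that
    expresses our uncertainty about the utilities."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += expected_utility(outcome_probs, utility_sampler(rng))
    return total / n

# Hypothetical diagnostic model: joint probabilities of each outcome.
probs = {"TP": 0.18, "FN": 0.02, "FP": 0.10, "TN": 0.70}

def sampler(rng):
    # Misclassification utilities are uncertain (hypothetical ranges).
    return {"TP": 1.0, "TN": 1.0,
            "FN": rng.uniform(-2.0, 0.0), "FP": rng.uniform(-1.0, 0.0)}

print(mean_eu_under_uncertainty(probs, sampler))
```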

  4. A sequential model for the structure of health care utilization.

    NARCIS (Netherlands)

    Herrmann, W.J.; Haarmann, A.; Baerheim, A.

    2017-01-01

    Traditional measurement models of health care utilization are not able to represent the complex structure of health care utilization. In this qualitative study, we, therefore, developed a new model to represent the health care utilization structure. In Norway and Germany, we conducted episodic

  5. Development of the multi-attribute Adolescent Health Utility Measure (AHUM)

    Directory of Open Access Journals (Sweden)

    Beusterien Kathleen M

    2012-08-01

    Full Text Available Abstract Objective Obtain utilities (preferences) for a generalizable set of health states experienced by older children and adolescents who receive therapy for chronic health conditions. Methods A health state classification system, the Adolescent Health Utility Measure (AHUM), was developed based on generic health status measures and input from children with Hunter syndrome and their caregivers. The AHUM contains six dimensions with 4–7 severity levels: self-care, pain, mobility, strenuous activities, self-image, and health perceptions. Using the time trade-off (TTO) approach, a UK population sample provided utilities for 62 of the 16,800 AHUM states. A mixed effects model was used to estimate utilities for the AHUM states. The AHUM was applied to trial NCT00069641 of idursulfase for Hunter syndrome and its extension (NCT00630747). Results Observations (i.e., utilities) totaled 3,744 (12 × 312 participants), with between 43 and 60 per health state except for the best and worst states, which had 312 observations each. The mean utilities for the best and worst AHUM states were 0.99 and 0.41, respectively. The random effects model was statistically significant. Discussion The AHUM health state classification system may be used in future research to enable calculation of quality-adjusted life expectancy for applicable health conditions.
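
    The time trade-off elicitation used above reduces to simple arithmetic; the 10-year horizon below is an illustrative choice, not necessarily the study's:

```python
def tto_utility(time_traded_years, time_horizon_years=10.0):
    """Time trade-off: a respondent indifferent between `time_horizon` years
    in the target health state and (time_horizon - time_traded) years in
    full health implies utility = time in full health / time horizon."""
    return (time_horizon_years - time_traded_years) / time_horizon_years

print(tto_utility(2.0))  # trading away 2 of 10 years implies utility 0.8
```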

  6. Resolving inconsistencies in utility measurement under risk: Tests of generalizations of expected utility

    OpenAIRE

    Han Bleichrodt; José María Abellán-Perpiñan; José Luis Pinto; Ildefonso Méndez-Martínez

    2005-01-01

    This paper explores inconsistencies that occur in utility measurement under risk when expected utility theory is assumed and the contribution that prospect theory and some other generalizations of expected utility can make to the resolution of these inconsistencies. We used five methods to measure utilities under risk and found clear violations of expected utility. Of the theories studied, prospect theory was the most consistent with our data. The main improvement of prospect theory over expe...

  7. A utility-theoretic model for QALYs and willingness to pay.

    Science.gov (United States)

    Klose, Thomas

    2003-01-01

    Despite the widespread use of quality-adjusted life years (QALYs) in economic evaluation studies, their utility-theoretic foundation remains unclear. A model for preferences over health, money, and time is presented in this paper. Under the usual assumptions of the original QALY model, an additively separable representation of the utilities in different periods exists. In contrast to the usual assumption that QALY weights depend solely on aspects of health-related quality of life, wealth-standardized QALY weights might vary with the wealth level in the presented extension of the original QALY model, resulting in an inconsistent measurement of QALYs. Further assumptions are presented to make the measurement of QALYs consistent with lifetime preferences over health and money. Even under these strict assumptions, QALYs and WTP (which can also be defined in this utility-theoretic model) are not equivalent preference-based measures of the effects of health technologies at the individual level. The results suggest that the individual WTP per QALY can depend on the magnitude of the QALY gain as well as on the disease burden when health influences the marginal utility of wealth. Further research seems indicated to examine this structural aspect of preferences over health and wealth and to quantify its impact. Copyright 2002 John Wiley & Sons, Ltd.

  8. Beyond Bentham – Measuring Procedural Utility

    OpenAIRE

    Bruno S. Frey; Alois Stutzer

    2001-01-01

    We propose that outcome utility and process utility can be distinguished and empirically measured. People gain procedural utility from participating in the political decision-making process itself, irrespective of the outcome. Nationals enjoy both outcome and process utility, while foreigners are excluded from political decision-making and therefore cannot enjoy the corresponding procedural utility. Utility is measured by individuals’ reported subjective well-being or happiness. We find tha...

  9. Stock Selection for Portfolios Using Expected Utility-Entropy Decision Model

    Directory of Open Access Journals (Sweden)

    Jiping Yang

    2017-09-01

    Full Text Available Yang and Qiu proposed, and recently improved, an expected utility-entropy (EU-E) measure of risk and decision model. When segregation holds, Luce et al. derived an expected utility term plus a constant multiple of the Shannon entropy as the representation of risky choices, further supporting the reasonableness of the EU-E decision model. In this paper, we apply the EU-E decision model to selecting the set of stocks to be included in a portfolio. We first select 7 and 10 stocks from the 30 component stocks of the Dow Jones Industrial Average index, and then derive and compare the efficient portfolios in the mean-variance framework. The results imply that efficient portfolios composed of 7 (10) stocks selected using the EU-E model with intermediate intervals of the tradeoff coefficient are more efficient than those composed of stocks selected using the expected utility model. Furthermore, the efficient portfolios of 7 (10) stocks selected by the EU-E decision model have almost the same efficient frontier as that of the sample of all stocks. This suggests the necessity of incorporating the expected utility and Shannon entropy together when making risky decisions, further demonstrating the importance of Shannon entropy as a measure of uncertainty, as well as the applicability of the EU-E model as a decision-making model.
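
    A rough sketch of an expected utility-entropy style score follows. The exact form and normalization of Yang and Qiu's measure differ, so treat this only as an illustration of combining an expected-utility term with a Shannon-entropy term through a tradeoff coefficient:

```python
import math

def shannon_entropy(probs):
    # Entropy in nats of a discrete outcome distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def eu_e_score(outcomes, probs, utility, lam):
    """Illustrative EU-E style score: reward expected utility, penalize
    outcome uncertainty, with tradeoff coefficient lam in [0, 1]."""
    eu = sum(p * utility(x) for x, p in zip(outcomes, probs))
    return (1 - lam) * eu - lam * shannon_entropy(probs)

u = lambda x: x  # linear utility, for the sketch only
risky = eu_e_score([-10, 0, 40], [1/3, 1/3, 1/3], u, lam=0.3)
safe  = eu_e_score([10],         [1.0],           u, lam=0.3)
print(risky, safe)  # same expected value, but the certain outcome scores higher
```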

  10. Network Bandwidth Utilization Forecast Model on High Bandwidth Network

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl; Sim, Alex

    2014-07-07

    With the increasing number of geographically distributed scientific collaborations and the growth in data size, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with traditional approaches such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
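
    A much-simplified stand-in for the STL-plus-ARIMA pipeline illustrates the decompose-then-forecast idea: a phase-mean seasonal component (in place of STL) plus an AR(1) residual model (in place of ARIMA), on synthetic utilization data:

```python
def seasonal_means(series, period):
    # Crude seasonal component: mean of each phase (stand-in for STL).
    return [sum(series[i] for i in range(p, len(series), period)) /
            len(range(p, len(series), period)) for p in range(period)]

def fit_ar1(resid):
    # Least-squares AR(1) coefficient (stand-in for ARIMA).
    num = sum(resid[t] * resid[t - 1] for t in range(1, len(resid)))
    den = sum(r * r for r in resid[:-1])
    return num / den if den else 0.0

def forecast(series, period, steps):
    seas = seasonal_means(series, period)
    resid = [x - seas[i % period] for i, x in enumerate(series)]
    phi, r = fit_ar1(resid), resid[-1]
    out = []
    for h in range(1, steps + 1):
        r *= phi  # residual decays toward zero at rate phi
        out.append(seas[(len(series) + h - 1) % period] + r)
    return out

# Synthetic hourly utilization with a strong 24-sample daily cycle.
hist = [50 + 30 * (8 <= t % 24 < 18) for t in range(240)]
print(forecast(hist, period=24, steps=3))
```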

  11. Network bandwidth utilization forecast model on high bandwidth networks

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wuchert (William) [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-03-30

    With the increasing number of geographically distributed scientific collaborations and the growth in data size, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with traditional approaches such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.

  12. Clinical utility of measures of breathlessness.

    Science.gov (United States)

    Cullen, Deborah L; Rodak, Bernadette

    2002-09-01

    The clinical utility of measures of dyspnea has been debated in the health care community. Although breathlessness can be evaluated with various instruments, the most effective dyspnea measurement tool for patients with chronic lung disease, or for measuring treatment effectiveness, remains uncertain. Understanding the evidence for the validity and reliability of these instruments may provide a basis for appropriate clinical application. We evaluated instruments designed to measure breathlessness, either as single-symptom or multidimensional instruments, based on psychometric foundations such as validity, reliability, and discriminative and evaluative properties. Each dyspnea measurement instrument was classified so as to recommend its clinical application in terms of exercise, benchmarking patients, activities of daily living, patient outcomes, clinical trials, and responsiveness to treatment. Eleven dyspnea measurement instruments were selected. Each instrument was assessed as discriminative or evaluative and then analyzed as to its psychometric properties and purpose of design. Descriptive data from all studies were described according to their primary patient application (ie, chronic obstructive pulmonary disease, asthma, or other patient populations). The Borg Scale and the Visual Analogue Scale are applicable to exertion and thus can be applied to any cardiopulmonary patient to determine dyspnea. All other measures were determined appropriate for chronic obstructive pulmonary disease, whereas the Shortness of Breath Questionnaire can be applied to cystic fibrosis and lung transplant patients. The most appropriate utility for all instruments was measuring the effects on activities of daily living and benchmarking patient progress. Instruments that quantify function and health-related quality of life have great utility for documenting outcomes but may be limited in documenting treatment responsiveness in terms of clinically important changes. The dyspnea

  13. An absolute scale for measuring the utility of money

    Science.gov (United States)

    Thomas, P. J.

    2010-07-01

    Measurement of the utility of money is essential in the insurance industry, for prioritising public spending schemes and for the evaluation of decisions on protection systems in high-hazard industries. Up to this time, however, there has been no universally agreed measure for the utility of money, with many utility functions being in common use. In this paper, we shall derive a single family of utility functions, which have risk-aversion as the only free parameter. The fact that they return a utility of zero at their low, reference datum, either the utility of no money or of one unit of money, irrespective of the value of risk-aversion used, qualifies them to be regarded as absolute scales for the utility of money. Evidence of validation for the concept will be offered based on inferential measurements of risk-aversion, using diverse measurement data.
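
    One standard single-parameter family with the properties described, zero utility at one unit of money and risk-aversion as the only free parameter, is the isoelastic (CRRA) utility. It is shown here as an illustration of such a scale, not as a reproduction of the paper's exact family:

```python
import math

def crra_utility(w, eps):
    """Isoelastic (power) utility with constant relative risk-aversion eps,
    normalized so that u(1) = 0 for every eps, echoing the requirement of a
    fixed zero at the reference datum of one unit of money."""
    if w <= 0:
        raise ValueError("wealth must be positive")
    if abs(eps - 1.0) < 1e-12:
        return math.log(w)  # limiting case as eps -> 1
    return (w ** (1 - eps) - 1) / (1 - eps)

# More risk-averse agents value the same doubling of wealth less:
for eps in (0.0, 0.5, 1.0, 2.0):
    print(eps, crra_utility(2.0, eps))
```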

  14. Simultaneous measurement of glucose transport and utilization in the human brain

    Science.gov (United States)

    Shestov, Alexander A.; Emir, Uzay E.; Kumar, Anjali; Henry, Pierre-Gilles; Seaquist, Elizabeth R.

    2011-01-01

    Glucose is the primary fuel for brain function, and determining the kinetics of cerebral glucose transport and utilization is critical for quantifying cerebral energy metabolism. The kinetic parameters of cerebral glucose transport, K_M^t and V_max^t, in humans have so far been obtained by measuring steady-state brain glucose levels by proton (1H) NMR as a function of plasma glucose levels and fitting steady-state models to these data. Extraction of the kinetic parameters for cerebral glucose transport necessitated assuming a constant cerebral metabolic rate of glucose (CMRglc) obtained from other tracer studies, such as 13C NMR. Here we present new methodology to simultaneously obtain kinetic parameters for glucose transport and utilization in the human brain by fitting both dynamic and steady-state 1H NMR data with a reversible, non-steady-state Michaelis-Menten model. Dynamic data were obtained by measuring brain and plasma glucose time courses during glucose infusions to raise and maintain plasma concentration at ∼17 mmol/l for ∼2 h in five healthy volunteers. Steady-state brain vs. plasma glucose concentrations were taken from the literature and the steady-state portions of data from the five volunteers. In addition to providing simultaneous measurements of glucose transport and utilization and obviating assumptions of constant CMRglc, this methodology does not necessitate infusions of expensive or radioactive tracers. Using this new methodology, we found that the maximum transport capacity for glucose through the blood-brain barrier was nearly twofold higher than maximum cerebral glucose utilization. The glucose transport and utilization parameters were consistent with previously published values for human brain. PMID:21791622
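
    A reversible Michaelis-Menten transport model of the kind fitted above can be sketched as a net flux driven by the plasma-brain concentration difference; the functional form is the standard one from the steady-state literature, and the parameter values are illustrative, not the paper's fits:

```python
def net_transport(g_plasma, g_brain, v_max, k_t):
    """Reversible Michaelis-Menten net glucose flux across the
    blood-brain barrier (illustrative parameterization)."""
    return v_max * (g_plasma - g_brain) / (k_t + g_plasma + g_brain)

def simulate_brain_glucose(g_plasma, v_max, k_t, cmr_glc,
                           g0=1.0, dt=0.01, steps=20000):
    # Forward-Euler integration of dGbrain/dt = transport - utilization.
    g = g0
    for _ in range(steps):
        g += dt * (net_transport(g_plasma, g, v_max, k_t) - cmr_glc)
    return g

# Clamp plasma at 17 mmol/l (as in the infusion protocol) and let the brain
# level settle; transport capacity ~2x utilization, echoing the abstract.
print(simulate_brain_glucose(g_plasma=17.0, v_max=0.6, k_t=2.0, cmr_glc=0.3))
```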

  15. Insider Models with Finite Utility in Markets with Jumps

    International Nuclear Information System (INIS)

    Kohatsu-Higa, Arturo; Yamazato, Makoto

    2011-01-01

    In this article we consider, under a Lévy process model for the stock price, the utility optimization problem for an insider agent whose additional information is the final price of the stock blurred with an additional independent noise which vanishes as the final time approaches. Our main interest is establishing conditions under which the utility of the insider is finite. Mathematically, the problem entails the study of a “progressive” enlargement of filtration with respect to random measures. We study the jump structure of the process which leads to the conclusion that in most cases the utility of the insider is finite and his optimal portfolio is bounded. This can be explained financially by the high risks involved in models with jumps.

  16. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    Science.gov (United States)

    Piantadosi, Steven T.; Hayden, Benjamin Y.

    2015-01-01

    Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions can be decomposed into additive functions) into a heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice. PMID:25914613
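
    The mimicry claim can be illustrated for the additive case: a weighted-sum utility model and a dimension-by-dimension comparison that never scores either option individually make identical predictions. This is a toy sketch, not the paper's general construction:

```python
import random

def utility_choice(a, b, weights):
    """Utility-stage model: compute an additive utility for each option,
    then pick the larger (ties go to option a)."""
    ua = sum(w * x for w, x in zip(weights, a))
    ub = sum(w * x for w, x in zip(weights, b))
    return "a" if ua >= ub else "b"

def heuristic_choice(a, b, weights):
    """Utility-free mimic: accumulate weighted per-dimension differences
    and read off the sign; no option ever receives its own utility value."""
    edge = sum(w * (xa - xb) for w, xa, xb in zip(weights, a, b))
    return "a" if edge >= 0 else "b"

rng = random.Random(1)
w = [0.6, 0.3, 0.1]
trials = [([rng.random() for _ in range(3)], [rng.random() for _ in range(3)])
          for _ in range(1000)]
agree = all(utility_choice(a, b, w) == heuristic_choice(a, b, w)
            for a, b in trials)
print(agree)  # True: the two models make identical predictions
```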

  17. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    Directory of Open Access Journals (Sweden)

    Steven T Piantadosi

    2015-04-01

    Full Text Available Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage in choice, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions are linearly separable) into a psychologically plausible heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice.

  18. Additive conjoint measurement for multiattribute utility

    NARCIS (Netherlands)

    Maas, A.; Wakker, P.P.

    1994-01-01

    This paper shows that multiattribute utility can be simplified by methods from additive conjoint measurement. Given additive conjoint measurability under certainty, axiomatizations can be simplified, and implementation and reliability of elicitation can be improved. This also contributes to the

  19. Clinical Utility of the DSM-5 Alternative Model of Personality Disorders

    DEFF Research Database (Denmark)

    Bach, Bo; Markon, Kristian; Simonsen, Erik

    2015-01-01

    In Section III, Emerging Measures and Models, DSM-5 presents an Alternative Model of Personality Disorders, which is an empirically based model of personality pathology measured with the Level of Personality Functioning Scale (LPFS) and the Personality Inventory for DSM-5 (PID-5). These novel instruments assess level of personality impairment and pathological traits. Objective. A number of studies have supported the psychometric qualities of the LPFS and the PID-5, but the utility of these instruments in clinical assessment and treatment has not been extensively evaluated. The goal of this study was to evaluate the clinical utility of this alternative model of personality disorders. Method. We administered the LPFS and the PID-5 to psychiatric outpatients diagnosed with personality disorders and other nonpsychotic disorders. The personality profiles of six characteristic patients were inspected.

  20. A New Filtering Algorithm Utilizing Radial Velocity Measurement

    Institute of Scientific and Technical Information of China (English)

    LIU Yan-feng; DU Zi-cheng; PAN Quan

    2005-01-01

    Pulse Doppler radar measurements consist of range, azimuth, elevation, and radial velocity. Most radar tracking algorithms in engineering utilize only the position measurement. The extended Kalman filter with a radial velocity measurement is presented, and a new filtering algorithm utilizing the radial velocity measurement is then proposed to improve tracking results; theoretical analysis is also given. Simulation results of the new algorithm, the converted measurement Kalman filter, and the extended Kalman filter are compared. The effectiveness of the new algorithm is verified by the simulation results.
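
    The radial-velocity (range-rate) measurement model and its Jacobian, the pieces an extended Kalman filter needs beyond position-only tracking, can be sketched for a 2-D constant-velocity state; the state layout and numbers are assumptions for illustration, and the full filter recursion is omitted:

```python
import math

def h(state):
    """Measurement function for state [x, y, vx, vy]:
    returns range, azimuth, and radial velocity (range rate)."""
    x, y, vx, vy = state
    r = math.hypot(x, y)
    return [r, math.atan2(y, x), (x * vx + y * vy) / r]

def h_jacobian(state):
    # Partial derivatives of [range, azimuth, radial velocity] with
    # respect to the state, as needed by the EKF update step.
    x, y, vx, vy = state
    r = math.hypot(x, y)
    rdot = (x * vx + y * vy) / r
    return [
        [x / r,                     y / r,                     0.0,   0.0],
        [-y / r**2,                 x / r**2,                  0.0,   0.0],
        [(vx - rdot * x / r) / r,   (vy - rdot * y / r) / r,   x / r, y / r],
    ]

# Target at (3, 4) km closing on the radar: negative range rate.
s = [3000.0, 4000.0, -50.0, 20.0]
print(h(s))
```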

  1. The predictive validity of prospect theory versus expected utility in health utility measurement.

    Science.gov (United States)

    Abellan-Perpiñan, Jose Maria; Bleichrodt, Han; Pinto-Prades, Jose Luis

    2009-12-01

    Most health care evaluations today still assume expected utility even though the descriptive deficiencies of expected utility are well known. Prospect theory is the dominant descriptive alternative to expected utility. This paper tests whether prospect theory leads to better health evaluations than expected utility. The approach is purely descriptive: we explore how simple measurements, together with prospect theory and expected utility, predict choices and rankings between more complex stimuli. For decisions involving risk, prospect theory is significantly more consistent with rankings and choices than expected utility. This conclusion no longer holds when we use prospect theory utilities and expected utilities to predict intertemporal decisions. The latter finding cautions against the common assumption in health economics that health state utilities are transferable across decision contexts. Our results suggest that the standard gamble, and algorithms based on it, should not be used to value health.
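
    For reference, the prospect theory components being tested against expected utility look like this; the parameter values are the commonly cited Tversky-Kahneman (1992) estimates, used here purely for illustration:

```python
def pt_value(x, alpha=0.88, lam=2.25):
    # Value function: concave for gains, convex and steeper
    # (loss aversion coefficient lam) for losses.
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def pt_weight(p, gamma=0.61):
    # Inverse-S probability weighting: small probabilities are
    # overweighted, large probabilities underweighted.
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def pt_prospect(p, gain):
    # Simple prospect: win `gain` with probability p, else nothing.
    return pt_weight(p) * pt_value(gain)

print(pt_weight(0.01), pt_weight(0.99))  # overweighted vs. underweighted
```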

  2. Utility of Monte Carlo Modelling for Holdup Measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Belian, Anthony P.; Russo, P. A. (Phyllis A.); Weier, Dennis R. (Dennis Ray),

    2005-01-01

    Non-destructive assay (NDA) measurements performed to locate and quantify holdup in the Oak Ridge K-25 enrichment cascade used neutron totals counting and low-resolution gamma-ray spectroscopy. This facility housed the gaseous diffusion process for enrichment of uranium, in the form of UF{sub 6} gas, from {approx} 20% to 93%. The {sup 235}U inventory in K-25 is all holdup. These buildings have been slated for decontamination and decommissioning. The NDA measurements establish the inventory quantities and will be used to assure criticality safety and meet criteria for waste analysis and transportation. The tendency to err on the side of conservatism for the sake of criticality safety in specifying total NDA uncertainty argues, in the interests of safety and costs, for obtaining the best possible value of uncertainty at the conservative confidence level for each item of process equipment. Variable deposit distribution is a complex systematic effect (i.e., one determined by multiple independent variables) on the portable NDA results for very large and bulk converters that contributes greatly to the total uncertainty for holdup in converters measured by gamma or neutron NDA methods. Because the magnitudes of complex systematic effects are difficult to estimate, computational tools are important for evaluating those that are large. Motivated by very large discrepancies between gamma and neutron measurements of high-mass converters, with gamma results tending to dominate, the Monte Carlo code MCNP has been used to determine the systematic effects of deposit distribution on gamma and neutron results for {sup 235}U holdup mass in converters. This paper details the numerical methodology used to evaluate large systematic effects unique to each measurement type, validates the methodology by comparison with measurements, and discusses how modeling tools can supplement the calibration of instruments used for holdup measurements by providing realistic values at well

  3. Measuring the potential utility of seasonal climate predictions

    Science.gov (United States)

    Tippett, Michael K.; Kleeman, Richard; Tang, Youmin

    2004-11-01

    Variation of sea surface temperature (SST) on seasonal-to-interannual time-scales leads to changes in seasonal weather statistics and seasonal climate anomalies. Relative entropy, an information theory measure of utility, is used to quantify the impact of SST variations on seasonal precipitation compared to natural variability. An ensemble of general circulation model (GCM) simulations is used to estimate this quantity in three regions where tropical SST has a large impact on precipitation: South Florida, the Nordeste of Brazil and Kenya. We find the yearly variation of relative entropy is strongly correlated with shifts in ensemble mean precipitation and weakly correlated with ensemble variance. Relative entropy is also found to be related to measures of the ability of the GCM to reproduce observations.
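
    For Gaussian forecast and climatological distributions, relative entropy has a closed form, which makes the abstract's finding, that shifts in the ensemble mean dominate, easy to illustrate (the numbers are illustrative):

```python
import math

def gaussian_kl(mu_f, var_f, mu_c, var_c):
    """Relative entropy (KL divergence) of a Gaussian forecast from a
    Gaussian climatology: the information-theoretic 'utility' of the
    prediction in the sense used above."""
    return 0.5 * (math.log(var_c / var_f)
                  + (var_f + (mu_f - mu_c) ** 2) / var_c
                  - 1.0)

# Identical forecast and climatology carry no extra information...
print(gaussian_kl(0.0, 1.0, 0.0, 1.0))
# ...while a one-sigma mean shift contributes far more relative entropy
# than a modest variance change, mirroring the paper's correlation result.
print(gaussian_kl(1.0, 1.0, 0.0, 1.0), gaussian_kl(0.0, 0.8, 0.0, 1.0))
```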

  4. The utility target market model

    International Nuclear Information System (INIS)

    Leng, G.J.; Martin, J.

    1994-01-01

    A new model (the Utility Target Market Model) is used to evaluate the economic benefits of photovoltaic (PV) power systems located at the electrical utility customer site. These distributed PV demand-side generation systems can be evaluated in a similar manner to other demand-side management technologies. The energy and capacity values of an actual PV system located in the service area of the New England Electrical System (NEES) are the two utility benefits evaluated. The annual stream of energy and capacity benefits calculated for the utility is converted to the installed cost per watt that the utility should be willing to invest to receive this benefit stream. Different discount rates are used to show the sensitivity of the allowable installed cost of the PV systems to a utility's average cost of capital. Capturing both the energy and capacity benefits of these relatively environmentally friendly distributed generators, NEES should be willing to invest in this technology when the installed cost per watt declines to ca $2.40 using NEES' rated cost of capital (8.78%). If a social discount rate of 3% is used, installation should be considered when installed cost approaches $4.70/W. Since recent installations in the Sacramento Municipal Utility District have cost between $7-8/W, cost-effective utility applications of PV are close. 22 refs., 1 fig., 2 tabs
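
    Converting an annual benefit stream into an allowable installed cost per watt is a present-value calculation. The sketch below uses the two discount rates quoted above, but the benefit level and 25-year lifetime are assumed for illustration, not taken from the NEES analysis:

```python
def allowable_installed_cost(annual_benefit_per_w, rate, years):
    """Break-even installed cost ($/W): present value of a level stream
    of annual utility benefits ($/W/year) over the system lifetime."""
    return annual_benefit_per_w * (1 - (1 + rate) ** -years) / rate

# A lower (social) discount rate raises the justifiable installed cost:
print(allowable_installed_cost(0.25, 0.0878, 25))  # utility cost of capital
print(allowable_installed_cost(0.25, 0.03, 25))    # social discount rate
```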

  5. Rank dependent expected utility models of tax evasion.

    OpenAIRE

    Erling Eide

    2001-01-01

    In this paper the rank-dependent expected utility theory is substituted for the expected utility theory in models of tax evasion. It is demonstrated that the comparative statics results of the expected utility, portfolio choice model of tax evasion carry over to the more general rank dependent expected utility model.
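
    The rank-dependent mechanics can be sketched briefly: outcomes are ranked and weighted through a transformation of decumulative probabilities, so a pessimistic weighting overweights bad outcomes (toy numbers, linear utility):

```python
def rdeu(outcomes, probs, utility, weight):
    """Rank-dependent expected utility: sort outcomes from worst to best
    and weight each by the *difference* of transformed decumulative
    probabilities, rather than by the raw probabilities."""
    pairs = sorted(zip(outcomes, probs))  # worst outcome first
    total, tail = 0.0, 1.0                # tail = P(outcome at least this bad)
    for x, p in pairs:
        total += (weight(tail) - weight(tail - p)) * utility(x)
        tail -= p
    return total

w = lambda q: q ** 2   # pessimistic weighting: overweights bad ranks
u = lambda x: x        # linear utility, for the sketch
ev   = rdeu([0, 100], [0.5, 0.5], u, lambda q: q)  # identity weight = plain EU
pess = rdeu([0, 100], [0.5, 0.5], u, w)
print(ev, pess)  # 50.0 vs 25.0: the pessimist discounts the risky gamble
```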

  6. Direct estimates of unemployment rate and capacity utilization in macroeconometric models

    Energy Technology Data Exchange (ETDEWEB)

    Klein, L R [Univ. of Pennsylvania, Philadelphia; Su, V

    1979-10-01

    The problem of measuring resource-capacity utilization as a factor in overall economic efficiency is examined, and a tentative solution is offered. A macro-econometric model is applied to the aggregate production function by linking unemployment rate and capacity utilization rate. Partial- and full-model simulations use Wharton indices as a filter and produce direct estimates of unemployment rates. The simulation paths of durable-goods industries, which are more capital-intensive, are found to be more sensitive to business cycles than the nondurable-goods industries. 11 references.

  7. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC were summarized. The model was developed in the study of developing high performance SPEEDI with the purpose of introducing meteorological forecast function into the environmental emergency response system. The procedure of PHYSIC calculation consists of three steps; preparation of relevant files, creation and submission of JCL, and graphic output of results. A user can carry out the above procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  8. Nonlinear Growth Models as Measurement Models: A Second-Order Growth Curve Model for Measuring Potential.

    Science.gov (United States)

    McNeish, Daniel; Dumas, Denis

    2017-01-01

    Recent methodological work has highlighted the promise of nonlinear growth models for addressing substantive questions in the behavioral sciences. In this article, we outline a second-order nonlinear growth model in order to measure a critical notion in development and education: potential. Here, potential is conceptualized as having three components (ability, capacity, and availability), where ability is the amount of skill a student is estimated to have at a given timepoint, capacity is the maximum amount of ability a student is predicted to be able to develop asymptotically, and availability is the difference between capacity and ability at any particular timepoint. We argue that single-timepoint measures are typically insufficient for discerning information about potential, and we therefore describe a general framework that incorporates a growth model into the measurement model to capture these three components. We then provide an illustrative example using the public-use Early Childhood Longitudinal Study-Kindergarten data set and a Michaelis-Menten growth function (reparameterized from its common application in biochemistry) to demonstrate the proposed model as applied to measuring potential within an educational context. The advantage of this approach compared to currently utilized methods is discussed, as are future directions and limitations.
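
    The Michaelis-Menten parameterization of the three components can be sketched directly; the capacity and half-time values below are hypothetical:

```python
def ability(t, capacity, k):
    """Michaelis-Menten growth curve reparameterized for learning:
    estimated skill at time t, rising toward the asymptote `capacity`
    (k is the time at which half of capacity is reached)."""
    return capacity * t / (k + t)

def availability(t, capacity, k):
    # Room left to grow: the gap between capacity and current ability.
    return capacity - ability(t, capacity, k)

# Hypothetical student: capacity 100, half of capacity reached at t = 2.
for t in (0, 2, 8):
    print(t, ability(t, 100, 2), availability(t, 100, 2))
```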

  9. Multiattribute health utility scoring for the computerized adaptive measure CAT-5D-QOL was developed and validated.

    Science.gov (United States)

    Kopec, Jacek A; Sayre, Eric C; Rogers, Pamela; Davis, Aileen M; Badley, Elizabeth M; Anis, Aslam H; Abrahamowicz, Michal; Russell, Lara; Rahman, Md Mushfiqur; Esdaile, John M

    2015-10-01

    The CAT-5D-QOL is a previously reported item response theory (IRT)-based computerized adaptive tool to measure five domains (attributes) of health-related quality of life. The objective of this study was to develop and validate a multiattribute health utility (MAHU) scoring method for this instrument. The MAHU scoring system was developed in two stages. In phase I, we obtained standard gamble (SG) utilities for 75 hypothetical health states in which only one domain varied (15 states per domain). In phase II, we obtained SG utilities for 256 multiattribute states. We fit a multiplicative regression model to predict SG utilities from the five IRT domain scores. The prediction model was constrained using data from phase I. We validated MAHU scores by comparing them with the Health Utilities Index Mark 3 (HUI3) and directly measured utilities and by assessing between-group discrimination. MAHU scores have a theoretical range from -0.842 to 1. In the validation study, the scores were, on average, higher than HUI3 utilities and lower than directly measured SG utilities. MAHU scores correlated strongly with the HUI3 (Spearman ρ = 0.78) and discriminated well between groups expected to differ in health status. Results reported here provide initial evidence supporting the validity of the MAHU scoring system for the CAT-5D-QOL. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. A psychometric evaluation of the Swedish version of the Research Utilization Questionnaire using a Rasch measurement model.

    Science.gov (United States)

    Lundberg, Veronica; Boström, Anne-Marie; Malinowsky, Camilla

    2017-07-30

    Evidence-based practice and research utilisation have become commonly used concepts in health care. The Research Utilization Questionnaire (RUQ) is recognised as a widely used instrument measuring the perception of research utilisation among nursing staff in clinical practice. Few studies, however, have analysed the psychometric properties of the RUQ. The aim of this study was to examine the psychometric properties of the Swedish version of the three subscales of the RUQ using a Rasch measurement model. This study has a cross-sectional design using a sample of 163 staff (response rate 81%) working in one nursing home in Sweden. Data were collected using the Swedish version of the RUQ in 2012. The three subscales Attitudes towards research, Availability of and support for research use, and Use of research findings in clinical practice were investigated. Data were analysed using a Rasch measurement model. The results indicate the presence of multidimensionality in all subscales. Moreover, the internal scale validity and person response validity results are also less than satisfactory, especially for the subscale Use of research findings. Overall, there seems to be a problem with the negatively worded statements. The findings suggest that clarification and refining of items, including additional psychometric evaluation of the RUQ, are needed before the instrument is used in clinical practice and research studies among staff in nursing homes. © 2017 Nordic College of Caring Science.

  11. Exponential GARCH Modeling with Realized Measures of Volatility

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Huang, Zhuo

    We introduce the Realized Exponential GARCH model, which can utilize multiple realized volatility measures for the modeling of a return series. The model specifies the dynamic properties of both returns and realized measures, and is characterized by a flexible modeling of the dependence between returns and volatility. We apply the model to DJIA stocks and an exchange traded fund that tracks the S&P 500 index and find that specifications with multiple realized measures dominate those that rely on a single realized measure. The empirical analysis suggests some convenient simplifications.

  12. RDT&E Laboratory Capacity Utilization and Productivity Measurement Methods for Financial Decision-Making within DON

    National Research Council Canada - National Science Library

    Haupt, Jeffrey

    1998-01-01

    .... Industry capacity utilization and productivity measurement techniques and models were evaluated for their potential application to the Naval Air Warfare Center Aircraft Division (NAWCAD) RDT&E organization...

  13. A note on additive risk measures in rank-dependent utility

    NARCIS (Netherlands)

    Goovaerts, M.J.; Kaas, R.; Laeven, R.J.A.

    2010-01-01

    This note proves that risk measures obtained by applying the equivalent utility principle in rank-dependent utility are additive if and only if the utility function is linear or exponential and the probability weighting (distortion) function is the identity.
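    The exponential case of this result can be checked numerically. The sketch below uses the standard equivalent-utility premium for exponential utility, H(X) = (1/α) ln E[exp(αX)], and verifies additivity over two independent discrete risks. It illustrates the expected-utility special case (identity distortion); all numbers are hypothetical.

```python
import math, itertools

def exp_premium(outcomes, alpha):
    """Equivalent-utility premium for exponential utility:
    H(X) = (1/alpha) * ln E[exp(alpha * X)].
    `outcomes` is a list of (value, probability) pairs."""
    return math.log(sum(p * math.exp(alpha * x) for x, p in outcomes)) / alpha

alpha = 0.5
X = [(0, 0.7), (2, 0.3)]
Y = [(1, 0.5), (3, 0.5)]
# Distribution of X + Y for independent X and Y (discrete convolution):
XY = [(x + y, px * py) for (x, px), (y, py) in itertools.product(X, Y)]
# Additivity: H(X + Y) = H(X) + H(Y) for independent risks.
print(abs(exp_premium(XY, alpha) - (exp_premium(X, alpha) + exp_premium(Y, alpha))) < 1e-12)  # → True
```

    The same check with a non-exponential, non-linear utility would generally fail, which is the content of the "only if" direction of the note.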

  14. New Energy Utility Business Models

    International Nuclear Information System (INIS)

    Potocnik, V.

    2016-01-01

    Recently, big changes have taken place in the power sector: energy efficiency and renewable energy sources are progressing quickly, distributed or decentralised generation of electricity is expanding, climate change requires reduction of greenhouse gas emissions, and price volatility and uncertainty of fossil fuel supply are common. Those changes have rendered obsolete the vertically integrated business models that dominated energy utility organisations for a hundred years, and new business models are being introduced. Those models take into account current changes in the power sector and enable a wider application of energy efficiency and renewable energy sources, especially for consumers, with the decentralisation of electricity generation and compliance with the requirements of climate and environmental preservation. New business models also address the question of financial compensation for utilities facing reduced centralised energy generation, while contributing to local development and employment. (author)

  15. Utility measurement in healthcare: the things I never got to.

    Science.gov (United States)

    Torrance, George W

    2006-01-01

    The present article provides a brief historical background on the development of utility measurement and cost-utility analysis in healthcare. It then outlines a number of research ideas in this field that the author never got to. The first idea is extremely fundamental. Why is health economics the only application of economics that does not use the discipline of economics? And, more importantly, what discipline should it use? Research ideas are discussed to investigate precisely the underlying theory and axiom systems of both Paretian welfare economics and the decision-theoretical utility approach. Can the two approaches be integrated or modified in some appropriate way so that they better reflect the needs of the health field? The investigation is described both for the individual and societal levels. Constructing a 'Robinson Crusoe' society of only a few individuals with different health needs, preferences and willingness to pay is suggested as a method for gaining insight into the problem. The second idea concerns the interval property of utilities and, therefore, QALYs. It specifically concerns the important requirement that changes of equal magnitude anywhere on the utility scale, or alternatively on the QALY scale, should be equally desirable. Unfortunately, one of the original restrictions on utility theory states that such comparisons are not permitted by the theory. It is shown, in an important new finding, that while this restriction applies in a world of certainty, it does not in a world of uncertainty, such as healthcare. Further research is suggested to investigate this property under both certainty and uncertainty. 
Other research ideas that are described include: the development of a precise axiomatic basis for the time trade-off method; the investigation of chaining as a method of preference measurement with the standard gamble or time trade-off; the development and training of a representative panel of the general public to improve the completeness

  16. An approach for evaluating utility-financed energy conservation programs. The economic welfare model

    Energy Technology Data Exchange (ETDEWEB)

    Costello, K W; Galen, P S

    1985-09-01

    The main objective of this paper is to illustrate how the economic welfare model may be used to measure the economic efficiency effects of utility-financed energy conservation programs. The economic welfare model is the theoretical structure used in this paper to develop a cost/benefit test. This test defines the net benefit of a conservation program as the change in the sum of consumer and producer surplus. The authors advocate using the proposed cost/benefit model as a screening tool to eliminate from more detailed review those programs whose expected net benefits are less than zero. The paper presents estimates of the net benefit derived from different specified cost/benefit models for four illustrative pilot programs. These models are representative of those which have been applied or are under review by utilities and public utility commissions. The numerical results show that net benefit is greatly affected by the assumptions made about the nature of welfare gains to program participants. The main conclusion that emerges from the numerical results is that the selection of a cost/benefit model is a crucial element in evaluating utility-financed energy conservation programs. The paper also briefly addresses some of the major unresolved issues in utility-financed energy conservation programs. 2 figs., 3 tabs., 10 refs. (A.V.)
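    The welfare test in this record (net benefit = change in consumer plus producer surplus) can be illustrated with a toy linear-demand market. Every number below is hypothetical and not taken from the pilot programs.

```python
def surplus(demand_intercept, demand_slope, price, cost):
    """Total surplus under linear inverse demand P(q) = a - b*q,
    constant marginal cost, and a given market price."""
    q = (demand_intercept - price) / demand_slope        # quantity demanded
    consumer = 0.5 * (demand_intercept - price) * q      # triangle under the demand curve
    producer = (price - cost) * q                        # margin rectangle over cost
    return consumer + producer

# Hypothetical: a conservation program lowers effective marginal cost from
# 10 to 8, with the saving passed through to the price.
before = surplus(20, 1.0, price=10, cost=10)
after = surplus(20, 1.0, price=8, cost=8)
print(after - before)  # → 22.0  (net benefit = change in total surplus)
```

    A screening rule of the kind the paper advocates would retain this program, since the change in total surplus is positive.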

  17. Simultaneous measurement of glucose transport and utilization in the human brain

    OpenAIRE

    Shestov, Alexander A.; Emir, Uzay E.; Kumar, Anjali; Henry, Pierre-Gilles; Seaquist, Elizabeth R.; Öz, Gülin

    2011-01-01

    Glucose is the primary fuel for brain function, and determining the kinetics of cerebral glucose transport and utilization is critical for quantifying cerebral energy metabolism. The kinetic parameters of cerebral glucose transport, KMt and Vmaxt, in humans have so far been obtained by measuring steady-state brain glucose levels by proton (1H) NMR as a function of plasma glucose levels and fitting steady-state models to these data. Extraction of the kinetic parameters for cerebral glucose tra...

  18. Modeling strategy to identify patients with primary immunodeficiency utilizing risk management and outcome measurement.

    Science.gov (United States)

    Modell, Vicki; Quinn, Jessica; Ginsberg, Grant; Gladue, Ron; Orange, Jordan; Modell, Fred

    2017-06-01

    This study seeks to generate analytic insights into risk management and the probability of an identifiable primary immunodeficiency defect. The Jeffrey Modell Centers Network database, Jeffrey Modell Foundation's 10 Warning Signs, the 4 Stages of Testing Algorithm, physician-reported clinical outcomes, programs of physician education and public awareness, the SPIRIT® Analyzer, and newborn screening, taken together, generate P values of less than 0.05%. This indicates that the data results do not occur by chance, and that there is a better than 95% probability that the data are valid. The objectives are to improve patients' quality of life, while generating significant reduction of costs. The advances of the world's experts aligned with these JMF programs can generate analytic insights as to risk management and probability of an identifiable primary immunodeficiency defect. This strategy reduces the uncertainties related to primary immunodeficiency risks, as we can screen, test, identify, and treat undiagnosed patients. We can also address regional differences and prevalence, age, gender, treatment modalities, and sites of care, as well as economic benefits. These tools support high net benefits, substantial financial savings, and significant reduction of costs. All stakeholders, including patients, clinicians, pharmaceutical companies, third party payers, and government healthcare agencies, must address the earliest possible precise diagnosis, appropriate intervention and treatment, as well as stringent control of healthcare costs through risk assessment and outcome measurement. An affected patient is entitled to nothing less, and stakeholders are responsible for utilizing the tools currently available. Implementation offers a significant challenge to the entire primary immunodeficiency community.

  19. Business model innovation for sustainable energy: German utilities and renewable energy

    International Nuclear Information System (INIS)

    Richter, Mario

    2013-01-01

    The electric power sector stands at the beginning of a fundamental transformation process towards a more sustainable production based on renewable energies. Consequently, electric utilities as incumbent actors face a massive challenge to find new ways of creating, delivering, and capturing value from renewable energy technologies. This study investigates utilities' business models for renewable energies by analyzing two generic business models based on a series of in-depth interviews with German utility managers. It is found that utilities have developed viable business models for large-scale utility-side renewable energy generation. At the same time, utilities lack adequate business models to commercialize small-scale customer-side renewable energy technologies. By combining the business model concept with innovation and organization theory, practical recommendations for utility managers and policy makers are derived. - Highlights: • The energy transition creates a fundamental business model challenge for utilities. • German utilities succeed in large-scale and fail in small-scale renewable generation. • Experiences from other industries are available to inform utility managers. • Business model innovation capabilities will be crucial to master the energy transition

  20. A combined model to assess technical and economic consequences of changing conditions and management options for wastewater utilities.

    Science.gov (United States)

    Giessler, Mathias; Tränckner, Jens

    2018-02-01

    The paper presents a simplified model that quantifies the economic and technical consequences of changing conditions in wastewater systems on the utility level. It has been developed based on data from stakeholders and ministries, collected by a survey that determined the resulting effects and adapted measures. The model comprises all substantial cost-relevant assets and activities of a typical German wastewater utility. It consists of three modules: i) Sewer, for describing the state development of sewer systems; ii) WWTP, for process parameter consideration of wastewater treatment plants (WWTP); and iii) Cost Accounting, for calculation of expenses in the cost categories and resulting charges. The validity and accuracy of this model were verified using historical data from an exemplary wastewater utility. The calculated process and economic parameters show high accuracy compared to the measured parameters and given expenses. Thus, the model is proposed to support strategic, process-oriented decision making on the utility level. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Measuring the costs of photovoltaics in an electric utility planning framework

    International Nuclear Information System (INIS)

    Awerbuch, Shimon

    1993-01-01

    Utility planning models evaluate alternative generating options using the revenue requirements method-an engineering-oriented, discounted cash-flow (DCF) methodology that has been widely used for over three decades. Discounted cash-flow techniques were conceived in the context of active expense-intensive technologies, such as conventional, fuel-intensive power generation. Photovoltaic (PV) technology, by contrast, is passive and capital intensive-attributes that are similar to those of other new process technologies, such as computer-integrated manufacturing. Discounted cash-flow techniques have a dismal record for correctly valuing new technologies with these attributes, in part because their benefits cannot be easily measured using traditional accounting concepts. This paper examines how these issues affect cost measurement in both conventional and PV-based electricity, and presents kWh-cost estimates for three technologies (coal, gas and PV) using risk-adjusted approaches, which suggest that PV costs are generally equivalent to the gas/combined cycle and about twice the cost of base-load coal (environmental externalities are ignored). Finally, the paper evaluates independent power purchases for a typical US utility and finds that in such a setting the cost of PV-based power is comparable to the firm's published avoided costs. (author)

  2. Mathematical models for estimating radio channels utilization when ...

    African Journals Online (AJOL)

    Definition of the radio channel utilization indicator is given. Mathematical models for radio channels utilization assessment by real-time flows transfer in the wireless self-organized network are presented. Estimated experiments results according to the average radio channel utilization productivity with and without buffering of ...

  3. Utilization of Multispectral Images for Meat Color Measurements

    DEFF Research Database (Denmark)

    Trinderup, Camilla Himmelstrup; Dahl, Anders Lindbjerg; Carstensen, Jens Michael

    2013-01-01

    This short paper describes how the use of multispectral imaging for color measurement can be utilized in an efficient and descriptive way for meat scientists. The basis of the study is meat color measurements performed with a multispectral imaging system as well as with a standard colorimeter...... of color and color variance than what is obtained by the standard colorimeter....

  4. Risk measures on networks and expected utility

    International Nuclear Information System (INIS)

    Cerqueti, Roy; Lupi, Claudio

    2016-01-01

    In reliability theory projects are usually evaluated in terms of their riskiness, and often decision under risk is intended as the one-shot-type binary choice of accepting or not accepting the risk. In this paper we elaborate on the concept of risk acceptance, and propose a theoretical framework based on network theory. In doing this, we deal with system reliability, where the interconnections among the random quantities involved in the decision process are explicitly taken into account. Furthermore, we explore the conditions to be satisfied for risk-acceptance criteria to be consistent with the axiomatization of standard expected utility theory within the network framework. In accordance with existing literature, we show that a risk evaluation criterion can be meaningful even if it is not consistent with the standard axiomatization of expected utility, once this is suitably reinterpreted in the light of networks. Finally, we provide some illustrative examples. - Highlights: • We discuss risk acceptance and theoretically develop this theme on the basis of network theory. • We propose an original framework for describing the algebraic structure of the set of the networks, when they are viewed as risks. • We introduce the risk measures on networks, which induce total orders on the set of networks. • We state conditions on the risk measures on networks to let the induced risk-acceptance criterion be consistent with a new formulation of the expected utility theory.

  5. Estimation of utility values from visual analog scale measures of health in patients undergoing cardiac surgery

    Directory of Open Access Journals (Sweden)

    Oddershede L

    2014-01-01

    Full Text Available Lars Oddershede,1,2 Jan Jesper Andreasen,1 Lars Ehlers2 1Department of Cardiothoracic Surgery, Center for Cardiovascular Research, Aalborg University Hospital, Aalborg, Denmark; 2Danish Center for Healthcare Improvements, Faculty of Social Sciences and Faculty of Health Sciences, Aalborg University, Aalborg East, Denmark Introduction: In health economic evaluations, mapping can be used to estimate utility values from other health outcomes in order to calculate quality adjusted life-years. Currently, no methods exist to map visual analog scale (VAS scores to utility values. This study aimed to develop and propose a statistical algorithm for mapping five dimensions of health, measured on VASs, to utility scores in patients suffering from cardiovascular disease. Methods: Patients undergoing coronary artery bypass grafting at Aalborg University Hospital in Denmark were asked to score their health using the five VAS items (mobility, self-care, ability to perform usual activities, pain, and presence of anxiety or depression and the EuroQol 5 Dimensions questionnaire. Regression analysis was used to estimate four mapping models from patients' age, sex, and the self-reported VAS scores. Prediction errors were compared between mapping models and on subsets of the observed utility scores. Agreement between predicted and observed values was assessed using Bland–Altman plots. Results: Random effects generalized least squares (GLS regression yielded the best results when quadratic terms of VAS scores were included. Mapping models fitted using the Tobit model and censored least absolute deviation regression did not appear superior to GLS regression. The mapping models were able to explain approximately 63%–65% of the variation in the observed utility scores. The mean absolute error of predictions increased as the observed utility values decreased. Conclusion: We concluded that it was possible to predict utility scores from VAS scores of the five
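    The mapping step described in this record, regressing utility scores on VAS scores with quadratic terms, can be sketched with synthetic data. Plain least squares stands in for the study's random-effects GLS; all data below are simulated, not the patients' data.

```python
import numpy as np

# Simulated mapping: utility as a linear + quadratic function of five VAS
# dimensions (coefficients are made up for illustration).
rng = np.random.default_rng(1)
vas = rng.uniform(0, 100, size=(200, 5))              # five VAS item scores
true_utility = (1.0
                - 0.004 * vas.sum(axis=1)
                - 0.00001 * (vas ** 2).sum(axis=1))

X = np.column_stack([np.ones(200), vas, vas ** 2])    # intercept + linear + quadratic terms
beta, *_ = np.linalg.lstsq(X, true_utility, rcond=None)
pred = X @ beta
print(np.allclose(pred, true_utility))                # → True (the model nests the truth exactly)
```

    With real data the fit is of course not exact; the abstract reports that about 63%-65% of the variation in observed utilities was explained.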

  6. Expected utility and catastrophic risk in a stochastic economy-climate model

    Energy Technology Data Exchange (ETDEWEB)

    Ikefuji, M. [Institute of Social and Economic Research, Osaka University, Osaka (Japan); Laeven, R.J.A.; Magnus, J.R. [Department of Econometrics and Operations Research, Tilburg University, Tilburg (Netherlands); Muris, C. [CentER, Tilburg University, Tilburg (Netherlands)

    2010-11-15

    In the context of extreme climate change, we ask how to conduct expected utility analysis in the presence of catastrophic risks. Economists typically model decision making under risk and uncertainty by expected utility with constant relative risk aversion (power utility); statisticians typically model economic catastrophes by probability distributions with heavy tails. Unfortunately, the expected utility framework is fragile with respect to heavy-tailed distributional assumptions. We specify a stochastic economy-climate model with power utility and explicitly demonstrate this fragility. We derive necessary and sufficient compatibility conditions on the utility function to avoid fragility and solve our stochastic economy-climate model for two examples of such compatible utility functions. We further develop and implement a procedure to learn the input parameters of our model and show that the model thus specified produces quite robust optimal policies. The numerical results indicate that higher levels of uncertainty (heavier tails) lead to less abatement and consumption, and to more investment, but this effect is not unlimited.

  7. Utilizing the non-bridge oxygen model to predict the glass viscosity

    International Nuclear Information System (INIS)

    Choi, Kwansik; Sheng, Jiawei; Maeng, Sung Jun; Song, Myung Jae

    1998-01-01

    Viscosity is the most important process property of waste glass. Viscosity measurement is difficult and costly. The Non-Bridging Oxygen (NBO) model, which relates glass composition to viscosity, had been developed for high-level waste at the Savannah River Site (SRS). This research utilized the NBO model to predict the viscosity of KEPRI's 55 glasses. A linear relationship was found between the measured and the predicted viscosity. The NBO model could thus be used to predict glass viscosity in glass formulation development. However, the precision of the predicted viscosity is unsatisfactory because the composition ranges of the SRS and KEPRI glasses are very different. Modifying the NBO calculation, including the treatment of alkaline earth elements and TiO2, did not markedly improve the precision of the predicted values.
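    The reported linear relationship between NBO-predicted and measured viscosity can be checked with an ordinary least squares fit, sketched below with made-up numbers (the KEPRI measurements are not reproduced here).

```python
# Minimal OLS fit of measured log-viscosity against NBO-model predictions.
def ols(x, y):
    """Return (slope, intercept) of the least-squares line y = slope*x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

predicted = [1.0, 2.0, 3.0, 4.0]   # hypothetical log-viscosity from the NBO model
measured  = [1.1, 2.1, 3.1, 4.1]   # hypothetical measured log-viscosity
slope, intercept = ols(predicted, measured)
print(round(slope, 3), round(intercept, 3))  # → 1.0 0.1
```

    A slope near 1 with a small intercept would indicate that the NBO model tracks the measurements well; a systematic offset of the kind shown here suggests recalibration rather than a failure of the linear form.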

  8. An Examination of Organizational Performance Measurement System Utilization

    OpenAIRE

    DeBusk, Gerald Kenneth

    2003-01-01

    This dissertation provides results of three studies, which examine the utilization of organizational performance measurement systems. Evidence gathered in the first study provides insight into the number of perspectives or components found in the evaluation of an organization's performance and the relative weight placed on those components. The evidence suggests that the number of performance measurement components and their relative composition is situational. Components depend heavily on th...

  9. A Utility Model for Teaching Load Decisions in Academic Departments.

    Science.gov (United States)

    Massey, William F.; Zemsky, Robert

    1997-01-01

    Presents a utility model for academic department decision making and describes the structural specifications for analyzing it. The model confirms the class-size utility asymmetry predicted by the authors' academic ratchet theory, but shows that marginal utility associated with college teaching loads is always negative. Curricular structure and…

  10. Extension of the behavioral model of healthcare utilization with ethnically diverse, low-income women.

    Science.gov (United States)

    Keenan, Lisa A; Marshall, Linda L; Eve, Susan

    2002-01-01

    Psychosocial vulnerabilities were added to a model of healthcare utilization. This extension was tested among low-income women with ethnicity addressed as a moderator. Structured interviews were conducted at 2 points in time, approximately 1 year apart. The constructs of psychosocial vulnerability, demographic predisposing, barriers, and illness were measured by multiple indicators to allow use of Structural Equation Modeling to analyze results. The models were tested separately for each ethnic group. Community office. African-American (N = 266), Euro-American (N = 200), and Mexican-American (N = 210) women were recruited from the Dallas Metropolitan area to participate in Project Health Outcomes of Women, a multi-year, multi-wave study. Face-to-face interviews were conducted with this sample. Participants had been in heterosexual relationships for at least 1 year, were between 20 and 49 years of age, and had incomes less than 200% of the national poverty level. Healthcare utilization, defined as physician visits and general healthcare visits. Illness mediated the effect of psychosocial vulnerability on healthcare utilization for African Americans and Euro-Americans. The model for Mexican Americans was the most complex. The effect of psychosocial vulnerability on illness was partially mediated by barriers, which also directly affected utilization. Psychosocial vulnerabilities were significant predictors of healthcare use for all low-income women in this study. The final models for the 2 minority groups, African Americans and Mexican Americans, were quite different. Hence, women of color should not be considered a homogeneous group in comparison to Euro-Americans.

  11. User Utility Oriented Queuing Model for Resource Allocation in Cloud Environment

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    2015-01-01

    Full Text Available Resource allocation is one of the most important research topics in server systems. In the cloud environment, there are massive hardware resources of different kinds, and many kinds of services are usually run on virtual machines of the cloud server. In addition, the cloud environment is commercialized, and economic factors should also be considered. In order to deal with the commercialization and virtualization of the cloud environment, we propose a user utility oriented queuing model for task scheduling. Firstly, we model task scheduling in the cloud environment as an M/M/1 queuing system. Secondly, we classify utility into time utility and cost utility, and build a linear programming model to maximize the total utility of both. Finally, we propose a utility oriented algorithm to maximize the total utility. Extensive experiments validate the effectiveness of the proposed model.
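    The M/M/1 building block named in this record is standard queueing theory and can be sketched in a few lines. The utility combination below is a hypothetical linear form for illustration, not the paper's linear program.

```python
def mm1_mean_response(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: W = 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue must be stable (lambda < mu)"
    return 1.0 / (service_rate - arrival_rate)

def total_utility(arrival_rate, service_rate, w_time, w_cost, unit_cost):
    """Hypothetical trade-off: shorter response time is better (time utility),
    cheaper provisioned capacity is better (cost utility)."""
    W = mm1_mean_response(arrival_rate, service_rate)
    return -w_time * W - w_cost * unit_cost * service_rate

# Provisioning more capacity shortens the mean response time but costs more:
print(mm1_mean_response(4, 5))  # → 1.0
print(mm1_mean_response(4, 8))  # → 0.25
```

    A scheduler of the kind the abstract describes would search over service rates (or machine assignments) for the point where the marginal time gain no longer pays for the marginal cost.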

  12. The utility of resilience as a conceptual framework for understanding and measuring LGBTQ health.

    Science.gov (United States)

    Colpitts, Emily; Gahagan, Jacqueline

    2016-04-06

    Historically, lesbian, gay, bisexual, transgender and queer (LGBTQ) health research has focused heavily on the risks for poor health outcomes, obscuring the ways in which LGBTQ populations maintain and improve their health across the life course. In this paper we argue that informing culturally competent health policy and systems requires shifting the LGBTQ health research evidence base away from deficit-focused approaches toward strengths-based approaches to understanding and measuring LGBTQ health. We recently conducted a scoping review with the aim of exploring strengths-based approaches to LGBTQ health research. Our team found that the concept of resilience emerged as a key conceptual framework. This paper discusses a subset of our scoping review findings on the utility of resilience as a conceptual framework in understanding and measuring LGBTQ health. The findings of our scoping review suggest that the ways in which resilience is defined and measured in relation to LGBTQ populations remains contested. Given that LGBTQ populations have unique lived experiences of adversity and discrimination, and may also have unique factors that contribute to their resilience, the utility of heteronormative and cis-normative models of resilience is questionable. Our findings suggest that there is a need to consider further exploration and development of LGBTQ-specific models and measures of resilience that take into account structural, social, and individual determinants of health and incorporate an intersectional lens. While we fully acknowledge that the resilience of LGBTQ populations is central to advancing LGBTQ health, there remains much work to be done before the concept of resilience can be truly useful in measuring LGBTQ health.

  13. Asset transformation and the challenges to servitize a utility business model

    International Nuclear Information System (INIS)

    Helms, Thorsten

    2016-01-01

    The traditional energy utility business model is under pressure, and energy services are expected to play an important role in the energy transition. Experts and scholars argue that utilities need to innovate their business models, and transform from commodity suppliers to service providers. The transition from a product-oriented, capital-intensive business model based on tangible assets towards a service-oriented, expense-intensive business model based on intangible assets may present great managerial and organizational challenges. Little research exists about such transitions for capital-intensive commodity providers, and particularly energy utilities, where the challenges to servitize are expected to be greatest. This qualitative paper explores the barriers to servitization within selected Swiss and German utility companies through a series of interviews with utility managers. One of them is 'asset transformation', the shift from tangible to intangible assets as the major input factor for the value proposition, which is proposed as a driver of the complexity of business model transitions. Managers need to carefully manage those challenges, and find ways to operate new service and established utility business models side by side. Policy makers can support the transition of utilities through more favorable regulatory frameworks for energy services, and by supporting the exchange of knowledge in the industry. - Highlights: •The paper analyses the expected transformation of utilities into service providers. •Service and utility business models possess very different attributes. •The former is based on intangible, the latter on tangible assets. •The transformation into a service provider involves great challenges. •Asset transformation is proposed as a barrier to business model innovation.

  14. Development of a Neutron Spectroscopic System Utilizing Compressed Sensing Measurements

    Directory of Open Access Journals (Sweden)

    Vargas Danilo

    2016-01-01

    A new approach to neutron detection capable of gathering spectroscopic information has been demonstrated. The approach relies on an asymmetrical arrangement of materials and geometry, and on the ability to change the orientation of the detector with respect to the neutron field. Measurements are used to unfold the energy characteristics of the neutron field using the new theoretical framework of compressed sensing. Recent theoretical results show that the number of multiplexed samples can be lower than the full number of traditional samples while still providing some super-resolution. Furthermore, the solution approach requires neither a priori information nor the inclusion of physics models. Utilizing the MCNP code, a number of candidate detector geometries and materials were modeled. Simulations were carried out for a number of neutron energies and distributions with preselected orientations for the detector. The resulting matrix A consists of n rows associated with orientation and m columns associated with energy and distribution, where n < m. The library of known responses is used for new measurements Y (n × 1), and the solver determines the system Y = Ax, where x is a sparse vector. Energy spectrum measurements are therefore a combination of the energy-distribution information of the identified elements of A. This approach allows the determination of neutron spectroscopic information using a single detector system with analog multiplexing. The analog multiplexing allows the use of a compressed sensing solution similar to approaches used in other areas of imaging. A single detector assembly provides improved flexibility and is expected to reduce the uncertainty associated with current neutron spectroscopy measurements.
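    The unfolding step described above can be illustrated in its simplest (1-sparse) form: match a new measurement vector against the library of simulated responses. This is a toy sketch with made-up numbers, not the paper's actual solver, which recovers a general sparse x with Y = Ax.

```python
# Hypothetical sketch of the library-matching step: each column of A is a
# simulated detector response (one per candidate energy/distribution), and a
# new measurement y is matched to the best-correlated column. A real solver
# would recover a sparse x with Y = Ax (e.g. via l1 minimization or orthogonal
# matching pursuit); this shows only the 1-sparse special case.
import math

def best_match(A, y):
    """Return the index of the library column most correlated with y."""
    n, m = len(A), len(A[0])          # n orientations < m candidates
    best_j, best_score = -1, -math.inf
    for j in range(m):
        col = [A[i][j] for i in range(n)]
        norm = math.sqrt(sum(c * c for c in col))
        score = sum(c * yi for c, yi in zip(col, y)) / norm
        if score > best_score:
            best_j, best_score = j, score
    return best_j

# Toy library: 3 orientations (rows), 4 candidate spectra (columns).
A = [[1.0, 0.2, 0.1, 0.5],
     [0.1, 1.0, 0.3, 0.5],
     [0.2, 0.1, 1.0, 0.5]]
y = [0.21, 0.98, 0.32]               # measurement resembling candidate 1
print(best_match(A, y))              # → 1
```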

  15. Recent advances in modeling nutrient utilization in ruminants.

    Science.gov (United States)

    Kebreab, E; Dijkstra, J; Bannink, A; France, J

    2009-04-01

    Mathematical modeling techniques have been applied to study various aspects of the ruminant, such as rumen function, postabsorptive metabolism, and product composition. This review focuses on advances made in modeling rumen fermentation and its associated rumen disorders, and energy and nutrient utilization and excretion with respect to environmental issues. Accurate prediction of fermentation stoichiometry has an impact on estimating the type of energy-yielding substrate available to the animal, and the ratio of lipogenic to glucogenic VFA is an important determinant of methanogenesis. Recent advances in modeling VFA stoichiometry offer ways for dietary manipulation to shift the fermentation in favor of glucogenic VFA. Increasing energy to the animal by supplementing with starch can lead to health problems such as subacute rumen acidosis caused by rumen pH depression. Mathematical models have been developed to describe changes in rumen pH and rumen fermentation. Models that relate rumen temperature to rumen pH have also been developed and have the potential to aid in the diagnosis of subacute rumen acidosis. The effect of pH has been studied mechanistically, and in such models, fractional passage rate has a large impact on substrate degradation and microbial efficiency in the rumen and should be an important theme in future studies. The efficiency with which energy is utilized by ruminants has been updated in recent studies. Mechanistic models of N utilization indicate that reducing dietary protein concentration, matching protein degradability to the microbial requirement, and increasing the energy status of the animal will reduce the output of N as waste. Recent mechanistic P models calculate the P requirement by taking into account P recycled through saliva and endogenous losses. Mechanistic P models suggest reducing current P amounts for lactating dairy cattle to at least 0.35% P in the diet, with a potential reduction of up to 1.3 kt/yr. A model that

  16. Utility of Social Modeling for Proliferation Assessment - Preliminary Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Garill A.; Gastelum, Zoe N.; Brothers, Alan J.; Thompson, Sandra E.

    2009-06-01

    This Preliminary Assessment draft report will present the results of a literature search and preliminary assessment of the body of research, analysis methods, models and data deemed to be relevant to the Utility of Social Modeling for Proliferation Assessment research. This report will provide: 1) a description of the problem space and the kinds of information pertinent to the problem space, 2) a discussion of key relevant or representative literature, 3) a discussion of models and modeling approaches judged to be potentially useful to the research, and 4) the next steps of this research that will be pursued based on this preliminary assessment. This draft report represents a technical deliverable for the NA-22 Simulations, Algorithms, and Modeling (SAM) program. Specifically this draft report is the Task 1 deliverable for project PL09-UtilSocial-PD06, Utility of Social Modeling for Proliferation Assessment. This project investigates non-traditional use of social and cultural information to improve nuclear proliferation assessment, including nonproliferation assessment, proliferation resistance assessments, safeguards assessments and other related studies. These assessments often use and create technical information about the State’s posture towards proliferation, the vulnerability of a nuclear energy system to an undesired event, and the effectiveness of safeguards. This project will find and fuse social and technical information by explicitly considering the role of cultural, social and behavioral factors relevant to proliferation. The aim of this research is to describe and demonstrate if and how social science modeling has utility in proliferation assessment.

  17. A generalized measurement model to quantify health: the multi-attribute preference response model.

    Science.gov (United States)

    Krabbe, Paul F M

    2013-01-01

    After 40 years of deriving metric values for health status or health-related quality of life, the effective quantification of subjective health outcomes is still a challenge. Here, two of the best measurement tools, the discrete choice model and the Rasch model, are combined to create a new model for deriving health values. First, existing techniques to value health states are briefly discussed, followed by a reflection on the recent revival of interest in patients' experiences and their possible role in health measurement. Subsequently, three basic principles for valid health measurement are reviewed, namely unidimensionality, interval level, and invariance. In the main section, the basic operation of measurement is then discussed in the framework of probabilistic discrete choice analysis (random utility model) and the psychometric Rasch model. It is then shown how combining the main features of these two models yields an integrated measurement model, called the multi-attribute preference response (MAPR) model, which is introduced here. This new model transforms subjective individual rank data into a metric scale using responses from patients who have experienced certain health states. Its measurement mechanism largely prevents biases such as adaptation and coping. Several extensions of the MAPR model are presented. The MAPR model can be applied to a wide range of research problems. If extended with the self-selection of relevant health domains for the individual patient, this model will be more valid than existing valuation techniques.
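    The common logistic core shared by the random utility model and the Rasch model can be sketched as a pairwise preference probability. This is an illustrative Bradley-Terry-style form, not the MAPR likelihood itself:

```python
import math

def choice_prob(v_a, v_b):
    """Probability that health state A is preferred to state B when both are
    located at values v_a and v_b on a common latent (logit) scale. This
    logistic form underlies both the random utility model and the Rasch
    model; the actual MAPR likelihood is more elaborate."""
    return 1.0 / (1.0 + math.exp(-(v_a - v_b)))

print(choice_prob(0.0, 0.0))             # → 0.5 (indifference)
print(round(choice_prob(2.0, 0.0), 3))   # → 0.881
```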

  18. Mathematical models for estimating radio channels utilization

    African Journals Online (AJOL)

    2017-08-08

    Aug 8, 2017 ... Mathematical models for radio channels utilization assessment by real-time flows transfer in ... data transmission networks application having dynamic topology ... Journal of Applied Mathematics and Statistics, 56(2): 85–90.

  19. A dynamic Brownian bridge movement model to estimate utilization distributions for heterogeneous animal movement.

    Science.gov (United States)

    Kranstauber, Bart; Kays, Roland; Lapoint, Scott D; Wikelski, Martin; Safi, Kamran

    2012-07-01

    1. The recently developed Brownian bridge movement model (BBMM) has advantages over traditional methods because it quantifies the utilization distribution of an animal based on its movement path rather than individual points and accounts for temporal autocorrelation and high data volumes. However, the BBMM assumes unrealistic homogeneous movement behaviour across all data. 2. Accurate quantification of the utilization distribution is important for identifying the way animals use the landscape. 3. We improve the BBMM by allowing for changes in behaviour, using likelihood statistics to determine change points along the animal's movement path. 4. This novel extension outperforms the current BBMM, as indicated by simulations and examples of a territorial mammal and a migratory bird. The unique ability of our model to work with tracks that are not sampled regularly is especially important for GPS tags that have frequent failed fixes or dynamic sampling schedules. Moreover, our model extension provides a useful one-dimensional measure of behavioural change along animal tracks. 5. This new method provides a more accurate utilization distribution that better describes the space use of realistic, behaviourally heterogeneous tracks. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
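    The Brownian bridge underlying the BBMM can be sketched as follows; the parameterization (Brownian motion variance plus fix location errors) follows the commonly cited Horne et al. (2007) form and is stated here from memory, so treat it as an assumption rather than the authors' exact equations:

```python
def bridge_moments(a, b, T, t, sigma_m2, d_a2=0.0, d_b2=0.0):
    """Mean and variance of the animal's position at time t between fixes
    a (at t=0) and b (at t=T) under a Brownian bridge. sigma_m2 is the
    Brownian motion variance; d_a2 and d_b2 are the location-error
    variances of the two fixes (parameterization assumed, stated from
    memory of the BBMM literature)."""
    alpha = t / T
    mean = a + alpha * (b - a)
    var = (T * alpha * (1 - alpha) * sigma_m2
           + (1 - alpha) ** 2 * d_a2 + alpha ** 2 * d_b2)
    return mean, var

# Midpoint between fixes at positions 0 and 10, one hour apart, no GPS error:
m, v = bridge_moments(0.0, 10.0, 1.0, 0.5, sigma_m2=4.0)
print(m, v)  # → 5.0 1.0
```

The dynamic BBMM extension described in the abstract amounts to letting sigma_m2 vary between behavioural segments of the track.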

  20. Awareness and utilization of abattoir safety measures in Katsina ...

    African Journals Online (AJOL)

    The study assessed utilization of abattoir safety measures in Katsina South and Central senatorial districts, Nigeria. Information was obtained from a total of 80 abattoir workers in each district, while frequency counts, percentages and independent sample t-test were used to analyze data. The majority, in the respective ...

  1. Evaluation model of wind energy resources and utilization efficiency of wind farm

    Science.gov (United States)

    Ma, Jie

    2018-04-01

    Due to the large amount of abandoned (curtailed) wind energy in wind farms, the establishment of a wind farm evaluation model is particularly important for the future development of wind farms. In this essay, considering the wind farm's wind energy situation, a Wind Energy Resource Model (WERM) and a Wind Energy Utilization Efficiency Model (WEUEM) are established to conduct a comprehensive assessment of the wind farm. The Wind Energy Resource Model (WERM) contains average wind speed, average wind power density and turbulence intensity, which together assess the wind energy resource. Based on our model, combined with actual measurement data from a wind farm, we calculate the indicators, and the results are in line with the actual situation. The future development of the wind farm can be planned on this basis. Thus, the proposed approach to establishing a wind farm assessment model has application value.
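    The three WERM indicators named above have standard definitions that can be sketched directly; the paper's exact formulas are not given in the abstract, so these are assumed conventional forms:

```python
import statistics

def wind_indicators(speeds, rho=1.225):
    """Average wind speed (m/s), average wind power density
    (0.5 * rho * v^3 averaged over samples, W/m^2) and turbulence
    intensity (population std / mean). rho is air density in kg/m^3.
    These are the textbook definitions; the paper's WERM may differ."""
    mean_v = statistics.mean(speeds)
    wpd = statistics.mean(0.5 * rho * v ** 3 for v in speeds)
    ti = statistics.pstdev(speeds) / mean_v
    return mean_v, wpd, ti

v, wpd, ti = wind_indicators([6.0, 8.0, 10.0])
print(round(v, 1), round(wpd, 1), round(ti, 3))  # → 8.0 352.8 0.204
```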

  2. Model measurements in the cryogenic National Transonic Facility - An overview

    Science.gov (United States)

    Holmes, H. K.

    1985-01-01

    In the operation of the National Transonic Facility (NTF), higher Reynolds numbers are obtained through the use of low operational temperatures and high pressures. Liquid nitrogen is used as the cryogenic medium, and temperatures in the range from -320 F to 160 F can be employed. A maximum pressure of 130 psi is specified, while the NTF design parameter for the Reynolds number is 120,000,000. In view of the new requirements on the measurement systems, major developments had to be undertaken in virtually all wind tunnel measurement areas and, in addition, some new measurement systems were needed. Attention is given to force measurement, pressure measurement, model attitude, model deformation, and the data system.

  3. A decision modeling for phasor measurement unit location selection in smart grid systems

    Science.gov (United States)

    Lee, Seung Yup

    As a key technology for enhancing the smart grid system, the Phasor Measurement Unit (PMU) provides synchronized phasor measurements of voltages and currents of a wide-area electric power grid. With various benefits from its application, one of the critical issues in utilizing PMUs is the optimal site selection of units. The main aim of this research is to develop a decision support system which can be used in resource allocation tasks for smart grid system analysis. In an effort to suggest a robust decision model and standardize the decision modeling process, a harmonized modeling framework, which considers the operational circumstances of components, is proposed in connection with a deterministic approach utilizing integer programming. With the results obtained from the optimal PMU placement problem, the advantages and potential of the harmonized modeling process are assessed and discussed.
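    The coverage structure behind PMU placement can be sketched with a greedy heuristic; the paper itself solves the problem optimally with integer programming, so this only illustrates the observability rule (a PMU observes its own bus and all adjacent buses):

```python
def greedy_pmu_placement(adjacency):
    """Greedy sketch of PMU siting: a PMU at a bus observes that bus and
    its neighbors; buses are chosen until every bus is observed. Greedy
    selection is an illustration only and is not guaranteed optimal,
    unlike the integer-programming approach in the paper."""
    covers = {b: {b} | set(nbrs) for b, nbrs in adjacency.items()}
    unobserved = set(adjacency)
    placed = []
    while unobserved:
        best = max(covers, key=lambda b: len(covers[b] & unobserved))
        placed.append(best)
        unobserved -= covers[best]
    return placed

# 5-bus toy system: bus 2 is the hub, bus 4 hangs off bus 3.
grid = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
print(greedy_pmu_placement(grid))  # → [2, 3]
```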

  4. Neutron flux measurement utilizing Campbell technique

    International Nuclear Information System (INIS)

    Kropik, M.

    2000-01-01

    Application of the Campbell technique to neutron flux measurement is described in this contribution. This technique utilizes the AC component (noise) of a neutron chamber signal rather than the usually used DC component. The Campbell theorem, originally formulated to describe the noise behaviour of valves, shows that the mean square of the AC component of the chamber signal is proportional to the neutron flux (reactor power). This quadratic dependence of the reactor power on the root-mean-square value usually permits covering the whole power range of the neutron flux measurement with only one channel. A further advantage of the Campbell technique is that large pulses from the response to neutrons are favoured over small pulses from the response to gamma rays in the ratio of their mean square charge transfer; thus, the Campbell technique provides excellent gamma-ray discrimination over the operational range of a neutron chamber. A neutron flux measurement channel using state-of-the-art components was designed and put into operation. Its linearity, accuracy, dynamic range, time response and gamma discrimination were tested on the VR-1 nuclear reactor in Prague, and its behaviour under high neutron flux (accident conditions) was tested on the TRIGA nuclear reactor in Vienna. (author)
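    The gamma discrimination argument can be made concrete with a little arithmetic: because Campbell mode weights each event by its mean square charge, a neutron event carrying k times the charge of a gamma event contributes k² times as much. The rates and charges below are illustrative, not values from the paper:

```python
def mode_fractions(rate_n, rate_g, q_n, q_g):
    """Neutron share of the detector signal in plain current (DC) mode
    versus Campbell (mean-square / variance) mode. DC mode weights each
    event rate by charge q; Campbell mode weights it by q squared.
    All rates and charges are illustrative numbers."""
    dc_n, dc_g = rate_n * q_n, rate_g * q_g            # mean-current terms
    ac_n, ac_g = rate_n * q_n ** 2, rate_g * q_g ** 2  # Campbell terms
    return dc_n / (dc_n + dc_g), ac_n / (ac_n + ac_g)

# Equal event rates, but each neutron deposits 100x the charge of a gamma:
dc, camp = mode_fractions(1e6, 1e6, 100.0, 1.0)
print(round(dc, 4), round(camp, 6))  # → 0.9901 0.9999
```

In this toy case the gamma contamination of the signal drops from about 1% in DC mode to about 0.01% in Campbell mode, which is the discrimination effect the abstract describes.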

  5. Modeling Substrate Utilization, Metabolite Production, and Uranium Immobilization in Shewanella oneidensis Biofilms

    Directory of Open Access Journals (Sweden)

    Ryan S. Renslow

    2017-06-01

    In this study, we developed a two-dimensional mathematical model to predict substrate utilization and metabolite production rates in Shewanella oneidensis MR-1 biofilm in the presence and absence of uranium (U). In our model, lactate and fumarate are used as the electron donor and the electron acceptor, respectively. The model includes the production of extracellular polymeric substances (EPS). The EPS bound to the cell surface and the EPS distributed in the biofilm were considered bound EPS (bEPS) and loosely associated EPS (laEPS), respectively. COMSOL® Multiphysics finite element analysis software was used to solve the model numerically (model file provided in the Supplementary Material). The input variables of the model were the lactate, fumarate, cell, and EPS concentrations, the half saturation constant for fumarate, and the diffusion coefficients of the substrates and metabolites. To estimate unknown parameters and calibrate the model, we used a custom-designed biofilm reactor placed inside a nuclear magnetic resonance (NMR) microimaging and spectroscopy system and measured substrate utilization and metabolite production rates. From these data we estimated the yield coefficients, maximum substrate utilization rate, half saturation constant for lactate, stoichiometric ratio of fumarate and acetate to lactate, and stoichiometric ratio of succinate to fumarate. These parameters are critical to predicting the activity of biofilms and are not available in the literature. Lastly, the model was used to predict uranium immobilization in S. oneidensis MR-1 biofilms by considering reduction and adsorption processes in the cells and in the EPS. We found that the majority of immobilization was due to cells, and that EPS was less efficient at immobilizing U. Furthermore, most of the immobilization occurred within the top 10 μm of the biofilm. To the best of our knowledge, this research is one of the first biofilm immobilization mathematical models based on experimental
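    Substrate utilization in such models is typically described with Monod kinetics; whether the paper uses exactly this dual-Monod form for lactate (donor) and fumarate (acceptor) is an assumption, and all parameter values below are invented:

```python
def monod_rate(s, q_max, k_s):
    """Specific substrate utilization rate under Monod kinetics,
    q = q_max * S / (K_s + S). Parameter values here are made up."""
    return q_max * s / (k_s + s)

def dual_limit_rate(s_lactate, s_fumarate, q_max, k_lac, k_fum):
    """Rate limited by both the electron donor (lactate) and the electron
    acceptor (fumarate); the dual-Monod form is an assumed sketch, not
    necessarily the paper's exact rate law."""
    return (q_max * s_lactate / (k_lac + s_lactate)
                  * s_fumarate / (k_fum + s_fumarate))

print(monod_rate(10.0, 2.0, 10.0))                           # → 1.0 (half saturation)
print(round(dual_limit_rate(10.0, 5.0, 2.0, 10.0, 5.0), 2))  # → 0.5
```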

  6. Utilizing Photogrammetry and Strain Gage Measurement to Characterize Pressurization of an Inflatable Module

    Science.gov (United States)

    Valle, Gerard D.; Selig, Molly; Litteken, Doug; Oliveras, Ovidio

    2012-01-01

    This paper documents the integration of a large hatch penetration into an inflatable module. It also documents the comparison of analytical load predictions with measured results obtained from strain measurement. Strain was measured photogrammetrically and with strain gages mounted to selected clevises that interface with the structural webbings. Bench testing showed good correlation between strain measurements obtained from an extensometer and photogrammetric measurement, especially after the fabric had transitioned through the low-load/high-strain region of the curve. Test results for the full-scale torus showed mixed results in the lower-load, and thus lower-strain, regions. Overall, strain, and thus load, measured by strain gages and photogrammetry tracked fairly well with analytical predictions. Methods and areas of improvement are discussed.

  7. Realized Beta GARCH: A Multivariate GARCH Model with Realized Measures of Volatility and CoVolatility

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger; Voev, Valeri

    We introduce a multivariate GARCH model that utilizes and models realized measures of volatility and covolatility. The realized measures extract information contained in high-frequency data that is particularly beneficial during periods with variation in volatility and covolatility. Applying the ...

  8. The Health Utilities Index (HUI®: concepts, measurement properties and applications

    Directory of Open Access Journals (Sweden)

    Horsman John

    2003-10-01

    Abstract This is a review of the Health Utilities Index (HUI®) multi-attribute health-status classification systems, and single- and multi-attribute utility scoring systems. HUI refers to both the HUI Mark 2 (HUI2) and HUI Mark 3 (HUI3) instruments. The classification systems provide compact but comprehensive frameworks within which to describe health status. The multi-attribute utility functions provide all the information required to calculate single-summary scores of health-related quality of life (HRQL) for each health state defined by the classification systems. The use of HUI in clinical studies for a wide variety of conditions in a large number of countries is illustrated. HUI provides comprehensive, reliable, responsive and valid measures of health status and HRQL for subjects in clinical studies. Utility scores of overall HRQL for patients are also used in cost-utility and cost-effectiveness analyses. Population norm data are available from numerous large general population surveys. The widespread use of HUI facilitates the interpretation of results and permits comparisons of disease and treatment outcomes, and comparisons of long-term sequelae at the local, national and international levels.

  9. Resource allocation on computational grids using a utility model and the knapsack problem

    CERN Document Server

    Van der ster, Daniel C; Parra-Hernandez, Rafael; Sobie, Randall J

    2009-01-01

    This work introduces a utility model (UM) for resource allocation on computational grids and formulates the allocation problem as a variant of the 0–1 multichoice multidimensional knapsack problem. The notion of task-option utility is introduced, and it is used to effect allocation policies. We present a variety of allocation policies, which are expressed as functions of metrics that are both intrinsic and external to the task and resources. An external user-defined credit-value metric is shown to allow users to intervene in the allocation of urgent or low priority tasks. The strategies are evaluated in simulation against random workloads as well as those drawn from real systems. We measure the sensitivity of the UM-derived schedules to variations in the allocation policies and their corresponding utility functions. The UM allocation strategy is shown to optimally allocate resources congruent with the chosen policies.
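    The core allocation idea can be sketched with the classic single-dimension 0-1 knapsack solved by dynamic programming; the paper's actual formulation is a multichoice, multidimensional variant:

```python
def knapsack(tasks, capacity):
    """0-1 knapsack: pick tasks maximizing total utility under a single
    resource capacity. tasks is a list of (utility, cost) pairs with
    integer costs. The paper's formulation generalizes this to multiple
    choices per task and multiple resource dimensions."""
    best = [0] * (capacity + 1)
    for utility, cost in tasks:
        # Iterate capacity downward so each task is used at most once.
        for c in range(capacity, cost - 1, -1):
            best[c] = max(best[c], best[c - cost] + utility)
    return best[capacity]

tasks = [(60, 10), (100, 20), (120, 30)]   # (task-option utility, resource cost)
print(knapsack(tasks, 50))                  # → 220
```

In the paper's terms, the utility values would come from the allocation policy's utility function (possibly including the user-defined credit-value metric), and the costs from the resource requirements of each task option.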

  10. Aerial Measuring System Sensor Modeling

    International Nuclear Information System (INIS)

    Detwiler, R.S.

    2002-01-01

    This project deals with modeling the Aerial Measuring System (AMS) fixed-wing and rotary-wing sensor systems, which are critical U.S. Department of Energy National Nuclear Security Administration (NNSA) Consequence Management assets. The fixed-wing system is critical in detecting lost or stolen radiography or medical sources, or mixed fission products such as from a commercial power plant release, at high flying altitudes. The helicopter is typically used at lower altitudes to determine ground contamination, such as in measuring americium from a plutonium ground dispersal during a cleanup. Since the sensitivity of these instruments as a function of altitude is crucial in estimating detection limits of various ground contaminations and necessary count times, a characterization of their sensitivity as a function of altitude and energy is needed. Experimental data at altitude as well as laboratory benchmarks are important to ensure that the strong effects of air attenuation are modeled correctly. The modeling presented here is the first attempt at such a characterization of the equipment for flying altitudes. The sodium iodide (NaI) sensors utilized with these systems were characterized using the Monte Carlo N-Particle code (MCNP) developed at Los Alamos National Laboratory. For the fixed-wing system, calculations modeled the spectral response for the 3-element NaI detector pod and the High-Purity Germanium (HPGe) detector in the relevant energy range of 50 keV to 3 MeV. NaI detector responses were simulated for both point and distributed surface sources as a function of gamma energy and flying altitude. For point sources, photopeak efficiencies were calculated for a zero radial distance and an offset equal to the altitude. For distributed sources approximating an infinite plane, gross count efficiencies were calculated and normalized to a uniform surface deposition of 1 μCi/m². The helicopter calculations modeled the transport of americium-241 (²⁴¹Am) as this is

  11. On model selections for repeated measurement data in clinical studies.

    Science.gov (United States)

    Zou, Baiming; Jin, Bo; Koch, Gary G; Zhou, Haibo; Borst, Stephen E; Menon, Sandeep; Shuster, Jonathan J

    2015-05-10

    Repeated measurement designs have been widely used in various randomized controlled trials for evaluating long-term intervention efficacies. For some clinical trials, the primary research question is how to compare two treatments at a fixed time, using a t-test. Although simple, robust, and convenient, this type of analysis fails to utilize a large amount of collected information. Alternatively, the mixed-effects model is commonly used for repeated measurement data. It models all available data jointly and allows explicit assessment of the overall treatment effects across the entire time spectrum. In this paper, we propose an analytic strategy for longitudinal clinical trial data where the mixed-effects model is coupled with a model selection scheme. The proposed test statistics not only make full use of all available data but also utilize the information from the optimal model deemed for the data. The performance of the proposed method under various setups, including different data missing mechanisms, is evaluated via extensive Monte Carlo simulations. Our numerical results demonstrate that the proposed analytic procedure is more powerful than the t-test when the primary interest is to test for the treatment effect at the last time point. Simulations also reveal that the proposed method outperforms the usual mixed-effects model for testing the overall treatment effects across time. In addition, the proposed framework is more robust and flexible in dealing with missing data compared with several competing methods. The utility of the proposed method is demonstrated by analyzing a clinical trial on the cognitive effect of testosterone in geriatric men with low baseline testosterone levels. Copyright © 2015 John Wiley & Sons, Ltd.
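    The fixed-time-point baseline analysis the paper starts from is a plain two-sample t-test, which can be computed directly (a full mixed-effects fit requires a statistics package and is omitted here); the data below are invented:

```python
import math

def two_sample_t(x, y):
    """Pooled two-sample t statistic, the last-time-point comparison the
    paper takes as its baseline analysis. Assumes equal variances."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    ssx = sum((v - mx) ** 2 for v in x)
    ssy = sum((v - my) ** 2 for v in y)
    sp2 = (ssx + ssy) / (nx + ny - 2)            # pooled variance
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

# Invented last-visit outcomes for two arms of a trial:
treat = [5.1, 6.2, 5.8, 6.5]
control = [4.0, 4.8, 4.4, 5.2]
print(round(two_sample_t(treat, control), 2))  # → 3.27
```

The paper's point is that this test discards all earlier visits; the proposed mixed-effects strategy uses the full repeated-measures trajectory instead.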

  12. A catastrophe model for the prospect-utility theory question.

    Science.gov (United States)

    Oliva, Terence A; McDade, Sean R

    2008-07-01

    Anomalies have played a big part in the analysis of decision making under risk. Both expected utility and prospect theories were born out of anomalies exhibited by actual decision-making behavior. Since the same individual can use both expected utility and prospect approaches at different times, it seems there should be a means of uniting the two. This paper turns to nonlinear dynamical systems (NDS), specifically a catastrophe model, to suggest an 'out of the box' approach toward integration. We use a cusp model to create a value surface whose control dimensions are involvement and gains versus losses. Including 'involvement' as a variable incorporates the importance of the individual's psychological state and provides a rationale for how a decision maker's shift from expected utility to prospect behavior might occur. Additionally, it offers a possible explanation for the seemingly even more irrational decisions that individuals make when highly emotionally involved. We estimate the catastrophe model using a sample of 997 gamblers who attended a casino and compare it to the linear model using regression.
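    The canonical cusp-catastrophe machinery can be written down explicitly; the mapping of the control factors (gains versus losses as the normal factor, involvement as the splitting factor) is an assumption about the paper's setup, since the abstract does not give the parameterization:

```latex
% Canonical cusp potential (standard form). Here a is the normal factor
% (assumed: gains vs. losses) and b the splitting factor (assumed:
% involvement); x is the decision value.
V(x) = \tfrac{1}{4}x^{4} - \tfrac{1}{2}\,b\,x^{2} - a\,x
% Equilibrium (value) surface, dV/dx = 0:
x^{3} - b\,x - a = 0
% Bifurcation set, where the number of equilibria changes from one to three:
4b^{3} = 27a^{2}
```

For low involvement (small b) the cubic has a single root and value responds smoothly to gains and losses; past the bifurcation set the surface folds, allowing the sudden jumps between decision regimes that the paper associates with highly emotionally involved decision makers.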

  13. Measuring Health Utilities in Children and Adolescents: A Systematic Review of the Literature.

    Directory of Open Access Journals (Sweden)

    Dominic Thorrington

    The objective of this review was to evaluate the use of all direct and indirect methods used to estimate health utilities in both children and adolescents. Utilities measured pre- and post-intervention are combined with the time over which health states are experienced to calculate quality-adjusted life years (QALYs). Cost-utility analyses (CUAs) estimate the cost-effectiveness of health technologies based on their costs and benefits, using QALYs as a measure of benefit. The accurate measurement of QALYs is dependent on using appropriate methods to elicit health utilities. We sought studies that measured health utilities directly from patients or their proxies. We did not exclude studies that also included adults in the analysis, but excluded those focused only on adults. We evaluated 90 studies from a total of 1,780 selected from the databases. 47 (52%) studies were CUAs incorporated into randomised clinical trials; 23 (26%) were health-state utility assessments; 8 (9%) validated methods; and 12 (13%) compared existing or new methods. 22 unique direct or indirect calculation methods were used a total of 137 times. Direct calculation through standard gamble, time trade-off and visual analogue scale was used 32 times. The EuroQol EQ-5D was the most frequently used single method, selected for 41 studies. 15 of the methods used were generic and the remaining 7 were disease-specific. 48 of the 90 studies (53%) used some form of proxy, with 26 (29%) using proxies exclusively to estimate health utilities. Several child- and adolescent-specific methods are still being developed and validated, leaving many studies using methods that have not been designed or validated for use in children or adolescents. Several studies failed to justify using proxy respondents rather than administering the methods directly to the patients. Only two studies examined missing responses to the methods administered with respect to the patients' ages.
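    The QALY arithmetic described above is simple enough to sketch directly; the numbers are invented and no discounting is applied:

```python
def qalys(states):
    """QALYs accumulated over a sequence of (utility, years) health states,
    where utility is on the 0 (dead) to 1 (full health) scale described in
    the review. No discounting is applied in this sketch."""
    return sum(utility * years for utility, years in states)

# Without intervention: two years at utility 0.8, then three years at 0.6.
pre = qalys([(0.8, 2.0), (0.6, 3.0)])
# With intervention: the same five years at utility 0.9.
post = qalys([(0.9, 5.0)])
print(round(pre, 1), round(post - pre, 1))  # → 3.4 1.1
```

The QALY gain (here 1.1) is the benefit term that a cost-utility analysis divides into the incremental cost of the intervention.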

  14. Subjective Expected Utility: A Model of Decision-Making.

    Science.gov (United States)

    Fischoff, Baruch; And Others

    1981-01-01

    Outlines a model of decision making known to researchers in the field of behavioral decision theory (BDT) as subjective expected utility (SEU). The descriptive and predictive validity of the SEU model, probability and values assessment using SEU, and decision contexts are examined, and a 54-item reference list is provided. (JL)

  15. Key data elements for use in cost-utility modeling of biological treatments for rheumatoid arthritis.

    Science.gov (United States)

    Ganz, Michael L; Hansen, Brian Bekker; Valencia, Xavier; Strandberg-Larsen, Martin

    2015-05-01

    Economic evaluation is becoming more common and important as new biologic therapies for rheumatoid arthritis (RA) are developed. While much has been published about how to design cost-utility models for RA to conduct these evaluations, less has been written about the sources of data populating those models. The goal is to review the literature and to provide recommendations for future data collection efforts. This study reviewed RA cost-utility models published between January 2006 and February 2014 focusing on five key sources of data (health-related quality-of-life and utility, clinical outcomes, disease progression, course of treatment, and healthcare resource use and costs). It provided recommendations for collecting the appropriate data during clinical and other studies to support modeling of biologic treatments for RA. Twenty-four publications met the selection criteria. Almost all used two steps to convert clinical outcomes data to utilities rather than more direct methods; most did not use clinical outcomes measures that captured absolute levels of disease activity and physical functioning; one-third of them, in contrast with clinical reality, assumed zero disease progression for biologic-treated patients; little more than half evaluated courses of treatment reflecting guideline-based or actual clinical care; and healthcare resource use and cost data were often incomplete. Based on these findings, it is recommended that future studies collect clinical outcomes and health-related quality-of-life data using appropriate instruments that can convert directly to utilities; collect data on actual disease progression; be designed to capture real-world courses of treatment; and collect detailed data on a wide range of healthcare resources and costs.

  16. Utilizing Operational and Improved Remote Sensing Measurements to Assess Air Quality Monitoring Model Forecasts

    Science.gov (United States)

    Gan, Chuen-Meei

    Air quality model forecasts from the Weather Research and Forecasting (WRF) and Community Multiscale Air Quality (CMAQ) models are often used to support air quality applications such as regulatory issues and scientific inquiries into atmospheric science processes. In urban environments, these models become more complex due to the inherent complexity of the land surface coupling and the enhanced pollutant emissions. This makes it very difficult to diagnose the model if the surface parameter forecasts, such as PM2.5 (particulate matter with aerodynamic diameter less than 2.5 μm), are not accurate. For this reason, getting accurate boundary layer dynamic forecasts is as essential as quantifying realistic pollutant emissions. In this thesis, we explore the usefulness of vertical sounding measurements for assessing meteorological and air quality forecast models. In particular, we focus on assessing the WRF model (12 km x 12 km) coupled with the CMAQ model for the urban New York City (NYC) area using multiple vertical profiling and column-integrated remote sensing measurements. This assessment is helpful in probing the root causes of WRF-CMAQ overestimates of surface PM2.5 occurring both predawn and post-sunset in the NYC area during the summer. In particular, we find that the significant underestimate in the WRF PBL height forecast is a key factor in explaining this anomaly. On the other hand, model predictions of the PBL height during daytime, when convective heating dominates, were found to be highly correlated with lidar-derived PBL height with minimal bias. Additional topics covered in this thesis include a mathematical method using a direct Mie scattering approach to convert aerosol microphysical properties from CMAQ into optical parameters, making direct comparisons with lidar and multispectral radiometers feasible. Finally, we explore some tentative ideas on combining visible (VIS) and mid-infrared (MIR) sensors to better separate aerosols into fine and coarse modes.

  17. The value of soil respiration measurements for interpreting and modeling terrestrial carbon cycling

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, Claire L.; Bond-Lamberty, Ben; Desai, Ankur R.; Lavoie, Martin; Risk, Dave; Tang, Jianwu; Todd-Brown, Katherine; Vargas, Rodrigo

    2016-11-16

    A recent acceleration of model-data synthesis activities has leveraged many terrestrial carbon (C) datasets, but utilization of soil respiration (RS) data has not kept pace with other types such as eddy covariance (EC) fluxes and soil C stocks. Here we argue that RS data, including non-continuous measurements from survey sampling campaigns, have unrealized value and should be utilized more extensively and creatively in data synthesis and modeling activities. We identify three major challenges in interpreting RS data, and discuss opportunities to address them. The first challenge is that when RS is compared to ecosystem respiration (RECO) measured from EC towers, it is not uncommon to find substantial mismatch, indicating one or both flux methodologies are unreliable. We argue the most likely cause of mismatch is unreliable EC data, and there is an unrecognized opportunity to utilize RS for EC quality control. The second challenge is that RS integrates belowground heterotrophic (RH) and autotrophic (RA) activity, whereas modelers generally prefer partitioned fluxes, and few models include an explicit RS output. Opportunities exist to use the total RS flux for data assimilation and model benchmarking methods rather than less-certain partitioned fluxes. Pushing for more experiments that not only partition RS but also monitor the age of RA and RH, as well as for the development of belowground RA components in models, would allow for more direct comparison between measured and modeled values. The third challenge is that soil respiration is generally measured at a very different resolution than that needed for comparison to EC or ecosystem- to global-scale models. Measuring soil fluxes with finer spatial resolution and more extensive coverage, and downscaling EC fluxes to match the scale of RS, will improve chamber and tower comparisons. Opportunities also exist to estimate RH at regional scales by implementing decomposition functional types, akin to plant functional

  18. Awareness of Occupational Injuries and Utilization of Safety Measures among Welders in Coastal South India

    Directory of Open Access Journals (Sweden)

    S Ganesh Kumar

    2013-10-01

    Full Text Available Background: Awareness of occupational hazards and their safety precautions among welders is an important health issue, especially in developing countries. Objective: To assess the awareness of occupational hazards and utilization of safety measures among welders in coastal South India. Methods: A cross-sectional study was conducted among 209 welders in Puducherry, South India. Baseline characteristics, awareness of health hazards, and safety measures and their availability to and utilization by the participants were assessed using a pre-tested structured questionnaire. Results: The majority of studied welders were aged between 20 and 40 years (n=160, 76.6%) and had 1-10 years of education (n=181, 86.6%). They were more aware of hazards (n=174, 83.3%) than of safety measures (n=134, 64.1%). The majority of studied welders utilized at least one protective measure in the preceding week (n=200, 95.7%). Many of them had more than 5 years of experience (n=175, 83.7%); however, only about 20% of them had institutional training (n=40, 19.1%). Age group, education level, and utilization of safety measures were significantly associated with awareness of hazards in univariate analysis (p<0.05). Conclusion: Awareness of occupational hazards and utilization of safety measures are low among welders in coastal South India, which highlights the importance of strengthening safety regulatory services for this group of workers.

  19. Assessing the empirical validity of alternative multi-attribute utility measures in the maternity context

    Directory of Open Access Journals (Sweden)

    Morrell Jane

    2009-05-01

    Full Text Available Abstract Background Multi-attribute utility measures are preference-based health-related quality of life measures that have been developed to inform economic evaluations of health care interventions. The objective of this study was to compare the empirical validity of two multi-attribute utility measures (EQ-5D and SF-6D) based on hypothetical preferences in a large maternity population in England. Methods Women who participated in a randomised controlled trial of additional postnatal support provided by trained community support workers represented the study population for this investigation. The women were asked to complete the EQ-5D descriptive system (which defines health-related quality of life in terms of five dimensions: mobility, self care, usual activities, pain/discomfort and anxiety/depression) and the SF-36 (which defines health-related quality of life, using 36 items, across eight dimensions: physical functioning, role limitations (physical), social functioning, bodily pain, general health, mental health, vitality and role limitations (emotional)) at six months postpartum. Their responses were converted into utility scores using the York A1 tariff set and the SF-6D utility algorithm, respectively. One-way analysis of variance was used to test the hypothetically-constructed preference rule that each set of utility scores differs significantly by self-reported health status (categorised as excellent, very good, good, fair or poor). The degree to which EQ-5D and SF-6D utility scores reflected alternative dichotomous configurations of self-reported health status and the Edinburgh Postnatal Depression Scale score was tested using the relative efficiency statistic and receiver operating characteristic (ROC) curves. Results The mean utility score for the EQ-5D was 0.861 (95% CI: 0.844, 0.877), whilst the mean utility score for the SF-6D was 0.809 (95% CI: 0.796, 0.822), representing a mean difference in utility score of 0.052 (95% CI: 0.040, 0

  20. The elastic body model: a pedagogical approach integrating real time measurements and modelling activities

    International Nuclear Information System (INIS)

    Fazio, C; Guastella, I; Tarantino, G

    2007-01-01

    In this paper, we describe a pedagogical approach to elastic body movement based on measurements of the contact times between a metallic rod and small bodies colliding with it, and on modelling of the experimental results using a microcomputer-based laboratory and simulation tools. The experiments and modelling activities were built in the context of the laboratory on mechanical wave propagation of the two-year graduate teacher education programme of the University of Palermo. Some considerations about observed changes in trainee teachers' attitudes towards utilizing experiments and modelling are discussed

  1. A workflow learning model to improve geovisual analytics utility.

    Science.gov (United States)

    Roth, Robert E; Maceachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. 
To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on

  2. Expected utility without utility

    OpenAIRE

    Castagnoli, E.; Licalzi, M.

    1996-01-01

    This paper advances an interpretation of Von Neumann–Morgenstern’s expected utility model for preferences over lotteries which does not require the notion of a cardinal utility over prizes and can be phrased entirely in the language of probability. According to it, the expected utility of a lottery can be read as the probability that this lottery outperforms another given independent lottery. The implications of this interpretation for some topics and models in decision theory are considered....
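    The outperformance reading described above can be made concrete with a short computation. The sketch below is illustrative, not taken from the paper: the (prize, probability) list representation and the even splitting of ties are assumptions made here.

```python
from itertools import product

def outperformance_probability(lottery_a, lottery_b):
    """Probability that independent lottery A outperforms lottery B.

    Each lottery is a list of (prize, probability) pairs. Ties are split
    evenly between the two lotteries (an assumption of this sketch).
    """
    p_win, p_tie = 0.0, 0.0
    # Enumerate all joint outcomes of the two independent lotteries.
    for (a, pa), (b, pb) in product(lottery_a, lottery_b):
        if a > b:
            p_win += pa * pb
        elif a == b:
            p_tie += pa * pb
    return p_win + 0.5 * p_tie

# A sure prize of 50 versus a 50/50 gamble over 0 and 100:
sure = [(50, 1.0)]
gamble = [(0, 0.5), (100, 0.5)]
p = outperformance_probability(sure, gamble)  # 0.5: neither option dominates
```

Under this reading, ranking lotteries by their probability of beating a fixed reference lottery stands in for ranking them by expected utility, without ever assigning cardinal utilities to prizes.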

  3. Utility values associated with advanced or metastatic non-small cell lung cancer: data needs for economic modeling.

    Science.gov (United States)

    Brown, Jacqueline; Cook, Keziah; Adamski, Kelly; Lau, Jocelyn; Bargo, Danielle; Breen, Sarah; Chawla, Anita

    2017-04-01

    Cost-effectiveness analyses often inform healthcare reimbursement decisions. The preferred measure of effectiveness is the quality adjusted life year (QALY) gained, where the quality of life adjustment is measured in terms of utility. Areas covered: We assessed the availability and variation of utility values for health states associated with advanced or metastatic non-small cell lung cancer (NSCLC) to identify values appropriate for cost-effectiveness models assessing alternative treatments. Our systematic search of six electronic databases (January 2000 to August 2015) found the current literature to be sparse in terms of utility values associated with NSCLC, identifying 27 studies. Utility values were most frequently reported over time and by treatment type, and less frequently by disease response, stage of disease, adverse events or disease comorbidities. Expert commentary: In response to rising healthcare costs, payers increasingly consider the cost-effectiveness of novel treatments in reimbursement decisions, especially in oncology. As the number of therapies available to treat NSCLC increases, cost-effectiveness analyses will play a key role in reimbursement decisions in this area. Quantifying the relationship between health and quality of life for NSCLC patients via utility values is an important component of assessing the cost effectiveness of novel treatments.

  4. User Guide and Documentation for Five MODFLOW Ground-Water Modeling Utility Programs

    Science.gov (United States)

    Banta, Edward R.; Paschke, Suzanne S.; Litke, David W.

    2008-01-01

    This report documents five utility programs designed for use in conjunction with ground-water flow models developed with the U.S. Geological Survey's MODFLOW ground-water modeling program. One program extracts calculated flow values from one model for use as input to another model. The other four programs extract model input or output arrays from one model and make them available in a form that can be used to generate an ArcGIS raster data set. The resulting raster data sets may be useful for visual display of the data or for further geographic data processing. The utility program GRID2GRIDFLOW reads a MODFLOW binary output file of cell-by-cell flow terms for one (source) model grid and converts the flow values to input flow values for a different (target) model grid. The spatial and temporal discretization of the two models may differ. The four other utilities extract selected 2-dimensional data arrays in MODFLOW input and output files and write them to text files that can be imported into an ArcGIS geographic information system raster format. These four utilities require that the model cells be square and aligned with the projected coordinate system in which the model grid is defined. The four raster-conversion utilities are:

    * CBC2RASTER, which extracts selected stress-package flow data from a MODFLOW binary output file of cell-by-cell flows;
    * DIS2RASTER, which extracts cell-elevation data from a MODFLOW Discretization file;
    * MFBIN2RASTER, which extracts array data from a MODFLOW binary output file of head or drawdown; and
    * MULT2RASTER, which extracts array data from a MODFLOW Multiplier file.

  5. A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models

    Energy Technology Data Exchange (ETDEWEB)

    Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-06-01

    Many regulators, utilities, customer groups, and other stakeholders are reevaluating existing regulatory models and the roles and financial implications for electric utilities in the context of today’s environment of increasing distributed energy resource (DER) penetrations, forecasts of significant T&D investment, and relatively flat or negative utility sales growth. When this is coupled with predictions about fewer grid-connected customers (i.e., customer defection), there is growing concern about the potential for serious negative impacts on the regulated utility business model. Among states engaged in these issues, the range of topics under consideration is broad. Most of these states are considering whether approaches that have been applied historically to mitigate the impacts of previous “disruptions” to the regulated utility business model (e.g., energy efficiency) as well as to align utility financial interests with increased adoption of such “disruptive technologies” (e.g., shareholder incentive mechanisms, lost revenue mechanisms) are appropriate and effective in the present context. A handful of states are presently considering more fundamental changes to regulatory models and the role of regulated utilities in the ownership, management, and operation of electric delivery systems (e.g., New York “Reforming the Energy Vision” proceeding).

  6. Sustainable geothermal utilization - Case histories; definitions; research issues and modelling

    International Nuclear Information System (INIS)

    Axelsson, Gudni

    2010-01-01

    Sustainable development by definition meets the needs of the present without compromising the ability of future generations to meet their own needs. The Earth's enormous geothermal resources have the potential to contribute significantly to sustainable energy use worldwide as well as to help mitigate climate change. Experience from the use of numerous geothermal systems worldwide lasting several decades demonstrates that by maintaining production below a certain limit the systems reach a balance between net energy discharge and recharge that may be maintained for a long time (100-300 years). Modelling studies indicate that the effect of heavy utilization is often reversible on a time-scale comparable to the period of utilization. Thus, geothermal resources can be used in a sustainable manner either through (1) constant production below the sustainable limit, (2) step-wise increase in production, (3) intermittent excessive production with breaks, and (4) reduced production after a shorter period of heavy production. The long production histories that are available for low-temperature as well as high-temperature geothermal systems distributed throughout the world, provide the most valuable data available for studying sustainable management of geothermal resources, and reservoir modelling is the most powerful tool available for this purpose. The paper presents sustainability modelling studies for the Hamar and Nesjavellir geothermal systems in Iceland, the Beijing Urban system in China and the Olkaria system in Kenya as examples. Several relevant research issues have also been identified, such as the relevance of system boundary conditions during long-term utilization, how far reaching interference from utilization is, how effectively geothermal systems recover after heavy utilization and the reliability of long-term (more than 100 years) model predictions. (author)

  7. Expected Utility and Entropy-Based Decision-Making Model for Large Consumers in the Smart Grid

    Directory of Open Access Journals (Sweden)

    Bingtuan Gao

    2015-09-01

    Full Text Available In the smart grid, large consumers can procure electricity energy from various power sources to meet their load demands. To maximize its profit, each large consumer needs to decide on an energy procurement strategy under risks such as price fluctuations in the spot market and power quality issues. In this paper, an electric energy procurement decision-making model is studied for large consumers who can obtain their electric energy from the spot market, generation companies under bilateral contracts, the options market, and self-production facilities in the smart grid. Considering the effect of unqualified electric energy, the profit model of large consumers is formulated. In order to measure the risks from price fluctuations and power quality, expected utility and entropy are employed. Consequently, an expected utility and entropy decision-making model is presented, which helps large consumers to maximize their expected profit from electricity procurement while properly limiting its volatility. Finally, a case study verifies the feasibility and effectiveness of the proposed model.
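    A minimal sketch of the expected-utility-and-entropy idea follows. It assumes a scenario-based profit distribution per procurement strategy and uses Shannon entropy as a penalty term in the objective; both the scenario representation and the linear penalty weighting are illustrative choices made here, not the paper's actual formulation.

```python
import math

def expected_profit(outcomes):
    """Expected profit over (profit, probability) scenario pairs."""
    return sum(profit * prob for profit, prob in outcomes)

def shannon_entropy(outcomes):
    """Shannon entropy (bits) of the outcome distribution, used as a risk proxy."""
    return -sum(prob * math.log2(prob) for _, prob in outcomes if prob > 0)

def score(outcomes, risk_aversion=0.5):
    """Illustrative objective: expected profit penalized by entropy."""
    return expected_profit(outcomes) - risk_aversion * shannon_entropy(outcomes)

# Hypothetical strategies: spot-market-heavy (volatile profit) versus
# a bilateral contract (nearly certain profit).
spot = [(120.0, 0.5), (40.0, 0.5)]
contract = [(75.0, 0.9), (70.0, 0.1)]
best = max([spot, contract], key=score)  # strategy with the best trade-off
```

A larger `risk_aversion` tilts the choice toward low-entropy (more predictable) strategies, which mirrors the paper's aim of limiting volatility while pursuing profit.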

  8. Target-oriented utility theory for modeling the deterrent effects of counterterrorism

    International Nuclear Information System (INIS)

    Bier, Vicki M.; Kosanoglu, Fuat

    2015-01-01

    Optimal resource allocation in security has been a significant challenge for critical infrastructure protection. Numerous studies use game theory as the method of choice, because an attacker can often observe the defender’s investment in security and adapt his choice of strategies accordingly. However, most of these models do not explicitly consider deterrence, with the result that they may lead to wasted resources if less investment would be sufficient to deter an attack. In this paper, we assume that the defender is uncertain about the level of defensive investment that would deter an attack, and use target-oriented utility theory to optimize the level of defensive investment, taking into account the probability of deterrence. - Highlights: • We propose a target-oriented utility model for attacker deterrence. • We model attack deterrence as a function of attacker success probability. • We compare the target-oriented utility model with a conventional game-theoretical model. • Results show that our model yields a better value of the defender’s objective function. • Results support that defending series systems is more difficult than defending parallel systems
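    The core idea, optimizing investment when the deterrence threshold is uncertain, can be sketched numerically. The normal distribution for the threshold, the additive cost structure, and all parameter values below are hypothetical choices for illustration, not the paper's model.

```python
import math

def deterrence_probability(investment, mean=5.0, sd=2.0):
    """P(attack deterred): CDF of an uncertain deterrence threshold.

    A normally distributed threshold is an illustrative assumption.
    """
    return 0.5 * (1.0 + math.erf((investment - mean) / (sd * math.sqrt(2.0))))

def expected_cost(investment, loss_if_attacked=100.0):
    """Investment cost plus expected loss from an undeterred attack."""
    return investment + (1.0 - deterrence_probability(investment)) * loss_if_attacked

# Grid search for the investment level minimizing total expected cost.
grid = [i / 10.0 for i in range(0, 201)]
best = min(grid, key=expected_cost)
```

The optimum sits above the mean threshold (here near 10 rather than 5): because the loss from an undeterred attack dwarfs the marginal cost of investment, it pays to buy a high probability of deterrence, which is exactly the trade-off the deterrence-aware model captures and a deterrence-blind model misses.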

  9. Local cerebral glucose utilization in the beagle puppy model of intraventricular hemorrhage

    International Nuclear Information System (INIS)

    Ment, L.R.; Stewart, W.B.; Duncan, C.C.

    1982-01-01

    Local cerebral glucose utilization has been measured by means of carbon-14 (¹⁴C) autoradiography with 2-deoxyglucose in the newborn beagle puppy model of intraventricular hemorrhage. Our studies demonstrate gray matter/white matter differentiation of ¹⁴C-2-deoxyglucose uptake in the control pups, as would be expected from adult animal studies. However, there is a marked homogeneity of ¹⁴C-2-deoxyglucose uptake across all brain regions in the puppies with intraventricular hemorrhage, possibly indicating a loss of the known coupling between cerebral blood flow and metabolism in this neuropathological condition

  10. A random utility model of delay discounting and its application to people with externalizing psychopathology.

    Science.gov (United States)

    Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R

    2016-10-01

    Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter (a measure of delay discounting) was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference point method might be inappropriate. Second, the estimation procedure using the R² measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has 2 parameters, 1 for delay discounting and 1 for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
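    The two-parameter structure (discounting plus choice variability) can be illustrated with a short sketch. The hyperbolic value function matches the abstract's description; the logistic choice rule and all parameter values are illustrative assumptions, not necessarily the paper's exact specification.

```python
import math

def hyperbolic_value(amount, delay, k):
    """Hyperbolic discounting: subjective value of a reward after a delay."""
    return amount / (1.0 + k * delay)

def choice_probability(later, sooner, k, sensitivity):
    """Random-utility (logistic) probability of choosing the later option.

    `later` and `sooner` are (amount, delay) pairs. `sensitivity` is the
    inverse of choice variability: at 0 choices are random (p = 0.5), and
    larger values make choices more deterministic.
    """
    v_later = hyperbolic_value(*later, k)
    v_sooner = hyperbolic_value(*sooner, k)
    return 1.0 / (1.0 + math.exp(-sensitivity * (v_later - v_sooner)))

# $100 in 30 days vs. $60 now, for a hypothetical moderately impulsive chooser:
p = choice_probability((100.0, 30.0), (60.0, 0.0), k=0.02, sensitivity=0.5)
```

Because every trial yields a choice probability rather than a deterministic prediction, the full choice data set can enter a likelihood, which is what enables the maximum-likelihood and Bayesian estimation the abstract describes, instead of discarding information via indifference points.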

  11. A structured review of health utility measures and elicitation in advanced/metastatic breast cancer

    Directory of Open Access Journals (Sweden)

    Hao Y

    2016-06-01

    Full Text Available Yanni Hao,1 Verena Wolfram,2 Jennifer Cook2 1Novartis Pharmaceuticals, East Hanover, NJ, USA; 2Adelphi Values, Bollington, UK Background: Health utilities are increasingly incorporated in health economic evaluations. Different elicitation methods, direct and indirect, have been established in the past. This study examined the evidence on health utility elicitation previously reported in advanced/metastatic breast cancer and aimed to link these results to the requirements of reimbursement bodies. Methods: Searches were conducted using a detailed search strategy across several electronic databases (MEDLINE, EMBASE, Cochrane Library, and EconLit), online sources (the Cost-effectiveness Analysis Registry and the Health Economics Research Center), and web sites of health technology assessment (HTA) bodies. Publications were selected based on the search strategy and the overall study objectives. Results: A total of 768 publications were identified in the searches, and 26 publications, comprising 18 journal articles and eight submissions to HTA bodies, were included in the evidence review. Most journal articles derived utilities from the European Quality of Life Five-Dimensions questionnaire (EQ-5D). Other utility measures, such as the direct methods standard gamble (SG), time trade-off (TTO), and visual analog scale (VAS), were less frequently used. Several studies described mapping algorithms to generate utilities from disease-specific health-related quality of life (HRQOL) instruments such as the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire – Core 30 (EORTC QLQ-C30), the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire – Breast Cancer 23 (EORTC QLQ-BR23), the Functional Assessment of Cancer Therapy – General questionnaire (FACT-G), and the Utility-Based Questionnaire-Cancer (UBQ-C); most used EQ-5D as the reference. Sociodemographic factors that affect health utilities, such as age, sex

  12. Context analysis for a new regulatory model for electric utilities in Brazil

    International Nuclear Information System (INIS)

    El Hage, Fabio S.; Rufín, Carlos

    2016-01-01

    This article examines what would have to change in the Brazilian regulatory framework in order to make utilities profit from energy efficiency and the integration of resources, instead of doing so from traditional consumption growth, as it happens at present. We argue that the Brazilian integrated electric sector resembles a common-pool resources problem, and as such it should incorporate, in addition to the centralized operation for power dispatch already in place, demand side management, behavioral strategies, and smart grids, attained through a new business and regulatory model for utilities. The paper proposes several measures to attain a more sustainable and productive electricity distribution industry: decoupling revenues from volumetric sales through a fixed maximum load fee, which would completely offset current disincentives for energy efficiency; the creation of a market for negawatts (saved megawatts) using the current Brazilian mechanism of public auctions for the acquisition of wholesale energy; and the integration of technologies, especially through the growth of unregulated products and services. Through these measures, we believe that Brazil could improve both energy security and overall sustainability of its power sector in the long run. - Highlights: • Necessary changes in the Brazilian regulatory framework towards energy efficiency. • How to incorporate demand side management, behavioral strategies, and smart grids. • Proposition of a market for negawatts at public auctions. • Measures to attain a more sustainable electricity distribution industry in Brazil.

  13. Kinetic models of cell growth, substrate utilization and bio ...

    African Journals Online (AJOL)

    Bio-decolorization kinetic studies of distillery effluent in a batch culture were conducted using Aspergillus fumigatus. A simple model was proposed using the Logistic Equation for the growth, Leudeking-Piret kinetics for bio-decolorization, and also for substrate utilization. The proposed models appeared to provide a suitable ...

  14. UTILIZATION OF MULTIPLE MEASUREMENTS FOR GLOBAL THREE-DIMENSIONAL MAGNETOHYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Wang, A. H.; Wu, S. T.; Tandberg-Hanssen, E.; Hill, Frank

    2011-01-01

    Magnetic field measurements, line of sight (LOS) and/or vector magnetograms, have been used in a variety of solar physics studies. Currently, the global transverse velocity measurements near the photosphere from the Global Oscillation Network Group (GONG) are available. We have utilized these multiple observational data, for the first time, to present a data-driven global three-dimensional and resistive magnetohydrodynamic (MHD) simulation, and to investigate the energy transport across the photosphere to the corona. The measurements of the LOS magnetic field and transverse velocity reflect the effects of convective zone dynamics and provide information from the sub-photosphere to the corona. In order to self-consistently include the observables on the lower boundary as the inputs to drive the model, a set of time-dependent boundary conditions is derived by using the method of characteristics. We selected GONG's global transverse velocity measurements of synoptic chart CR2009 near the photosphere and SOLIS full-resolution LOS magnetic field maps of synoptic chart CR2009 on the photosphere to simulate the equilibrium state and compute the energy transport across the photosphere. To show the advantage of using both observed magnetic field and transverse velocity data, we have studied two cases: (1) with the inputs of the LOS magnetic field and transverse velocity measurements, and (2) with the input of the LOS magnetic field and without the input of transverse velocity measurements. For these two cases, the simulation results presented here are a three-dimensional coronal magnetic field configuration, density distributions on the photosphere and at 1.5 solar radii, and the solar wind in the corona. The deduced physical characteristics are the total current helicity and the synthetic emission. 
By comparing all the physical parameters of case 1 and case 2 and their synthetic emission images with the EIT image, we find that using both the measured magnetic field and the

  15. Coupling model of energy consumption with changes in environmental utility

    International Nuclear Information System (INIS)

    He Hongming; Jim, C.Y.

    2012-01-01

    This study explores the relationships between metropolis energy consumption and environmental utility changes through a proposed Environmental Utility of Energy Consumption (EUEC) model. Based on the dynamic equilibrium of input–output economics theory, it considers three simulation scenarios: fixed technology, technological innovation, and green-building effect. It is applied to analyse Hong Kong in 1980–2007. Continual increase in energy consumption with rapid economic growth degraded environmental utility. First, energy consumption under fixed technology was determined by economic outcome. In 1990, it reached a critical balanced state when energy consumption was 22×10⁹ kWh. Before 1990 (x₁ < 22×10⁹ kWh), a rise in energy consumption improved both economic development and environmental utility. After 1990 (x₁ > 22×10⁹ kWh), expansion of energy consumption facilitated socio-economic development but suppressed environmental benefits. Second, technological innovation strongly influenced energy demand and improved environmental benefits. The balanced state remained in 1999, when energy consumption reached 32.33×10⁹ kWh. Technological innovation dampened energy consumption by 12.99% relative to the fixed-technology condition. Finally, green buildings reduced energy consumption by an average of 17.5% in 1990–2007. They contributed significantly to energy saving, and buffered temperature fluctuations between the external and internal environment. The case investigations verified the efficiency of the EUEC model, which can effectively evaluate the interplay of energy consumption and environmental quality. - Highlights: ► We explore relationships between metropolis energy consumption and environmental utility. ► An Environmental Utility of Energy Consumption (EUEC) model is proposed. ► Technological innovation mitigates energy consumption impacts on environmental quality. ► Technological innovation decreases energy consumption demand more than the fixed-technology scenario

  16. Developing a clinical utility framework to evaluate prediction models in radiogenomics

    Science.gov (United States)

    Wu, Yirong; Liu, Jie; Munoz del Rio, Alejandro; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.

    2015-03-01

    Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called "radiogenomics." Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data come from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms, SNPs), and mammographic features in the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models built on different sets of predictive features: (1) Gail, (2) Gail+SNP, and (3) Gail+SNP+BI-RADS. Then, we generated ROC curves for the three models. After assigning utility values to each category of findings (true negative, false positive, false negative, and true positive), we pursued optimal operating points on the ROC curves to achieve maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar's test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under the ROC curve (AUC) and MEU. SNPs improved the sensitivity of the Gail model (0.276 vs. 0.147) and reduced its specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457 and specificity to 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision framework comprising utility analysis and McNemar's test provides a novel way to evaluate prediction models in the realm of radiogenomics.
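    The search for a maximum-expected-utility operating point on an ROC curve can be sketched as follows. The ROC points, outcome utilities, and prevalence below are hypothetical values for illustration, not the study's data.

```python
def expected_utility(tpr, fpr, prevalence, utilities):
    """Expected utility of operating at (fpr, tpr) on an ROC curve.

    `utilities` maps each diagnostic outcome (TP/FN/FP/TN) to a utility
    on a 0-1 scale; the disease occurs with probability `prevalence`.
    """
    p, q = prevalence, 1.0 - prevalence
    return (p * tpr * utilities["TP"] + p * (1.0 - tpr) * utilities["FN"]
            + q * fpr * utilities["FP"] + q * (1.0 - fpr) * utilities["TN"])

def max_expected_utility_point(roc_points, prevalence, utilities):
    """Pick the ROC operating point (fpr, tpr) with maximum expected utility."""
    return max(roc_points,
               key=lambda pt: expected_utility(pt[1], pt[0], prevalence, utilities))

# Hypothetical ROC points as (fpr, tpr) and outcome utilities -- a false
# positive costs some utility (e.g., an unnecessary workup), a false
# negative far more (a missed cancer).
roc = [(0.0, 0.0), (0.09, 0.46), (0.13, 0.55), (0.5, 0.9), (1.0, 1.0)]
utils = {"TP": 0.95, "FN": 0.0, "FP": 0.85, "TN": 1.0}
best_fpr, best_tpr = max_expected_utility_point(roc, prevalence=0.2, utilities=utils)
```

The MEU operating point shifts with prevalence and with the relative penalties on false positives and false negatives, which is why utility analysis can rank models differently than AUC alone.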

  17. Burnup verification measurements at a US nuclear utility using the FORK measurement system

    International Nuclear Information System (INIS)

    Ewing, R.I.; Bosler, G.E.; Walden, G.

    1993-01-01

    The FORK measurement system, designed at Los Alamos National Laboratory (LANL) for the International Atomic Energy Agency (IAEA) safeguards program, has been used to examine spent reactor fuel assemblies at Duke Power Company's Oconee Nuclear Station. The FORK system measures the passive neutron and gamma-ray emission from spent fuel assemblies while they are in the storage pool. These measurements can be correlated with burnup and cooling time, and can be used to verify the reactor site records. Verification measurements may be used to help ensure nuclear criticality safety when burnup credit is applied to spent fuel transport and storage systems. By taking into account the reduced reactivity of spent fuel due to its burnup in the reactor, burnup credit allows more efficient and economic transport and storage. The objectives of these tests are to demonstrate the applicability of the FORK system to verifying reactor records and to develop optimal procedures compatible with utility operations. The test program is a cooperative effort supported by Sandia National Laboratories, the Electric Power Research Institute (EPRI), Los Alamos National Laboratory, and Duke Power Company.

  18. Quality measurement affecting surgical practice: Utility versus utopia.

    Science.gov (United States)

    Henry, Leonard R; von Holzen, Urs W; Minarich, Michael J; Hardy, Ashley N; Beachy, Wilbur A; Franger, M Susan; Schwarz, Roderich E

    2018-03-01

    The Triple Aim of improving healthcare quality, cost, and patient experience has resulted in massive healthcare "quality" measurement. For many surgeons, the origins, intent, and strengths of this measurement barrage seem nebulous, though its shortcomings are noticeable. This article reviews the major organizations and programs (namely the Centers for Medicare and Medicaid Services) driving the somewhat burdensome healthcare quality climate. The success of this top-down approach is mixed and far from convincing. We contend that the current programs disproportionately reflect the definitions of quality from (and the interests of) the national payer perspective, rather than a more balanced representation of all stakeholders' interests, most importantly patients' beneficence. The result is an environment more like performance management than one of valid quality assessment. Suggestions for a more meaningful construction of surgical quality measurement are offered, as well as a strategy for describing surgical quality from all stakeholders' perspectives. Our hope is to entice surgeons to engage in institution-level quality improvement initiatives that promise utility and are less utopian than what is currently in place. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Risk Decision Making Model for Reservoir Floodwater resources Utilization

    Science.gov (United States)

    Huang, X.

    2017-12-01

    Floodwater resources utilization (FRU) can alleviate the shortage of water resources, but it carries risks. In order to utilize floodwater resources safely and efficiently, it is necessary to study the risk of reservoir FRU. In this paper, the risk rate of exceeding the design flood water level and the risk rate of exceeding the safety discharge are estimated. Based on the principle of minimum risk and maximum benefit of FRU, a multi-objective risk decision making model for FRU is constructed. Probability theory and mathematical statistics are used to calculate the risk rate; the C-D production function method and the emergy analysis method are used to calculate the risk benefit; the risk loss is related to the flood inundation area and the loss per unit area; and the multi-objective decision making problem of the model is solved by the constraint method. Taking the Shilianghe reservoir in Jiangsu Province as an example, the optimal equilibrium solution of FRU for the Shilianghe reservoir is found using the risk decision making model, and the validity and applicability of the model are verified.
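
    The constraint method mentioned above can be illustrated with a toy two-objective example: the risk objective is converted into a constraint (risk rate no greater than a chosen epsilon) and benefit is maximized over the remaining candidates. The scheme names, risk rates, and benefit values are hypothetical, not the Shilianghe reservoir data.

```python
# Sketch of the (epsilon-)constraint method for "minimize risk, maximize
# benefit": keep schemes whose risk rate is within the tolerance, then pick
# the one with the largest benefit. All numbers are illustrative.

def epsilon_constraint(schemes, eps):
    """schemes: list of (name, risk_rate, benefit).
    Return the max-benefit scheme with risk_rate <= eps, or None."""
    feasible = [s for s in schemes if s[1] <= eps]
    if not feasible:
        return None
    return max(feasible, key=lambda s: s[2])

# (scheme, risk rate of exceeding the design flood level, benefit):
schemes = [("keep design level", 0.001, 10.0),
           ("raise 0.5 m", 0.004, 18.0),
           ("raise 1.0 m", 0.012, 25.0)]
best = epsilon_constraint(schemes, eps=0.005)
```

    Sweeping eps over a range of tolerances traces out the risk-benefit trade-off curve from which an equilibrium solution can be selected.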

  20. Measurement of Laser Weld Temperatures for 3D Model Input

    Energy Technology Data Exchange (ETDEWEB)

    Dagel, Daryl [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grossetete, Grant [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Maccallum, Danny O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.

  1. DIAMOND: A model of incremental decision making for resource acquisition by electric utilities

    Energy Technology Data Exchange (ETDEWEB)

    Gettings, M.; Hirst, E.; Yourstone, E.

    1991-02-01

    Uncertainty is a major issue facing electric utilities in planning and decision making. Substantial uncertainties exist concerning future load growth; the lifetimes and performances of existing power plants; the construction times, costs, and performances of new resources being brought online; and the regulatory and economic environment in which utilities operate. This report describes a utility planning model that focuses on frequent and incremental decisions. The key features of this model are its explicit treatment of uncertainty, frequent user interaction with the model, and the ability to change prior decisions. The primary strength of this model is its representation of the planning and decision-making environment that utility planners and executives face. Users interact with the model after every year or two of simulation, which provides an opportunity to modify past decisions as well as to make new decisions. For example, construction of a power plant can be started one year, and if circumstances change, the plant can be accelerated, mothballed, canceled, or continued as originally planned. Similarly, the marketing and financial incentives for demand-side management programs can be changed from year to year, reflecting the short lead time and small unit size of these resources. This frequent user interaction with the model, an operational game, should build greater understanding and insights among utility planners about the risks associated with different types of resources. The model is called DIAMOND (Decision Impact Assessment Model). It consists of four submodels: FUTURES, FORECAST, SIMULATION, and DECISION. It runs on any IBM-compatible PC and requires no special software or hardware. 19 refs., 13 figs., 15 tabs.
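
    The yearly interaction loop described above might be sketched roughly as follows. The four submodels are stubbed with invented logic and numbers, purely to show the control flow of "draw a future, forecast, decide, simulate a year, revisit"; this bears no relation to DIAMOND's actual internals.

```python
# Illustrative control-flow sketch of an incremental planning loop with four
# stubbed submodels (FUTURES, FORECAST, SIMULATION, DECISION). All logic and
# numbers are invented.

import random

def futures():
    """Draw one uncertain future: annual load growth."""
    return {"load_growth": random.choice([0.01, 0.02, 0.03])}

def forecast(state):
    """The utility's (possibly wrong) one-year demand forecast."""
    return state["demand"] * 1.02

def decide(state, forecast_demand):
    """Revisit resource decisions each year: close any forecast gap."""
    gap = forecast_demand - state["capacity"]
    if gap > 0:
        state["capacity"] += gap   # e.g., accelerate or start a resource
    return state

def simulate_year(state, future):
    """Realize one year of demand growth."""
    state["demand"] *= 1 + future["load_growth"]
    return state

random.seed(0)
state = {"demand": 100.0, "capacity": 105.0}
future = futures()
for year in range(5):
    state = decide(state, forecast(state))
    state = simulate_year(state, future)
```

    In the real model the "decide" step is interactive, which is what turns the simulation into the operational game the report describes.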

  2. What’s Needed from Climate Modeling to Advance Actionable Science for Water Utilities?

    Science.gov (United States)

    Barsugli, J. J.; Anderson, C. J.; Smith, J. B.; Vogel, J. M.

    2009-12-01

    “…perfect information on climate change is neither available today nor likely to be available in the future, but … over time, as the threats climate change poses to our systems grow more real, predicting those effects with greater certainty is non-discretionary. We’re not yet at a level at which climate change projections can drive climate change adaptation.” (Testimony of WUCA Staff Chair David Behar to the House Committee on Science and Technology, May 5, 2009) To respond to this challenge, the Water Utility Climate Alliance (WUCA) has sponsored a white paper titled “Options for Improving Climate Modeling to Assist Water Utility Planning for Climate Change.” This report concerns how investments in the science of climate change, and in particular in climate modeling and downscaling, can best be directed to help make climate projections more actionable. The meaning of “model improvement” can be very different depending on whether one is talking to a climate model developer or to a water manager trying to incorporate climate projections into planning. We first surveyed the WUCA members on present and potential uses of climate model projections and on climate inputs to their various system models. Based on those surveys and on subsequent discussions, we identified four dimensions along which improvement in modeling would make the science more “actionable”: improved model agreement on change in key parameters; narrowing the range of model projections; providing projections at spatial and temporal scales that match water utilities’ system models; and providing projections that match water utility planning horizons. With these goals in mind we developed four options for improving global-scale climate modeling and three options for improving downscaling, which will be discussed. However, there does not seem to be a single investment - the proverbial “magic bullet” - that will substantially reduce the range of model projections at the scales at which utilities plan.

  3. Utility of Social Modeling for Proliferation Assessment - Preliminary Findings

    International Nuclear Information System (INIS)

    Coles, Garill A.; Gastelum, Zoe N.; Brothers, Alan J.; Thompson, Sandra E.

    2009-01-01

    Often the methodologies for assessing proliferation risk are focused on the inherent vulnerability of nuclear energy systems and associated safeguards. For example, an accepted approach involves measuring the intrinsic and extrinsic barriers to potential proliferation. This paper describes a preliminary investigation into the non-traditional use of social and cultural information to improve proliferation assessment and to advance the approach to assessing nuclear material diversion. Proliferation resistance assessments, safeguards assessments, and related studies typically create technical information about the vulnerability of a nuclear energy system to diversion of nuclear material. The purpose of this research project is to find ways to integrate social information with technical information by explicitly considering the role of culture, groups, and individuals in the factors that affect the possibility of proliferation. When final, this work is expected to describe and demonstrate the utility of social science modeling in proliferation and proliferation risk assessments.

  4. Academic Self-Concept: Modeling and Measuring for Science

    Science.gov (United States)

    Hardy, Graham

    2014-08-01

    In this study, the author developed a model to describe academic self-concept (ASC) in science and validated an instrument for its measurement. Unlike previous models of science ASC, which envisage science as a homogenous single global construct, this model took a multidimensional view by conceiving science self-concept as possessing distinctive facets, including conceptual and procedural elements. In the first part of the study, data were collected from 1,483 students attending eight secondary schools in England, through the use of a newly devised Secondary Self-Concept Science Instrument, and structural equation modeling was employed to test and validate a model. In the second part of the study, the data were analysed within the new self-concept framework to examine learners' ASC profiles across the domains of science, with particular attention paid to age- and gender-related differences. The study found that the proposed science self-concept model exhibited robust measures of fit and construct validity, which were shown to be invariant across gender and age subgroups. The self-concept profiles were heterogeneous in nature, with the component relating to self-concept in physics being surprisingly positive in comparison to other aspects of science. This outcome is in stark contrast to data reported elsewhere and raises important issues about the nature of young learners' self-conceptions about science. The paper concludes with an analysis of the potential utility of the self-concept measurement instrument as a pedagogical device for science educators and learners of science.

  5. Examining the Relationship between Technological Pedagogical Content Knowledge (TPACK) and Student Achievement Utilizing the Florida Value-Added Model

    Science.gov (United States)

    Farrell, Ivan K.; Hamed, Kastro M.

    2017-01-01

    Utilizing a correlational research design, we sought to examine the relationship between the technological pedagogical content knowledge (TPACK) of in-service teachers and student achievement measured with each individual teacher's Value-Added Model (VAM) score. The TPACK survey results and a teacher's VAM score were also examined, separately,…

  6. The changing utility workforce and the emergence of building information modeling in utilities

    Energy Technology Data Exchange (ETDEWEB)

    Saunders, A. [Autodesk Inc., San Rafael, CA (United States)

    2010-07-01

    Utilities are faced with the extensive replacement of a workforce that is now reaching retirement age. New personnel will have varying skill levels and different expectations in relation to design tools. This paper discussed methods of facilitating knowledge transfer from the retiring workforce to new staff using rules-based design software. It was argued that while nothing can replace the experiential knowledge of long-term engineers, software with built-in validations can accelerate training and building information modelling (BIM) processes. Younger personnel will expect a user interface paradigm that is based on their past gaming and work experiences. Visualization, simulation, and modelling approaches were reviewed. 3 refs.

  7. Novel design and sensitivity analysis of displacement measurement system utilizing knife edge diffraction for nanopositioning stages.

    Science.gov (United States)

    Lee, ChaBum; Lee, Sun-Kyu; Tarbutton, Joshua A

    2014-09-01

    This paper presents a novel design and sensitivity analysis of a knife edge-based optical displacement sensor that can be embedded in nanopositioning stages. The measurement system consists of a laser, two knife edge locations, two photodetectors, and auxiliary optical components in a simple configuration. The knife edge is installed on the stage parallel to its moving direction, and two separated laser beams are incident on the knife edges. While the stage is in motion, the directly transmitted and diffracted light at each knife edge are superposed, producing interference at the detector. The interference is measured with two photodetectors in a differential amplification configuration. The performance of the proposed sensor was mathematically modeled, and the effect of the optical and mechanical parameters (wavelength, beam diameter, distances from laser to knife edge to photodetector, and knife edge topography) on the sensor outputs was investigated to obtain a novel analytical method for predicting linearity and sensitivity. From the model, all parameters except the beam diameter have a significant influence on the measurement range and sensitivity of the proposed sensing system. To validate the model, two types of knife edges with different edge topography were used in the experiment. Increased measurement sensitivity can be obtained by utilizing a shorter wavelength, a smaller sensor distance, and a higher edge quality. The model was experimentally validated, and the results showed good agreement with the theoretical estimates. This sensor is expected to be easily implemented in nanopositioning stage applications at low cost, and the mathematical model introduced here can be used as a tool for design and performance estimation of knife edge-based sensors.

  8. Indirect Measurement of Energy Density of Soft PZT Ceramic Utilizing Mechanical Stress

    Science.gov (United States)

    Unruan, Muangjai; Unruan, Sujitra; Inkong, Yutthapong; Yimnirun, Rattikorn

    2017-11-01

    This paper reports on an indirect measurement of the energy density of soft PZT ceramic utilizing mechanical stress. The method works analogously to the Olsen cycle and allows for a large amount of electro-mechanical energy conversion. A maximum energy density of 350 kJ/m³ per cycle was found under applied mechanical stress of 0-312 MPa and applied electric field of 1-20 kV/cm. The obtained result is substantially higher than the results reported in previous studies of PZT materials utilizing the direct piezoelectric effect.
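
    The Olsen-cycle analogy can be made concrete: the energy density harvested per cycle is the area enclosed by the cycle in the displacement-field (D-E) plane, i.e. the closed line integral of E dD. A minimal sketch, with an invented rectangular cycle rather than the measured loop:

```python
# Sketch: energy density per Olsen-type cycle as the area enclosed in the
# D-E plane, computed with the trapezoid rule. Cycle vertices are
# illustrative, not measured data.

def cycle_energy_density(points):
    """points: (D, E) vertices traversed once around the cycle.
    Returns |closed integral of E dD| in J/m^3 per cycle."""
    n = len(points)
    total = 0.0
    for i in range(n):
        d1, e1 = points[i]
        d2, e2 = points[(i + 1) % n]
        total += 0.5 * (e1 + e2) * (d2 - d1)   # trapezoid on each segment
    return abs(total)

# Rectangular cycle: D between 0.30 and 0.32 C/m^2, E between 0.1 and 2.0 MV/m
cycle = [(0.30, 0.1e6), (0.32, 0.1e6), (0.32, 2.0e6), (0.30, 2.0e6)]
w = cycle_energy_density(cycle)   # J/m^3 per cycle
```

    For the rectangle this reduces to ΔD x ΔE = 0.02 x 1.9 MV/m = 38 kJ/m³ per cycle; a measured loop would simply supply more vertices.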

  9. Review of utility values for economic modeling in type 2 diabetes.

    Science.gov (United States)

    Beaudet, Amélie; Clegg, John; Thuresson, Per-Olof; Lloyd, Adam; McEwan, Phil

    2014-06-01

    Economic analysis in type 2 diabetes mellitus (T2DM) requires an assessment of the effect of a wide range of complications. The objective of this article was to identify a set of utility values consistent with the National Institute for Health and Care Excellence (NICE) reference case and to critically discuss and illustrate challenges in creating such a utility set. A systematic literature review was conducted to identify studies reporting utility values for relevant complications. The methodology of each study was assessed for consistency with the NICE reference case. A suggested set of utility values applicable to modeling was derived, giving preference to studies reporting multiple complications and correcting for comorbidity. The review considered 21 relevant diabetes complications. A total of 16,574 articles were identified; after screening, 61 articles were assessed for methodological quality. Nineteen articles met NICE criteria, reporting utility values for 20 of 21 relevant complications. For renal transplant, because no articles meeting NICE criteria were identified, two articles using other methodologies were included. Index value estimates for T2DM without complication ranged from 0.711 to 0.940. Utility decrement associated with complications ranged from 0.014 (minor hypoglycemia) to 0.28 (amputation). Limitations associated with the selection of a utility value for use in economic modeling included variability in patient recruitment, heterogeneity in statistical analysis, large variability around some point estimates, and lack of recent data. A reference set of utility values for T2DM and its complications in line with NICE requirements was identified. This research illustrates the challenges associated with systematically selecting utility data for economic evaluations. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
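
    Applying such utility values in a model is straightforward; a common (though not universal) convention is to subtract complication decrements from the baseline index value. The sketch below uses the review's extreme decrements (0.014 and 0.28) plus an invented baseline and an invented stroke decrement:

```python
# Sketch: additive utility decrements for T2DM complications. The baseline
# and the "stroke" decrement are hypothetical; 0.014 and 0.280 are the
# extremes quoted in the review.

BASELINE_T2DM = 0.785          # utility without complications (hypothetical)
DECREMENTS = {
    "minor_hypoglycemia": 0.014,
    "amputation": 0.280,
    "stroke": 0.164,           # hypothetical value for illustration
}

def utility(complications, baseline=BASELINE_T2DM, floor=0.0):
    """Baseline utility minus the sum of decrements, floored at 0."""
    u = baseline - sum(DECREMENTS[c] for c in complications)
    return max(u, floor)

u = utility(["minor_hypoglycemia", "amputation"])
```

    Multiplicative or minimum-value combination rules are also used in practice; the choice matters precisely because of the comorbidity-correction issue the review raises.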

  10. Stability of Teacher Value-Added Rankings across Measurement Model and Scaling Conditions

    Science.gov (United States)

    Hawley, Leslie R.; Bovaird, James A.; Wu, ChaoRong

    2017-01-01

    Value-added assessment methods have been criticized by researchers and policy makers for a number of reasons. One issue includes the sensitivity of model results across different outcome measures. This study examined the utility of incorporating multivariate latent variable approaches within a traditional value-added framework. We evaluated the…

  11. Comparison of Echo 7 field line length measurements to magnetospheric model predictions

    International Nuclear Information System (INIS)

    Nemzek, R.J.; Winckler, J.R.; Malcolm, P.R.

    1992-01-01

    The Echo 7 sounding rocket experiment injected electron beams on central tail field lines near L = 6.5. Numerous injections returned to the payload as conjugate echoes after mirroring in the southern hemisphere. The authors compare field line lengths calculated from measured conjugate echo bounce times and energies to predictions made by integrating electron trajectories through various magnetospheric models: the Olson-Pfitzer Quiet and Dynamic models and the Tsyganenko-Usmanov model. Although Kp at launch was 3-, quiet-time magnetic models best fit the echo measurements. Geosynchronous satellite magnetometer measurements near the Echo 7 field lines during the flight were best modeled by the Olson-Pfitzer Dynamic model and the Tsyganenko-Usmanov model for Kp = 3. The discrepancy between the models that best fit the Echo 7 data and those that fit the satellite data was most likely due to uncertainties in the small-scale configuration of the magnetospheric models. The field line length measured by the conjugate echoes showed some temporal variation in the magnetic field, also indicated by the satellite magnetometers. This demonstrates the utility an Echo-style experiment could have in substorm studies.
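
    The bounce-time calculation underlying these measurements can be sketched as follows, assuming a purely field-aligned relativistic electron so that the one-way path length is simply speed times half the echo time (a simplification; the study integrates full trajectories through field models). The 30 keV energy and 1 s echo time are illustrative:

```python
# Sketch: field-line path length from a conjugate-echo bounce time, assuming
# field-aligned motion. Energy and echo time below are illustrative.

import math

C = 2.998e8          # speed of light, m/s
MEC2_KEV = 511.0     # electron rest energy, keV

def electron_speed(ke_kev):
    """Relativistic speed of an electron with kinetic energy ke_kev (keV)."""
    gamma = 1.0 + ke_kev / MEC2_KEV
    return C * math.sqrt(1.0 - 1.0 / gamma**2)

def field_line_length(ke_kev, t_echo_s):
    """One-way path length (m): the echo covers the path twice."""
    return electron_speed(ke_kev) * t_echo_s / 2.0

L = field_line_length(30.0, 1.0)   # ~5e7 m for a 30 keV electron, 1 s echo
```

    Comparing such lengths against trajectory integrations through the candidate field models is what discriminates between the quiet and disturbed configurations.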

  12. A mangrove creek restoration plan utilizing hydraulic modeling.

    Science.gov (United States)

    Marois, Darryl E; Mitsch, William J

    2017-11-01

    Despite the valuable ecosystem services provided by mangrove ecosystems, they remain threatened around the globe. Urban development has been a primary cause of mangrove destruction and deterioration in south Florida, USA, for the last several decades. As a result, the restoration of mangrove forests has become an important topic of research. Using field sampling and remote sensing, we assessed the past and present hydrologic conditions of a mangrove creek and its connected mangrove forest and brackish marsh systems located on the coast of Naples Bay in southwest Florida. We concluded that the hydrology of these connected systems had been significantly altered from its natural state by urban development. We propose here a mangrove creek restoration plan that would extend the existing creek channel 1.1 km inland through the adjacent mangrove forest and up to an adjacent brackish marsh. We then tested the hydrologic implications using a hydraulic model of the mangrove creek calibrated with tidal data from Naples Bay and water levels measured within the creek. The calibrated model was then used to simulate the resulting hydrology of our proposed restoration plan. Simulation results showed that the proposed creek extension would restore a twice-daily flooding regime to a majority of the adjacent mangrove forest and that there would still be minimal tidal influence on the brackish marsh area, keeping its salinity at an acceptable level. This study demonstrates the utility of combining field data and hydraulic modeling to aid in the design of mangrove restoration plans.

  13. Social Security Measures for Elderly Population in Delhi, India: Awareness, Utilization and Barriers.

    Science.gov (United States)

    Kohli, Charu; Gupta, Kalika; Banerjee, Bratati; Ingle, Gopal Krishna

    2017-05-01

    The world's elderly population is increasing at a fast pace. The number of elderly in India has increased by 54.77% in the last 15 years, and a number of social security measures have been taken by the Indian government. This study assessed the awareness, utilization, and barriers faced while utilizing social security schemes among the elderly attending a secondary care hospital situated in a rural area of Delhi, India. A cross-sectional study was conducted among 360 individuals aged 60 years and above. A pre-tested, semi-structured schedule prepared in the local language was used. Data were analysed using SPSS software (version 17.0). The chi-square test was used to assess statistical associations between categorical variables; results were considered statistically significant if the p-value was less than 0.05. A majority of study subjects were female (54.2%), Hindu (89.7%), married (60.3%), and not engaged in any occupation (82.8%). Awareness of the Indira Gandhi National Old Age Pension Scheme (IGNOAPS) was present among 286 (79.4%) subjects and of the Annapurna scheme among 193 (53.6%). Among the 223 subjects who were below the poverty line, 179 (80.3%) were aware of IGNOAPS, while 112 (50.2%) were utilizing the scheme. There was no association of awareness with education status, occupation, religion, family type, marital status, or caste (p>0.05). Overall, about 79.4% of the elderly were aware of the pension scheme and 45% of eligible subjects were utilizing it. Corruption and tedious administrative formalities were the major barriers reported. Awareness generation, provision of information on how to approach the concerned authority, and ease of administrative procedures should be an integral part of any social security scheme or measure.

  14. Applications of utility theory in the economic evaluation of health care

    NARCIS (Netherlands)

    H. Bleichrodt (Han)

    1996-01-01

    This thesis studies the applicability of quality-adjusted life years (QALYs) and other utility based outcome measures in medical decision making and health economics. The main conclusion will be that utility based measures are more useful to model health related behaviour than has…

  15. Job stress and mental health of permanent and fixed-term workers measured by effort-reward imbalance model, depressive complaints, and clinic utilization.

    Science.gov (United States)

    Inoue, Mariko; Tsurugano, Shinobu; Yano, Eiji

    2011-01-01

    The number of workers with precarious employment has increased globally; however, few studies have used validated measures to investigate the relationship of job status to stress and mental health. Thus, we conducted a study to compare differential job stress experienced by permanent and fixed-term workers using an effort-reward imbalance (ERI) model questionnaire, and by evaluating depressive complaints and clinic utilization. Subjects were permanent or fixed-term male workers at a Japanese research institute (n=756). Baseline data on job stress and depressive complaints were collected in 2007. We followed up with the same population over a 1-year period to assess their utilization of the company clinic for mental health concerns. The ERI ratio was higher among permanent workers than among fixed-term workers. More permanent workers presented with more than two depressive complaints, which is the standard used for the diagnosis of depression. ERI scores indicated that the effort component of permanent work was associated with distress, whereas distress in fixed-term work was related to job promotion and job insecurity. Moreover, over the one-year follow-up period, fixed-term workers visited the on-site clinic for mental concerns 4.04 times more often than permanent workers even after adjusting for age, lifestyle, ERI, and depressive complaints. These contrasting findings reflect the differential workloads and working conditions encountered by permanent and fixed-term workers. The occupational setting where employment status was intermingled, may have contributed to the high numbers of mental health-related issues experienced by workers with different employment status.
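
    The ERI ratio referred to above is conventionally computed as effort divided by reward, with a correction factor for the unequal item counts of the two scales (6 effort and 11 reward items in the standard Siegrist questionnaire, assumed here). A sketch with hypothetical scores:

```python
# Sketch of the effort-reward imbalance (ERI) ratio:
# ERI = effort / (reward * c), with c = n_effort_items / n_reward_items
# correcting for the unequal scale lengths. Scores are hypothetical.

N_EFFORT_ITEMS = 6
N_REWARD_ITEMS = 11

def eri_ratio(effort_score, reward_score):
    """ERI ratio; values above 1 indicate high effort relative to reward."""
    c = N_EFFORT_ITEMS / N_REWARD_ITEMS
    return effort_score / (reward_score * c)

ratio = eri_ratio(effort_score=15, reward_score=33)
```

    A ratio above 1 flags the high-cost/low-gain condition the model associates with distress, which is how groups such as the permanent and fixed-term workers above can be compared.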

  16. Modeling the Dynamic Interrelations between Mobility, Utility, and Land Asking Price

    Science.gov (United States)

    Hidayat, E.; Rudiarto, I.; Siegert, F.; Vries, W. D.

    2018-02-01

    Limited and insufficient information about the dynamic interrelation among mobility, utility, and land price is the main reason to conduct this research. Several studies, with several approaches, and several variables have been conducted so far in order to model the land price. However, most of these models appear to generate primarily static land prices. Thus, a research is required to compare, design, and validate different models which calculate and/or compare the inter-relational changes of mobility, utility, and land price. The applied method is a combination of analysis of literature review, expert interview, and statistical analysis. The result is newly improved mathematical model which have been validated and is suitable for the case study location. This improved model consists of 12 appropriate variables. This model can be implemented in the Salatiga city as the case study location in order to arrange better land use planning to mitigate the uncontrolled urban growth.

  17. Mathematical models utilized in the retrieval of displacement information encoded in fringe patterns

    Science.gov (United States)

    Sciammarella, Cesar A.; Lamberti, Luciano

    2016-02-01

    All techniques that measure displacements, whether in the range of visible optics or in any other field method, require the presence of a carrier signal. A carrier signal is a wave form modulated (modified) by an input: the deformation of the medium. A carrier is tagged to the medium under analysis and deforms with the medium. The wave form must be known in both the unmodulated and the modulated conditions. There are two basic mathematical models that can be utilized to decode the information contained in the carrier, phase modulation or frequency modulation, which are closely connected. Basic problems connected to the detection and recovery of displacement information that are common to all optical techniques are analyzed in this paper, focusing on the general theory common to all the methods independently of the type of signal utilized. The aspects discussed are those that have practical impact on the process of data gathering and data processing.
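
    One concrete instance of the phase-modulation decoding discussed here is synchronous (quadrature) demodulation of a fringe carrier: mix the signal with the known carrier, low-pass the products, and take the arctangent of the quadrature pair. The sketch below recovers a constant modulating phase from a sampled carrier; the carrier frequency and phase value are illustrative.

```python
# Sketch: quadrature demodulation of a fringe carrier cos(2*pi*F0*n + phi).
# Averaging over whole carrier periods acts as the low-pass filter.
# Parameters are illustrative.

import math

F0 = 0.25   # carrier frequency, cycles per sample
N = 64      # number of samples (a whole number of carrier periods)

def make_fringe(phi):
    """Carrier modulated by a constant phase phi."""
    return [math.cos(2 * math.pi * F0 * n + phi) for n in range(N)]

def demodulate_phase(signal):
    """Mix with cos/sin of the known carrier, average, take atan2."""
    i_sum = q_sum = 0.0
    for n, s in enumerate(signal):
        i_sum += 2 * s * math.cos(2 * math.pi * F0 * n)    # -> cos(phi)
        q_sum += -2 * s * math.sin(2 * math.pi * F0 * n)   # -> sin(phi)
    return math.atan2(q_sum / len(signal), i_sum / len(signal))

phi_true = 0.5
phi_est = demodulate_phase(make_fringe(phi_true))
```

    For a spatially varying phase (an actual deformation field), the averaging would be replaced by a proper low-pass filter applied along the record, but the decoding principle is the same.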

  18. A Model of Trusted Measurement Model

    OpenAIRE

    Ma Zhili; Wang Zhihao; Dai Liang; Zhu Xiaoqin

    2017-01-01

    A model of trusted measurement supporting behavior measurement, based on the trusted connection architecture (TCA) with three entities and three levels, is proposed, and a framework to illustrate the model is given. The model synthesizes three trusted measurement dimensions, trusted identity, trusted status, and trusted behavior; satisfies the essential requirements of trusted measurement; and unifies the TCA with three entities and three levels.

  19. Assessment of the biophysical impacts of utility-scale photovoltaics through observations and modelling

    Science.gov (United States)

    Broadbent, A. M.; Georgescu, M.; Krayenhoff, E. S.; Sailor, D.

    2017-12-01

    Utility-scale solar power plants are a rapidly growing component of the solar energy sector. Utility-scale photovoltaic (PV) solar power generation in the United States has increased by 867% since 2012 (EIA, 2016). This expansion is likely to continue as the cost of PV technologies decreases. While most agree that solar power can decrease greenhouse gas emissions, the biophysical effects of PV systems on the surface energy balance (SEB), and the implications for surface climate, are not well understood. To our knowledge, there has never been a detailed observational study of the SEB at a utility-scale solar array. This study presents data from an eddy covariance observational tower temporarily placed above a utility-scale PV array in southern Arizona. Comparison of the PV SEB with a reference (unmodified) site shows that solar panels can alter the SEB and near-surface climate. The SEB observations are used to develop and validate a new and more complete PV SEB model. In addition, the PV model is compared to simpler PV modelling methods, which produce results that differ from our newly developed model and cannot capture the more complex processes that influence the PV SEB. Finally, hypothetical scenarios of PV expansion across the continental United States (CONUS) were developed using various spatial mapping criteria. CONUS simulations of PV expansion reveal regional variability in the biophysical effects of PV expansion. The study presents the first rigorous and validated simulations of the biophysical effects of utility-scale PV arrays.
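
    The SEB bookkeeping underlying such comparisons can be sketched as a closure check on Rn = H + LE + G (net radiation balancing sensible, latent, and ground heat flux); the flux values below are invented, not the study's observations.

```python
# Sketch: surface energy balance (SEB) closure check, Rn = H + LE + G.
# Eddy-covariance studies report the residual and closure fraction.
# Flux values (W/m^2) are illustrative.

def seb_residual(rn, h, le, g):
    """Residual of the surface energy balance: Rn - (H + LE + G)."""
    return rn - (h + le + g)

def closure_fraction(rn, h, le, g):
    """Fraction of available energy (Rn - G) accounted for by H + LE."""
    return (h + le) / (rn - g)

res = seb_residual(rn=520.0, h=210.0, le=190.0, g=80.0)
frac = closure_fraction(rn=520.0, h=210.0, le=190.0, g=80.0)
```

    Over a PV array, an additional term for electrical power export would enter the balance, which is one reason the simpler PV models diverge from a full SEB treatment.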

  20. Improving surgeon utilization in an orthopedic department using simulation modeling

    Directory of Open Access Journals (Sweden)

    Simwita YW

    2016-10-01

    Full Text Available Yusta W Simwita, Berit I Helgheim Department of Logistics, Molde University College, Molde, Norway Purpose: Worldwide more than two billion people lack appropriate access to surgical services due to the mismatch between the existing human resource and patient demands. Improving utilization of the existing workforce capacity can reduce the gap between surgical demand and available workforce capacity. In this paper, the authors use discrete event simulation to explore the care process at an orthopedic department. Our main focus is improving utilization of surgeons while minimizing patient wait time. Methods: The authors collaborated with orthopedic department personnel to map the current operations of the orthopedic care process in order to identify factors that influence poor surgeon utilization and high patient waiting time. The authors used an observational approach to collect data. The developed model was validated by comparing the simulation output with actual patient data collected from the studied orthopedic care process. The authors developed a proposal scenario to show how to improve surgeon utilization. Results: The simulation results showed that if ancillary services could be performed before the start of clinic examination services, the orthopedic care process could be greatly improved, that is, with improved surgeon utilization and reduced patient waiting time. Simulation results demonstrate that with improved surgeon utilization, up to a 55% increase in future demand can be accommodated without patients exceeding the current waiting time at this clinic, thus improving patient access to health care services. Conclusion: This study shows how simulation modeling can be used to improve health care processes. This study was limited to a single care process; however, the findings can be applied to improve other orthopedic care processes with similar operational characteristics. Keywords: waiting time, patient, health care process

  1. Model franchise agreements with public utilities. Musterkonzessionsvertraege mit Energieversorgungsunternehmen

    Energy Technology Data Exchange (ETDEWEB)

    Menking, C. (Niedersaechsischer Staedte- und Gemeindebund, Hannover (Germany, F.R.))

    1989-01-01

    In 1987, the Committee of Town and Community Administrations of Lower Saxonia established the task force 'Franchise Agreements'. This is a forum where town and community officials interested in energy issues cooperate. The idea was to improve conditions and participation possibilities for local administrations in contracts with their present utilities, and to draw up, and coordinate with the utilities, a franchise agreement creating possibilities for the communities, inter alia, in the sectors power supply concept, advising on energy conservation, energy generation. A model of a franchise agreement for the electricity sector is presented in its full wording. (orig./HSCH).

  2. Hedonic travel cost and random utility models of recreation

    Energy Technology Data Exchange (ETDEWEB)

    Pendleton, L. [Univ. of Southern California, Los Angeles, CA (United States); Mendelsohn, R.; Davis, E.W. [Yale Univ., New Haven, CT (United States). School of Forestry and Environmental Studies

    1998-07-09

    Micro-economic theory began as an attempt to describe, predict and value the demand and supply of consumption goods. Quality was largely ignored at first, but economists have started to address quality within the theory of demand and specifically the question of site quality, which is an important component of land management. This paper demonstrates that hedonic and random utility models emanate from the same utility theoretical foundation, although they make different estimation assumptions. Using a theoretically consistent comparison, both approaches are applied to examine the quality of wilderness areas in the Southeastern US. Data were collected on 4778 visits to 46 trails in 20 different forest areas near the Smoky Mountains. Visitor data came from permits and an independent survey. The authors limited the data set to visitors from within 300 miles of the North Carolina and Tennessee border in order to focus the analysis on single purpose trips. When consistently applied, both models lead to results with similar signs but different magnitudes. Because the two models are equally valid, recreation studies should continue to use both models to value site quality. Further, practitioners should be careful not to make simplifying a priori assumptions which limit the effectiveness of both techniques.
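
    The random utility side of this comparison is typically estimated as a multinomial logit over sites: the probability of choosing trail j is exp(V_j)/Σ_k exp(V_k), with V a linear index in travel cost and site quality. A minimal sketch, with invented costs, quality scores, and preference parameters (none of these numbers come from the study):

```python
import numpy as np

# Hypothetical data for three trails; all values are illustrative.
travel_cost = np.array([10.0, 25.0, 40.0])   # $ to reach each trail
quality = np.array([2.0, 3.5, 5.0])          # e.g. a wilderness-quality score
beta_cost, beta_quality = -0.08, 0.60        # assumed preference parameters

# Multinomial logit choice probabilities from the linear utility index.
V = beta_cost * travel_cost + beta_quality * quality
P = np.exp(V) / np.exp(V).sum()
print(P.round(3))  # choice probabilities summing to 1
```

    With these assumed parameters the cheap nearby trail is most likely chosen even though its quality score is lowest, illustrating the cost/quality trade-off both models try to value.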

  3. Utilizing Photogrammetry and Strain Gage Measurement to Characterize Pressurization of Inflatable Modules

    Science.gov (United States)

    Mohammed, Anil

    2011-01-01

    This paper focuses on integrating a large hatch penetration into inflatable modules of various constructions, and compares load predictions with test measurements. Strain was measured using photogrammetric methods and strain gages mounted to select clevises that interface with the structural webbings. Bench testing showed good correlation between strain data collected from an extensometer and photogrammetric measurements, even when the material transitioned from the low-load to the high-load strain region of the curve. The full-scale torus-design module likewise showed mixed results in the low-load and high-strain regions. After thorough analysis of photogrammetric measurements, strain gage measurements, and predicted load, the photogrammetric measurements appear to be off by a factor of two.

  4. Study on Emission Measurement of Vehicle on Road Based on Binomial Logit Model

    OpenAIRE

    Aly, Sumarni Hamid; Selintung, Mary; Ramli, Muhammad Isran; Sumi, Tomonori

    2011-01-01

    This research attempts to evaluate emission measurement of on road vehicle. In this regard, the research develops failure probability model of vehicle emission test for passenger car which utilize binomial logit model. The model focuses on failure of CO and HC emission test for gasoline cars category and Opacity emission test for diesel-fuel cars category as dependent variables, while vehicle age, engine size, brand and type of the cars as independent variables. In order to imp...
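
    The failure-probability model described here is a binomial logit, P(fail) = 1/(1 + exp(-x'b)). A hedged sketch of fitting such a model by Newton-Raphson on synthetic data (the covariates, coefficients, and sample are invented for illustration, not the study's measurements):

```python
import numpy as np

# Synthetic vehicle data: failure probability rises with age and engine size.
rng = np.random.default_rng(0)
n = 500
age = rng.uniform(1, 20, n)            # vehicle age, years (assumed range)
size = rng.uniform(1.0, 3.0, n)        # engine size, litres (assumed range)
true_b = np.array([-4.0, 0.25, 0.5])   # assumed true coefficients
X = np.column_stack([np.ones(n), age, size])
p = 1.0 / (1.0 + np.exp(-X @ true_b))
y = (rng.random(n) < p).astype(float)  # simulated pass/fail outcomes

# Fit the binomial logit by Newton-Raphson on the log-likelihood.
b = np.zeros(3)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ b))       # fitted failure probabilities
    grad = X.T @ (y - mu)                   # score vector
    W = mu * (1.0 - mu)                     # IRLS weights
    hess = (X * W[:, None]).T @ X           # observed information
    b = b + np.linalg.solve(hess, grad)

print(b)  # estimates should lie near true_b
```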

  5. Adolescent idiopathic scoliosis screening for school, community, and clinical health promotion practice utilizing the PRECEDE-PROCEED model

    Directory of Open Access Journals (Sweden)

    Wyatt Lawrence A

    2005-11-01

    Full Text Available Abstract Background Screening for adolescent idiopathic scoliosis (AIS is a commonly performed procedure for school children during the high-risk years. The PRECEDE-PROCEDE (PP model is a health promotion planning model that has not been utilized for the clinical diagnosis of AIS. The purpose of this research is to study AIS in the school-age population using the PP model and its relevance for community, school, and clinical health promotion. Methods MEDLINE was utilized to locate AIS data. Studies were screened for relevance and applicability under the auspices of the PP model. Where data was unavailable, expert opinion was utilized based on consensus. Results The social assessment of quality of life is limited, with few studies approaching the long-term effects of AIS. Epidemiologically, AIS is the most common form of scoliosis and the leading orthopedic problem in children. Behavioral/environmental studies focus on discovering etiologic relationships, yet these data are confounded because AIS is not a behavioral condition; illness and parenting health behaviors can, however, be appreciated. The educational diagnosis is likewise confounded because AIS is an orthopedic disorder and not behavioral. The administration/policy diagnosis is hindered in that scoliosis screening programs are not considered cost-effective. Policies are determined in some schools because 26 states mandate school scoliosis screening. There exists potential error with the Adam's test. The most widely used measure in the PP model, the Health Belief Model, has not been utilized in any AIS research. Conclusion The PP model is a useful tool for a comprehensive study of a particular health concern. This research showed where gaps in AIS research exist, suggesting that there may be problems with the implementation of school screening. Until these research disparities are filled, implementation of AIS screening by school, community, and clinical health promotion will be compromised. Lack of data and perceived importance by

  6. Functional outcome measures in a surgical model of hip osteoarthritis in dogs

    OpenAIRE

    Little, Dianne; Johnson, Stephen; Hash, Jonathan; Olson, Steven A.; Estes, Bradley T.; Moutos, Franklin T.; Lascelles, B. Duncan X.; Guilak, Farshid

    2016-01-01

    Background The hip is one of the most common sites of osteoarthritis in the body, second only to the knee in prevalence. However, current animal models of hip osteoarthritis have not been assessed using many of the functional outcome measures used in orthopaedics, a characteristic that could increase their utility in the evaluation of therapeutic interventions. The canine hip shares similarities with the human hip, and functional outcome measures are well documented in veterinary medicine, pr...

  7. High-resolution land surface modeling utilizing remote sensing parameters and the Noah UCM: a case study in the Los Angeles Basin

    Science.gov (United States)

    Vahmani, P.; Hogue, T. S.

    2014-12-01

    In the current work we investigate the utility of remote-sensing-based surface parameters in the Noah UCM (urban canopy model) over a highly developed urban area. Landsat and fused Landsat-MODIS data are utilized to generate high-resolution (30 m) monthly spatial maps of green vegetation fraction (GVF), impervious surface area (ISA), albedo, leaf area index (LAI), and emissivity in the Los Angeles metropolitan area. The gridded remotely sensed parameter data sets are directly substituted for the land-use/lookup-table-based values in the Noah-UCM modeling framework. Model performance in reproducing ET (evapotranspiration) and LST (land surface temperature) fields is evaluated utilizing Landsat-based LST and ET estimates from CIMIS (California Irrigation Management Information System) stations as well as in situ measurements. Our assessment shows that the large deviations between the spatial distributions and seasonal fluctuations of the default and measured parameter sets lead to significant errors in the model predictions of monthly ET fields (RMSE = 22.06 mm month-1). Results indicate that implemented satellite-derived parameter maps, particularly GVF, enhance the capability of the Noah UCM to reproduce observed ET patterns over vegetated areas in the urban domains (RMSE = 11.77 mm month-1). GVF plays the most significant role in reproducing the observed ET fields, likely due to the interaction with other parameters in the model. Our analysis also shows that remotely sensed GVF and ISA improve the model's capability to predict the LST differences between fully vegetated pixels and highly developed areas.

  8. Expected utility with lower probabilities

    DEFF Research Database (Denmark)

    Hendon, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte

    1994-01-01

    An uncertain and not just risky situation may be modeled using so-called belief functions assigning lower probabilities to subsets of outcomes. In this article we extend the von Neumann-Morgenstern expected utility theory from probability measures to belief functions. We use this theory...
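
    The lower-probability idea above can be made concrete with a Choquet-style lower expected utility: a mass function assigns weight to *sets* of outcomes, and each focal set is charged with the worst utility it contains (the upper expectation uses the best). The outcomes, utilities, and masses below are invented for illustration:

```python
# Hypothetical basic probability assignment over sets of outcomes.
utility = {"good": 1.0, "fair": 0.6, "bad": 0.0}
mass = {
    frozenset({"good"}): 0.5,
    frozenset({"fair", "bad"}): 0.3,
    frozenset({"good", "fair", "bad"}): 0.2,  # complete-ignorance component
}

# Lower EU: each focal set contributes its worst member's utility.
lower_eu = sum(m * min(utility[x] for x in A) for A, m in mass.items())
# Upper EU: each focal set contributes its best member's utility.
upper_eu = sum(m * max(utility[x] for x in A) for A, m in mass.items())
print(lower_eu, upper_eu)  # 0.5 and 0.88
```

    Any ordinary probability measure consistent with the belief function yields an expected utility between these two bounds.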

  9. Implications of Model Structure and Detail for Utility Planning: Scenario Case Studies Using the Resource Planning Model

    Energy Technology Data Exchange (ETDEWEB)

    Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barrows, Clayton [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lopez, Anthony [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hale, Elaine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dyson, Mark [National Renewable Energy Lab. (NREL), Golden, CO (United States); Eurek, Kelly [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-04-01

    In this report, we analyze the impacts of model configuration and detail in capacity expansion models, computational tools used by utility planners looking to find the least cost option for planning the system and by researchers or policy makers attempting to understand the effects of various policy implementations. The present analysis focuses on the importance of model configurations — particularly those related to capacity credit, dispatch modeling, and transmission modeling — to the construction of scenario futures. Our analysis is primarily directed toward advanced tools used for utility planning and is focused on those impacts that are most relevant to decisions with respect to future renewable capacity deployment. To serve this purpose, we develop and employ the NREL Resource Planning Model to conduct a case study analysis that explores 12 separate capacity expansion scenarios of the Western Interconnection through 2030.

  10. Model project to promote cultivation and utilization of renewable resources. Modellvorhaben zur Foerderung des Anbaus und der Verwertung nachwachsender Rohstoffe

    Energy Technology Data Exchange (ETDEWEB)

    1991-09-01

    This revised report on the model projects presents individual projects and measures complementary to each other, documenting, in their totality, an advanced state of development. Moreover, it shows that the basic challenge of a model project, especially in the field of the energetic use of biomass, can be met by marrying agriculture to power utilities. So, projects are under way where cultivation of China reed and its utilization in power-and-heat cogeneration plants will, in the future, complement each other. Further questions that are not represented in the research programme of Lower Saxonia are dealt with at the federal level, so that the field of renewable resources may currently be considered as comprehensively covered. (orig./EF).

  11. A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models

    Energy Technology Data Exchange (ETDEWEB)

    Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-06-01

    In this report, we will present a descriptive and organizational framework for incremental and fundamental changes to regulatory and utility business models in the context of clean energy public policy goals. We will also discuss the regulated utility's role in providing value-added services that relate to distributed energy resources, identify the "openness" of customer information and utility networks necessary to facilitate change, and discuss the relative risks, and the shifting of risks, for utilities and customers.

  12. Cross-bridge blocker BTS permits direct measurement of SR Ca2+ pump ATP utilization in toadfish swimbladder muscle fibers.

    Science.gov (United States)

    Young, Iain S; Harwood, Claire L; Rome, Lawrence C

    2003-10-01

    Because the major processes involved in muscle contraction require rapid utilization of ATP, measurement of ATP utilization can provide important insights into the mechanisms of contraction. It is necessary, however, to differentiate between the contribution made by cross-bridges and that of the sarcoplasmic reticulum (SR) Ca2+ pumps. Specific and potent SR Ca2+ pump blockers have been used in skinned fibers to permit direct measurement of cross-bridge ATP utilization. Up to now, there was no analogous cross-bridge blocker. Recently, N-benzyl-p-toluene sulfonamide (BTS) was found to suppress force generation at micromolar concentrations. We tested whether BTS could be used to block cross-bridge ATP utilization, thereby permitting direct measurement of SR Ca2+ pump ATP utilization in saponin-skinned fibers. At 25 microM, BTS virtually eliminates both force and cross-bridge ATP utilization while having no effect on SR pump ATP utilization. Hence, we used BTS to make some of the first direct measurements of ATP utilization of intact SR over a physiological range of [Ca2+] at 15 degrees C. Curve fits of SR Ca2+ pump ATP utilization vs. pCa indicate a much lower Hill coefficient (1.49) than that describing cross-bridge force generation vs. pCa (approximately 5). Furthermore, we found that BTS also effectively eliminates force generation in bundles of intact swimbladder muscle, suggesting that it will be an important tool for studying integrated SR function during normal motor behavior.
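
    The curve fit mentioned here is a Hill equation, v = Vmax·[Ca]^n / (K^n + [Ca]^n). A minimal sketch of recovering the Hill coefficient from a pCa curve via the classic Hill-plot linearization; only n ≈ 1.49 comes from the abstract, while K, Vmax, and the pCa range are assumed for illustration:

```python
import numpy as np

# Illustrative noise-free Hill curve: ATPase rate vs free [Ca2+].
n_true, K, Vmax = 1.49, 1e-6, 1.0          # assumed K (M) and normalized Vmax
ca = np.logspace(-7.5, -5.0, 12)           # free [Ca2+] in molar (pCa 7.5-5.0)
rate = Vmax * ca**n_true / (K**n_true + ca**n_true)

# Hill plot: log(v/(Vmax - v)) = n*log[Ca] - n*log(K), so the slope is n.
y = np.log10(rate / (Vmax - rate))
slope, intercept = np.polyfit(np.log10(ca), y, 1)
print(f"fitted Hill coefficient n = {slope:.2f}")  # recovers 1.49 exactly here
```

    On real data, Vmax is unknown and noisy, so nonlinear least squares on the raw Hill equation is usually preferred to this linearization.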

  13. Modelling of limestone injection for SO2 capture in a coal fired utility boiler

    International Nuclear Information System (INIS)

    Kovacik, G.J.; Reid, K.; McDonald, M.M.; Knill, K.

    1997-01-01

    A computer model was developed for simulating furnace sorbent injection for SO2 capture in a full-scale utility boiler using TASCFlow (TM) computational fluid dynamics (CFD) software. The model makes use of a computational grid of the superheater section of a tangentially fired utility boiler. The computer simulations are three-dimensional so that the temperature and residence time distribution in the boiler could be realistically represented. Results of calculations of simulated sulphur capture performance of limestone injection in a typical utility boiler operation are presented.

  14. Identification of human operator performance models utilizing time series analysis

    Science.gov (United States)

    Holden, F. M.; Shinners, S. M.

    1973-01-01

    The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.

  15. A System Dynamics Approach to Modeling the Sensitivity of Inappropriate Emergency Department Utilization

    Science.gov (United States)

    Behr, Joshua G.; Diaz, Rafael

    Non-urgent Emergency Department utilization has been attributed with increasing congestion in the flow and treatment of patients and, by extension, conditions the quality of care and profitability of the Emergency Department. Interventions designed to divert populations to more appropriate care may be cautiously received by operations managers due to uncertainty about the impact an adopted intervention may have on the two values of congestion and profitability. System Dynamics (SD) modeling and simulation may be used to measure the sensitivity of these two, often-competing, values of congestion and profitability and, thus, provide an additional layer of information designed to inform strategic decision making.

  16. Explaining Distortions in Utility Elicitation through the Rank-Dependent Model for Risky Choices

    NARCIS (Netherlands)

    P.P. Wakker (Peter); A.M. Stiggelbout (Anne)

    1995-01-01

    textabstractThe standard gamble (SG) method has been accepted as the gold standard for the elicitation of utility when risk or uncertainty is involved in decisions, and thus for the measurement of utility in medical decisions. Unfortunately, the SG method is distorted by a general dislike for

  17. Longitudinal predictive ability of mapping models: examining post-intervention EQ-5D utilities derived from baseline MHAQ data in rheumatoid arthritis patients.

    Science.gov (United States)

    Kontodimopoulos, Nick; Bozios, Panagiotis; Yfantopoulos, John; Niakas, Dimitris

    2013-04-01

    The purpose of this methodological study was to provide insight into the under-addressed issue of the longitudinal predictive ability of mapping models. Post-intervention predicted and reported utilities were compared, and the effect of disease severity on the observed differences was examined. A cohort of 120 rheumatoid arthritis (RA) patients (60.0% female, mean age 59.0) embarking on therapy with biological agents completed the Modified Health Assessment Questionnaire (MHAQ) and the EQ-5D at baseline, and at 3, 6 and 12 months post-intervention. OLS regression produced a mapping equation to estimate post-intervention EQ-5D utilities from baseline MHAQ data. Predicted and reported utilities were compared with t test, and the prediction error was modeled, using fixed effects, in terms of covariates such as age, gender, time, disease duration, treatment, RF, DAS28 score, predicted and reported EQ-5D. The OLS model (RMSE = 0.207, R(2) = 45.2%) consistently underestimated future utilities, with a mean prediction error of 6.5%. Mean absolute differences between reported and predicted EQ-5D utilities at 3, 6 and 12 months exceeded the typically reported MID of the EQ-5D (0.03). According to the fixed-effects model, time, lower predicted EQ-5D and higher DAS28 scores had a significant impact on prediction errors, which appeared increasingly negative for lower reported EQ-5D scores, i.e., predicted utilities tended to be lower than reported ones in more severe health states. This study builds upon existing research having demonstrated the potential usefulness of mapping disease-specific instruments onto utility measures. The specific issue of longitudinal validity is addressed, as mapping models derived from baseline patients need to be validated on post-therapy samples. 
The underestimation of post-treatment utilities in the present study, at least in more severe patients, warrants further research before it is prudent to conduct cost-utility analyses in the context

  18. Mapping of the DLQI scores to EQ-5D utility values using ordinal logistic regression.

    Science.gov (United States)

    Ali, Faraz Mahmood; Kay, Richard; Finlay, Andrew Y; Piguet, Vincent; Kupfer, Joerg; Dalgard, Florence; Salek, M Sam

    2017-11-01

    The Dermatology Life Quality Index (DLQI) and the European Quality of Life-5 Dimension (EQ-5D) are separate measures that may be used to gather health-related quality of life (HRQoL) information from patients. The EQ-5D is a generic measure from which health utility estimates can be derived, whereas the DLQI is a specialty-specific measure to assess HRQoL. To reduce the burden of multiple measures being administered and to enable a more disease-specific calculation of health utility estimates, we explored an established mathematical technique known as ordinal logistic regression (OLR) to develop an appropriate model to map DLQI data to EQ-5D-based health utility estimates. Retrospective data from 4010 patients were randomly divided five times into two groups for the derivation and testing of the mapping model. Split-half cross-validation was utilized resulting in a total of ten ordinal logistic regression models for each of the five EQ-5D dimensions against age, sex, and all ten items of the DLQI. Using Monte Carlo simulation, predicted health utility estimates were derived and compared against those observed. This method was repeated for both OLR and a previously tested mapping methodology based on linear regression. The model was shown to be highly predictive and its repeated fitting demonstrated a stable model using OLR as well as linear regression. The mean differences between OLR-predicted health utility estimates and observed health utility estimates ranged from 0.0024 to 0.0239 across the ten modeling exercises, with an average overall difference of 0.0120 (a 1.6% underestimate, not of clinical importance). This modeling framework developed in this study will enable researchers to calculate EQ-5D health utility estimates from a specialty-specific study population, reducing patient and economic burden.

  19. Maximizing the model for Discounted Stream of Utility from ...

    African Journals Online (AJOL)

    Osagiede et al. (2009) considered an analytic model for maximizing a discounted stream of utility from consumption when the rate of production is linear. A solution was provided up to the level at which methods of solving ordinary differential equations would be applied, but they left off there as a result of the mathematical complexity ...

  20. Modeling and measurement of the ALS U5 undulator end magnetic structures

    International Nuclear Information System (INIS)

    Humphries, D.; Halbach, K.; Hoyer, E.; Kincaid, B.; Marks, S.; Schlueter, R.

    1993-05-01

    The end structures for the ALS U5.0 undulators utilize a system of dual permanent magnet rotors intended to establish gap-independent field performance. They may also be used for tuning the first and second magnetic field integrals of these devices. The behavior of these structures has been studied by means of two-dimensional modeling with the POISSON group of computer codes. A parametric study of the magnetic field distribution and of the first and second integrals of the fields has been conducted. In parallel, magnetic measurements of the final completed structures have been performed using an automated Hall probe measurement system. Results of the modeling and measurements are compared. Implications for tuning the ends of the devices within the context of the electron beam parameters of the ALS are discussed.

  1. Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Jina

    2016-01-01

    Full Text Available The dynamic deployment of virtual machines is one of the current research focuses in cloud computing. Traditional methods mainly act after service performance has already degraded, and therefore lag. To solve this problem, a new prediction model based on CPU utilization is constructed in this paper. The model provides a reference for the VM dynamic deployment process, allowing deployment to complete before service performance degrades. This not only ensures the quality of services but also improves server performance and resource utilization. The new prediction method of CPU utilization based on the ARIMA-BP neural network mainly includes four parts: preprocess the collected data, build the ARIMA-BP neural network prediction model, correct the nonlinear residuals of the time series with the BP prediction algorithm, and obtain the prediction results by analyzing the above data comprehensively.
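
    The hybrid idea can be sketched with a linear AR(2) stage (a stand-in for the ARIMA component) plus a one-hidden-layer backpropagation network trained on its residuals. The series, network size, and hyperparameters below are illustrative, not from the paper:

```python
import numpy as np

# Synthetic CPU-utilization series with a nonlinear component.
rng = np.random.default_rng(1)
t = np.arange(400)
cpu = 50 + 10*np.sin(t/20) + 5*np.sin(t/3)**2 + rng.normal(0, 0.5, t.size)

# Stage 1: AR(2) fitted by least squares (linear stand-in for ARIMA).
X = np.column_stack([np.ones(t.size - 2), cpu[1:-1], cpu[:-2]])
y = cpu[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef                        # nonlinear residual left for the BP net

# Stage 2: BP network mapping two lagged residuals to the next residual.
R = np.column_stack([resid[1:-1], resid[:-2]])
r_next = resid[2:]
W1 = rng.normal(0, 0.1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1)); b2 = np.zeros(1)
losses, lr = [], 0.05
for _ in range(300):                        # plain backprop / gradient descent
    H = np.tanh(R @ W1 + b1)
    pred = (H @ W2 + b2).ravel()
    err = pred - r_next
    losses.append(float(np.mean(err**2)))
    g_pred = 2 * err[:, None] / err.size    # dMSE/dpred
    gW2 = H.T @ g_pred; gb2 = g_pred.sum(0)
    gH = g_pred @ W2.T * (1 - H**2)         # backprop through tanh
    gW1 = R.T @ gH; gb1 = gH.sum(0)
    W1 -= lr*gW1; b1 -= lr*gb1; W2 -= lr*gW2; b2 -= lr*gb2

print(f"residual-net MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

    The combined forecast is the AR prediction plus the network's residual correction, mirroring the paper's fourth step.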

  2. "Utilizing" signal detection theory.

    Science.gov (United States)

    Lynn, Spencer K; Barrett, Lisa Feldman

    2014-09-01

    What do inferring what a person is thinking or feeling, judging a defendant's guilt, and navigating a dimly lit room have in common? They involve perceptual uncertainty (e.g., a scowling face might indicate anger or concentration, for which different responses are appropriate) and behavioral risk (e.g., a cost to making the wrong response). Signal detection theory describes these types of decisions. In this tutorial, we show how incorporating the economic concept of utility allows signal detection theory to serve as a model of optimal decision making, going beyond its common use as an analytic method. This utility approach to signal detection theory clarifies otherwise enigmatic influences of perceptual uncertainty on measures of decision-making performance (accuracy and optimality) and on behavior (an inverse relationship between bias magnitude and sensitivity optimizes utility). A "utilized" signal detection theory offers the possibility of expanding the phenomena that can be understood within a decision-making framework. © The Author(s) 2014.
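
    The utility extension described here has a closed form in the equal-variance Gaussian model: the optimal criterion is where the likelihood ratio equals β = [P(noise)/P(signal)] × (U_CR − U_FA)/(U_Hit − U_Miss). A sketch with assumed d′, base rate, and payoffs (all illustrative), checking the analytic optimum against a grid search over criteria:

```python
from statistics import NormalDist
import numpy as np

d_prime = 1.5
p_signal = 0.3                                            # assumed base rate
U = {"hit": 1.0, "miss": -2.0, "fa": -1.0, "cr": 0.5}     # hypothetical payoffs

# Optimal likelihood-ratio threshold and the corresponding criterion location.
beta = ((1 - p_signal)/p_signal) * (U["cr"] - U["fa"]) / (U["hit"] - U["miss"])
x_opt = np.log(beta)/d_prime + d_prime/2   # equal-variance Gaussian SDT

def expected_utility(c):
    hit = 1 - NormalDist(d_prime, 1).cdf(c)  # P(respond "yes" | signal)
    fa = 1 - NormalDist(0, 1).cdf(c)         # P(respond "yes" | noise)
    return (p_signal * (hit*U["hit"] + (1 - hit)*U["miss"])
            + (1 - p_signal) * (fa*U["fa"] + (1 - fa)*U["cr"]))

grid = np.linspace(-3, 4, 701)
best = grid[np.argmax([expected_utility(c) for c in grid])]
print(x_opt, best)  # the analytic optimum matches the grid search
```

    Shifting any payoff or the base rate moves the optimal bias, which is the sense in which bias magnitude and sensitivity jointly determine utility.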

  3. On how access to an insurance market affects investments in safety measures, based on the expected utility theory

    International Nuclear Information System (INIS)

    Bjorheim Abrahamsen, Eirik; Asche, Frank

    2011-01-01

    This paper focuses on how access to an insurance market should influence investments in safety measures in accordance with the ruling paradigm for decision-making under uncertainty: the expected utility theory. We show that access to an insurance market will, in most situations, influence investments in safety measures. For an expected utility maximizer, an overinvestment in safety measures is likely if access to an insurance market is ignored, while an underinvestment in safety measures is likely if insurance is purchased without paying attention to the possibility of reducing the probability and/or consequences of an accidental event through safety measures.
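
    The underinvestment effect can be illustrated with a toy example (all numbers invented): a log-utility agent can pay for a safety measure that halves the accident probability, with or without full insurance whose premium does not respond to the measure:

```python
import math

# Hypothetical wealth, loss, and safety-measure cost.
wealth, loss, cost = 100.0, 60.0, 2.0
p0, p1 = 0.10, 0.05   # accident probability without / with the safety measure

def eu_uninsured(p, safety):
    # Expected log utility bearing the loss risk directly.
    w = wealth - (cost if safety else 0.0)
    return p * math.log(w - loss) + (1 - p) * math.log(w)

def eu_insured_flat(safety):
    # Full insurance with a premium fixed at p0*loss: the insurer cannot
    # observe the safety measure, so investing only costs money once insured.
    w = wealth - (cost if safety else 0.0) - p0 * loss
    return math.log(w)

invest_uninsured = eu_uninsured(p1, True) > eu_uninsured(p0, False)
invest_insured = eu_insured_flat(True) > eu_insured_flat(False)
print(invest_uninsured, invest_insured)  # True False
```

    The uninsured agent invests because the measure trims the downside; the insured agent does not, since the premium ignores the measure, illustrating the underinvestment the abstract warns about.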

  4. Validation of Storm Water Management Model Storm Control Measures Modules

    Science.gov (United States)

    Simon, M. A.; Platz, M. C.

    2017-12-01

    EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970's, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units and green roofs, and these were performed and reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.
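
    Hydrograph comparisons of the kind described are typically scored with goodness-of-fit statistics such as Nash-Sutcliffe efficiency (NSE) and total-volume error. A minimal sketch with made-up observed and simulated outflow series (not data from the study):

```python
import numpy as np

# Hypothetical outflow hydrographs (m^3/s) for one storm event.
obs = np.array([0.0, 0.4, 1.2, 2.5, 1.8, 0.9, 0.3, 0.1])
sim = np.array([0.0, 0.5, 1.0, 2.3, 2.0, 1.0, 0.4, 0.1])

# NSE = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2); 1 is a perfect fit.
nse = 1 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)
vol_err = (sim.sum() - obs.sum()) / obs.sum()   # relative total-volume error
print(f"NSE = {nse:.3f}, volume error = {vol_err:+.1%}")
```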

  5. The Sustainable Energy Utility (SEU) Model for Energy Service Delivery

    Science.gov (United States)

    Houck, Jason; Rickerson, Wilson

    2009-01-01

    Climate change, energy price spikes, and concerns about energy security have reignited interest in state and local efforts to promote end-use energy efficiency, customer-sited renewable energy, and energy conservation. Government agencies and utilities have historically designed and administered such demand-side measures, but innovative…

  6. High resolution land surface modeling utilizing remote sensing parameters and the Noah-UCM: a case study in the Los Angeles Basin

    Science.gov (United States)

    Vahmani, P.; Hogue, T. S.

    2014-07-01

    In the current work we investigate the utility of remote sensing based surface parameters in the Noah-UCM (urban canopy model) over a highly developed urban area. Landsat and fused Landsat-MODIS data are utilized to generate high resolution (30 m) monthly spatial maps of green vegetation fraction (GVF), impervious surface area (ISA), albedo, leaf area index (LAI), and emissivity in the Los Angeles metropolitan area. The gridded remotely sensed parameter datasets are directly substituted for the land-use/lookup-table values in the Noah-UCM modeling framework. Model performance in reproducing ET (evapotranspiration) and LST (land surface temperature) fields is evaluated utilizing Landsat-based LST and ET estimates from CIMIS (California Irrigation Management Information System) stations as well as in-situ measurements. Our assessment shows that the large deviations between the spatial distributions and seasonal fluctuations of the default and measured parameter sets lead to significant errors in the model predictions of monthly ET fields (RMSE = 22.06 mm month-1). Results indicate that implemented satellite derived parameter maps, particularly GVF, enhance the Noah-UCM capability to reproduce observed ET patterns over vegetated areas in the urban domains (RMSE = 11.77 mm month-1). GVF plays the most significant role in reproducing the observed ET fields, likely due to the interaction with other parameters in the model. Our analysis also shows that remotely sensed GVF and ISA improve the model capability to predict the LST differences between fully vegetated pixels and highly developed areas. However, the model still underestimates remotely sensed LST values over highly developed areas. We hypothesize that the LST underestimation is due to structural formulation in the UCM and cannot be immediately solved with available parameter choices.

  7. Validation of the SF-6D Health State Utilities Measure in Lower Extremity Sarcoma

    Directory of Open Access Journals (Sweden)

    Kenneth R. Gundle

    2014-01-01

    Full Text Available Aim. Health state utilities measures are preference-weighted patient-reported outcome (PRO instruments that facilitate comparative effectiveness research. One such measure, the SF-6D, is generated from the Short Form 36 (SF-36. This report describes a psychometric evaluation of the SF-6D in a cross-sectional population of lower extremity sarcoma patients. Methods. Patients with lower extremity sarcoma from a prospective database who had completed the SF-36 and Toronto Extremity Salvage Score (TESS were eligible for inclusion. Computed SF-6D health states were given preference weights based on a prior valuation. The primary outcome was correlation between the SF-6D and TESS. Results. In 63 pairs of surveys in a lower extremity sarcoma population, the mean preference-weighted SF-6D score was 0.59 (95% CI 0.4–0.81. The distribution of SF-6D scores approximated a normal curve (skewness = 0.11. There was a positive correlation between the SF-6D and TESS (r=0.75, P<0.01. Respondents who reported walking aid use had lower SF-6D scores (0.53 versus 0.61, P=0.03. Five respondents underwent amputation, with lower SF-6D scores that approached significance (0.48 versus 0.6, P=0.06. Conclusions. The SF-6D health state utilities measure demonstrated convergent validity without evidence of ceiling or floor effects. The SF-6D is a health state utilities measure suitable for further research in sarcoma patients.

  8. The utilization of cranial models created using rapid prototyping techniques in the development of models for navigation training.

    Science.gov (United States)

    Waran, V; Pancharatnam, Devaraj; Thambinayagam, Hari Chandran; Raman, Rajagopal; Rathinam, Alwin Kumar; Balakrishnan, Yuwaraj Kumar; Tung, Tan Su; Rahman, Z A

    2014-01-01

Navigation in neurosurgery has expanded rapidly; however, suitable models to train end users to use the myriad software and hardware that come with these systems are lacking. Utilizing three-dimensional (3D) industrial rapid prototyping processes, we have been able to create models using actual computed tomography (CT) data from patients with pathology and use these models to simulate a variety of commonly performed neurosurgical procedures with navigation systems. To assess the possibility of utilizing models created from CT scan datasets obtained from patients with cranial pathology to simulate common neurosurgical procedures using navigation systems. Three patients with pathology were selected (hydrocephalus, right frontal cortical lesion, and midline clival meningioma). CT scan data acquired following an image-guidance surgery protocol were exported in DICOM format, and a rapid prototyping machine was used to create the printed models with the corresponding pathology embedded. The registration, planning, and navigation capabilities of two navigation systems, using a variety of software and hardware provided by these platforms, were assessed. We were able to register all models accurately using both navigation systems and perform the necessary simulations as planned. Models with pathology created utilizing 3D rapid prototyping techniques accurately reflect data of actual patients and can be used in the simulation of neurosurgical operations using navigation systems. Georg Thieme Verlag KG Stuttgart · New York.

  9. An integrated utility-based model of conflict evaluation and resolution in the Stroop task.

    Science.gov (United States)

    Chuderski, Adam; Smolen, Tomasz

    2016-04-01

Cognitive control allows humans to direct and coordinate their thoughts and actions in a flexible way, in order to reach internal goals regardless of interference and distraction. The hallmark test used to examine cognitive control is the Stroop task, which elicits both the weakly learned but goal-relevant and the strongly learned but goal-irrelevant response tendencies, and requires people to follow the former while ignoring the latter. After reviewing the existing computational models of cognitive control in the Stroop task, a novel, integrated utility-based model is proposed. The model uses 3 crucial control mechanisms: response utility reinforcement learning, utility-based conflict evaluation using the Festinger formula for assessing the conflict level, and top-down adaptation of response utility in service of conflict resolution. Their complex, dynamic interaction led to the replication of 18 experimental effects, the largest data set explained to date by a single Stroop model. The simulations cover the basic congruency effects (including the response latency distributions), performance dynamics and adaptation (including EEG indices of conflict), as well as the effects resulting from manipulations applied to stimulation and responding, which are yielded by the extant Stroop literature. (c) 2016 APA, all rights reserved.

  10. Detailed assessment of diesel spray atomization models using visible and X-ray extinction measurements

    Energy Technology Data Exchange (ETDEWEB)

    Magnotti, G.M.; Genzale, C.L. (GIT)

    2017-12-01

The physical mechanisms characterizing the breakup of a diesel spray into droplets are still unknown. This gap in knowledge has largely been due to the challenges of directly imaging this process or quantitatively measuring the outcomes of spray breakup, such as droplet size. Recent x-ray measurements by Argonne National Laboratory, utilized in this work, provide needed information about the spatial evolution of droplet sizes in selected regions of the spray under a range of injection pressures (50–150 MPa) and ambient densities (7.6–22.8 kg/m³) relevant for diesel operating conditions. Ultra-small-angle x-ray scattering (USAXS) measurements performed at the Advanced Photon Source are presented, which quantify Sauter mean diameters (SMD) within optically thick regions of the spray that are inaccessible to conventional droplet sizing techniques, namely in the near-nozzle region, along the spray centerline, and within the core of the spray. To quantify droplet sizes along the periphery of the spray, a complementary technique is introduced, which leverages the ratio of path-integrated x-ray and visible laser extinction (SAMR) measurements to quantify SMD. The SAMR and USAXS measurements are then utilized to evaluate current spray models used for engine computational fluid dynamics (CFD) simulations. We explore the ability of a carefully calibrated spray model, premised on aerodynamic wave growth theory, to capture the experimentally observed trends of SMD throughout the spray. The spray structure is best predicted with an aerodynamic primary and secondary breakup process represented with a slower time constant and a larger formed droplet size than conventionally recommended for diesel spray models. Additionally, spray model predictions suggest that droplet collisions may not influence the resultant droplet size distribution along the spray centerline in downstream regions of the spray.

  11. Utility of the Canadian Occupational Performance Measure as an admission and outcome measure in interdisciplinary community-based geriatric rehabilitation

    DEFF Research Database (Denmark)

    Larsen, Anette Enemark; Carlsson, Gunilla

    2012-01-01

    In a community-based geriatric rehabilitation project, the Canadian Occupational Performance Measure (COPM) was used to develop a coordinated, interdisciplinary, and client-centred approach focusing on occupational performance. The purpose of this study was to evaluate the utility of the COPM as ...... physician, home care, occupational therapy, physiotherapy...

  12. Utilized social support and self-esteem mediate the relationship between perceived social support and suicide ideation. A test of a multiple mediator model.

    Science.gov (United States)

    Kleiman, Evan M; Riskind, John H

    2013-01-01

While perceived social support has received considerable research attention as a protective factor against suicide ideation, little attention has been given to the mechanisms that mediate its effects. We integrated two theoretical models, Joiner's (2005) interpersonal theory of suicide and Leary's (Leary, Tambor, Terdal, & Downs, 1995) sociometer theory of self-esteem, to investigate two hypothesized mechanisms: utilization of social support and self-esteem. Specifically, we hypothesized that individuals must utilize the social support they perceive, which results in increased self-esteem, which in turn buffers them from suicide ideation. Participants were 172 college students who completed measures of social support, self-esteem, and suicide ideation. Tests of simple mediation indicated that utilization of social support and self-esteem may each individually help to mediate the perceived social support/suicide ideation relationship. Additionally, a test of multiple mediators using bootstrapping supported the hypothesized multiple-mediator model. The use of a cross-sectional design limited our ability to establish true cause-and-effect relationships. Results suggested that utilized social support and self-esteem both operate as individual mediators of the perceived social support/suicide ideation relationship. Results further suggested, in a comprehensive model, that perceived social support buffers suicide ideation through utilization of social support and increases in self-esteem.
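The bootstrapped test of the indirect effect can be sketched as follows; this is a generic percentile-bootstrap mediation analysis in the spirit of the abstract, not the authors' code, and the regression helpers and variable names are illustrative:

```python
import random

def slope(x, y):
    """OLS slope of y on x (simple regression with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

def partial_slope(y, m, x):
    """Coefficient on m when y is regressed on both m and x (two-predictor OLS)."""
    n = len(y)
    mm, mx, my = sum(m) / n, sum(x) / n, sum(y) / n
    smm = sum((a - mm) ** 2 for a in m)
    sxx = sum((a - mx) ** 2 for a in x)
    smx = sum((a - mm) * (b - mx) for a, b in zip(m, x))
    smy = sum((a - mm) * (b - my) for a, b in zip(m, y))
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    det = smm * sxx - smx ** 2
    return (smy * sxx - smx * sxy) / det

def indirect_effect(x, m, y):
    """a*b: effect of x on the mediator m, times effect of m on y controlling for x."""
    return slope(x, m) * partial_slope(y, m, x)

def bootstrap_ci(x, m, y, reps=1000, alpha=0.05, seed=7):
    """Percentile bootstrap CI for the indirect effect."""
    rng = random.Random(seed)
    n, stats = len(x), []
    while len(stats) < reps:
        idx = [rng.randrange(n) for _ in range(n)]
        try:
            stats.append(indirect_effect([x[i] for i in idx],
                                         [m[i] for i in idx],
                                         [y[i] for i in idx]))
        except ZeroDivisionError:  # degenerate resample; draw again
            continue
    stats.sort()
    return stats[int(alpha / 2 * reps)], stats[int((1 - alpha / 2) * reps) - 1]
```

With x as perceived support, m as utilized support (or self-esteem), and y as suicide ideation, a CI excluding zero supports mediation.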

  13. An Analysis/Synthesis System of Audio Signal with Utilization of an SN Model

    Directory of Open Access Journals (Sweden)

    G. Rozinaj

    2004-12-01

Full Text Available An SN (sinusoids plus noise) model is a spectral model in which the periodic components of the sound are represented by sinusoids with time-varying frequencies, amplitudes and phases. The remaining non-periodic components are represented by a filtered noise. The sinusoidal model utilizes physical properties of musical instruments, and the noise model utilizes the human inability to perceive the exact spectral shape or the phase of stochastic signals. SN modeling can be applied in compression, transformation, separation of sounds, etc. The designed system is based on methods used in SN modeling. We have proposed a model that achieves good results in audio perception. Although many systems do not save the phases of the sinusoids, they are important for better modelling of transients, for the computation of the residual and, last but not least, for stereo signals. One of the fundamental properties of the proposed system is the ability to reconstruct the signal not only in amplitude but in phase as well.
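A toy, single-partial version of the analysis/synthesis loop can illustrate the idea; the 440 Hz test tone, window choice, and peak-picking here are assumptions, and a full SN system would track many time-varying partials:

```python
import numpy as np

def analyze_strongest_partial(signal, sr):
    """Estimate (freq, amp, phase) of the largest spectral peak: a one-partial
    "S" step of the SN model (a real system tracks many time-varying partials)."""
    n = len(signal)
    win = np.hanning(n)
    spec = np.fft.rfft(signal * win)
    k = int(np.argmax(np.abs(spec[1:]))) + 1   # skip the DC bin
    freq = k * sr / n
    amp = 2.0 * np.abs(spec[k]) / win.sum()
    phase = float(np.angle(spec[k]))
    return freq, amp, phase

def synthesize(freq, amp, phase, n, sr):
    t = np.arange(n) / sr
    return amp * np.cos(2 * np.pi * freq * t + phase)

# Assumed test signal: a 440 Hz tone (with phase, as the abstract stresses) in noise.
sr = n = 8000
t = np.arange(n) / sr
x = 0.8 * np.cos(2 * np.pi * 440 * t + 0.3)
x = x + 0.01 * np.random.default_rng(0).standard_normal(n)
f, a, p = analyze_strongest_partial(x, sr)
residual = x - synthesize(f, a, p, n, sr)   # the "N" (noise) part of the model
```

Keeping the phase lets the resynthesized sinusoid cancel cleanly out of the original, leaving only the stochastic residual.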

  14. Implementation of energy efficiency measures by municipal utilities; Umsetzung von Energieeffizienzmassnahmen durch Stadtwerke

    Energy Technology Data Exchange (ETDEWEB)

    Horst, Juri; Droeschel, Barbara [Institut fuer ZukunftsEnergieSysteme (IZES), Saarbruecken (Germany)

    2012-04-15

    Local players have a very special role to fill in the implementation of the German federal government's ambitious energy efficiency goals. In the past the contributions made by municipal utilities in the way of special offers or measures to develop efficiency potentials were only modest. Moreover there were specific impediments that discouraged a significant competition-driven efficiency services market from developing. However, there are other instruments available that could encourage municipal utilities to implement efficiency goals. A recent research project has shown how standardised efficiency programmes can be used to tap into existing efficiency potentials at a sufficient level of intensity and with macroeconomic benefit.

  15. A simple method for measuring glucose utilization of insulin-sensitive tissues by using the brain as a reference

    International Nuclear Information System (INIS)

    Namba, Hiroki; Nakagawa, Keiichi; Iyo, Masaomi; Fukushi, Kiyoshi; Irie, Toshiaki

    1994-01-01

    A simple method, without measurement of the plasma input function, to obtain semiquantitative values of glucose utilization in tissues other than the brain with radioactive deoxyglucose is reported. The brain, in which glucose utilization is essentially insensitive to plasma glucose and insulin concentrations, was used as an internal reference. The effects of graded doses of oral glucose loading (0.5, 1 and 2 mg/g body weight) on insulin-sensitive tissues (heart, muscle and fat tissue) were studied in the rat. By using the brain-reference method, dose-dependent increases in glucose utilization were clearly shown in all the insulin-sensitive tissues examined. The method seems to be of value for measurement of glucose utilization using radioactive deoxyglucose and positron emission tomography in the heart or other insulin-sensitive tissues, especially during glucose loading. (orig.)
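The brain-reference idea reduces to a simple ratio. A sketch with hypothetical count densities (not the paper's rat data):

```python
def relative_utilization(tissue_uptake, brain_uptake):
    """Semiquantitative glucose utilization: tissue deoxyglucose uptake normalized
    to the brain, which serves as an internal reference because cerebral glucose
    utilization is largely insensitive to plasma glucose and insulin levels."""
    return tissue_uptake / brain_uptake

# Hypothetical count densities (arbitrary units); illustrative only.
brain = 100.0
muscle_by_glucose_dose = {0.5: 18.0, 1.0: 31.0, 2.0: 52.0}  # mg/g body weight
relative = {dose: relative_utilization(counts, brain)
            for dose, counts in muscle_by_glucose_dose.items()}
```

A monotone rise of the ratio with dose is the kind of dose-dependent increase the abstract describes for insulin-sensitive tissues.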

  16. [Thermal energy utilization analysis and energy conservation measures of fluidized bed dryer].

    Science.gov (United States)

    Xing, Liming; Zhao, Zhengsheng

    2012-07-01

To propose measures for enhancing thermal energy utilization by analyzing the drying process and operation principle of fluidized bed dryers, in order to guide optimization and upgrading of fluidized bed drying equipment. Through a systematic analysis of the drying process and operation principle of fluidized beds, the energy conservation law was adopted to calculate the thermal energy of dryers. The thermal energy of fluidized bed dryers is mainly used to make up for the thermal consumption of water evaporation (Qw), hot air leaving the equipment outlet (Qe), thermal consumption for heating and drying wet materials (Qm), and heat dissipation to the surroundings through hot air pipelines and cyclone separators. Effective measures and major approaches to enhance the thermal energy utilization of fluidized bed dryers are to reduce the heat loss Qe carried out by exhaust gas, recycle the heat in the dryer outlet air, insulate the drying towers, hot air pipes and cyclone separators, dehumidify the clean inlet air, and reasonably control drying time and air temperature. Technical parameters such as air supply rate, air inlet temperature and humidity, material temperature, and outlet temperature and humidity are set and controlled to effectively save energy during the drying process and reduce the production cost.
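The heat balance described above can be sketched as a simple partition; the magnitudes and the efficiency definition below are illustrative assumptions, not values from the paper:

```python
def dryer_heat_balance(q_evaporation, q_exhaust, q_material, q_losses):
    """Partition of a fluidized bed dryer's thermal input (all terms in kJ/h),
    following the abstract's breakdown: Qw (water evaporation), Qe (exhaust
    air), Qm (heating the wet material), and surface/pipeline heat losses."""
    total = q_evaporation + q_exhaust + q_material + q_losses
    # One common efficiency definition: the share of heat doing useful drying work.
    efficiency = q_evaporation / total
    return total, efficiency

# Hypothetical magnitudes for a single dryer (kJ/h); illustrative only.
total, eta = dryer_heat_balance(q_evaporation=5200.0, q_exhaust=2600.0,
                                q_material=1300.0, q_losses=900.0)
```

In this framing, the measures the abstract lists (exhaust heat recovery, insulation, inlet-air dehumidification) all shrink the non-Qw terms and so raise the efficiency ratio.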

  17. Consumer preferences for alternative fuel vehicles: Comparing a utility maximization and a regret minimization model

    International Nuclear Information System (INIS)

    Chorus, Caspar G.; Koetse, Mark J.; Hoen, Anco

    2013-01-01

    This paper presents a utility-based and a regret-based model of consumer preferences for alternative fuel vehicles, based on a large-scale stated choice-experiment held among company car leasers in The Netherlands. Estimation and application of random utility maximization and random regret minimization discrete choice models shows that while the two models achieve almost identical fit with the data and differ only marginally in terms of predictive ability, they generate rather different choice probability-simulations and policy implications. The most eye-catching difference between the two models is that the random regret minimization model accommodates a compromise-effect, as it assigns relatively high choice probabilities to alternative fuel vehicles that perform reasonably well on each dimension instead of having a strong performance on some dimensions and a poor performance on others. - Highlights: • Utility- and regret-based models of preferences for alternative fuel vehicles. • Estimation based on stated choice-experiment among Dutch company car leasers. • Models generate rather different choice probabilities and policy implications. • Regret-based model accommodates a compromise-effect
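The compromise effect can be illustrated by contrasting multinomial logit (random utility maximization) with a Chorus-style random regret specification; the alternatives, attributes, and taste parameters below are invented for illustration, not the paper's estimates:

```python
import math

def rum_probs(utilities):
    """Multinomial logit probabilities under random utility maximization."""
    ex = [math.exp(u) for u in utilities]
    s = sum(ex)
    return [e / s for e in ex]

def rrm_probs(attrs, betas):
    """Random regret minimization: the regret of alternative i sums
    ln(1 + exp(beta_m * (x_jm - x_im))) over rival alternatives j and
    attributes m, and choice probabilities are logit over negative regret."""
    regrets = []
    for i, xi in enumerate(attrs):
        r = 0.0
        for j, xj in enumerate(attrs):
            if j == i:
                continue
            r += sum(math.log(1.0 + math.exp(b * (xjm - xim)))
                     for b, xjm, xim in zip(betas, xj, xi))
        regrets.append(r)
    ex = [math.exp(-r) for r in regrets]
    s = sum(ex)
    return [e / s for e in ex]

# Two "extreme" vehicles and one balanced compromise; equal linear utilities.
attrs = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]
betas = (1.0, 1.0)
p_rum = rum_probs([sum(b * x for b, x in zip(betas, a)) for a in attrs])
p_rrm = rrm_probs(attrs, betas)
```

With equal linear utilities the logit model is indifferent among the three, while RRM assigns extra probability to the balanced alternative: exactly the compromise effect the abstract highlights.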

  18. Comparison of models and measurements of angle-resolved scatter from irregular aerosols

    International Nuclear Information System (INIS)

    Milstein, Adam B.; Richardson, Jonathan M.

    2015-01-01

    We have developed and validated a method for modeling the elastic scattering properties of biological and inert aerosols of irregular shape at near- and mid-wave infrared wavelengths. The method, based on Gaussian random particles, calculates the ensemble-average optical cross section and Mueller scattering matrix, using the measured aerodynamic size distribution and previously-reported refractive index as inputs. The utility of the Gaussian particle model is that it is controlled by only two parameters (σ and Γ) which we have optimized such that the model best reproduces the full angle-resolved Mueller scattering matrices measured at λ=1.55 µm in the Standoff Aerosol Active Signature Testbed (SAAST). The method has been applied to wet-generated singlet biological spore samples, dry-generated biological spore clusters, and kaolin. The scattering computation is performed using the Discrete Dipole Approximation (DDA), which requires significant computational resources, and is thus implemented on LLGrid, a large parallel grid computer. For the cases presented, the best fit Gaussian particle model is in good qualitative correspondence with microscopy images of the corresponding class of particles. The measured and computed cross sections agree well within a factor of two overall, with certain cases bearing closer correspondence. In particular, the DDA reproduces the shape of the measured scatter function more accurately than Mie predictions. The DDA-computed depolarization factors are also in good agreement with measurement. - Highlights: • We model elastic scattering of biological and inert aerosols of irregular shape. • We calculate cross sections and Mueller matrix using random particle shape model. • Scatter models employ refractive index and measured size distribution as inputs. • Discrete dipole approximation (DDA) with parallelization enables model calculations. • DDA-modeled cross section and Mueller matrix agree well with measurements at 1.55 μm

  19. Utility of Small Animal Models of Developmental Programming.

    Science.gov (United States)

    Reynolds, Clare M; Vickers, Mark H

    2018-01-01

Any effective strategy to tackle the global obesity and rising noncommunicable disease epidemic requires an in-depth understanding of the mechanisms that underlie these conditions that manifest as a consequence of complex gene-environment interactions. In this context, it is now well established that alterations in the early life environment, including suboptimal nutrition, can result in an increased risk for a range of metabolic, cardiovascular, and behavioral disorders in later life, a process preferentially termed developmental programming. To date, most of the mechanistic knowledge around the processes underpinning developmental programming has been derived from preclinical research performed mostly, but not exclusively, in laboratory mouse and rat strains. This review will cover the utility of small animal models in developmental programming, the limitations of such models, and potential future directions that are required to fully maximize information derived from preclinical models in order to effectively translate to clinical use.

  20. A new modelling framework and mitigation measures for increased resilience to flooding

    Science.gov (United States)

    Valyrakis, Manousos; Alexakis, Athanasios; Solley, Mark

    2015-04-01

Flooding in rivers and estuaries is amongst the most significant challenges our society has yet to tackle effectively. Use of floodwall systems is one of the potential measures that can be used to mitigate the detrimental socio-economic and ecological impacts and alleviate the associated costs of flooding. This work demonstrates the utility of such systems for a case study via appropriate numerical simulations, in addition to conducting scaled flume experiments towards obtaining a better understanding of the performance and efficiency of floodwall systems. First, the results of several characteristic inundation modeling scenarios and flood mitigation options are presented for a flood-prone region in Scotland. In particular, the history and hydrology of the area are discussed and the assumptions and hydraulic model inputs (model geometry including instream hydraulic structures, such as bridges and weirs, river and floodplain roughness, and initial and boundary conditions) are presented, followed by the model results. Emphasis is given to the potential improvements brought about by mitigating flood risk using floodwall systems. Further, the implementation of the floodwall in mitigating flood risk is demonstrated via appropriate numerical modeling, utilizing HEC-RAS to simulate the effect of a river's rising stage during a flood event for a specific area. The latter part of this work involves the design, building and utilization of a scaled physical model of a floodwall system. These experiments are carried out at one of the research flumes in the Water Engineering laboratory of the University of Glasgow. They involve an experimental investigation in which the increase of force applied on the floodwall is measured for different degrees of deflection of the water in the stream, under the maximum flow discharge that can be carried through without exceeding the floodwall height (and accounting for the effect of super-elevation). These results can be considered upon the

  1. Decision model incorporating utility theory and measurement of social values applied to nuclear waste management

    International Nuclear Information System (INIS)

    Litchfield, J.W.; Hansen, J.V.; Beck, L.C.

    1975-07-01

    A generalized computer-based decision analysis model was developed and tested. Several alternative concepts for ultimate disposal have already been developed; however, significant research is still required before any of these can be implemented. To make a choice based on technical estimates of the costs, short-term safety, long-term safety, and accident detection and recovery requires estimating the relative importance of each of these factors or attributes. These relative importance estimates primarily involve social values and therefore vary from one individual to the next. The approach used was to sample various public groups to determine the relative importance of each of the factors to the public. These estimates of importance weights were combined in a decision analysis model with estimates, furnished by technical experts, of the degree to which each alternative concept achieves each of the criteria. This model then integrates the two separate and unique sources of information and provides the decision maker with information as to the preferences and concerns of the public as well as the technical areas within each concept which need further research. The model can rank the alternatives using sampled public opinion and techno-economic data. This model provides a decision maker with a structured approach to subdividing complex alternatives into a set of more easily considered attributes, measuring the technical performance of each alternative relative to each attribute, estimating relevant social values, and assimilating quantitative information in a rational manner to estimate total value for each alternative. Because of the explicit nature of this decision analysis, the decision maker can select a specific alternative supported by clear documentation and justification for his assumptions and estimates. (U.S.)
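The core of such a model is a weighted additive value function combining public importance weights with expert performance scores. A sketch with hypothetical weights, scores, and alternative names (not the study's data):

```python
def total_value(weights, scores):
    """Weighted additive value: sampled public importance weights times
    expert-supplied performance scores, one pair per attribute."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * s for w, s in zip(weights, scores))

def rank(weights, alternatives):
    """Rank (name, scores) pairs from highest to lowest total value."""
    return sorted(alternatives, key=lambda kv: total_value(weights, kv[1]),
                  reverse=True)

# Hypothetical weights for the four attributes named in the abstract:
# cost, short-term safety, long-term safety, accident detection/recovery.
weights = [0.2, 0.3, 0.4, 0.1]
# Hypothetical disposal concepts and 0-1 performance scores; illustrative only.
alternatives = {
    "geologic disposal": [0.5, 0.7, 0.8, 0.6],
    "seabed disposal": [0.9, 0.6, 0.5, 0.7],
}
ranking = rank(weights, list(alternatives.items()))
```

Here the cheaper concept loses to the one scoring better on the heavily weighted long-term safety attribute, which is the kind of trade-off such a model makes explicit and documentable.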

  2. Using global magnetospheric models for simulation and interpretation of Swarm external field measurements

    DEFF Research Database (Denmark)

    Moretto, T.; Vennerstrøm, Susanne; Olsen, Nils

    2006-01-01

    simulated external contributions relevant for internal field modeling. These have proven very valuable for the design and planning of the up-coming multi-satellite Swarm mission. In addition, a real event simulation was carried out for a moderately active time interval when observations from the Orsted...... it consistently underestimates the dayside region 2 currents and overestimates the horizontal ionospheric closure currents in the dayside polar cap. Furthermore, with this example we illustrate the great benefit of utilizing the global model for the interpretation of Swarm external field observations and......, likewise, the potential of using Swarm measurements to test and improve the global model....

  3. A numerical model for ultrasonic measurements of swelling and mechanical properties of a swollen PVA hydrogel.

    Science.gov (United States)

    Lohakan, M; Jamnongkan, T; Pintavirooj, C; Kaewpirom, S; Boonsang, S

    2010-08-01

This paper presents a numerical model for the evaluation of the mechanical properties of a relatively thin hydrogel. The model utilizes a system identification method to evaluate the acoustical parameters from ultrasonic measurement data. The model involves the calculation of a forward model based on ultrasonic wave propagation incorporating the diffraction effect. Ultrasonic measurements of a hydrogel are also performed in a reflection mode. A Nonlinear Least Squares (NLS) algorithm is employed to minimize the difference between the results from the model and the experimental data. The acoustical parameters associated with the model are iteratively modified to achieve the minimum error. As a result, the parameters of PVA hydrogels, namely thickness, density, ultrasonic attenuation coefficient and dispersion velocity, are effectively determined. In order to validate the model, conventional density measurements of the hydrogels were also performed. Copyright (c) 2010 Elsevier B.V. All rights reserved.
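The estimation loop can be sketched with a deliberately simplified forward model and a direct search standing in for the NLS minimization; the exponential attenuation model and all numbers are illustrative assumptions (the paper's forward model also includes diffraction):

```python
import math

def predicted_echo(amp0, alpha, thickness):
    """Toy forward model: source amplitude attenuated over a round trip through
    the gel layer, exp(-2 * alpha * d). Diffraction and dispersion are omitted."""
    return amp0 * math.exp(-2.0 * alpha * thickness)

def fit_attenuation(measured, amp0, thickness, candidates):
    """Least squares by direct search over candidate attenuation coefficients:
    pick the value minimizing the sum of squared residuals against the data."""
    def sse(a):
        return sum((m - predicted_echo(amp0, a, thickness)) ** 2 for m in measured)
    return min(candidates, key=sse)

# Synthetic repeat measurements generated from an assumed "true" alpha of 0.5 Np/mm
# for a 0.2 mm layer; in practice the residuals would come from real echoes.
true_alpha, amp0, d = 0.5, 1.0, 0.2
measured = [predicted_echo(amp0, true_alpha, d)] * 3
best = fit_attenuation(measured, amp0, d, [i / 100 for i in range(1, 200)])
```

A gradient-based NLS routine would replace the grid search in practice, and would fit thickness, density and dispersion velocity jointly rather than one parameter at a time.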

  4. A framework for estimating health state utility values within a discrete choice experiment: modeling risky choices.

    Science.gov (United States)

    Robinson, Angela; Spencer, Anne; Moffatt, Peter

    2015-04-01

    There has been recent interest in using the discrete choice experiment (DCE) method to derive health state utilities for use in quality-adjusted life year (QALY) calculations, but challenges remain. We set out to develop a risk-based DCE approach to derive utility values for health states that allowed 1) utility values to be anchored directly to normal health and death and 2) worse than dead health states to be assessed in the same manner as better than dead states. Furthermore, we set out to estimate alternative models of risky choice within a DCE model. A survey was designed that incorporated a risk-based DCE and a "modified" standard gamble (SG). Health state utility values were elicited for 3 EQ-5D health states assuming "standard" expected utility (EU) preferences. The DCE model was then generalized to allow for rank-dependent expected utility (RDU) preferences, thereby allowing for probability weighting. A convenience sample of 60 students was recruited and data collected in small groups. Under the assumption of "standard" EU preferences, the utility values derived within the DCE corresponded fairly closely to the mean results from the modified SG. Under the assumption of RDU preferences, the utility values estimated are somewhat lower than under the assumption of standard EU, suggesting that the latter may be biased upward. Applying the correct model of risky choice is important whether a modified SG or a risk-based DCE is deployed. It is, however, possible to estimate a probability weighting function within a DCE and estimate "unbiased" utility values directly, which is not possible within a modified SG. We conclude by setting out the relative strengths and weaknesses of the 2 approaches in this context. © The Author(s) 2014.
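The difference between the two risky-choice models can be sketched for a single gamble; the power weighting function below is an illustrative assumption, not the specification estimated in the paper:

```python
def expected_utility(probs, utils):
    """Standard EU: probability-weighted sum of utilities."""
    return sum(p * u for p, u in zip(probs, utils))

def rank_dependent_utility(probs, utils, w):
    """RDU: outcomes are ranked best to worst, and each receives the increment
    of a probability weighting function w applied to cumulative probability."""
    order = sorted(range(len(utils)), key=lambda i: utils[i], reverse=True)
    value, cum = 0.0, 0.0
    for i in order:
        value += (w(cum + probs[i]) - w(cum)) * utils[i]
        cum += probs[i]
    return value

# A 50/50 gamble between full health (utility 1) and death (utility 0),
# with an assumed power weighting function w(p) = p**0.7 (illustrative only).
gamble_p, gamble_u = [0.5, 0.5], [1.0, 0.0]
eu = expected_utility(gamble_p, gamble_u)
rdu = rank_dependent_utility(gamble_p, gamble_u, lambda p: p ** 0.7)
```

With this weighting function the gamble is valued above its EU, which is the direction of the distortion that, per the abstract, can bias health state utilities upward if probability weighting is ignored.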

  5. Performance Measurement of Mining Equipments by Utilizing OEE

    Directory of Open Access Journals (Sweden)

    Sermin Elevli

    2010-10-01

Full Text Available Over the past century, open pit mines have steadily increased their production rate by using larger equipment, which requires intensive capital investment. Low commodity prices have forced companies to decrease their unit cost by improving productivity. One way to improve productivity is to utilize equipment as effectively as possible. Therefore, the accurate estimation of equipment effectiveness is very important so that it can be increased. Overall Equipment Effectiveness (OEE) is a well-known measurement method, which combines availability, performance and quality, for the evaluation of equipment effectiveness in the manufacturing industry. However, there isn't any study in the literature about how to use this metric for mining equipment such as shovels, trucks, drilling machines, etc. This paper discusses the application of OEE to measure the effectiveness of mining equipment. It identifies causes of time losses for shovel and truck operations and introduces a procedure to record time losses. A procedure to estimate the OEE of shovels and trucks has also been presented via a numerical example.
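The OEE calculation itself is a product of three factors. A sketch adapted to a mining setting, with hypothetical shift figures and an assumed interpretation of the quality factor:

```python
def availability(planned_time, downtime):
    """Share of planned operating time the machine was actually available."""
    return (planned_time - downtime) / planned_time

def performance(actual_output, nominal_rate, operating_time):
    """Actual production against the nameplate rate over the time it ran."""
    return actual_output / (nominal_rate * operating_time)

def oee(a, p, q):
    """Overall Equipment Effectiveness: the product of its three factors."""
    return a * p * q

# Hypothetical shovel shift: 8 h planned, 1 h of breakdowns/delays, 1400 t moved
# at a 250 t/h nominal rate. "Quality" has no standard mining definition; here
# it is taken, as an assumption, to be the fraction of clean (non-diluted) ore.
a = availability(8.0, 1.0)           # 0.875
p = performance(1400.0, 250.0, 7.0)  # 0.8
q = 0.95
```

Recording shovel and truck time losses consistently (the procedure the paper introduces) is what makes the availability and performance inputs to this product trustworthy.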

  6. Local Stability Conditions for Two Types of Monetary Models with Recursive Utility

    OpenAIRE

    Miyazaki, Kenji; Utsunomiya, Hitoshi

    2009-01-01

This paper explores local stability conditions for money-in-utility-function (MIUF) and transaction-costs (TC) models with recursive utility. A monetary variant of the Brock-Gale condition provides a theoretical justification for the comparative statics analysis. One sufficient condition for local stability is increasing marginal impatience (IMI) in consumption and money. However, this does not rule out the possibility of decreasing marginal impatience (DMI). The local stability with DMI is mor...

  7. Dopamine reward prediction error responses reflect marginal utility.

    Science.gov (United States)

    Stauffer, William R; Lak, Armin; Schultz, Wolfram

    2014-11-03

    Optimal choices require an accurate neuronal representation of economic value. In economics, utility functions are mathematical representations of subjective value that can be constructed from choices under risk. Utility usually exhibits a nonlinear relationship to physical reward value that corresponds to risk attitudes and reflects the increasing or decreasing marginal utility obtained with each additional unit of reward. Accordingly, neuronal reward responses coding utility should robustly reflect this nonlinearity. In two monkeys, we measured utility as a function of physical reward value from meaningful choices under risk (that adhered to first- and second-order stochastic dominance). The resulting nonlinear utility functions predicted the certainty equivalents for new gambles, indicating that the functions' shapes were meaningful. The monkeys were risk seeking (convex utility function) for low reward and risk avoiding (concave utility function) with higher amounts. Critically, the dopamine prediction error responses at the time of reward itself reflected the nonlinear utility functions measured at the time of choices. In particular, the reward response magnitude depended on the first derivative of the utility function and thus reflected the marginal utility. Furthermore, dopamine responses recorded outside of the task reflected the marginal utility of unpredicted reward. Accordingly, these responses were sufficient to train reinforcement learning models to predict the behaviorally defined expected utility of gambles. These data suggest a neuronal manifestation of marginal utility in dopamine neurons and indicate a common neuronal basis for fundamental explanatory constructs in animal learning theory (prediction error) and economic decision theory (marginal utility). Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Fiscal 1995 coal production/utilization technology promotion subsidy/clean coal technology promotion business/regional model survey. Study report on `Environmental load reduction measures: feasibility study of a coal utilization eco/energy supply system` (interim report); 1995 nendo sekitan seisan riyo gijutsu shinkohi hojokin clean coal technology suishin jigyo chiiki model chosa. `Kankyo fuka teigen taisaku: sekitan riyo eko energy kyokyu system no kanosei chosa` chosa hokokusho (chukan hokoku)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    According to the long-term energy supply/demand plan, coal utilization is expected to grow substantially. To further expand future coal utilization, however, it is indispensable to reduce the environmental loads of its total use in combination with other energy sources. In this survey, a regional model survey was conducted as a continuation of the environmental load reduction measures using highly cleaned coal studied in fiscal 1993 and 1994. Concretely, a model system was assumed that combines facilities for the mixed combustion of coal with other energy sources (hulls, bagasse, waste, etc.) with facilities for the effective use of combustion ash, and the potential reduction in the environmental loads of the model system was studied. The technology for the mixed combustion of coal with other energy sources is still at a developmental stage, with no domestic precedents. Mixed combustion of coal with other energy sources is therefore an important field that is highly relevant to future energy supply/demand and environmental issues. 34 refs., 27 figs., 48 tabs.

  9. Aerial measuring system sensor modeling

    International Nuclear Information System (INIS)

    Detwiler, Rebecca

    2002-01-01

    The AMS fixed-wing and rotary-wing systems are critical National Nuclear Security Administration (NNSA) Emergency Response assets. This project is principally focused on the characterization of the sensors utilized with these systems via radiation transport calculations. The Monte Carlo N-Particle code (MCNP), which has been developed at Los Alamos National Laboratory, was used to model the detector response of the AMS fixed-wing and helicopter systems. To validate the calculations, benchmark measurements were made for simple source-detector configurations. The fixed-wing system is an important tool in response to incidents involving the release of mixed fission products (a commercial power reactor release), the threat or actual explosion of a Radiological Dispersal Device, and the loss or theft of a large industrial source (a radiography source). Calculations modeled the spectral response of the sensors contained, a 3-element NaI detector pod and an HPGe detector, over the relevant energy range of 50 keV to 3 MeV. NaI detector responses were simulated for both point and distributed surface sources as a function of gamma energy and flying altitude. For point sources, photo-peak efficiencies were calculated for a zero radial distance and for an offset equal to the altitude. For distributed sources approximating an infinite plane, gross count efficiencies were calculated and normalized to a uniform surface deposition of 1 Ci/m2

  10. Cost-utility model of rasagiline in the treatment of advanced Parkinson's disease in Finland.

    Science.gov (United States)

    Hudry, Joumana; Rinne, Juha O; Keränen, Tapani; Eckert, Laurent; Cochran, John M

    2006-04-01

    The economic burden of Parkinson's disease (PD) is high, especially in patients experiencing motor fluctuations. Rasagiline has demonstrated efficacy against symptoms of PD in early and advanced stages of the disease. To assess the cost-utility of rasagiline and entacapone as adjunctive therapies to levodopa versus standard levodopa care in PD patients with motor fluctuations in Finland. A 2-year probabilistic Markov model with 3 health states: "25% or less off-time/day," "greater than 25% off-time/day," and "dead" was used. Off-time represents time awake with poor or absent motor function. Model inputs included transition probabilities from randomized clinical trials, utilities from a preference measurement study, and costs and resources from a Finnish cost-of-illness study. Effectiveness measures were quality-adjusted life years (QALYs) and the number of months spent with 25% or less off-time/day. Uncertainty around parameters was taken into account by Monte Carlo simulations. Over 2 years from a societal perspective, rasagiline or entacapone as adjunctive therapies to levodopa showed greater effectiveness than levodopa alone at no additional cost. Benefits after 2 years were 0.13 (95% CI 0.08 to 0.17) additional QALYs and 5.2 (3.6 to 6.7) additional months for rasagiline, and 0.12 (0.08 to 0.17) QALYs and 5.1 (3.5 to 6.6) months for entacapone, both as adjuncts to levodopa compared with levodopa alone. The results of this study support the use of rasagiline and entacapone as cost-effective adjunctive alternatives to levodopa alone in PD patients with motor fluctuations in Finland. With a different mode of action, rasagiline is a valuable therapeutic alternative to entacapone at no additional charge to society.
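
The 3-state Markov cohort logic described here can be sketched in a few lines. All transition probabilities, utilities, and cycle settings below are made-up placeholders, not the published Finnish model inputs:

```python
# States: 0 = "<=25% off-time/day", 1 = ">25% off-time/day", 2 = dead.
# Hypothetical per-cycle (6-month) transition matrix and state utilities.
P = [
    [0.70, 0.25, 0.05],
    [0.10, 0.82, 0.08],
    [0.00, 0.00, 1.00],
]
UTILITY = [0.75, 0.55, 0.0]   # quality-of-life weight per state
CYCLE_YEARS = 0.5
N_CYCLES = 4                  # 2-year horizon

def run_cohort(P, start=(1.0, 0.0, 0.0)):
    """Propagate a cohort through the Markov chain, accumulating QALYs
    (time x utility) and months spent in the low off-time state."""
    dist = list(start)
    qalys = 0.0
    months_low_off = 0.0
    for _ in range(N_CYCLES):
        qalys += CYCLE_YEARS * sum(d * u for d, u in zip(dist, UTILITY))
        months_low_off += 6.0 * dist[0]
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return qalys, months_low_off
```

Comparing two treatments then amounts to running this with treatment-specific transition matrices and differencing the QALY totals; the probabilistic version re-samples the inputs per Monte Carlo iteration.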

  11. Animal models of myasthenia gravis: utility and limitations

    Science.gov (United States)

    Mantegazza, Renato; Cordiglieri, Chiara; Consonni, Alessandra; Baggi, Fulvio

    2016-01-01

    Myasthenia gravis (MG) is a chronic autoimmune disease caused by the immune attack of the neuromuscular junction. Antibodies directed against the acetylcholine receptor (AChR) induce receptor degradation, complement cascade activation, and postsynaptic membrane destruction, resulting in functional reduction in AChR availability. Besides anti-AChR antibodies, other autoantibodies are known to play pathogenic roles in MG. The experimental autoimmune MG (EAMG) models have been of great help over the years in understanding the pathophysiological role of specific autoantibodies and T helper lymphocytes and in suggesting new therapies for prevention and modulation of the ongoing disease. EAMG can be induced in mice and rats of susceptible strains that show clinical symptoms mimicking the human disease. EAMG models are helpful for studying both the muscle and the immune compartments to evaluate new treatment perspectives. In this review, we concentrate on recent findings on EAMG models, focusing on their utility and limitations. PMID:27019601

  12. Development of a Deterministic Optimization Model for Design of an Integrated Utility and Hydrogen Supply Network

    International Nuclear Information System (INIS)

    Hwangbo, Soonho; Lee, In-Beum; Han, Jeehoon

    2014-01-01

    Many networks are constructed in a large-scale industrial complex. Each network meets its demands through the production or transportation of the materials needed by the companies in the network. A network either produces materials directly to satisfy a company's demand or purchases them from outside, depending on demand uncertainty, financial factors, and so on. The utility network and the hydrogen network in particular are typical, major networks in a large-scale industrial complex. Many studies have focused mainly on minimizing total cost or optimizing network structure, but little research has attempted to build an integrated model connecting the utility and hydrogen networks. In this study, a deterministic mixed-integer linear programming model is developed to integrate the utility network and the hydrogen network. A Steam Methane Reforming (SMR) process is needed to combine the two networks: hydrogen produced by the SMR process, whose raw material is steam vented from the utility network, enters the hydrogen network to meet its demand. The proposed model can suggest the optimal configuration of the integrated network and calculate the optimal total cost. The capability of the proposed model is tested by applying it to the Yeosu industrial complex in Korea, which contains one of the biggest petrochemical complexes and whose data underlie various papers. The case study shows that the integrated network model yields better optima than previous results obtained by studying the utility network and the hydrogen network individually
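
The core integration idea, vented steam feeding an SMR unit that supplies the hydrogen network, traded off against external hydrogen purchase, can be illustrated with a toy model. With a single binary build decision the MILP collapses to enumeration; all figures below are hypothetical, not Yeosu data:

```python
# Hypothetical data: hydrogen demand, steam vented by the utility network,
# SMR yield and costs, and an external purchase price.
DEMAND_H2 = 120.0        # t/day of hydrogen required by the hydrogen network
STEAM_VENT = 200.0       # t/day of vented steam available to the reformer
SMR_YIELD = 0.8          # t of H2 per t of steam processed
SMR_FIXED = 150.0        # $k/day capital charge if the SMR link is built
SMR_VAR = 1.0            # $k per t of H2 produced via SMR
BUY_PRICE = 2.5          # $k per t of H2 purchased externally

def best_plan():
    """Enumerate the single binary decision (build the SMR link or not)
    and solve the remaining continuous allocation in closed form."""
    plans = []
    for build in (0, 1):
        cap = SMR_YIELD * STEAM_VENT if build else 0.0
        produced = min(DEMAND_H2, cap)
        bought = DEMAND_H2 - produced
        cost = build * SMR_FIXED + SMR_VAR * produced + BUY_PRICE * bought
        plans.append((cost, build, produced, bought))
    return min(plans)   # cheapest (cost, build, produced, bought) tuple
```

A real MILP has many such binary links and continuous flows solved jointly by a solver; the sketch only shows why coupling the two networks can beat optimizing them separately.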

  13. Simulations and measurements of adiabatic annular flows in triangular, tight lattice nuclear fuel bundle model

    Energy Technology Data Exchange (ETDEWEB)

    Saxena, Abhishek, E-mail: asaxena@lke.mavt.ethz.ch [ETH Zurich, Laboratory for Nuclear Energy Systems, Department of Mechanical and Process Engineering, Sonneggstrasse 3, 8092 Zürich (Switzerland); Zboray, Robert [Laboratory for Thermal-hydraulics, Nuclear Energy and Safety Department, Paul Scherrer Institute, 5232 Villigen PSI (Switzerland); Prasser, Horst-Michael [ETH Zurich, Laboratory for Nuclear Energy Systems, Department of Mechanical and Process Engineering, Sonneggstrasse 3, 8092 Zürich (Switzerland); Laboratory for Thermal-hydraulics, Nuclear Energy and Safety Department, Paul Scherrer Institute, 5232 Villigen PSI (Switzerland)

    2016-04-01

    High conversion light water reactors (HCLWR) having triangular, tight-lattice fuel bundles could enable improved fuel utilization compared to present-day LWRs. However, the efficient cooling of a tight-lattice bundle still has to be proven. A major concern is the avoidance of high-quality boiling crisis (film dry-out) through the use of efficient functional spacers. For this reason, we have carried out experiments on adiabatic, air-water annular two-phase flows in a tight-lattice, triangular fuel bundle model using generic spacers. A high-spatial-resolution, non-intrusive measurement technology, cold neutron tomography, has been utilized to resolve the distribution of the liquid film thickness on the virtual fuel pin surfaces. Unsteady CFD simulations have also been performed to replicate and compare with the experiments using the commercial code STAR-CCM+. Large eddies have been resolved on the grid level to capture the dominant unsteady flow features expected to drive the liquid film thickness distribution downstream of a spacer, while the subgrid scales have been modeled using the Wall Adapting Local Eddy (WALE) subgrid model. A Volume of Fluid (VOF) method, which directly tracks the interface and does away with closure relationship models for interfacial exchange terms, has also been employed. The present paper shows a first comparison of the measurements with the simulation results.

  14. Dew point measurement technique utilizing fiber cut reflection

    Science.gov (United States)

    Kostritskii, S. M.; Dikevich, A. A.; Korkishko, Yu. N.; Fedorov, V. A.

    2009-05-01

    A fiber-optic dew point hygrometer based on the change of the reflection coefficient at a fiber cut has been developed and examined. We proposed and verified a model of the condensation detector's operating principle. Experimental frost point measurements have been performed on air samples with different frost points.

  15. Functional outcome measures in a surgical model of hip osteoarthritis in dogs.

    Science.gov (United States)

    Little, Dianne; Johnson, Stephen; Hash, Jonathan; Olson, Steven A; Estes, Bradley T; Moutos, Franklin T; Lascelles, B Duncan X; Guilak, Farshid

    2016-12-01

    The hip is one of the most common sites of osteoarthritis in the body, second only to the knee in prevalence. However, current animal models of hip osteoarthritis have not been assessed using many of the functional outcome measures used in orthopaedics, a characteristic that could increase their utility in the evaluation of therapeutic interventions. The canine hip shares similarities with the human hip, and functional outcome measures are well documented in veterinary medicine, providing a baseline for pre-clinical evaluation of therapeutic strategies for the treatment of hip osteoarthritis. The purpose of this study was to evaluate a surgical model of hip osteoarthritis in a large laboratory animal model and to evaluate functional and end-point outcome measures. Seven dogs were subjected to partial surgical debridement of cartilage from one femoral head. Pre- and postoperative pain and functional scores, gait analysis, radiographs, accelerometry, goniometry and limb circumference were evaluated through a 20-week recovery period, followed by histological evaluation of cartilage and synovium. Animals developed histological and radiographic evidence of osteoarthritis, which was correlated with measurable functional impairment. For example, Mankin scores in operated limbs were positively correlated to radiographic scores but negatively correlated to range of motion, limb circumference and 20-week peak vertical force. This study demonstrates that multiple relevant functional outcome measures can be used successfully in a large laboratory animal model of hip osteoarthritis. These measures could be used to evaluate relative efficacy of therapeutic interventions relevant to human clinical care.

  16. Classifier utility modeling and analysis of hypersonic inlet start/unstart considering training data costs

    Science.gov (United States)

    Chang, Juntao; Hu, Qinghua; Yu, Daren; Bao, Wen

    2011-11-01

    Start/unstart detection is one of the most important issues for hypersonic inlets and is also the foundation of the protection control of scramjets. Inlet start/unstart detection can be cast as a standard pattern classification problem, and training sample costs have to be considered in classifier modeling because CFD numerical simulations and wind tunnel experiments on hypersonic inlets both cost time and money. To address this, a CFD simulation of the inlet is studied as a first step, and the simulation results provide the training data for pattern classification of hypersonic inlet start/unstart. Then classifier modeling technology and maximum classifier utility theory are introduced to analyze the effect of training data cost on classifier utility. In conclusion, it is useful to introduce support vector machine algorithms to acquire the classifier model of hypersonic inlet start/unstart, and the minimum total cost of the hypersonic inlet start/unstart classifier can be obtained by maximum classifier utility theory.
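
The trade-off the abstract analyzes, classifier benefit versus training-data cost, can be sketched as follows. The power-law learning curve and all cost figures are hypothetical assumptions (the actual study uses SVM classifiers trained on CFD data):

```python
import math

# Hypothetical economics: deployed accuracy is worth V, and each
# CFD/wind-tunnel training sample costs C_SAMPLE (arbitrary units).
V = 10000.0            # value attached to classifier accuracy
C_SAMPLE = 2.0         # cost per training sample
A_MAX, B = 0.98, 0.9   # assumed learning-curve parameters

def accuracy(n):
    """Assumed learning curve: accuracy approaches A_MAX as 1/sqrt(n)."""
    return A_MAX - B / math.sqrt(n)

def classifier_utility(n):
    """Net utility = value of accuracy minus total training-data cost."""
    return V * accuracy(n) - C_SAMPLE * n

def optimal_sample_size(n_max=5000):
    """Training-set size maximizing net utility (brute-force search)."""
    return max(range(1, n_max + 1), key=classifier_utility)
```

Beyond the optimum, each extra sample costs more than the marginal accuracy it buys, which is exactly the maximum-classifier-utility argument.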

  17. Resource utilization during software development

    Science.gov (United States)

    Zelkowitz, Marvin V.

    1988-01-01

    This paper discusses resource utilization over the life cycle of software development and examines the role that the current 'waterfall' model plays in the actual software life cycle. Software production in the NASA environment was analyzed to measure these differences. Data from 13 different projects were collected by the Software Engineering Laboratory at NASA Goddard Space Flight Center and analyzed for similarities and differences. The results indicate that the waterfall model is not very realistic in practice, and that as technology introduces further perturbations to this model with concepts like executable specifications, rapid prototyping, and wide-spectrum languages, we need to modify our model of this process.

  18. Modeling of ultrasonic processes utilizing a generic software framework

    Science.gov (United States)

    Bruns, P.; Twiefel, J.; Wallaschek, J.

    2017-06-01

    Modeling of ultrasonic processes is typically characterized by a high degree of complexity. Different domains and size scales must be regarded, so that it is rather difficult to build a single detailed overall model. Developing partial models is a common approach to overcome this difficulty. In this paper a generic but simple software framework is presented which allows arbitrary partial models to be coupled via slave modules with well-defined interfaces and a master module for coordination. Two examples are given to present the developed framework. The first is the parameterization of a load model for ultrasonically-induced cavitation. The piezoelectric oscillator, its mounting, and the process load are described individually by partial models. These partial models are then coupled using the framework. The load model is composed of spring-damper elements which are parameterized by experimental results. In the second example, the ideal mounting position for an oscillator utilized in ultrasonic-assisted machining of stone is determined. Partial models for the ultrasonic oscillator, its mounting, the simplified contact process, and the workpiece's material characteristics are presented. For both applications, input and output variables are defined to meet the requirements of the framework's interface.

  19. Entropy-optimal weight constraint elicitation with additive multi-attribute utility models

    NARCIS (Netherlands)

    Valkenhoef , van Gert; Tervonen, Tommi

    2016-01-01

    We consider the elicitation of incomplete preference information for the additive utility model in terms of linear constraints on the weights. Eliciting incomplete preferences using holistic pair-wise judgments is convenient for the decision maker, but selecting the best pair-wise comparison is

  20. On the Path to SunShot - Utility Regulatory Business Model Reforms for Addressing the Financial Impacts of Distributed Solar on Utilities

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2016-05-01

    Net-energy metering (NEM) with volumetric retail electricity pricing has enabled rapid proliferation of distributed photovoltaics (DPV) in the United States. However, this transformation is raising concerns about the potential for higher electricity rates and cost-shifting to non-solar customers, reduced utility shareholder profitability, reduced utility earnings opportunities, and inefficient resource allocation. Although DPV deployment in most utility territories remains too low to produce significant impacts, these concerns have motivated real and proposed reforms to utility regulatory and business models, with profound implications for future DPV deployment. This report explores the challenges and opportunities associated with such reforms in the context of the U.S. Department of Energy’s SunShot Initiative. As such, the report focuses on a subset of a broader range of reforms underway in the electric utility sector. Drawing on original analysis and existing literature, we analyze the significance of DPV’s financial impacts on utilities and non-solar ratepayers under current NEM rules and rate designs, the projected effects of proposed NEM and rate reforms on DPV deployment, and alternative reforms that could address utility and ratepayer concerns while supporting continued DPV growth. We categorize reforms into one or more of four conceptual strategies. Understanding how specific reforms map onto these general strategies can help decision makers identify and prioritize options for addressing specific DPV concerns that balance stakeholder interests.

  1. Statistical properties of a utility measure of observer performance compared to area under the ROC curve

    Science.gov (United States)

    Abbey, Craig K.; Samuelson, Frank W.; Gallas, Brandon D.; Boone, John M.; Niklason, Loren T.

    2013-03-01

    The receiver operating characteristic (ROC) curve has become a common tool for evaluating diagnostic imaging technologies, and the primary endpoint of such evaluations is the area under the curve (AUC), which integrates sensitivity over the entire false-positive range. An alternative figure of merit for ROC studies is expected utility (EU), which focuses on the relevant region of the ROC curve as defined by disease prevalence and the relative utility of the task. However, if this measure is to be used, it must also have desirable statistical properties to keep the burden of observer performance studies as low as possible. Here, we evaluate effect size and variability for EU and AUC. We use two observer performance studies recently submitted to the FDA to compare the EU and AUC endpoints. The studies were conducted using the multi-reader multi-case methodology, in which all readers score all cases in all modalities. ROC curves from the studies were used to generate both the AUC and EU values for each reader and modality. The EU measure was computed assuming an iso-utility slope of 1.03. We find mean effect sizes, the reader-averaged difference between modalities, to be roughly 2.0 times as large for EU as for AUC. The standard deviation across readers is roughly 1.4 times as large, suggesting better statistical properties for the EU endpoint. In a simple power analysis of paired comparisons across readers, the utility measure required 36% fewer readers on average to achieve 80% statistical power compared to AUC.
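
A minimal sketch of the two endpoints on an empirical ROC curve. The EU form used here, the maximum over operating points of TPF minus the iso-utility slope times FPF, is a simplified stand-in; the paper's exact definition also folds in prevalence and task utilities:

```python
BETA = 1.03   # iso-utility slope, as quoted in the abstract

def auc(roc):
    """Trapezoidal area under (FPF, TPF) points sorted by FPF."""
    pts = sorted(roc)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

def expected_utility(roc, beta=BETA):
    """Simplified EU: best achievable TPF - beta * FPF on the curve,
    i.e. performance at the operating point the task actually rewards."""
    return max(tpf - beta * fpf for fpf, tpf in roc)

# A hypothetical empirical ROC curve as (FPF, TPF) operating points.
roc = [(0.0, 0.0), (0.1, 0.6), (0.3, 0.85), (0.6, 0.95), (1.0, 1.0)]
```

Note how AUC credits the whole curve (including the high-FPF tail) while EU is driven only by the operating region near the iso-utility slope, which is why the two can rank modalities differently.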

  2. Unified Model for Generation Complex Networks with Utility Preferential Attachment

    International Nuclear Information System (INIS)

    Wu Jianjun; Gao Ziyou; Sun Huijun

    2006-01-01

    In this paper, based on utility preferential attachment, we propose a new unified model to generate different network topologies such as scale-free, small-world and random networks. Moreover, a new network structure named the super scale network is found, which exhibits a monopoly characteristic in our simulation experiments. Finally, the characteristics of this new network are given.
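
A generic sketch of growth by utility preferential attachment: each new node links to existing nodes with probability proportional to a utility of their degree. The specific utility functions and parameters of the paper's unified model are not reproduced; the degree-based utility below is an illustrative assumption:

```python
import random

def grow_network(n, m=2, utility=lambda k: k + 1, seed=0):
    """Grow a network of n nodes: each new node attaches to m existing
    nodes, chosen with probability proportional to utility(degree).
    A degree-increasing utility favors hubs (scale-free-like growth);
    a constant utility makes attachment uniform (random-graph-like)."""
    rng = random.Random(seed)
    edges = [(0, 1)]
    degree = {0: 1, 1: 1}
    for new in range(2, n):
        weights = [utility(degree[v]) for v in range(new)]
        targets = set()
        while len(targets) < min(m, new):      # m distinct targets
            targets.add(rng.choices(range(new), weights=weights)[0])
        for t in targets:
            edges.append((new, t))
            degree[t] += 1
        degree[new] = len(targets)
    return edges, degree
```

Swapping the `utility` argument is what lets one generator interpolate between topologies, which is the unification idea the abstract describes.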

  3. Examining the utility of satellite-based wind sheltering estimates for lake hydrodynamic modeling

    Science.gov (United States)

    Van Den Hoek, Jamon; Read, Jordan S.; Winslow, Luke A.; Montesano, Paul; Markfort, Corey D.

    2015-01-01

    Satellite-based measurements of vegetation canopy structure have been in common use for the last decade but have never been used to estimate the canopy's impact on wind sheltering of individual lakes. Wind sheltering is caused by slower winds in the wake of topography and shoreline obstacles (e.g. forest canopy) and influences heat loss and the flux of wind-driven mixing energy into lakes, which control lake temperatures and indirectly structure lake ecosystem processes, including carbon cycling and thermal habitat partitioning. Lakeshore wind sheltering has often been parameterized by lake surface area, but such empirical relationships are based only on forested lakeshores and overlook the contributions of local land cover and terrain to wind sheltering. This study is the first to examine the utility of satellite imagery-derived broad-scale estimates of wind sheltering across a diversity of land covers. Using 30 m spatial resolution ASTER GDEM2 elevation data, the mean sheltering height, hs, being the combination of local topographic rise and canopy height above the lake surface, is calculated within 100 m-wide buffers surrounding 76,000 lakes in the U.S. state of Wisconsin. The uncertainty of GDEM2-derived hs is compared to SRTM-, high-resolution G-LiHT lidar-, and ICESat-derived estimates of hs; the respective influences of land cover type and buffer width on hs are examined; and the effect of including satellite-based hs on the accuracy of a statewide lake hydrodynamic model is discussed. Though GDEM2 hs uncertainty was comparable to or better than other satellite-based measures of hs, its higher spatial resolution and broader spatial coverage allowed more lakes to be included in modeling efforts. GDEM2 was shown to offer superior utility for estimating hs compared to other satellite-derived data, but was limited by its consistent underestimation of hs, inability to detect within-buffer hs variability, and differing accuracy across land cover types. Nonetheless
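
The buffer-averaged sheltering height hs, topographic rise plus canopy height above the lake surface, can be sketched as follows on a hypothetical handful of buffer cells rather than actual GDEM2 data:

```python
def mean_sheltering_height(buffer_cells, lake_level):
    """Mean sheltering height h_s over the shoreline buffer:
    average height of terrain-plus-canopy above the lake surface,
    clamped at zero for cells at or below lake level.

    buffer_cells: list of (ground_elevation_m, canopy_height_m)
    for the cells inside the 100 m shoreline buffer."""
    heights = [max(0.0, ground + canopy - lake_level)
               for ground, canopy in buffer_cells]
    return sum(heights) / len(heights)

# Hypothetical buffer: two forested cells, one open rise, one flat shore.
cells = [(301.0, 20.0), (300.5, 0.0), (299.0, 15.0), (302.0, 0.0)]
```

In practice the same reduction would run over every 30 m DEM cell falling inside each lake's buffer polygon.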

  4. Utility of noninvasive transcutaneous measurement of postoperative hemoglobin in total joint arthroplasty patients.

    Science.gov (United States)

    Stoesz, Michael; Wood, Kristin; Clark, Wesley; Kwon, Young-Min; Freiberg, Andrew A

    2014-11-01

    This study prospectively evaluated the clinical utility of a noninvasive transcutaneous device for postoperative hemoglobin measurement in 100 total hip and knee arthroplasty patients. A protocol to measure hemoglobin noninvasively, prior to venipuncture, successfully avoided venipuncture in 73% of patients. In the remaining 27 patients, there were a total of 48 venipunctures performed during the postoperative hospitalization period due to reasons including transcutaneous hemoglobin measurement less than or equal to 9 g/dL (19), inability to obtain a transcutaneous hemoglobin measurement (8), clinical signs of anemia (3), and noncompliance with the study protocol (18). Such screening protocols may provide a convenient and cost-effective alternative to routine venipuncture for identifying patients at risk for blood transfusion after elective joint arthroplasty. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Using Neural Data to Test A Theory of Investor Behavior: An Application to Realization Utility.

    Science.gov (United States)

    Frydman, Cary; Barberis, Nicholas; Camerer, Colin; Bossaerts, Peter; Rangel, Antonio

    2014-04-01

    We use measures of neural activity provided by functional magnetic resonance imaging (fMRI) to test the "realization utility" theory of investor behavior, which posits that people derive utility directly from the act of realizing gains and losses. Subjects traded stocks in an experimental market while we measured their brain activity. We find that all subjects exhibit a strong disposition effect in their trading, even though it is suboptimal. Consistent with the realization utility explanation for this behavior, we find that activity in the ventromedial prefrontal cortex, an area known to encode the value of options during choices, correlates with the capital gains of potential trades; that the neural measures of realization utility correlate across subjects with their individual tendency to exhibit a disposition effect; and that activity in the ventral striatum, an area known to encode information about changes in the present value of experienced utility, exhibits a positive response when subjects realize capital gains. These results provide support for the realization utility model and, more generally, demonstrate how neural data can be helpful in testing models of investor behavior.

  6. A structured review of health utility measures and elicitation in advanced/metastatic breast cancer.

    Science.gov (United States)

    Hao, Yanni; Wolfram, Verena; Cook, Jennifer

    2016-01-01

    Health utilities are increasingly incorporated in health economic evaluations. Different elicitation methods, direct and indirect, have been established in the past. This study examined the evidence on health utility elicitation previously reported in advanced/metastatic breast cancer and aimed to link these results to requirements of reimbursement bodies. Searches were conducted using a detailed search strategy across several electronic databases (MEDLINE, EMBASE, Cochrane Library, and EconLit databases), online sources (Cost-effectiveness Analysis Registry and the Health Economics Research Center), and web sites of health technology assessment (HTA) bodies. Publications were selected based on the search strategy and the overall study objectives. A total of 768 publications were identified in the searches, and 26 publications, comprising 18 journal articles and eight submissions to HTA bodies, were included in the evidence review. Most journal articles derived utilities from the European Quality of Life Five-Dimensions questionnaire (EQ-5D). Other utility measures, such as the direct methods standard gamble (SG), time trade-off (TTO), and visual analog scale (VAS), were less frequently used. Several studies described mapping algorithms to generate utilities from disease-specific health-related quality of life (HRQOL) instruments such as European Organization for Research and Treatment of Cancer Quality of Life Questionnaire - Core 30 (EORTC QLQ-C30), European Organization for Research and Treatment of Cancer Quality of Life Questionnaire - Breast Cancer 23 (EORTC QLQ-BR23), Functional Assessment of Cancer Therapy - General questionnaire (FACT-G), and Utility-Based Questionnaire-Cancer (UBQ-C); most used EQ-5D as the reference. Sociodemographic factors that affect health utilities, such as age, sex, income, and education, as well as disease progression, choice of utility elicitation method, and country settings, were identified within the journal articles. Most

  7. Utilizing Data Mining for Predictive Modeling of Colorectal Cancer using Electronic Medical Records

    NARCIS (Netherlands)

    Hoogendoorn, M.; Moons, L.G.; Numans, M.E.; Sips, R.J.

    2014-01-01

    Colorectal cancer (CRC) is a relatively common cause of death around the globe. Predictive models for the development of CRC could be highly valuable and could facilitate an early diagnosis and increased survival rates. Currently available predictive models are improving, but do not fully utilize

  8. Utilization of Software Tools for Uncertainty Calculation in Measurement Science Education

    International Nuclear Information System (INIS)

    Zangl, Hubert; Zine-Zine, Mariam; Hoermaier, Klaus

    2015-01-01

    Despite its importance, uncertainty is often neglected by practitioners in the design of systems, even in safety-critical applications. As a result, problems arising from uncertainty may be identified only late in the design process and thus lead to additional costs. Although numerous tools exist to support uncertainty calculation, reasons for their limited usage in early design phases may be low awareness of the tools' existence and insufficient training in their practical application. We present a teaching philosophy that addresses uncertainty from the very beginning of teaching measurement science, in particular with respect to the utilization of software tools. The developed teaching material is based on the GUM method and makes use of uncertainty toolboxes in the simulation environment. Based on examples in measurement science education, we discuss advantages and disadvantages of the proposed teaching philosophy and include feedback from students
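
A minimal example of the GUM-style propagation such teaching material covers: the combined standard uncertainty for uncorrelated inputs, u_c(y) = sqrt(sum_i (c_i * u(x_i))^2) with sensitivity coefficients c_i = dy/dx_i, applied to a power measurement P = V^2/R with illustrative values:

```python
import math

def combined_uncertainty(contributions):
    """GUM combined standard uncertainty for uncorrelated inputs.
    contributions: list of (sensitivity, standard_uncertainty) pairs."""
    return math.sqrt(sum((c * u) ** 2 for c, u in contributions))

# Illustrative measurement: P = V^2 / R
V, u_V = 10.0, 0.05      # volts, standard uncertainty of V
R, u_R = 100.0, 0.5      # ohms, standard uncertainty of R
c_V = 2 * V / R          # dP/dV = 0.2 W/V
c_R = -V**2 / R**2       # dP/dR = -0.01 W/ohm
u_P = combined_uncertainty([(c_V, u_V), (c_R, u_R)])   # watts
```

The uncertainty toolboxes mentioned in the abstract automate exactly this bookkeeping (and the correlated-input generalization), which is why early exposure to them pays off.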

  9. Utilizing Gaze Behavior for Inferring Task Transitions Using Abstract Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Daniel Fernando Tello Gamarra

    2016-12-01

    Full Text Available We demonstrate an improved method for utilizing observed gaze behavior and show that it is useful in inferring hand movement intent during goal-directed tasks. The task dynamics and the relationship between hand and gaze behavior are learned using an Abstract Hidden Markov Model (AHMM). We show that the predicted hand movement transitions occur consistently earlier in AHMM models with gaze than in those models that do not include gaze observations.

  10. Model Predictive Control for Integrating Traffic Control Measures

    NARCIS (Netherlands)

    Hegyi, A.

    2004-01-01

    Dynamic traffic control measures, such as ramp metering and dynamic speed limits, can be used to better utilize the available road capacity. Due to the increasing traffic volumes and the increasing number of traffic jams the interaction between the control measures has increased such that local

  11. Utility Computing: Reality and Beyond

    Science.gov (United States)

    Ivanov, Ivan I.

    Utility Computing is not a new concept. It involves organizing and providing a wide range of computing-related services as public utilities. The concept of computing as a public utility, much like water, gas, electricity and telecommunications, was announced in 1955. Utility Computing remained a concept for nearly 50 years. Now some models and forms of Utility Computing are emerging, such as storage and server virtualization, grid computing, and automated provisioning. Recent trends in Utility Computing as a complex technology involve business procedures that could profoundly transform the nature of companies' IT services, organizational IT strategies and technology infrastructure, and business models. In the ultimate Utility Computing models, organizations will be able to acquire as many IT services as they need, whenever and wherever they need them. Based on networked businesses and new secure online applications, Utility Computing would facilitate "agility-integration" of IT resources and services within and between virtual companies. With the application of Utility Computing there could be concealment of the complexity of IT, reduction of operational expenses, and conversion of IT costs to variable 'on-demand' services. How far should technology, business and society go to adopt Utility Computing forms, modes and models?

  12. Animal Models Utilized in HTLV-1 Research

    Directory of Open Access Journals (Sweden)

    Amanda R. Panfil

    2013-01-01

Full Text Available Since the isolation and discovery of human T-cell leukemia virus type 1 (HTLV-1) over 30 years ago, researchers have utilized animal models to study HTLV-1 transmission, viral persistence, virus-elicited immune responses, and HTLV-1-associated disease development (ATL, HAM/TSP). Non-human primates, rabbits, rats, and mice have all been used to help understand HTLV-1 biology and disease progression. Non-human primates offer a model system that is phylogenetically similar to humans for examining viral persistence. Viral transmission, persistence, and immune responses have been widely studied using New Zealand White rabbits. The advent of molecular clones of HTLV-1 has offered the opportunity to assess the importance of various viral genes in rabbits, non-human primates, and mice. Additionally, over-expression of viral genes using transgenic mice has helped uncover the importance of Tax and Hbz in the induction of lymphoma and other lymphocyte-mediated diseases. HTLV-1 inoculation of certain strains of rats results in histopathological features and clinical symptoms similar to those of humans with HAM/TSP. Transplantation of certain types of ATL cell lines in immunocompromised mice results in lymphoma. Recently, “humanized” mice have been used to model ATL development for the first time. Not all HTLV-1 animal models develop disease, and those that do vary in consistency depending on the type of monkey, strain of rat, or even type of ATL cell line used. However, the progress made using animal models cannot be overstated, as it has led to insights into the mechanisms regulating viral replication, viral persistence, disease development, and, most importantly, model systems to test disease treatments.

  13. Mapping to Estimate Health-State Utility from Non-Preference-Based Outcome Measures: An ISPOR Good Practices for Outcomes Research Task Force Report.

    Science.gov (United States)

    Wailoo, Allan J; Hernandez-Alava, Monica; Manca, Andrea; Mejia, Aurelio; Ray, Joshua; Crawford, Bruce; Botteman, Marc; Busschbach, Jan

    2017-01-01

Economic evaluation conducted in terms of cost per quality-adjusted life-year (QALY) provides information that decision makers find useful in many parts of the world. Ideally, clinical studies designed to assess the effectiveness of health technologies would include outcome measures that are directly linked to health utility to calculate QALYs. Often this does not happen, and even when it does, clinical studies may be insufficient for a cost-utility assessment. Mapping can solve this problem. It uses an additional data set to estimate the relationship between outcomes measured in clinical studies and health utility. This bridges the evidence gap between the available evidence on the effect of a health technology in one metric and the requirement for decision makers to express it in a different one (QALYs). In 2014, ISPOR established a Good Practices for Outcomes Research Task Force for mapping studies. This task force report provides recommendations to analysts undertaking mapping studies, those who use the results in cost-utility analysis, and those who need to critically review such studies. The recommendations cover all areas of mapping practice: the selection of data sets for the mapping estimation, model selection and performance assessment, reporting standards, and the use of results, including the appropriate reflection of variability and uncertainty. This report is unique because it takes an international perspective, is comprehensive in its coverage of the aspects of mapping practice, and reflects the current state of the art. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
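The mapping approach the report reviews can be illustrated with a deliberately simple sketch: fit a function from a clinical outcome to health utility in a data set containing both, then apply it to a trial that measured only the clinical outcome. All variable names and numbers below are hypothetical, and real mapping studies typically use more flexible models than ordinary least squares to handle the bounded, skewed distribution of utilities.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "mapping" estimation data set: a disease-specific clinical
# score (0-100, higher = worse) and EQ-5D-style utilities observed in the
# same 200 patients. These numbers are illustrative, not from the report.
score = rng.uniform(0, 100, size=200)
utility = np.clip(0.95 - 0.006 * score + rng.normal(0, 0.05, size=200),
                  -0.2, 1.0)

# Step 1: estimate the mapping function (here a simple OLS line).
X = np.column_stack([np.ones_like(score), score])
coef, *_ = np.linalg.lstsq(X, utility, rcond=None)

# Step 2: apply the fitted mapping to a trial that measured only the
# clinical score, yielding the utilities needed for QALY calculation.
trial_scores = np.array([20.0, 50.0, 80.0])
predicted = coef[0] + coef[1] * trial_scores
print(np.round(predicted, 3))
```

The task force's recommendations on model performance and uncertainty apply at step 1: the predictions above inherit the estimation error of `coef`, which a cost-utility analysis should propagate rather than ignore.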

  14. Do generic utility measures capture what is important to the quality of life of people with multiple sclerosis?

    OpenAIRE

    Kuspinar, Ayse; Mayo, Nancy E

    2013-01-01

    Purpose The three most widely used utility measures are the Health Utilities Index Mark 2 and 3 (HUI2 and HUI3), the EuroQol-5D (EQ-5D) and the Short-Form-6D (SF-6D). In line with guidelines for economic evaluation from agencies such as the National Institute for Health and Clinical Excellence (NICE) and the Canadian Agency for Drugs and Technologies in Health (CADTH), these measures are currently being used to evaluate the cost-effectiveness of different interventions in MS. However, the cha...

  15. Experimental and numerical investigation of the flow measurement method utilized in the steam generator of HTR-PM

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Shiming; Ren, Cheng; Sun, Yangfei [Institute of Nuclear and New Energy Technology of Tsinghua University, Collaborative Innovation Center of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Beijing 100084 (China); Tu, Jiyuan [Institute of Nuclear and New Energy Technology of Tsinghua University, Collaborative Innovation Center of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Beijing 100084 (China); School of Aerospace, Mechanical & Manufacturing Engineering, RMIT University, Melbourne, VIC 3083 (Australia); Yang, Xingtuan, E-mail: yangxt107@sina.com [Institute of Nuclear and New Energy Technology of Tsinghua University, Collaborative Innovation Center of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Beijing 100084 (China)

    2016-08-15

Highlights: • The flow confluence process in the steam generator is very important for HTR-PM. • The complicated flow in the unique pipeline configuration is studied by both experimental and numerical methods. • The pressure uniformity at the bottom of the model was tested to evaluate the accuracy of the experimental results. • Flow separation and secondary flow are described to explain the nonuniformity of the flow distribution. - Abstract: The helium flow measurement method is very important for the design of HTR-PM. Water experiments and numerical simulations with a 1/5-scale model are conducted to investigate the flow measurement method utilized in the steam generator of HTR-PM. Pressure information at specific locations on the 90° elbows, with a diameter of 46.75 mm and a radius ratio of 1.5, is measured to evaluate the flow rate in the riser-pipes. Pressure uniformity at the bottom of the experimental apparatus is tested to evaluate the influence of equipment error on the final experimental results. Numerical results obtained using the realizable k–ε model are compared with the experimental data. The results reveal that flow oscillation does not occur in the confluence system. For every single riser-pipe, the flow is stable despite the nonuniformity of the flow distribution. The average flow rates of the two pipe series show good repeatability regardless of increases and decreases in the average velocity. In the header box, the flows out of the riser-pipes meet and distort the pressure distribution, and the nonuniformity of the flow distribution becomes more significant as the Reynolds number increases.

  16. Animal models of GM2 gangliosidosis: utility and limitations

    Science.gov (United States)

    Lawson, Cheryl A; Martin, Douglas R

    2016-01-01

    GM2 gangliosidosis, a subset of lysosomal storage disorders, is caused by a deficiency of the glycohydrolase, β-N-acetylhexosaminidase, and includes the closely related Tay–Sachs and Sandhoff diseases. The enzyme deficiency prevents the normal, stepwise degradation of ganglioside, which accumulates unchecked within the cellular lysosome, particularly in neurons. As a result, individuals with GM2 gangliosidosis experience progressive neurological diseases including motor deficits, progressive weakness and hypotonia, decreased responsiveness, vision deterioration, and seizures. Mice and cats are well-established animal models for Sandhoff disease, whereas Jacob sheep are the only known laboratory animal model of Tay–Sachs disease to exhibit clinical symptoms. Since the human diseases are relatively rare, animal models are indispensable tools for further study of pathogenesis and for development of potential treatments. Though no effective treatments for gangliosidoses currently exist, animal models have been used to test promising experimental therapies. Herein, the utility and limitations of gangliosidosis animal models and how they have contributed to the development of potential new treatments are described. PMID:27499644

  17. Modeling the development and utilization of bioenergy and exploring the environmental economic benefits

    International Nuclear Information System (INIS)

    Song, Junnian; Yang, Wei; Higano, Yoshiro; Wang, Xian’en

    2015-01-01

Highlights: • A complete bioenergy flow is devised to industrialize bioenergy utilization. • An input–output optimization simulation model is developed. • Energy supply and demand and bioenergy industries’ development are optimized. • Carbon tax and subsidies are endogenously derived by the model. • Environmental economic benefits of bioenergy utilization are explored dynamically. - Abstract: This paper outlines a complete bioenergy flow incorporating bioresource procurement, feedstock supply, conversion technologies and energy consumption to industrialize the development and utilization of bioenergy. An input–output optimization simulation model is developed to introduce bioenergy industries into the regional socioeconomy and the energy production and consumption system and to dynamically explore the economic, energy and environmental benefits. A 16-term simulation from 2010 to 2025 is performed for scenarios preset on the basis of bioenergy industries, a carbon tax-subsidization policy and distinct levels of greenhouse gas emission constraints. An empirical study is conducted to validate and apply the model. In the optimal scenario, both industrial development and energy supply and demand are optimized, contributing to an 8.41% average gross regional product growth rate and a 39.9% reduction in accumulative greenhouse gas emissions compared with the base scenario. By 2025 the consumption ratio of bioenergy in total primary energy could be increased from 0.5% to 8.2%. The energy self-sufficiency rate could be increased from 57.7% to 77.9%. A dynamic carbon tax rate and the extent to which bioenergy industrial development could be promoted are also elaborated. Regional economic development and greenhouse gas mitigation can potentially be promoted simultaneously by bioenergy utilization and a proper greenhouse gas emission constraint. The methodology presented is capable of introducing new industries or policies related to energy planning and detecting the best tradeoffs of

  18. Optimal urban water conservation strategies considering embedded energy: coupling end-use and utility water-energy models.

    Science.gov (United States)

    Escriva-Bou, A.; Lund, J. R.; Pulido-Velazquez, M.; Spang, E. S.; Loge, F. J.

    2014-12-01

Although most freshwater resources are used in agriculture, a greater amount of energy is consumed per unit of water supplied to urban areas. Therefore, efforts to reduce the carbon footprint of water in cities, including the energy embedded within household uses, can be an order of magnitude larger than for other water uses. This characteristic of urban water systems creates a promising opportunity to reduce global greenhouse gas emissions, particularly given rapidly growing urbanization worldwide. Based on a previous Water-Energy-CO2 emissions model for household water end uses, this research introduces a probabilistic two-stage optimization model considering technical and behavioral decision variables to obtain the most economical strategies to minimize household water and water-related energy bills given both water and energy price shocks. Results show that adoption rates of less energy-intensive appliances increase significantly, resulting in an overall 20% growth in indoor water conservation if household dwellers include the energy cost of their water use. To analyze the consequences at utility scale, we develop an hourly water-energy model based on data from East Bay Municipal Utility District (EBMUD) in California, including residential consumption, finding that water end uses account for roughly 90% of total water-related energy, but the 10% that is managed by the utility is worth over $12 million annually. With the combined end-use and utility model, several demand-side management conservation strategies were simulated for the city of San Ramon. In this smaller water district, roughly 5% of total EBMUD water use, we found that the optimal household strategies can reduce total GHG emissions by 4% and the utility's energy cost by over $70,000/yr.
Especially interesting from the utility perspective could be the "smoothing" of water use peaks by avoiding daytime irrigation, which among other benefits might reduce utility energy costs by 0.5% according to our

  19. The Joint Venture Model of Knowledge Utilization: a guide for change in nursing.

    Science.gov (United States)

    Edgar, Linda; Herbert, Rosemary; Lambert, Sylvie; MacDonald, Jo-Ann; Dubois, Sylvie; Latimer, Margot

    2006-05-01

Knowledge utilization (KU) is an essential component of today's nursing practice and healthcare system. Despite advances in knowledge generation, the gap in knowledge transfer from research to practice continues. KU models have moved beyond factors affecting the individual nurse to a broader perspective that includes the practice environment and the socio-political context. This paper proposes one such theoretical model, the Joint Venture Model of Knowledge Utilization (JVMKU). Key components of the JVMKU that emerged from an extensive multidisciplinary review of the literature include leadership, emotional intelligence, person, message, empowered workplace and the socio-political environment. The model has a broad and practical application and is not specific to one type of KU or one population. This paper provides a description of the JVMKU, its development and suggested uses at both local and organizational levels. Nurses in both leadership and point-of-care positions will recognize the concepts identified and will be able to apply this model for KU in their own workplace for assessment of areas requiring strengthening and support.

  20. Biomimetic peptide-based models of [FeFe]-hydrogenases: utilization of phosphine-containing peptides

    Energy Technology Data Exchange (ETDEWEB)

    Roy, Souvik [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA; Nguyen, Thuy-Ai D. [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA; Gan, Lu [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA; Jones, Anne K. [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA

    2015-01-01

Peptide-based models for [FeFe]-hydrogenase were synthesized utilizing unnatural phosphine amino acids, and their electrocatalytic properties were investigated in mixed aqueous-organic solvents.

  1. Neutronics model of the bulk shielding reactor (BSR): validation by comparison of calculations with the experimental measurements

    International Nuclear Information System (INIS)

    Johnson, J.O.; Miller, L.F.; Kam, F.B.K.

    1981-05-01

A neutronics model for the Oak Ridge National Laboratory Bulk Shielding Reactor (ORNL-BSR) was developed and verified by experimental measurements. A cross-section library was generated from the 218-group Master Library using the AMPX Block Code system. A series of one-, two-, and three-dimensional neutronics calculations were performed utilizing both transport and diffusion theory. A spectral comparison was made with the ⁵⁸Ni(n,p) reaction. The comparison between the calculational model and other experimental measurements showed agreement within 10%, and the model was therefore determined to be adequate for calculating the neutron fluence for future irradiation experiments in the ORNL-BSR

  2. Neutronics model of the bulk shielding reactor (BSR): validation by comparison of calculations with the experimental measurements

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, J.O.; Miller, L.F.; Kam, F.B.K.

    1981-05-01

A neutronics model for the Oak Ridge National Laboratory Bulk Shielding Reactor (ORNL-BSR) was developed and verified by experimental measurements. A cross-section library was generated from the 218-group Master Library using the AMPX Block Code system. A series of one-, two-, and three-dimensional neutronics calculations were performed utilizing both transport and diffusion theory. A spectral comparison was made with the ⁵⁸Ni(n,p) reaction. The comparison between the calculational model and other experimental measurements showed agreement within 10%, and the model was therefore determined to be adequate for calculating the neutron fluence for future irradiation experiments in the ORNL-BSR.

  3. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence From Word Segmentation.

    Science.gov (United States)

    Phillips, Lawrence; Pearl, Lisa

    2015-11-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.

  4. Measuring public understanding on Tenaga Nasional Berhad (TNB) electricity bills using ordered probit model

    Science.gov (United States)

    Zainudin, WNRA; Ramli, NA

    2017-09-01

In 2016, Tenaga Nasional Berhad (TNB) introduced an upgrade to its Billing and Customer Relationship Management (BCRM) system as part of its long-term initiative to provide its customers with greater access to billing information. This includes information on actual and suggested power consumption and further detail on billing charges. This information helps TNB customers gain a better understanding of their electricity usage patterns and the items in their billing charges. To date, few studies have measured public understanding of current electricity bills or whether this understanding could contribute to positive impacts. The purpose of this paper is to measure public understanding of current TNB electricity bills and whether satisfaction with energy-related services and electricity utility services, together with awareness of the amount of electricity consumed by various appliances and equipment at home, could improve this understanding of the electricity bills. Both qualitative and quantitative research methods are used to achieve these objectives. A total of 160 respondents from local universities in Malaysia participated in a survey used to collect relevant information. Using an ordered probit model, this paper finds that respondents who are highly satisfied with the electricity utility services tend to understand their electricity bills better. The electric utility services include management of electricity bills and the information obtained from utility or non-utility suppliers to help consumers manage their energy usage or bills. Based on the results, this paper concludes that the probability of understanding the components of the monthly electricity bill increases as respondents become more satisfied with their electric utility services and better able to value the energy-related services.
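The ordered probit approach used above treats "understanding" as an ordered response driven by a latent continuous variable. The sketch below, which is not the authors' model or data, simulates such a response from a hypothetical satisfaction covariate and recovers the coefficient by maximum likelihood; all variable names and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulate survey-like data: latent understanding driven by satisfaction,
# observed only as an ordered category (0=low, 1=medium, 2=high).
rng = np.random.default_rng(0)
n = 160  # matches the survey size reported in the abstract
satisfaction = rng.normal(size=n)
latent = 0.8 * satisfaction + rng.normal(size=n)
y = np.digitize(latent, [-0.5, 0.7])  # true cutpoints -0.5 and 0.7

def neg_loglik(params):
    beta, c1, gap = params
    c2 = c1 + np.exp(gap)          # enforce ordered cutpoints c1 < c2
    xb = beta * satisfaction
    cdf = lambda c: norm.cdf(c - xb)
    # Category probabilities under the ordered probit model.
    p = np.select([y == 0, y == 1, y == 2],
                  [cdf(c1), cdf(c2) - cdf(c1), 1 - cdf(c2)])
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

res = minimize(neg_loglik, x0=[0.0, -1.0, 0.0], method="BFGS")
print(f"estimated effect of satisfaction: {res.x[0]:.2f}")
```

A positive estimated coefficient corresponds to the paper's conclusion: higher satisfaction raises the probability of landing in a higher "understanding" category.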

  5. Estimating the accuracy of optic nerve sheath diameter measurement using a pocket-sized, handheld ultrasound on a simulation model.

    Science.gov (United States)

    Johnson, Garrett G R J; Zeiler, Frederick A; Unger, Bertram; Hansen, Gregory; Karakitsos, Dimitrios; Gillman, Lawrence M

    2016-12-01

    Ultrasound measurement of optic nerve sheath diameter (ONSD) appears to be a promising, rapid, non-invasive bedside tool for identification of elevated intra-cranial pressure. With improvements in ultrasound technology, machines are becoming smaller; however, it is unclear if these ultra-portable handheld units have the resolution to make these measurements precisely. In this study, we estimate the accuracy of ONSD measurement in a pocket-sized ultrasound unit. Utilizing a locally developed, previously validated model of the eye, ONSD was measured by two expert observers, three times with two machines and on five models with different optic nerve sheath sizes. A pocket ultrasound (Vscan, GE Healthcare) and a standard portable ultrasound (M-Turbo, SonoSite) were used to measure the models. Data was analyzed by Bland-Altman plot and intra-class correlation coefficient (ICC). The ICC between raters for the SonoSite was 0.878, and for the Vscan was 0.826. The between-machine agreement ICC was 0.752. Bland-Altman agreement analysis between the two ultrasound methods showed an even spread across the range of sheath sizes, and that the Vscan tended to read on average 0.33 mm higher than the SonoSite for each measurement, with a standard deviation of 0.65 mm. Accurate ONSD measurement may be possible utilizing pocket-sized, handheld ultrasound devices despite their small screen size, lower resolution, and lower probe frequencies. Further study in human subjects is warranted for all newer handheld ultrasound models as they become available on the market.
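The Bland-Altman agreement analysis used in the study reduces to a mean between-machine difference (bias) and 95% limits of agreement around it. A minimal sketch with made-up paired readings, not the study's data:

```python
import numpy as np

# Hypothetical paired ONSD readings (mm) from two ultrasound units on the
# same five eye models; values are illustrative only.
vscan = np.array([4.9, 5.6, 6.3, 7.0, 7.7])
sonosite = np.array([4.6, 5.2, 6.0, 6.7, 7.3])

diff = vscan - sonosite
bias = diff.mean()                 # mean between-machine difference
loa = 1.96 * diff.std(ddof=1)      # half-width of 95% limits of agreement
print(f"bias = {bias:.2f} mm, limits of agreement = +/-{loa:.2f} mm")
```

In the study's terms, a bias of 0.33 mm with a standard deviation of 0.65 mm means one machine reads systematically higher, and about 95% of paired differences fall within bias ± 1.96 standard deviations.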

  6. Modeling of reactive chemical transport of leachates from a utility fly-ash disposal site

    International Nuclear Information System (INIS)

    Apps, J.A.; Zhu, M.; Kitanidis, P.K.; Freyberg, D.L.; Ronan, A.D.; Itakagi, S.

    1991-04-01

    Fly ash from fossil-fuel power plants is commonly slurried and pumped to disposal sites. The utility industry is interested in finding out whether any hazardous constituents might leach from the accumulated fly ash and contaminate ground and surface waters. To evaluate the significance of this problem, a representative site was selected for modeling. FASTCHEM, a computer code developed for the Electric Power Research Institute, was utilized for the simulation of the transport and fate of the fly-ash leachate. The chemical evolution of the leachate was modeled as it migrated along streamtubes defined by the flow model. The modeling predicts that most of the leachate seeps through the dam confining the ash pond. With the exception of ferrous, manganous, sulfate and small amounts of nickel ions, all other dissolved constituents are predicted to discharge at environmentally acceptable concentrations

  7. Tariff rebalancing and price structure in privatised utilities

    International Nuclear Information System (INIS)

    Weyman-Jones, T.; Burns, P.

    1996-01-01

The document contains the end-of-award report on research into rebalancing and price structure in privatised utilities, funded by the Economic and Social Research Council (ESRC). Ramsey pricing ideas in United Kingdom utilities were modelled under different forms of regulation, and cost/price relationships were measured. Alternative forms of regulation that permit Ramsey pricing were also evaluated. Option price theory is shown to be central to an understanding of incentive mechanisms and their relationship to regulatory options. (UK)

  8. Viscosity estimation utilizing flow velocity field measurements in a rotating magnetized plasma

    International Nuclear Information System (INIS)

    Yoshimura, Shinji; Tanaka, Masayoshi Y.

    2008-01-01

The importance of viscosity in determining plasma flow structures has been widely recognized. In laboratory plasmas, however, viscosity measurements have seldom been performed. In this paper we present and discuss a method for estimating effective plasma kinematic viscosity from flow velocity field measurements. Imposing steady and axisymmetric conditions, we derive the expression for radial flow velocity from the azimuthal component of the ion fluid equation. The expression contains the kinematic viscosity, the vorticity of azimuthal rotation and its derivative, the collision frequency, the azimuthal flow velocity and the ion cyclotron frequency; all quantities except the viscosity are therefore known provided the flow field can be measured. We applied this method to a rotating magnetized argon plasma produced by the Hyper-I device. The flow velocity field measurements were carried out using a directional Langmuir probe installed in a tilting motor drive unit. The inward radial ion flow, which is not driven in collisionless inviscid plasmas, was clearly observed. As a result, we found an anomalous viscosity two orders of magnitude larger than the classical value. (author)

  9. Do generic utility measures capture what is important to the quality of life of people with multiple sclerosis?

    Science.gov (United States)

    Kuspinar, Ayse; Mayo, Nancy E

    2013-04-25

The three most widely used utility measures are the Health Utilities Index Mark 2 and 3 (HUI2 and HUI3), the EuroQol-5D (EQ-5D) and the Short-Form-6D (SF-6D). In line with guidelines for economic evaluation from agencies such as the National Institute for Health and Clinical Excellence (NICE) and the Canadian Agency for Drugs and Technologies in Health (CADTH), these measures are currently being used to evaluate the cost-effectiveness of different interventions in MS. However, the challenge of using such measures in people with a specific health condition, such as MS, is that they may not capture all of the domains that are affected by the condition. If important domains are missing from the generic measures, the value derived will be higher than the real impact, creating invalid comparisons across interventions and populations. Therefore, the objective of this study is to estimate the extent to which generic utility measures capture important domains that are affected by MS. The available study population consisted of men and women who had been registered after 1994 at three participating MS clinics in Greater Montreal, Quebec, Canada. Subjects were first interviewed on an individualized measure of quality of life (QOL) called the Patient Generated Index (PGI). The domains identified with the PGI were then classified and grouped together using the World Health Organization's International Classification of Functioning, Disability and Health (ICF), and mapped onto the HUI2, HUI3, EQ-5D and SF-6D. A total of 185 persons with MS were interviewed on the PGI. The sample was relatively young (mean age 43) and predominantly female. Both men and women had mild disability, with a median Expanded Disability Status Scale (EDSS) score of 2.
The top 10 domains that patients identified to be the most affected by their MS were, work (62%), fatigue (48%), sports (39%), social life (28%), relationships (23%), walking/mobility (22%), cognition (21%), balance (14%), housework (12

  10. Computational fluid dynamic simulations of coal-fired utility boilers: An engineering tool

    Energy Technology Data Exchange (ETDEWEB)

    Efim Korytnyi; Roman Saveliev; Miron Perelman; Boris Chudnovsky; Ezra Bar-Ziv [Ben-Gurion University of the Negev, Beer-Sheva (Israel)

    2009-01-15

The objective of this study was to develop an engineering tool by which the combustion behavior of coals in coal-fired utility boilers can be predicted. We present in this paper that computational fluid dynamic (CFD) codes can successfully predict the performance of, and emissions from, full-scale pulverized-coal utility boilers of various types, provided that the model parameters required for the simulation are properly chosen and validated. For that purpose we developed a methodology combining measurements in a 50 kW pilot-scale test facility with CFD simulations using the same CFD code configured for both the test and full-scale furnaces. In this method the model parameters of the coal processes are extracted and validated. This paper presents the importance of validating the model parameters used in CFD codes. Our results show a very good fit of the CFD simulations with various parameters measured in the test furnace and several types of utility boilers. The results of this study demonstrate the viability of the present methodology as an effective tool for optimizing coal burning in full-scale utility boilers. 41 refs., 9 figs., 3 tabs.

  11. Measurement of Online Student Engagement: Utilization of Continuous Online Student Behavior Indicators as Items in a Partial Credit Rasch Model

    Science.gov (United States)

    Anderson, Elizabeth

    2017-01-01

    Student engagement has been shown to be essential to the development of research-based best practices for K-12 education. It has been defined and measured in numerous ways. The purpose of this research study was to develop a measure of online student engagement for grades 3 through 8 using a partial credit Rasch model and validate the measure…
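In a partial credit Rasch model of the kind named in the abstract, each score category's probability is built from cumulative "step" logits of the form (ability − step difficulty). The sketch below computes those category probabilities for a single item; the ability and step-difficulty values are illustrative, not from the study.

```python
import math

def pcm_probs(theta, deltas):
    """Partial credit model: probabilities of score categories 0..m for
    ability theta and step difficulties deltas = [delta_1, ..., delta_m]."""
    # Cumulative sums of (theta - delta_j); category 0 has the empty sum 0.
    logits = [0.0]
    for d in deltas:
        logits.append(logits[-1] + (theta - d))
    denom = sum(math.exp(l) for l in logits)
    return [math.exp(l) / denom for l in logits]

# Illustrative item with three steps (four score categories 0..3).
probs = pcm_probs(theta=0.5, deltas=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs])
```

Raising `theta` shifts probability mass toward the higher categories, which is how continuous behavior indicators scored in ordered categories can locate a student on an engagement scale.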

  12. A Study of How the Watts-Strogatz Model Relates to an Economic System’s Utility

    Directory of Open Access Journals (Sweden)

    Lunhan Luo

    2014-01-01

Full Text Available The Watts-Strogatz model is a principal mechanism for constructing small-world networks and is widely used in simulations of systems with small-world features, including economic systems. Formally, the model has a parameter set of three variables: group size, number of neighbors, and rewiring probability. This paper discusses how the parameter set relates to the performance of an economic system, measured as its utility growth rate. In conclusion, it is found that, regardless of group size and rewiring probability, 2 to 18 neighbors help the economic system reach the highest utility growth rate. Furthermore, given the range of neighbors and the group size of a Watts-Strogatz-based system, the range of its edge count can be calculated. By examining whether the edge count of an actual equal-size economic system falls within that range, we can determine whether the system structure has redundant edges or can achieve the highest utility growth rate.
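The edge-count calculation mentioned above follows from the construction itself: each of the n nodes in a Watts-Strogatz graph starts with k nearest neighbors (k even), and rewiring moves edges without adding or removing them, so the graph always has n·k/2 edges. A small sketch (the group size of 100 is illustrative, not from the paper):

```python
def ws_edge_count(n, k):
    # Each of n nodes begins with k nearest neighbors (k even, k < n);
    # rewiring preserves the edge count, which is therefore n*k/2.
    assert k % 2 == 0 and 0 < k < n
    return n * k // 2

# Edge-count range for 2 to 18 neighbors (the range the paper reports
# as yielding the highest utility growth rate), at group size 100:
lo, hi = ws_edge_count(100, 2), ws_edge_count(100, 18)
print(lo, hi)  # 100 900
```

Comparing a real system's edge count against this interval is the containment check the paper describes: below it, the highest growth rate is out of reach; above it, the structure carries redundant edges.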

  13. Dispersion modeling of accidental releases of toxic gases - utility for the fire brigades.

    Science.gov (United States)

    Stenzel, S.; Baumann-Stanzer, K.

    2009-09-01

    Several air dispersion models are available for the prediction and simulation of the hazard areas associated with accidental releases of toxic gases. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for effective presentation of results. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. The research project RETOMOD (reference scenario calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Viennese fire brigade, OMV Refining & Marketing GmbH and Synex Ries & Greßlehner GmbH. RETOMOD was funded by the KIRAS safety research program of the Austrian Ministry of Transport, Innovation and Technology (www.kiras.at). The main tasks of this project were 1. a sensitivity study and optimization of the meteorological input for modeling the hazard areas (human exposure) during accidental toxic releases, and 2. a comparison of several model packages (based on reference scenarios) in order to estimate their utility for the fire brigades. For the purpose of our study the following models were tested and compared: ALOHA (Areal Locations of Hazardous Atmospheres, EPA), MEMPLEX (Keudel av-Technik GmbH), Trace (Safer System), Breeze (Trinity Consulting), SAM (Engineering office Lohmeyer). A set of reference scenarios for chlorine, ammonia, butane and petrol was processed with the models above in order to predict and estimate human exposure during such events. Furthermore, the application in case of toxic releases of the observation-based analysis and forecasting system INCA, developed at the Central Institute for Meteorology and Geodynamics (ZAMG), was…

  14. Measurement and modeling of magnetic hysteresis under field and stress application in iron–gallium alloys

    International Nuclear Information System (INIS)

    Evans, Phillip G.; Dapino, Marcelo J.

    2013-01-01

    Measurements are performed to characterize the hysteresis in magnetomechanical coupling of iron–gallium (Galfenol) alloys. Magnetization and strain of production and research grade Galfenol are measured under applied stress at constant field, applied field at constant stress, and alternately applied field and stress. A high degree of reversibility in the magnetomechanical coupling is demonstrated by comparing a series of applied field at constant stress measurements with a single applied stress at constant field measurement. Accommodation is not evident and magnetic hysteresis for applied field and stress is shown to be coupled. A thermodynamic model is formulated for 3-D magnetization and strain. It employs a stress, field, and direction dependent hysteron that has an instantaneous loss mechanism, similar to Coulomb-friction or Preisach-type models. Stochastic homogenization is utilized to account for the smoothing effect that material inhomogeneities have on bulk processes. - Highlights: ► We conduct coupled experiments and develop nonlinear thermodynamic models for magnetostrictive iron–gallium (Galfenol) alloys. ► The measurements show unexpected kinematic reversibility in the magnetomechanical coupling. ► This is in contrast with the magnetomechanical coupling in steel which is both thermodynamically and kinematically irreversible. ► The model accurately describes the measurements and provides a framework for understanding hysteresis in ferromagnetic materials which exhibit kinematically reversible magnetomechanical coupling.

  15. Utilization of Multimedia Laboratory: An Acceptance Analysis using TAM

    Science.gov (United States)

    Modeong, M.; Palilingan, V. R.

    2018-02-01

    Multimedia is often utilized by teachers to present learning materials. Learning delivered through multimedia enables people to understand up to 60% of the material in general. To bring creative learning into the classroom, multimedia presentation requires a laboratory as a space that provides for multimedia needs. This study aims to reveal the level of student acceptance of multimedia laboratories by explaining the direct and indirect effects of internal support and technology infrastructure. The Technology Acceptance Model (TAM) is used as the basis of measurement in this research; through perceived usefulness, perceived ease of use, and intention to use, it is recognized as capable of predicting user acceptance of technology. This study used a quantitative method. The data were analyzed with path analysis focused on model trimming, which improves the path analysis structure by removing exogenous variables that have insignificant path coefficients. The results show that internal support and technology infrastructure are well mediated by the TAM variables in measuring the level of technology acceptance. The implications suggest that TAM can measure the success of multimedia laboratory utilization in the Faculty of Engineering, UNIMA.

  16. Rigorously testing multialternative decision field theory against random utility models.

    Science.gov (United States)

    Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg

    2014-06-01

    Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly choose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
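    For context, the logit benchmark used in this comparison has a closed-form choice rule; the sketch below computes multinomial-logit probabilities for three hypothetical options (the utility values are made up). Note that the logit obeys independence from irrelevant alternatives, the property that context effects of the kind reported in Study 2 violate.

```python
import math

def logit_choice_probs(utilities, scale=1.0):
    """Multinomial logit: P(i) = exp(u_i / s) / sum_j exp(u_j / s)."""
    m = max(utilities)                      # subtract max for stability
    exps = [math.exp((u - m) / scale) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

# Three hypothetical consumer products with deterministic utilities
p = logit_choice_probs([1.0, 1.5, 0.5])
```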

  17. IAPCS: A COMPUTER MODEL THAT EVALUATES POLLUTION CONTROL SYSTEMS FOR UTILITY BOILERS

    Science.gov (United States)

    The IAPCS model, developed by U.S. EPA's Air and Energy Engineering Research Laboratory and made available to the public through the National Technical Information Service, can be used by utility companies, architectural and engineering companies, and regulatory agencies at all l...

  18. Measures to prevent global warming, and NEDO's energy-saving model projects; Chikyu ondanka boshi taisaku to NEDO sho energy model jigyo

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-09-01

    Described herein are the United Nations Framework Convention on Climate Change and the world's AIJ (Activities Implemented Jointly) projects, and Japan's measures and NEDO's energy-saving model projects in support of them. NEDO has invited the public to join contests for projects to be implemented as part of the AIJ Japan program since the program's start in April 1996. A total of 11 projects were adopted in July, including the model project for recovering heat from red-hot coke with inert gas, to be implemented by NEDO in China. After the first invitation, individual proposals are accepted and examined with no set time limit. The NEDO model projects approved so far include demonstration studies on facilities for the effective utilization of paper-making sludge, waste heat recovery at steel furnaces, energy saving at electric furnaces for alloys, effective utilization of waste heat at garbage incinerators, and power saving at cement kilns. (NEDO)

  19. Emergency Preparedness Education for Nurses: Core Competency Familiarity Measured Utilizing an Adapted Emergency Preparedness Information Questionnaire.

    Science.gov (United States)

    Georgino, Madeline M; Kress, Terri; Alexander, Sheila; Beach, Michael

    2015-01-01

    The purpose of this project was to measure trauma nurses' improvement in familiarity with the emergency preparedness and disaster response core competencies, as originally defined by the Emergency Preparedness Information Questionnaire, after a focused educational program. An adapted version of the Emergency Preparedness Information Questionnaire was utilized to measure nurses' familiarity with core competencies pertinent to first-responder capabilities. This project utilized a pre- and postsurvey descriptive design and integrated education sessions into the preexisting, mandatory "Trauma Nurse Course" at a large, level I trauma center. A total of 63 nurses completed the intervention during the May and September 2014 sessions. Overall, all 8 competencies demonstrated significant (P < .001; 98% confidence interval) improvements in familiarity. In conclusion, this pilot quality improvement project demonstrated a unique approach to educating nurses to be more ready and comfortable when treating victims of a disaster.

  20. The effect of Employee Assistance Programs use on healthcare utilization.

    OpenAIRE

    Zarkin, G A; Bray, J W; Qi, J

    2000-01-01

    OBJECTIVE: To estimate the effect of Employee Assistance Program (EAP) use on healthcare utilization as measured by health claims. DATA SOURCES: A unique data set that combines individual-level information on EAP utilization, demographic information, and health insurance claims from 1991 to 1995 for all employees of a large midwestern employer. STUDY DESIGN: Using "fixed-effect" econometric models that control for unobserved differences between individuals' propensities to use healthcare reso...

  1. Modeling regulated water utility investment incentives

    Science.gov (United States)

    Padula, S.; Harou, J. J.

    2014-12-01

    This work attempts to model the infrastructure investment choices of privatized water utilities subject to rate-of-return and price-cap regulation. The goal is to understand how regulation influences water companies' investment decisions, such as their desire to engage in transfers with neighbouring companies. We formulate a profit-maximization capacity expansion model that finds the schedule of new supply, demand management and transfer schemes that maintain the annual supply-demand balance and maximize a company's profit under the 2010-15 price control process in England. Regulatory incentives for cost savings are also represented in the model. These include the CIS scheme for capital expenditure (capex) and incentive allowance schemes for operating expenditure (opex). The profit-maximizing investment program (what to build, when and what size) is compared with the least-cost program (the social optimum). We apply this formulation to several water companies in South East England to model performance and sensitivity to water network particulars. Results show that if companies are able to outperform the regulatory assumption on the cost of capital, a capital bias can be generated, because capital expenditure, unlike opex, can be remunerated through the company's regulatory capital value (RCV). Whether the 'capital bias' occurs, and its magnitude, depend on the extent to which a company can finance its investments at a rate below the allowed cost of capital. The bias can be reduced by regulatory penalties for underperformance on capital expenditure (the CIS scheme); sensitivity analysis can be applied by varying the CIS penalty to see how, and to what extent, this affects the capital bias. We show how regulatory changes could potentially be devised to partially remove the 'capital bias' effect. Solutions potentially include allowing for incentives on total expenditure rather than separately for capex and opex and allowing…
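    A stylized numerical illustration of the capital-bias mechanism described above (a toy calculation, not the paper's capacity expansion model, and it ignores RCV depreciation): capex added to the RCV earns the allowed cost of capital while the company finances it at its actual rate, so the discounted gap accrues as profit. All figures are hypothetical.

```python
def capex_bias_npv(capex, allowed_wacc, actual_wacc, years=25):
    """Discounted excess return earned on capex that is remunerated
    through the RCV at the allowed cost of capital but financed at the
    company's actual rate."""
    return sum(capex * (allowed_wacc - actual_wacc) / (1 + actual_wacc) ** t
               for t in range(1, years + 1))

# 100 (monetary units) of capex; 5% allowed vs 4% actual cost of capital
gap = capex_bias_npv(100.0, 0.05, 0.04)
```

    A positive gap rewards capital-intensive solutions over opex-based ones; raising a CIS-style penalty shrinks it.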

  2. Discussing Landscape Compositional Scenarios Generated with Maximization of Non-Expected Utility Decision Models Based on Weighted Entropies

    Directory of Open Access Journals (Sweden)

    José Pinto Casquilho

    2017-02-01

    Full Text Available The search for hypothetical optimal solutions of landscape composition is a major issue in landscape planning, and it can be outlined in a two-dimensional decision space involving economic value and landscape diversity, the latter being considered a potential safeguard for the provision of services and externalities not accounted for in the economic value. In this paper, we use decision models with different utility valuations combined with weighted entropies, respectively incorporating rarity factors associated with the Gini-Simpson and Shannon measures. A small example of this framework is provided and discussed for landscape compositional scenarios in the region of Nisa, Portugal. The optimal solutions relative to the different cases considered are assessed in the two-dimensional decision space using a benchmark indicator. The results indicate that the likely best combination is achieved by the solution using Shannon weighted entropy and a square-root utility function, corresponding to a risk-averse behavior associated with the precautionary principle linked to safeguarding landscape diversity, anchoring the provision of ecosystem services and other externalities. Further developments are suggested, mainly those relative to the hypothesis that the decision models outlined here could be used to revisit the stability-complexity debate in the field of ecological studies.
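    A minimal sketch of the weighted-entropy ingredient: a weighted Shannon entropy of a landscape composition, with a simple rarity factor w_i = 1 − p_i used here as an illustrative assumption (the paper's exact rarity specifications may differ).

```python
import math

def weighted_shannon(p, w):
    """Weighted Shannon entropy: H_w = -sum_i w_i * p_i * ln(p_i)."""
    return -sum(wi * pi * math.log(pi) for wi, pi in zip(w, p) if pi > 0)

# Hypothetical composition: proportions of three land-cover types
p = [0.5, 0.3, 0.2]
H_plain = weighted_shannon(p, [1.0] * 3)              # ordinary Shannon entropy
H_rarity = weighted_shannon(p, [1 - pi for pi in p])  # rarity-weighted variant
```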

  3. Utilization of coincidence criteria in absolute length measurements by optical interferometry in vacuum and air

    International Nuclear Information System (INIS)

    Schödel, R

    2015-01-01

    Traceability of length measurements to the international system of units (SI) can be realized by optical interferometry, making use of the well-known frequencies of monochromatic light sources mentioned in the Mise en Pratique for the realization of the metre. At some national metrology institutes, such as the Physikalisch-Technische Bundesanstalt (PTB) in Germany, the absolute length of prismatic bodies (e.g. gauge blocks) is realized by so-called gauge-block interference comparators. At PTB, a number of such imaging phase-stepping interference comparators exist, including specialized vacuum interference comparators, each equipped with three highly stabilized laser light sources. The length of a material measure is expressed as a multiple of each wavelength. The large number of integer interference orders can be extracted by the method of exact fractions, in which the coincidence of the lengths resulting from the different wavelengths is utilized as a criterion. The unambiguous extraction of the integer interference orders is an essential prerequisite for correct length measurements. This paper critically discusses coincidence criteria and their validity for three modes of absolute length measurement: 1) measurements under vacuum, in which the wavelengths can be identified with the vacuum wavelengths; 2) measurements in air, in which the air refractive index is obtained from environmental parameters using an empirical equation; and 3) measurements in air, in which the air refractive index is obtained interferometrically by utilizing a vacuum cell placed along the measurement pathway. For case 3), which corresponds to PTB's Kösters-Comparator for long gauge blocks, the unambiguous determination of the integer interference orders related to the air refractive index could be improved by about a factor of ten when an 'overall dispersion value,' suggested in this paper, is used as the coincidence criterion. (paper)
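    The method of exact fractions can be illustrated numerically: simulate the fringe fractions of a known length at several wavelengths, then recover the length by scanning integer orders at the first wavelength and scoring the coincidence at the others. The length and the (HeNe-like) wavelengths below are illustrative values, not PTB's actual configuration.

```python
import math

def frac(x):
    """Signed distance to the nearest integer, in [-0.5, 0.5)."""
    return (x + 0.5) % 1.0 - 0.5

def exact_fractions(fractions, wavelengths, L_nominal, span=50):
    """Scan integer interference orders for the first wavelength and
    return the candidate length whose predicted fringe fractions agree
    best with the measured ones at all other wavelengths."""
    lam0 = wavelengths[0]
    N0 = round(2 * L_nominal / lam0 - fractions[0])
    best_L, best_err = None, float("inf")
    for N in range(N0 - span, N0 + span + 1):
        L = (N + fractions[0]) * lam0 / 2
        # Worst coincidence error (in metres) over the other wavelengths
        err = max(abs(frac(2 * L / lam - f)) * lam / 2
                  for f, lam in zip(fractions[1:], wavelengths[1:]))
        if err < best_err:
            best_L, best_err = L, err
    return best_L

# Simulated gauge block and three vacuum wavelengths (metres)
L_true = 0.100000123
lams = [632.8e-9, 543.5e-9, 611.97e-9]
fracs = [(2 * L_true / lam) % 1.0 for lam in lams]
L_rec = exact_fractions(fracs, lams, L_nominal=L_true + 2.0e-7)
```

    Starting from a nominal length 200 nm off, the coincidence criterion singles out the correct integer order and recovers the simulated length.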

  4. A comparison of emission calculations using different modeled indicators with 1-year online measurements.

    Science.gov (United States)

    Lengers, Bernd; Schiefler, Inga; Büscher, Wolfgang

    2013-12-01

    The overall measurement of farm-level greenhouse gas (GHG) emissions in dairy production is not feasible from either an engineering or an administrative point of view. Instead, computational model systems are used to generate emission inventories, which demand validation against measurement data. This paper tests the GHG calculation of the dairy farm-level optimization model DAIRYDYN, including methane (CH₄) from enteric fermentation and managed manure. The model involves four emission calculation procedures (indicators), differing in the aggregation level of the relevant input variables. The corresponding emission factors used by the indicators range from default per-cow (activity level) emissions up to emission factors based on feed intake, manure amount, and milk production intensity. For validation of the model's CH₄ accounting, 1-year CH₄ measurements from an experimental free-stall dairy farm in Germany are compared to model simulation results. An advantage of this interdisciplinary study is the correspondence of the model parameterization and simulation horizon with the experimental farm's characteristics and measurement period. The results clarify that the modeled emission inventories (2,898, 4,637, 4,247, and 3,600 kg CO₂-eq. cow(-1) year(-1)) are more or less good approximations of the online measurements (average 3,845 kg CO₂-eq. cow(-1) year(-1) (±275 owing to manure management)), depending on the indicator utilized. The more farm-specific characteristics a GHG indicator uses, the lower the bias of the modeled emissions. The results underline that an accurate emission calculation procedure should capture differences in energy intake owing to milk production intensity, as well as manure storage time. Despite the differences between indicator estimates, the deviation of the modeled GHGs using the detailed indicators in DAIRYDYN from the on-farm measurements is relatively low (between -6.4% and 10.5%) compared with findings from the literature.
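    For orientation, a feed-intake-based indicator of the kind compared here can be sketched with an IPCC Tier 2-style enteric fermentation factor; the gross energy intake, methane conversion factor Ym and GWP value below are illustrative assumptions, not values taken from DAIRYDYN or the experimental farm.

```python
def enteric_ch4_kg_per_year(ge_mj_per_day, ym=0.065):
    """IPCC Tier 2-style enteric methane emission factor:
    CH4 (kg/yr) = GE (MJ/day) * Ym * 365 / 55.65,
    where 55.65 MJ/kg is the energy content of methane."""
    return ge_mj_per_day * ym * 365.0 / 55.65

ef = enteric_ch4_kg_per_year(300.0)   # hypothetical high-yielding cow
co2eq = ef * 25.0                     # GWP-100 of 25 (assumption)
```

    With these inputs the enteric share alone is roughly 3,200 kg CO₂-eq. per cow and year, of the same order as the inventories quoted above (which also include manure management).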

  5. Spectroscopic measurements of soybeans used to parameterize physiological traits in the AgroIBIS ecosystem model

    Science.gov (United States)

    Singh, A.; Serbin, S.; Kucharik, C. J.; Townsend, P. A.

    2014-12-01

    Ecosystem models such as AgroIBIS require detailed parameterizations of numerous vegetation traits related to leaf structure, biochemistry and photosynthetic capacity to properly assess plant carbon assimilation and yield response to environmental variability. In general, these traits are estimated from a limited number of field measurements or sourced from the literature, but rarely is the full observed range of variability in these traits utilized in modeling activities. In addition, pathogens and pests, such as the exotic soybean aphid (Aphis glycines), which affects photosynthetic pathways in soybean plants by feeding on phloem and sap, can potentially impact plant productivity and yields. Capturing plant responses to pest pressure in conjunction with environmental variability is of considerable interest to managers and the scientific community alike. In this research, we employed full-range (400-2500 nm) field and laboratory spectroscopy to rapidly characterize leaf biochemical and physiological traits, namely foliar nitrogen, specific leaf area (SLA) and the maximum rate of RuBP carboxylation by the enzyme RuBisCo (Vcmax), in soybean plants that experienced a broad range of environmental conditions and soybean aphid pressures. We utilized near-surface spectroscopic remote sensing measurements as a means to capture the spatial and temporal patterns of aphid impacts across broad aphid pressure levels. In addition, we used the spectroscopic data to generate a much larger dataset of key model parameters required by AgroIBIS than would be possible through traditional measurements of biochemistry and leaf-level gas exchange. The use of spectroscopic retrievals of soybean traits allowed us to better characterize the variability of plant responses associated with aphid pressure and to more accurately model the likely impacts of soybean aphid on soybeans. Our next steps include the coupling of the information derived from our spectral measurements with the Agro…

  6. Developing a Measure of Therapist Adherence to Contingency Management: An Application of the Many-Facet Rasch Model

    Science.gov (United States)

    Chapman, Jason E.; Sheidow, Ashli J.; Henggeler, Scott W.; Halliday-Boykins, Colleen A.; Cunningham, Phillippe B.

    2008-01-01

    A unique application of the Many-Facet Rasch Model (MFRM) is introduced as the preferred method for evaluating the psychometric properties of a measure of therapist adherence to Contingency Management (CM) treatment of adolescent substance use. The utility of psychometric methods based in Classical Test Theory was limited by complexities of the…

  7. The thin section rock physics: Modeling and measurement of seismic wave velocity on the slice of carbonates

    Energy Technology Data Exchange (ETDEWEB)

    Wardaya, P. D., E-mail: pongga.wardaya@utp.edu.my; Noh, K. A. B. M., E-mail: pongga.wardaya@utp.edu.my; Yusoff, W. I. B. W., E-mail: pongga.wardaya@utp.edu.my [Petroleum Geosciences Department, Universiti Teknologi PETRONAS, Tronoh, Perak, 31750 (Malaysia); Ridha, S. [Petroleum Engineering Department, Universiti Teknologi PETRONAS, Tronoh, Perak, 31750 (Malaysia); Nurhandoko, B. E. B. [Wave Inversion and Subsurface Fluid Imaging Research Laboratory (WISFIR), Dept. of Physics, Institute of Technology Bandung, Bandung, Indonesia and Rock Fluid Imaging Lab, Bandung (Indonesia)

    2014-09-25

    This paper discusses a new approach for investigating the seismic wave velocity of rock, specifically carbonates, as affected by their pore structures. While the conventional routine of seismic velocity measurement depends heavily on extensive laboratory experiments, the proposed approach takes the digital rock physics view, which rests on numerical experiments. Thus, instead of using a core sample, we use the thin section image of carbonate rock to measure the effective seismic wave velocity when travelling through it. In the numerical experiment, thin section images act as the medium in which wave propagation is simulated. For the modeling, an advanced technique based on artificial neural networks was employed to build the velocity and density profiles, replacing the image's RGB pixel values with the seismic velocity and density of each rock constituent. Then, an ultrasonic wave was simulated propagating in the thin section image using the finite difference time domain method, based on the assumption of an acoustic-isotropic medium. Effective velocities were drawn from the recorded signal and compared to velocity modeling from the Wyllie time average model and the Kuster-Toksoz rock physics model. To perform the modeling, image analysis routines were undertaken to quantify the pore aspect ratio, which is assumed to represent the rock's pore structure. In addition, the porosity and mineral fractions required for velocity modeling were also quantified using an integrated neural network and image analysis technique. It was found that Kuster-Toksoz gives a closer prediction to the measured velocity than the Wyllie time average model. We also conclude that the Wyllie time average, which does not incorporate the pore structure parameter, deviates significantly for samples having more than 40% porosity. Utilizing this approach we found good agreement between the numerical experiment and the theoretically derived rock physics model for estimating the effective seismic…
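    The Wyllie time average invoked here is a one-line mixing law, 1/V = φ/V_fluid + (1−φ)/V_matrix; the sketch below contrasts a low- and a high-porosity carbonate, the regime where the abstract notes it breaks down (the velocities are textbook-style illustrative values, not the paper's measurements).

```python
def wyllie_velocity(phi, v_fluid, v_matrix):
    """Wyllie time average: 1/V = phi/V_fluid + (1 - phi)/V_matrix."""
    return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)

# Water (1500 m/s) in a calcite matrix (6640 m/s), illustrative values
v_low = wyllie_velocity(0.15, 1500.0, 6640.0)   # 15% porosity
v_high = wyllie_velocity(0.45, 1500.0, 6640.0)  # 45% porosity
```

    Because the formula carries no pore-shape information, models such as Kuster-Toksoz, which do, track measured velocities more closely at high porosity.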

  8. The thin section rock physics: Modeling and measurement of seismic wave velocity on the slice of carbonates

    International Nuclear Information System (INIS)

    Wardaya, P. D.; Noh, K. A. B. M.; Yusoff, W. I. B. W.; Ridha, S.; Nurhandoko, B. E. B.

    2014-01-01

    This paper discusses a new approach for investigating the seismic wave velocity of rock, specifically carbonates, as affected by their pore structures. While the conventional routine of seismic velocity measurement depends heavily on extensive laboratory experiments, the proposed approach takes the digital rock physics view, which rests on numerical experiments. Thus, instead of using a core sample, we use the thin section image of carbonate rock to measure the effective seismic wave velocity when travelling through it. In the numerical experiment, thin section images act as the medium in which wave propagation is simulated. For the modeling, an advanced technique based on artificial neural networks was employed to build the velocity and density profiles, replacing the image's RGB pixel values with the seismic velocity and density of each rock constituent. Then, an ultrasonic wave was simulated propagating in the thin section image using the finite difference time domain method, based on the assumption of an acoustic-isotropic medium. Effective velocities were drawn from the recorded signal and compared to velocity modeling from the Wyllie time average model and the Kuster-Toksoz rock physics model. To perform the modeling, image analysis routines were undertaken to quantify the pore aspect ratio, which is assumed to represent the rock's pore structure. In addition, the porosity and mineral fractions required for velocity modeling were also quantified using an integrated neural network and image analysis technique. It was found that Kuster-Toksoz gives a closer prediction to the measured velocity than the Wyllie time average model. We also conclude that the Wyllie time average, which does not incorporate the pore structure parameter, deviates significantly for samples having more than 40% porosity. Utilizing this approach we found good agreement between the numerical experiment and the theoretically derived rock physics model for estimating the effective seismic wave…

  9. Modeling and optimization of a utility system containing multiple extraction steam turbines

    International Nuclear Information System (INIS)

    Luo, Xianglong; Zhang, Bingjian; Chen, Ying; Mo, Songping

    2011-01-01

    Complex turbines with multiple controlled and/or uncontrolled extractions are popularly used in the processing industry and in cogeneration plants to provide steam at different pressure levels, electric power, and driving power. To characterize their thermodynamic behavior under varying conditions, nonlinear mathematical models are developed based on energy balances, thermodynamic principles, and semi-empirical equations. First, the complex turbine is decomposed at the controlled extraction stages into several simple turbines modeled in series. The turbine hardware model (THM) concept is applied to predict the isentropic efficiency of the decomposed simple turbines. Stodola's formulation is also used to simulate the uncontrolled extraction steam parameters. The thermodynamic properties of steam and water are regressed through linearization or piece-wise linearization. Second, a comparison between the results simulated with the proposed model and the data in the working condition diagram provided by the manufacturer is conducted over a wide range of operations. The simulation results show small deviations from the data in the working condition diagram, with a maximum modeling error of 0.87% among the seven operating conditions compared. Last, an optimization model of a utility system containing multiple extraction turbines is established and a detailed case is analyzed. Compared with the conventional operation strategy, a maximum of 5.47% of the total operation cost is saved using the proposed optimization model. -- Highlights: → We develop a complete simulation model for steam turbines with multiple extractions. → We test the simulation model using the performance data of commercial turbines. → The simulation error of electric power generation is no more than 0.87%. → We establish a utility system operational optimization model. → The optimal industrial operation scheme yields a 5.47% cost saving.
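    Stodola's formulation mentioned above is commonly reduced to the 'ellipse law' relating mass flow to inlet and outlet pressures; a simplified constant-inlet-temperature form is sketched below, with the flow coefficient calibrated at a hypothetical design point (all numbers illustrative).

```python
import math

def stodola_flow(p_in, p_out, k):
    """Stodola ellipse law (simplified, constant inlet temperature):
    mass flow = k * sqrt(p_in**2 - p_out**2)."""
    return k * math.sqrt(p_in ** 2 - p_out ** 2)

# Calibrate k at a hypothetical design point: 50 kg/s across 100 -> 10 bar
k = 50.0 / math.sqrt(100.0 ** 2 - 10.0 ** 2)
# Off-design prediction when the inlet pressure drops to 80 bar
m_off = stodola_flow(80.0, 10.0, k)
```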

  10. Utility Function and Optimum Consumption in the models with Habit Formation and Catching up with the Joneses

    OpenAIRE

    Naryshkin, Roman; Davison, Matt

    2009-01-01

    This paper analyzes popular time-nonseparable utility functions describing "habit formation" consumer preferences, which compare current consumption with the time-averaged past consumption of the same individual, and "catching up with the Joneses" (CuJ) models, which compare individual consumption with a cross-sectional average consumption level. Few of these models give reasonable optimum consumption time series. We introduce theoretically justified utility specifications leading to a plausible cons...

  11. Why environmental and resource economists should care about non-expected utility models

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, W. Douglass; Woodward, Richard T. [Department of Agricultural Economics, Texas A and M University (United States)

    2008-01-15

    Experimental and theoretical analysis has shown that the conventional expected utility (EU) and subjective expected utility (SEU) models, which are linear in probabilities, have serious limitations in certain situations. We argue here that these limitations are often highly relevant to the work that environmental and natural resource economists do. We discuss some of the experimental evidence and alternatives to the SEU. We consider the theory used, the problems studied, and the methods employed by resource economists. Finally, we highlight some recent work that has begun to use some of the alternatives to the EU and SEU frameworks and discuss areas where much future work is needed. (author)

  12. The utility of comparative models and the local model quality for protein crystal structure determination by Molecular Replacement

    Directory of Open Access Journals (Sweden)

    Pawlowski Marcin

    2012-11-01

    Full Text Available Abstract Background Computational models of protein structures have proved useful as search models in Molecular Replacement (MR), a common method to solve the phase problem faced by macromolecular crystallography. The success of MR depends on the accuracy of the search model. Unfortunately, this parameter remains unknown until the final structure of the target protein is determined. During the last few years, several Model Quality Assessment Programs (MQAPs) that predict the local accuracy of theoretical models have been developed. In this article, we analyze whether the application of MQAPs improves the utility of theoretical models in MR. Results For our dataset of 615 search models, using the real local accuracy of a model increases the MR success ratio by 101% compared to the corresponding polyalanine templates. In contrast, when local model quality is not utilized in MR, the computational models solved only 4.5% more MR searches than polyalanine templates. For the same dataset of 615 models, a workflow combining MR with the predicted local accuracy of a model found 45% more correct solutions than polyalanine templates. To predict this accuracy, MetaMQAPclust, a "clustering MQAP," was used. Conclusions Using comparative models only marginally increases the MR success ratio in comparison to polyalanine structures of templates. However, the situation changes dramatically once comparative models are used together with their predicted local accuracy. A new functionality was added to the GeneSilico Fold Prediction Metaserver in order to build models that are more useful for MR searches. Additionally, we have developed a simple method, AmIgoMR (Am I good for MR?), to predict if an MR search with a template-based model for a given template is likely to find the correct solution.

  13. Measurement of thermal conductivity and diffusivity in situ: Literature survey and theoretical modelling of measurements

    Energy Technology Data Exchange (ETDEWEB)

    Kukkonen, I.; Suppala, I. [Geological Survey of Finland, Espoo (Finland)

    1999-01-01

    In situ measurements of thermal conductivity and diffusivity of bedrock were investigated with the aid of a literature survey and theoretical simulations of a measurement system. According to the surveyed literature, in situ methods can be divided into 'active' drill hole methods and 'passive' indirect methods that utilize other drill hole measurements together with cutting samples and petrophysical relationships. The most common active drill hole method is a cylindrical heat-producing probe whose temperature is registered as a function of time. The temperature response can be calculated and interpreted with the aid of analytical solutions of the cylindrical heat conduction equation, particularly the solution for an infinite, perfectly conducting cylindrical probe in a homogeneous medium and the solution for a line source of heat in a medium. Using both forward and inverse modelling, a theoretical measurement system was analysed with the aim of finding the basic parameters for the construction of a practical measurement system. The results indicate that thermal conductivity can be estimated relatively well with borehole measurements, whereas thermal diffusivity is much more sensitive to various disturbing factors, such as thermal contact resistance and variations in probe parameters. In addition, three-dimensional conduction effects were investigated to determine the magnitude of the axial 'leak' of heat in long-duration experiments. The radius of influence of a drill hole measurement depends mainly on the duration of the experiment. Assuming typical conductivity and diffusivity values of crystalline rocks, the measurement yields information within less than a metre from the drill hole when the experiment lasts about 24 hours. We propose the following factors to be taken as basic parameters in the construction of a practical measurement system: probe length 1.5-2 m, heating power 5-20 W m⁻¹, temperature recording with 5-7 sensors placed along the probe, and
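
    The line-source interpretation described above reduces, at late times, to a straight line in T versus ln t whose slope yields the conductivity. A minimal synthetic sketch of this estimate (not the authors' implementation; the conductivity and heating-power values are illustrative):

```python
import numpy as np

def conductivity_from_probe(t, T, q):
    """Estimate thermal conductivity (W/(m K)) from the late-time slope of
    the line-source solution, T(t) ~ (q / (4*pi*lam)) * ln(t) + const,
    where q is the heating power per unit probe length (W/m)."""
    slope = np.polyfit(np.log(t), T, 1)[0]   # dT / d(ln t)
    return q / (4.0 * np.pi * slope)

# Synthetic check: rock with lam = 3.0 W/(m K), heated at q = 10 W/m.
lam_true, q = 3.0, 10.0
t = np.linspace(3600.0, 86400.0, 200)                 # 1 h to 24 h
T = q / (4.0 * np.pi * lam_true) * np.log(t) + 8.0    # idealized response
lam_est = conductivity_from_probe(t, T, q)            # recovers ~3.0
```

    With real probe data the early-time portion of the record, where contact resistance and the finite probe radius dominate, would be excluded from the fit.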

  14. Measurement of thermal conductivity and diffusivity in situ: Literature survey and theoretical modelling of measurements

    International Nuclear Information System (INIS)

    Kukkonen, I.; Suppala, I.

    1999-01-01

    In situ measurements of thermal conductivity and diffusivity of bedrock were investigated with the aid of a literature survey and theoretical simulations of a measurement system. According to the surveyed literature, in situ methods can be divided into 'active' drill hole methods and 'passive' indirect methods that utilize other drill hole measurements together with cutting samples and petrophysical relationships. The most common active drill hole method is a cylindrical heat-producing probe whose temperature is registered as a function of time. The temperature response can be calculated and interpreted with the aid of analytical solutions of the cylindrical heat conduction equation, particularly the solution for an infinite, perfectly conducting cylindrical probe in a homogeneous medium and the solution for a line source of heat in a medium. Using both forward and inverse modelling, a theoretical measurement system was analysed with the aim of finding the basic parameters for the construction of a practical measurement system. The results indicate that thermal conductivity can be estimated relatively well with borehole measurements, whereas thermal diffusivity is much more sensitive to various disturbing factors, such as thermal contact resistance and variations in probe parameters. In addition, three-dimensional conduction effects were investigated to determine the magnitude of the axial 'leak' of heat in long-duration experiments. The radius of influence of a drill hole measurement depends mainly on the duration of the experiment. Assuming typical conductivity and diffusivity values of crystalline rocks, the measurement yields information within less than a metre from the drill hole when the experiment lasts about 24 hours. We propose the following factors to be taken as basic parameters in the construction of a practical measurement system: probe length 1.5-2 m, heating power 5-20 W m⁻¹, temperature recording with 5-7 sensors placed along the probe, and

  15. ASPEN+ and economic modeling of equine waste utilization for localized hot water heating via fast pyrolysis

    Science.gov (United States)

    ASPEN Plus based simulation models have been developed to design a pyrolysis process for the on-site production and utilization of pyrolysis oil from equine waste at the Equine Rehabilitation Center at Morrisville State College (MSC). The results indicate that utilization of all available Equine Reh...

  16. Measurement and modeling of room temperature co-deformation in WC-10 wt.%

    Energy Technology Data Exchange (ETDEWEB)

    Livescu, V. [MST-8/LANSCE, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)]. E-mail: vlivescu@lanl.gov; Clausen, B. [MST-8/LANSCE, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Paggett, J.W. [Department of Mechanical and Aerospace Engineering, University of Missouri, Columbia, MO 65211 (United States); Krawitz, A.D. [Department of Mechanical and Aerospace Engineering, University of Missouri, Columbia, MO 65211 (United States); Drake, E.F. [REEDHycalogTM/Grant Prideco, Houston, TX 77252 (United States); Bourke, M.A.M. [MST-8/LANSCE, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

    2005-06-15

    In situ neutron diffraction measurements were performed on a tungsten carbide (WC)-10 wt.% cobalt (Co) cemented carbide composite subjected to compressive loading. The sample was subjected to consecutive load/unload cycles to -500, -1000, -2000 and -2100 MPa. Thermal residual stresses measured before loading reflected large hydrostatic tensile stresses in the binder phase and compressive stresses in the carbide phase. The carbide phase behaved elastically at all but the highest load levels, whereas plasticity was present in the binder phase from values of applied stress as low as -500 MPa. A finite element simulation utilizing an interpenetrating microstructure model showed remarkable agreement with the complex mean phase strain response during the loading cycles despite its under-prediction of thermal residual strains.

  17. Software Tools for Emittance Measurement and Matching for 12 GeV CEBAF

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Dennis L. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2016-05-01

    This paper discusses model-driven setup of the Continuous Electron Beam Accelerator Facility (CEBAF) for the 12 GeV era, focusing on qsUtility. qsUtility is a set of software tools created to perform emittance measurements, analyze those measurements, and compute optics corrections based upon the measurements. qsUtility was developed as a toolset to reduce machine configuration time and improve reproducibility by way of an accurate accelerator model, and to provide Operations staff with tools to measure and correct machine optics with little or no assistance from optics experts.

  18. Measures of metacognition on signal-detection theoretic models.

    Science.gov (United States)

    Barrett, Adam B; Dienes, Zoltan; Seth, Anil K

    2013-12-01

    Analyzing metacognition, specifically knowledge of accuracy of internal perceptual, memorial, or other knowledge states, is vital for many strands of psychology, including determining the accuracy of feelings of knowing and discriminating conscious from unconscious cognition. Quantifying metacognitive sensitivity is however more challenging than quantifying basic stimulus sensitivity. Under popular signal-detection theory (SDT) models for stimulus classification tasks, approaches based on Type II receiver-operating characteristic (ROC) curves or Type II d-prime risk confounding metacognition with response biases in either the Type I (classification) or Type II (metacognitive) tasks. A new approach introduces meta-d': The Type I d-prime that would have led to the observed Type II data had the subject used all the Type I information. Here, we (a) further establish the inconsistency of the Type II d-prime and ROC approaches with new explicit analyses of the standard SDT model and (b) analyze, for the first time, the behavior of meta-d' under nontrivial scenarios, such as when metacognitive judgments utilize enhanced or degraded versions of the Type I evidence. Analytically, meta-d' values typically reflect the underlying model well and are stable under changes in decision criteria; however, in relatively extreme cases, meta-d' can become unstable. We explore bias and variance of in-sample measurements of meta-d' and supply MATLAB code for estimation in general cases. Our results support meta-d' as a useful measure of metacognition and provide rigorous methodology for its application. Our recommendations are useful for any researchers interested in assessing metacognitive accuracy. PsycINFO Database Record (c) 2014 APA, all rights reserved.
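
    Estimating meta-d' requires fitting the full Type II SDT model (the authors supply MATLAB code for that); the underlying Type I quantities in which meta-d' is expressed can be sketched as follows, using a log-linear correction for extreme rates (the correction is a common convention, not part of the paper):

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf  # probit (inverse standard-normal CDF)

def type1_sdt(hits, misses, fas, crs):
    """Type I d' and criterion c from trial counts, using a log-linear
    correction (add 0.5 to each cell) so rates of 0 or 1 stay finite."""
    hr = (hits + 0.5) / (hits + misses + 1.0)    # corrected hit rate
    far = (fas + 0.5) / (fas + crs + 1.0)        # corrected false-alarm rate
    return Z(hr) - Z(far), -0.5 * (Z(hr) + Z(far))

# Hypothetical counts from a 200-trial classification task:
d_prime, criterion = type1_sdt(hits=75, misses=25, fas=30, crs=70)
```

    meta-d' is then the Type I d' that would have produced the observed Type II (confidence) data, so it is read on the same scale as d_prime above.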

  19. Modeling and design of light powered biomimicry micropump utilizing transporter proteins

    Science.gov (United States)

    Liu, Jin; Sze, Tsun-Kay Jackie; Dutta, Prashanta

    2014-11-01

    The creation of compact micropumps that provide steady flow has been an ongoing challenge in the field of microfluidics. We present a mathematical model for a micropump utilizing bacteriorhodopsin and sugar transporter proteins. This micropump utilizes transporter proteins as a method to drive fluid flow by converting light energy into chemical potential. The fluid flow through a microchannel is simulated using the Nernst-Planck, Navier-Stokes, and continuity equations. Numerical results show that the micropump is capable of generating usable pressure. Design parameters influencing the performance of the micropump are investigated, including membrane fraction, lipid proton permeability, illumination, and channel height. The results show that there is a substantial membrane-fraction region at which fluid flow is maximized. The use of lipids with low membrane proton permeability allows illumination to be used as a method to turn the pump on and off. This capability allows the micropump to be activated and shut off remotely without bulky support equipment. This modeling work provides new insights on mechanisms potentially useful for fluidic pumping in self-sustained biomimetic microfluidic pumps. This work is supported in part by the National Science Foundation Grant CBET-1250107.

  20. Measurement Model Specification Error in LISREL Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  1. Utility-preserving anonymization for health data publishing.

    Science.gov (United States)

    Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn

    2017-07-11

    Publishing raw electronic health records (EHRs) may be considered a breach of individual privacy because they usually contain sensitive information. A common practice for privacy-preserving data publishing is to anonymize the data before publishing so that they satisfy privacy models such as k-anonymity. Among the various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and various methods have been proposed to reduce it; however, existing generalization-based data anonymization methods cannot avoid excessive information loss and thus fail to preserve data utility. We propose a utility-preserving anonymization for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method, which applies a full-domain generalization algorithm. We evaluate our method against an existing method on two aspects: information loss, measured through various quality metrics, and the error rate of analysis results. Across all quality metrics, our proposed method shows lower information loss than the existing method. In a real-world EHR analysis, the results show only a small error between the data anonymized by the proposed method and the original data. Through experiments on various datasets, we show that the utility of EHRs anonymized by the proposed method is significantly better than that of EHRs anonymized by previous approaches.
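
    The k-anonymity property that generalization-based methods target can be sketched with a simple check; a toy example with hypothetical, already-generalized records, not the paper's proposed algorithm:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """k-anonymity check: every combination of quasi-identifier values
    must be shared by at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Toy records with already-generalized quasi-identifiers (hypothetical data):
records = [
    {"age": "30-39", "zip": "123**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "123**", "diagnosis": "asthma"},
    {"age": "40-49", "zip": "456**", "diagnosis": "flu"},
    {"age": "40-49", "zip": "456**", "diagnosis": "diabetes"},
]
print(is_k_anonymous(records, ["age", "zip"], k=2))  # True
```

    The paper's contribution is to satisfy such a constraint with less generalization, by inserting cataloged counterfeit records instead of coarsening values further.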

  2. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models built from real error data collected on a multiprocessor system are described, covering model development from the raw error data to the estimation of cumulative reward. A workload/reliability model is developed based on low-level error and resource-usage data collected on an IBM 3081 system during normal operation, in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
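
    The semi-Markov point above — transitions governed by an embedded chain, but holding times drawn from arbitrary, non-exponential distributions — can be sketched as follows (state names and distributions are illustrative, not taken from the measured system):

```python
import random

def simulate_semi_markov(P, holding, start, steps, rng):
    """Simulate a semi-Markov process: state transitions follow the
    embedded Markov chain P, but the holding time in each state is drawn
    from an arbitrary (non-exponential) distribution."""
    state, clock = start, 0.0
    for _ in range(steps):
        clock += holding[state](rng)
        nxt = list(P[state])
        state = rng.choices(nxt, [P[state][s] for s in nxt])[0]
    return state, clock

rng = random.Random(42)
P = {"normal": {"normal": 0.95, "error": 0.05},   # embedded chain (illustrative)
     "error":  {"normal": 1.0}}
holding = {"normal": lambda r: r.weibullvariate(100.0, 1.5),  # non-exponential
           "error":  lambda r: r.lognormvariate(0.0, 0.5)}
final_state, total_time = simulate_semi_markov(P, holding, "normal", 1000, rng)
```

    In an ordinary Markov model the holding-time lambdas would all be exponential draws; the semi-Markov generalization is exactly the freedom to replace them.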

  3. A life-cycle model approach to multimedia waste reduction measuring performance for environmental cleanup projects

    International Nuclear Information System (INIS)

    Phifer, B.E. Jr.; George, S.M.

    1993-01-01

    The Martin Marietta Energy Systems, Inc. (Energy Systems), Environmental Restoration (ER) Program adopted a Pollution Prevention Program in March 1991. The program's mission is to minimize waste and prevent pollution in remedial investigations (RIs), feasibility studies, decontamination and decommissioning, and surveillance and maintenance site program activities. Mission success will result in volume and/or toxicity reduction of generated waste. ER Program waste generation rates are projected to increase steadily through the year 2005 for all waste categories. The standard production units used to measure waste minimization apply to production/manufacturing facilities; since the ER Program inherited contaminated waste from previous production processes, no historical production data can be applied. Therefore, a more accurate measure of pollution prevention was identified as a need for the ER Program. The Energy Systems ER Program adopted a life-cycle model approach and elected to develop a numerical scoring system (NSS) to measure the effectiveness of its pollution prevention/waste minimization programs by numerically scoring its waste generators. The prototype NSS, a computerized, user-friendly information-management database system, was designed to be utilized in each phase of the ER Program. The NSS measures a generator's success in incorporating pollution prevention into its work plans and in reducing investigation-derived waste (IDW) during RIs. Energy Systems is producing a fully developed NSS and is actually scoring the generators of IDW at six ER Program sites. Once RI waste generators are scored utilizing the NSS, the numerical scores are distributed into six performance categories: training, self-assessment, field implementation, documentation, technology transfer, and planning.

  4. Dipole field measurement technique utilizing the Faraday rotation effect in polarization preserving optical fibers

    International Nuclear Information System (INIS)

    Haddock, C.; Tong, M.Y.M.

    1989-10-01

    TRIUMF is presently in the project definition stage of its proposed KAON factory. The facility will require approximately 300 dipole magnets, and the rapid measurement of their representative parameters, in particular effective length, is one of the challenges to be met. In addition to the commissioning of a.c. magnetic field measurement systems based on established techniques, a project is underway to investigate an alternative method utilizing the Faraday rotation effect in polarization-preserving optical fibers. It is shown that a fiber equivalent of a Faraday cell can be constructed by winding a fiber in such a way that the induced beat length L_p is equal to (2n+1) times the bending circumference, where n is an integer. Background to the subject and preliminary results of the measurements are reported in this paper.

  5. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Nikhar [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tom, Nathan M [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-06-03

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.
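
    The autoregressive forecasting component can be illustrated with a minimal least-squares AR fit and iterated prediction; a sketch under simplified assumptions (deterministic signal, no Kalman filtering), not the NREL implementation:

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares AR(p) fit: x[t] ~ a[0]*x[t-1] + ... + a[p-1]*x[t-p]."""
    X = np.column_stack([x[order - 1 - i:len(x) - 1 - i] for i in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a

def forecast(x, a, horizon):
    """Iterate the fitted AR model forward to predict the next samples."""
    hist = list(x[-len(a):])
    preds = []
    for _ in range(horizon):
        nxt = sum(c * hist[-1 - i] for i, c in enumerate(a))
        preds.append(nxt)
        hist.append(nxt)
    return np.array(preds)

# Hypothetical excitation-force record following a damped oscillation
# (an exact AR(2) process), so the fit should recover it closely.
x = np.zeros(100)
x[0], x[1] = 1.0, 0.9
for t in range(2, 100):
    x[t] = 1.6 * x[t - 1] - 0.8 * x[t - 2]
a = fit_ar(x[:90], 2)
pred = forecast(x[:90], a, horizon=5)
```

    In the paper's setting the series being forecast is the Kalman-filtered wave exciting force, and the horizon corresponds to the finite prediction window of the MPC objective.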

  6. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal of being accurate with regard to the main operational characteristics. We measure buffer occupancy by counting the number of elements in buffers, resource utilization by measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error of simulation when comparing the results to a large amount of real-world ope...

  7. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this paper, we introduce a discrete event-based simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal of being accurate with regard to the main operational characteristics. We measure buffer occupancy by counting the number of elements in buffers; resource utilization by measuring output bandwidth and counting the number of active processing units; and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error in simulation when comparing the results to a large amount of real-world ...

  8. Factors Impacting Student Service Utilization at Ontario Colleges: Key Performance Indicators as a Measure of Success: A Niagara College View

    Science.gov (United States)

    Veres, David

    2015-01-01

    Student success at Ontario colleges is significantly influenced by the utilization of student services. At Niagara College there has been a significant investment in student services as a strategy to support student success. Utilizing existing KPI data, this quantitative research project is aimed at measuring factors that influence both the use of…

  9. Measurement system and model for simultaneously measuring 6DOF geometric errors.

    Science.gov (United States)

    Zhao, Yuqiong; Zhang, Bin; Feng, Qibo

    2017-09-04

    A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of the two measurement beams, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.

  10. Establishing a coherent and replicable measurement model of the Edinburgh Postnatal Depression Scale.

    Science.gov (United States)

    Martin, Colin R; Redshaw, Maggie

    2018-06-01

    The 10-item Edinburgh Postnatal Depression Scale (EPDS) is an established screening tool for postnatal depression. Inconsistent findings in factor structure and replication difficulties have limited the scope of development of the measure as a multi-dimensional tool. The current investigation sought to robustly determine the underlying factor structure of the EPDS and the replicability and stability of the most plausible model identified. A between-subjects design was used. EPDS data were collected postpartum from two independent cohorts using identical data capture methods. Datasets were examined with confirmatory factor analysis, model invariance testing and systematic evaluation of relational and internal aspects of the measure. Participants were two samples of postpartum women in England assessed at three months (n = 245) and six months (n = 217). The findings showed a three-factor seven-item model of the EPDS offered an excellent fit to the data, and was observed to be replicable in both datasets and invariant as a function of time point of assessment. Some EPDS sub-scale scores were significantly higher at six months. The EPDS is multi-dimensional and a robust measurement model comprises three factors that are replicable. The potential utility of the sub-scale components identified requires further research to identify a role in contemporary screening practice. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  11. Animal models of GM2 gangliosidosis: utility and limitations

    Directory of Open Access Journals (Sweden)

    Lawson CA

    2016-07-01

    Full Text Available Cheryl A Lawson and Douglas R Martin (Department of Pathobiology, Scott-Ritchey Research Center, and Department of Anatomy, Physiology and Pharmacology, Auburn University College of Veterinary Medicine, Auburn, AL, USA). Abstract: GM2 gangliosidosis, a subset of lysosomal storage disorders, is caused by a deficiency of the glycohydrolase, β-N-acetylhexosaminidase, and includes the closely related Tay–Sachs and Sandhoff diseases. The enzyme deficiency prevents the normal, stepwise degradation of ganglioside, which accumulates unchecked within the cellular lysosome, particularly in neurons. As a result, individuals with GM2 gangliosidosis experience progressive neurological diseases including motor deficits, progressive weakness and hypotonia, decreased responsiveness, vision deterioration, and seizures. Mice and cats are well-established animal models for Sandhoff disease, whereas Jacob sheep are the only known laboratory animal model of Tay–Sachs disease to exhibit clinical symptoms. Since the human diseases are relatively rare, animal models are indispensable tools for further study of pathogenesis and for development of potential treatments. Though no effective treatments for gangliosidoses currently exist, animal models have been used to test promising experimental therapies. Herein, the utility and limitations of gangliosidosis animal models and how they have contributed to the development of potential new treatments are described. Keywords: GM2 gangliosidosis, Tay–Sachs disease, Sandhoff disease, lysosomal storage disorder, sphingolipidosis, brain disease

  12. A heteroscedastic measurement error model for method comparison data with replicate measurements.

    Science.gov (United States)

    Nawarathna, Lakshika S; Choudhary, Pankaj K

    2015-03-30

    Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset. Copyright © 2015 John Wiley & Sons, Ltd.
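
    The heteroscedasticity the model addresses — error variability that grows with the magnitude of measurement — can be illustrated by simulating replicate measurements from one hypothetical method (the parameter values below are illustrative, not taken from the paper's cholesterol dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_method(b, true_vals, replicates, sd0, sd_slope):
    """Replicate measurements from one method: y = b * true + error, where
    the error SD grows linearly with the magnitude of the true value
    (heteroscedastic) instead of being constant (homoscedastic)."""
    sd = sd0 + sd_slope * np.abs(true_vals)
    return b * true_vals[:, None] + rng.normal(0.0, sd[:, None],
                                               (len(true_vals), replicates))

true_vals = rng.uniform(150.0, 300.0, size=100)   # cholesterol-like scale
y = simulate_method(1.05, true_vals, replicates=3, sd0=2.0, sd_slope=0.05)

# Within-subject spread should track measurement magnitude:
order = np.argsort(true_vals)
lo_spread = y[order[:30]].std(axis=1, ddof=1).mean()
hi_spread = y[order[-30:]].std(axis=1, ddof=1).mean()
```

    A homoscedastic model fitted to data like these would misstate agreement limits at both ends of the measurement range, which is the motivation for the variance-function approach the paper develops.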

  13. On the utility of land surface models for agricultural drought monitoring

    Directory of Open Access Journals (Sweden)

    W. T. Crow

    2012-09-01

    Full Text Available The lagged rank cross-correlation between model-derived root-zone soil moisture estimates and remotely sensed vegetation indices (VI) is examined between January 2000 and December 2010 to quantify the skill of various soil moisture models for agricultural drought monitoring. The examined modeling strategies range from a simple antecedent precipitation index to the application of modern land surface models (LSMs) based on complex water and energy balance formulations. A quasi-global evaluation of lagged VI/soil moisture cross-correlation suggests that, when globally averaged across the entire annual cycle, soil moisture estimates obtained from complex LSMs provide little added skill (<5% in relative terms) in anticipating variations in vegetation condition relative to a simplified water accounting procedure based solely on observed precipitation. However, larger amounts of added skill (5–15% in relative terms) can be identified when focusing exclusively on the extra-tropical growing season and/or utilizing soil moisture values acquired by averaging across a multi-model ensemble.
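
    The lagged rank cross-correlation diagnostic can be sketched directly; a minimal version that ignores ties, applied to a synthetic series in which the vegetation index responds to soil moisture two steps late (illustrative, not the paper's data):

```python
import numpy as np

def rank(x):
    """Rank transform (0..n-1); ties are not handled for simplicity."""
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(len(x))
    return r

def lagged_rank_xcorr(soil_moisture, vi, max_lag):
    """Rank (Spearman-type) correlation between soil moisture and a
    vegetation index observed `lag` steps later, for lag = 0..max_lag."""
    out = {}
    for lag in range(max_lag + 1):
        sm = soil_moisture[:-lag] if lag else soil_moisture
        out[lag] = np.corrcoef(rank(sm), rank(vi[lag:]))[0, 1]
    return out

# Synthetic series: VI responds to soil moisture two steps late.
rng = np.random.default_rng(1)
sm = rng.normal(size=300)
vi = np.roll(sm, 2) + 0.3 * rng.normal(size=300)
corr = lagged_rank_xcorr(sm, vi, max_lag=4)
best_lag = max(corr, key=corr.get)   # peaks at the true lag of 2
```

    In the study, the size of this lagged correlation is what quantifies how much a given soil moisture product anticipates vegetation condition.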

  14. Schottky barrier height measurements of Cu/Si(001), Ag/Si(001), and Au/Si(001) interfaces utilizing ballistic electron emission microscopy and ballistic hole emission microscopy

    International Nuclear Information System (INIS)

    Balsano, Robert; Matsubayashi, Akitomo; LaBella, Vincent P.

    2013-01-01

    The Schottky barrier heights of both n- and p-doped Cu/Si(001), Ag/Si(001), and Au/Si(001) diodes were measured using ballistic electron emission microscopy (BEEM) and ballistic hole emission microscopy (BHEM), respectively. Measurements under both forward and reverse BEEM and BHEM injection conditions were performed. The Schottky barrier heights were found by fitting to a linearization of the power-law form of the Bell-Kaiser (BK) BEEM model. The sums of the n-type and p-type barrier heights are in good agreement with the band gap of silicon and independent of the metal utilized. The Schottky barrier heights are found to be below the region of best fit for the power-law form of the BK model, demonstrating its region of validity.
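
    The linearized power-law fit mentioned above exploits the near-threshold Bell-Kaiser form I_c ∝ (V − φ)², so that √I_c is approximately linear in tip bias V and its x-intercept estimates the barrier height. A synthetic sketch (idealized spectrum with illustrative values, not the measured data):

```python
import numpy as np

def barrier_height_bk(v, i_c):
    """Barrier height from the linearized square-law (Bell-Kaiser) form:
    sqrt(I_c) is ~linear in tip bias V above threshold, and the line's
    x-intercept estimates the barrier height (in volts)."""
    slope, intercept = np.polyfit(v, np.sqrt(i_c), 1)
    return -intercept / slope

# Synthetic spectrum obeying I_c = A * (V - phi)^2 above a 0.7 V threshold.
phi_true, A = 0.7, 1.0
v = np.linspace(0.75, 1.2, 50)
i_c = A * (v - phi_true) ** 2
phi_est = barrier_height_bk(v, i_c)   # recovers ~0.7
```

    With real spectra, the fit window must stay within the bias region where the square-law approximation holds, which is the "region of validity" point made in the abstract.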

  15. Evaluating measurement of dynamic constructs: defining a measurement model of derivatives.

    Science.gov (United States)

    Estabrook, Ryne

    2015-03-01

    While measurement evaluation has been embraced as an important step in psychological research, evaluating measurement structures with longitudinal data is fraught with limitations. This article defines and tests a measurement model of derivatives (MMOD), which is designed to assess the measurement structure of latent constructs both for analyses of between-person differences and for the analysis of change. Simulation results indicate that MMOD outperforms existing models for multivariate analysis and provides equivalent fit to data generation models. Additional simulations show MMOD capable of detecting differences in between-person and within-person factor structures. Model features, applications, and future directions are discussed. (c) 2015 APA, all rights reserved.

  16. Utilizing the PREPaRE Model When Multiple Classrooms Witness a Traumatic Event

    Science.gov (United States)

    Bernard, Lisa J.; Rittle, Carrie; Roberts, Kathy

    2011-01-01

    This article presents an account of how the Charleston County School District responded to an event by utilizing the PREPaRE model (Brock, et al., 2009). The acronym, PREPaRE, refers to a range of crisis response activities: P (prevent and prepare for psychological trauma), R (reaffirm physical health and perceptions of security and safety), E…

  17. Utility of Social Modeling for Proliferation Assessment - Enhancing a Facility-Level Model for Proliferation Resistance Assessment of a Nuclear Enegry System

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Garill A.; Brothers, Alan J.; Gastelum, Zoe N.; Olson, Jarrod; Thompson, Sandra E.

    2009-10-26

    The Utility of Social Modeling for Proliferation Assessment project (PL09-UtilSocial) investigates the use of social and cultural information to improve nuclear proliferation assessments, including nonproliferation assessments, Proliferation Resistance (PR) assessments, safeguards assessments, and other related studies. These assessments often use and create technical information about a host State and its posture towards proliferation, the vulnerability of a nuclear energy system (NES) to an undesired event, and the effectiveness of safeguards. The objective of this project is to find and integrate social and technical information by explicitly considering the role of cultural, social, and behavioral factors relevant to proliferation, and to describe and demonstrate if and how social science modeling has utility in proliferation assessment. This report describes a modeling approach and how it might be used to support a location-specific PR assessment of a particular NES. The report demonstrates the use of social modeling to enhance an existing assessment process that relies primarily on technical factors. This effort builds on a literature review and preliminary assessment performed as the first stage of the project and compiled in PNNL-18438. This report describes an effort to answer questions about whether it is possible to incorporate social modeling into a PR assessment in such a way that we can determine the effects of social factors on a primarily technical assessment. This report provides: 1. background information about relevant social factors literature; 2. background information about a particular PR assessment approach relevant to this particular demonstration; 3. a discussion of social modeling undertaken to find and characterize social factors that are relevant to the PR assessment of a nuclear facility in a specific location; 4. description of an enhancement concept that integrates social factors into an existing, technically

  18. WE-G-204-02: Utility of a Channelized Hotelling Model Observer Over a Large Range of Angiographic Exposure Levels

    International Nuclear Information System (INIS)

    Fetterly, K; Favazza, C

    2015-01-01

    Purpose: Mathematical model observers provide a figure of merit that simultaneously considers a test object and the contrast, noise, and spatial resolution properties of an imaging system. The purpose of this work was to investigate the utility of a channelized Hotelling model observer (CHO) to assess system performance over a large range of angiographic exposure conditions. Methods: A 4 mm diameter disk-shaped, iodine contrast test object was placed on a 20 cm thick Lucite phantom and 1204 image frames were acquired using fixed x-ray beam quality and for several detector target dose (DTD) values in the range 6 to 240 nGy. The CHO was implemented in the spatial domain utilizing 96 Gabor functions as channels. Detectability index (DI) estimates were calculated using the “resubstitution” and “holdout” methods to train the CHO. Also, DI values calculated using discrete subsets of the data were used to estimate a minimally biased DI as might be expected from an infinitely large dataset. The relationship between DI, independently measured CNR, and changes in results expected assuming a quantum limited detector were assessed over the DTD range. Results: CNR measurements demonstrated that the angiography system is not quantum limited due to relatively increasing contamination from electronic noise that reduces CNR for low DTD. Direct comparison of DI versus CNR indicates that the CHO relatively overestimates DI for low DTD and/or underestimates DI values for high DTD. The relative magnitude of the apparent bias error in the DI values was ∼20% over the 40x DTD range investigated. Conclusion: For the angiography system investigated, the CHO can provide a minimally biased figure of merit if implemented over a restricted exposure range. However, bias leads to overestimates of DI for low exposures. This work emphasizes the need to verify CHO model performance during real-world application.
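
    The core of a CHO is simple: project each image onto a set of channel responses, then compute a Hotelling detectability index from the channelized means and covariances. A rough sketch with synthetic Gaussian noise images and random channels standing in for the 96 Gabor channels (all sizes and values here are illustrative, and training is plain resubstitution):

```python
import numpy as np

rng = np.random.default_rng(0)

n_pix, n_ch, n_img = 64, 10, 500
channels = rng.standard_normal((n_pix, n_ch))  # stand-in for Gabor channels
signal = np.zeros(n_pix)
signal[24:40] = 1.0                            # disk-like test-object profile

noise_imgs = rng.standard_normal((n_img, n_pix))            # signal absent
signal_imgs = rng.standard_normal((n_img, n_pix)) + signal  # signal present

def cho_detectability(sp, sa, u):
    """Detectability index d' from signal-present (sp) and signal-absent (sa)
    images projected onto channels u (resubstitution training)."""
    vs, vn = sp @ u, sa @ u                    # channel outputs
    dv = vs.mean(axis=0) - vn.mean(axis=0)     # channelized signal
    s = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(s, dv)))

di = cho_detectability(signal_imgs, noise_imgs, channels)
print(di > 0)  # → True
```

    The bias issue the abstract describes arises exactly here: with finite image sets, estimating the channel covariance from the same data used to score the observer inflates or deflates d' depending on dose-dependent noise properties.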

  19. WE-G-204-02: Utility of a Channelized Hotelling Model Observer Over a Large Range of Angiographic Exposure Levels

    Energy Technology Data Exchange (ETDEWEB)

    Fetterly, K; Favazza, C [Mayo Clinic, Rochester, MN (United States)]

    2015-06-15

    Purpose: Mathematical model observers provide a figure of merit that simultaneously considers a test object and the contrast, noise, and spatial resolution properties of an imaging system. The purpose of this work was to investigate the utility of a channelized Hotelling model observer (CHO) to assess system performance over a large range of angiographic exposure conditions. Methods: A 4 mm diameter disk shaped, iodine contrast test object was placed on a 20 cm thick Lucite phantom and 1204 image frames were acquired using fixed x-ray beam quality and for several detector target dose (DTD) values in the range 6 to 240 nGy. The CHO was implemented in the spatial domain utilizing 96 Gabor functions as channels. Detectability index (DI) estimates were calculated using the “resubstitution” and “holdout” methods to train the CHO. Also, DI values calculated using discrete subsets of the data were used to estimate a minimally biased DI as might be expected from an infinitely large dataset. The relationship between DI, independently measured CNR, and changes in results expected assuming a quantum limited detector were assessed over the DTD range. Results: CNR measurements demonstrated that the angiography system is not quantum limited due to relatively increasing contamination from electronic noise that reduces CNR for low DTD. Direct comparison of DI versus CNR indicates that the CHO relatively overestimates DI for low DTD and/or underestimates DI values for high DTD. The relative magnitude of the apparent bias error in the DI values was ∼20% over the 40x DTD range investigated. Conclusion: For the angiography system investigated, the CHO can provide a minimally biased figure of merit if implemented over a restricted exposure range. However, bias leads to overestimates of DI for low exposures. This work emphasizes the need to verify CHO model performance during real-world application.

  20. Clinical utility of the DSM-5 alternative model for borderline personality disorder: Differential diagnostic accuracy of the BFI, SCID-II-PQ, and PID-5.

    Science.gov (United States)

    Fowler, J Christopher; Madan, Alok; Allen, Jon G; Patriquin, Michelle; Sharp, Carla; Oldham, John M; Frueh, B Christopher

    2018-01-01

    With the publication of the DSM-5 alternative model for personality disorders, it is critical to assess the components of the model against evidence-based models such as the five-factor model and the DSM-IV-TR categorical model. This study explored the relative clinical utility of these models in screening for borderline personality disorder (BPD). Receiver operating characteristics and diagnostic efficiency statistics were calculated for three personality measures to ascertain the relative diagnostic efficiency of each measure. A total of 1,653 adult inpatients at a specialist psychiatric hospital completed SCID-II interviews. Sample 1 (n=653) completed the SCID-II interviews, SCID-II Questionnaire (SCID-II-PQ), and the Big Five Inventory (BFI), while Sample 2 (n=1,000) completed the SCID-II interviews, Personality Inventory for DSM-5 (PID-5), and the BFI. The BFI evidenced moderate accuracy for two composites: the high Neuroticism + low Agreeableness composite (AUC=0.72, SE=0.01, p<…) … trait constellation for diagnosing BPD. Limitations of the study include the single inpatient setting and the use of two discrete samples to assess the PID-5 and SCID-II-PQ. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. A Review of Generic Preference-Based Measures for Use in Cost-Effectiveness Models.

    Science.gov (United States)

    Brazier, John; Ara, Roberta; Rowen, Donna; Chevrou-Severac, Helene

    2017-12-01

    Generic preference-based measures (GPBMs) of health are used to obtain the quality adjustment weight required to calculate the quality-adjusted life year in health economic models. GPBMs have been developed to use across different interventions and medical conditions and typically consist of a self-complete patient questionnaire, a health state classification system, and preference weights for all states defined by the classification system. Of the six main GPBMs, the three most frequently used are the Health Utilities Index version 3, the EuroQol 5 dimensions (3 and 5 levels), and the Short Form 6 dimensions. There are considerable differences in GPBMs in terms of the content and size of descriptive systems (i.e. the numbers of dimensions of health and levels of severity within these), the methods of valuation [e.g. time trade-off (TTO), standard gamble (SG)], and the populations (e.g. general population, patients) used to value the health states within the descriptive systems. Although GPBMs are anchored at 1 (full health) and 0 (dead), they produce different health state utility values when completed by the same patient. Considerations when selecting a measure for use in a clinical trial include practicality, reliability, validity and responsiveness. Requirements of reimbursement agencies may impose additional restrictions on suitable measures for use in economic evaluations, such as the valuation technique (TTO, SG) or the source of values (general public vs. patients).
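
    The QALY arithmetic these measures feed into is straightforward: each period of time is weighted by the utility value of the health state occupied during it. A toy sketch (the durations and utility values are hypothetical):

```python
def qalys(health_states):
    """QALYs for a sequence of (duration_years, utility) health states,
    with utility on the 0 (dead) to 1 (full health) scale."""
    return sum(duration * utility for duration, utility in health_states)

# Hypothetical patient: 2 years at utility 0.8, then 3 years at utility 0.6
print(round(qalys([(2, 0.8), (3, 0.6)]), 2))  # → 3.4
```

    Because, as the review notes, different GPBMs assign different utility values to the same patient, the same health profile can yield different QALY totals depending on the instrument chosen.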

  2. On the Path to SunShot. Utility Regulatory and Business Model Reforms for Addressing the Financial Impacts of Distributed Solar on Utilities

    Energy Technology Data Exchange (ETDEWEB)

    Barbose, Galen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Miller, John [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sigrin, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States); Reiter, Emerson [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cory, Karlynn [National Renewable Energy Lab. (NREL), Golden, CO (United States); McLaren, Joyce [National Renewable Energy Lab. (NREL), Golden, CO (United States); Seel, Joachim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Darghouth, Naim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    Net-energy metering (NEM) has helped drive the rapid growth of distributed PV (DPV) but has raised concerns about electricity cost shifts, utility financial losses, and inefficient resource allocation. These concerns have motivated real and proposed reforms to utility regulatory and business models. This report explores the challenges and opportunities associated with such reforms in the context of the U.S. Department of Energy's SunShot Initiative. Most of the reforms to date address NEM concerns by reducing the benefits provided to DPV customers and thus constraining DPV deployment. Eliminating NEM nationwide, by compensating exports of PV electricity at wholesale rather than retail rates, could cut cumulative DPV deployment by 20% in 2050 compared with a continuation of current policies. This would slow the PV cost reductions that arise from larger scale and market certainty. It could also thwart achievement of the SunShot deployment goals even if the initiative's cost targets are achieved. This undesirable prospect is stimulating the development of alternative reform strategies that address concerns about distributed PV compensation without inordinately harming PV economics and growth. These alternatives fall into the categories of facilitating higher-value DPV deployment, broadening customer access to solar, and aligning utility profits and earnings with DPV. Specific strategies include utility ownership and financing of DPV, community solar, distribution network operators, services-driven utilities, performance-based incentives, enhanced utility system planning, pricing structures that incentivize high-value DPV configurations, and decoupling and other ratemaking reforms that reduce regulatory lag. These approaches represent near- and long-term solutions for preserving the legacy of the SunShot Initiative.

  3. Solar Measurement and Modeling | Grid Modernization | NREL

    Science.gov (United States)

    NREL supports grid integration studies, industry, government, and academia by disseminating solar resource measurements, models, and best practices. NREL measurement stations have continuously gathered basic solar radiation information, and they now gather high-resolution data.

  4. USING RESPIROMETRY TO MEASURE HYDROGEN UTILIZATION IN SULFATE REDUCING BACTERIA IN THE PRESENCE OF COPPER AND ZINC

    Science.gov (United States)

    A respirometric method has been developed to measure hydrogen utilization by sulfate-reducing bacteria (SRB). One application of this method has been to test the inhibitory effects of metals on the SRB culture used in a novel acid mine drainage treatment technology. As a control param...

  5. Utility of Social Modeling in Assessment of a State's Propensity for Nuclear Proliferation

    International Nuclear Information System (INIS)

    Coles, Garill A.; Brothers, Alan J.; Whitney, Paul D.; Dalton, Angela C.; Olson, Jarrod; White, Amanda M.; Cooley, Scott K.; Youchak, Paul M.; Stafford, Samuel V.

    2011-01-01

    This report is the third and final report in a set of three documenting research for the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) Office of Nonproliferation Research and Development NA-22 Simulations, Algorithms, and Modeling program, which investigates how social modeling can be used to improve proliferation assessment for informing nuclear security, policy, safeguards, design of nuclear systems, and research decisions. Social modeling has not been used to any significant extent in proliferation studies. This report focuses on the utility of social modeling as applied to the assessment of a State's propensity to develop a nuclear weapons program.

  6. Can the measurement of brachial artery flow-mediated dilation be applied to the acute exercise model?

    Directory of Open Access Journals (Sweden)

    Harris Ryan A

    2007-11-01

    The measurement of flow-mediated dilation using high-resolution ultrasound has been utilized extensively in interventional trials evaluating the salutary effect of drugs and lifestyle modifications (i.e. diet or exercise training) on endothelial function; however, until recently researchers have not used flow-mediated dilation to examine the role of a single bout of exercise on vascular function. Utilizing the acute exercise model can be advantageous as it allows for an efficient manipulation of exercise variables (i.e. mode, intensity, duration, etc.) and permits greater experimental control of confounding variables. Given that the application of flow-mediated dilation in the acute exercise paradigm is expanding, the purpose of this review is to discuss methodological and physiological factors pertinent to flow-mediated dilation in the context of acute exercise. Although the scientific rationale for evaluating endothelial function in response to acute exercise is sound, a few concerns warrant attention when interpreting flow-mediated dilation data following acute exercise. The following questions will be addressed in the present review: Does the measurement of flow-mediated dilation influence subsequent serial measures of flow-mediated dilation? Do we need to account for diurnal variation? Is there an optimal time to measure post-exercise flow-mediated dilation? Is the post-exercise flow-mediated dilation reproducible? How is flow-mediated dilation interpreted considering the hemodynamic and sympathetic changes associated with acute exercise? Can the measurement of endothelium-independent dilation affect the exercise response? Evidence exists to support the methodological appropriateness of employing flow-mediated dilation in the acute exercise model; however, further research is warranted to clarify its interpretation following acute exercise.

  7. Numerical comparison of grid pattern diffraction effects through measurement and modeling with OptiScan software

    Science.gov (United States)

    Murray, Ian B.; Densmore, Victor; Bora, Vaibhav; Pieratt, Matthew W.; Hibbard, Douglas L.; Milster, Tom D.

    2011-06-01

    Coatings of various metalized patterns are used for heating and electromagnetic interference (EMI) shielding applications. Previous work has focused on macro differences between different types of grids, and has shown good correlation between measurements and analyses of grid diffraction. To advance this work, we have utilized the University of Arizona's OptiScan software, which has been optimized for this application by using the Babinet Principle. When operating on an appropriate computer system, this algorithm produces results hundreds of times faster than standard Fourier-based methods, and allows realistic cases to be modeled for the first time. By using previously published derivations by Exotic Electro-Optics, we compare diffraction performance of repeating and randomized grid patterns with equivalent sheet resistance using numerical performance metrics. Grid patterns of each type are printed on optical substrates and measured energy is compared against modeled energy.
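
    The Babinet speed-up the authors exploit rests on a simple Fourier fact: under the Fraunhofer approximation, a grid and its complement differ only in the undiffracted (DC) term of their far fields, so their off-axis diffracted energy is identical. A compact numerical check (the grid geometry below is an arbitrary illustration, not one of the published patterns):

```python
import numpy as np

def far_field_intensity(aperture):
    """Fraunhofer far-field intensity pattern: |FFT(aperture)|^2."""
    return np.abs(np.fft.fft2(aperture)) ** 2

n = 64
y, x = np.mgrid[0:n, 0:n]
grid = ((x % 8 < 2) | (y % 8 < 2)).astype(float)  # transmission of grid lines
complement = 1.0 - grid                           # Babinet complement

i_grid = far_field_intensity(grid)
i_comp = far_field_intensity(complement)
i_grid[0, 0] = i_comp[0, 0] = 0.0                 # drop the DC (zeroth) order
print(np.allclose(i_grid, i_comp))  # → True
```

    This identity lets a diffraction solver model whichever of the pattern or its complement is cheaper to represent and still recover the same off-axis energy metrics.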

  8. Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic

    Science.gov (United States)

    Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat

    2017-03-01

    The problem of investing in financial assets is to choose a combination of portfolio weights that maximizes expected return and minimizes risk. This paper discusses the modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic. It is assumed that asset returns follow a certain distribution, and the risk of the portfolio is measured using Value-at-Risk (VaR). The optimization of the portfolio is thus based on the Mean-VaR portfolio optimization model, solved using a matrix algebra approach, the Lagrange multiplier method, and the Kuhn-Tucker conditions. The result of the modeling is a weighting-vector equation that depends on the vector of mean asset returns, the identity vector, the covariance matrix of asset returns, and the risk-tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. From the return data of the five stocks, the weight-composition vector and graphs of the portfolio's efficient surface are obtained, which can serve as a guide for investors in investment decisions.
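
    Under a normality assumption, a portfolio's VaR is a linear function of its mean and standard deviation, so the Mean-VaR problem with risk tolerance reduces to a quadratic program whose budget-constrained optimum has a closed-form Lagrange solution. A sketch of that core step (the objective form, symbols, and numbers are illustrative, and this omits the paper's Kuhn-Tucker refinements):

```python
import numpy as np

def mean_var_weights(mu, cov, tau):
    """Weights maximizing tau * mu'w - 0.5 * w' Sigma w subject to sum(w) = 1,
    obtained with a single Lagrange multiplier (tau = risk tolerance)."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    lam = (tau * ones @ inv @ mu - 1.0) / (ones @ inv @ ones)
    return inv @ (tau * mu - lam * ones)

mu = np.array([0.10, 0.07, 0.05])   # hypothetical mean returns
cov = np.diag([0.04, 0.02, 0.01])   # hypothetical covariance matrix
w = mean_var_weights(mu, cov, tau=0.5)
print(round(float(w.sum()), 6))  # → 1.0
```

    Setting tau = 0 recovers the minimum-variance portfolio; increasing tau tilts the weights toward higher-mean assets, tracing out the efficient set the paper graphs.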

  9. Utility Function for modeling Group Multicriteria Decision Making problems as games

    OpenAIRE

    Alexandre Bevilacqua Leoneti

    2016-01-01

    To assist in the decision making process, several multicriteria methods have been proposed. However, the existing methods assume a single decision-maker and do not consider decision under risk, which is better addressed by Game Theory. Hence, the aim of this research is to propose a Utility Function that makes it possible to model Group Multicriteria Decision Making problems as games. The advantage of using Game Theory for solving Group Multicriteria Decision Making problems is to evaluate th...

  10. Parametric model measurement: reframing traditional measurement ideas in neuropsychological practice and research.

    Science.gov (United States)

    Brown, Gregory G; Thomas, Michael L; Patt, Virginie

    Neuropsychology is an applied measurement field whose psychometric work is primarily built upon classical test theory (CTT). We describe a series of psychometric models to supplement the use of CTT in neuropsychological research and test development. We introduce increasingly complex psychometric models as measurement algebras, which include model parameters that represent abilities and item properties. Within this framework of parametric model measurement (PMM), neuropsychological assessment involves the estimation of model parameters, with ability parameter values assuming the role of test 'scores'. Moreover, the traditional notion of measurement error is replaced by the notion of parameter estimation error, and the definition of reliability becomes linked to notions of item and test information. The more complex PMM approaches incorporate formal parametric models of behavior, validated in the experimental psychology literature, along with item parameters, into the assessment of neuropsychological performance. These PMM approaches endorse the use of experimental manipulations of model parameters to assess a test's construct representation. Strengths and weaknesses of these models are evaluated by their implications for measurement error conditional upon ability level, sensitivity to sample characteristics, computational challenges to parameter estimation, and construct validity. A family of parametric psychometric models can be used to assess latent processes of interest to neuropsychologists. By modeling latent abilities at the item level, psychometric studies in neuropsychology can investigate construct validity and measurement precision within a single framework and contribute to a unification of statistical methods within the framework of generalized latent variable modeling.
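
    In the simplest instance of this idea, the Rasch model, the 'score' is a latent ability parameter estimated from item responses and item difficulty parameters rather than a raw sum of correct answers. A hedged sketch of a Newton-Raphson maximum-likelihood ability estimate (the difficulties and response pattern are invented; this is generic item response theory, not a specific instrument from the article):

```python
import math

def rasch_ability(responses, difficulties, iters=50):
    """Maximum-likelihood ability estimate under the Rasch model,
    P(correct) = 1 / (1 + exp(-(theta - b_j))), via Newton-Raphson."""
    theta = 0.0
    for _ in range(iters):
        p = [1 / (1 + math.exp(-(theta - b))) for b in difficulties]
        grad = sum(x - pj for x, pj in zip(responses, p))  # score function
        info = sum(pj * (1 - pj) for pj in p)              # Fisher information
        theta += grad / info
    return theta

# Hypothetical item difficulties and a mixed response pattern (1 = correct)
difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]
print(round(rasch_ability([1, 1, 0, 1, 0], difficulties), 2))
```

    The Fisher information computed in the loop is also what replaces CTT reliability here: the standard error of theta is 1/sqrt(info), and it varies with ability level rather than being a single test-wide constant.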

  11. Indoor MIMO Channel Measurement and Modeling

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ødum; Andersen, Jørgen Bach

    2005-01-01

    Forming accurate models of the multiple input multiple output (MIMO) channel is essential both for simulation and for understanding of the basic properties of the channel. This paper investigates different known models using measurements obtained with a 16x32 MIMO channel sounder for the 5.8 GHz band. The measurements were carried out in various indoor scenarios including both temporal and spatial aspects of channel changes. The models considered include the so-called Kronecker model, a model proposed by Weichselberger et al., and a model involving the full covariance matrix, the most…
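
    The Kronecker model mentioned above assumes the full channel correlation factors into separate transmit- and receive-side parts, so realizations can be drawn as H = R_rx^(1/2) G R_tx^(1/2) with i.i.d. complex Gaussian G. A sketch using exponential correlation matrices (the array sizes and correlation coefficients are arbitrary choices for illustration, not the paper's 16x32 setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def exp_corr(n, rho):
    """Exponential correlation matrix, R[i, j] = rho^|i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def psd_sqrt(r):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(r)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T

def kronecker_channel(r_rx, r_tx):
    """One narrowband MIMO realization: H = R_rx^(1/2) G R_tx^(1/2)."""
    nr, nt = r_rx.shape[0], r_tx.shape[0]
    g = (rng.standard_normal((nr, nt))
         + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)  # i.i.d. CN(0, 1)
    return psd_sqrt(r_rx) @ g @ psd_sqrt(r_tx)

r_rx, r_tx = exp_corr(4, 0.7), exp_corr(2, 0.5)
h = kronecker_channel(r_rx, r_tx)
print(h.shape)  # → (4, 2)
```

    The model's known limitation, which measurement papers like this one probe, is exactly this separability: real indoor channels can exhibit joint transmit-receive structure that no pair of one-sided correlation matrices reproduces.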

  12. High utility-itemset mining and privacy-preserving utility mining

    Directory of Open Access Journals (Sweden)

    Jerry Chun-Wei Lin

    2016-03-01

    In recent decades, high-utility itemset mining (HUIM) has emerged as a critical research topic, since both quantity and profit factors are considered when mining high-utility itemsets (HUIs). Generally, data mining is commonly used to discover interesting and useful knowledge from massive data. It may, however, lead to privacy threats if private or secure information (e.g., HUIs) is published in a public place or misused. In this paper, we focus on the issues of HUIM and privacy-preserving utility mining (PPUM), and present two evolutionary algorithms to respectively mine HUIs and hide sensitive high-utility itemsets in PPUM. Extensive experiments showed that the two proposed models for the applications of HUIM and PPUM can not only generate high-quality profitable itemsets according to the user-specified minimum utility threshold, but also enable the capability of privacy preserving for private or secure information (e.g., HUIs) in real-world applications.
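
    The utility notion behind HUIM combines internal utility (purchase quantity) with external utility (unit profit): an itemset's utility sums quantity × profit over every transaction containing all of its items, and the HUIs are those meeting a user-specified minimum utility threshold. A brute-force sketch for tiny data (the data and threshold are made up; the paper's evolutionary algorithms exist precisely to avoid this exhaustive enumeration):

```python
from itertools import combinations

def high_utility_itemsets(transactions, profits, min_util):
    """Brute-force HUIM: each transaction maps items to quantities; an
    itemset's utility sums quantity * unit profit over every transaction
    that contains the whole itemset."""
    items = sorted({i for t in transactions for i in t})
    result = {}
    for r in range(1, len(items) + 1):
        for itemset in combinations(items, r):
            util = sum(
                sum(t[i] * profits[i] for i in itemset)
                for t in transactions
                if all(i in t for i in itemset)
            )
            if util >= min_util:
                result[itemset] = util
    return result

transactions = [{"a": 2, "b": 1}, {"a": 1, "c": 3}, {"a": 1, "b": 2, "c": 1}]
profits = {"a": 5, "b": 3, "c": 1}
print(high_utility_itemsets(transactions, profits, min_util=20))
# → {('a',): 20, ('a', 'b'): 24}
```

    Note that, unlike frequent-itemset support, utility is not anti-monotone: a low-utility item ('b' alone scores only 9) can still belong to a high-utility itemset, which is what makes the search space hard to prune.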

  13. Comparison of conventional study model measurements and 3D digital study model measurements from laser scanned dental impressions

    Science.gov (United States)

    Nugrahani, F.; Jazaldi, F.; Noerhadi, N. A. I.

    2017-08-01

    The field of orthodontics is always evolving, and this includes the use of innovative technology. One such technology is the three-dimensional (3D) digital study model, which replaces conventional study models made of stone. This study aims to compare mesio-distal tooth width, intercanine width, and intermolar width measurements between a 3D digital study model and a conventional study model. Twelve sets of upper-arch dental impressions were taken from subjects with non-crowded teeth. The impressions were taken twice, once with alginate and once with polyvinylsiloxane. The alginate impressions were used for the conventional study model, and the polyvinylsiloxane impressions were scanned to obtain the 3D digital study model. Scanning was performed using a laser triangulation scanner device assembled by the School of Electrical Engineering and Informatics at the Institut Teknologi Bandung and David Laser Scan software. For the conventional model, the mesio-distal width, intercanine width, and intermolar width were measured using digital calipers; in the 3D digital study model they were measured using software. There were no significant differences in the mesio-distal width, intercanine width, and intermolar width measurements between the conventional and 3D digital study models (p>0.05). Thus, measurements using 3D digital study models are as accurate as those obtained from conventional study models.

  14. Brain in flames – animal models of psychosis: utility and limitations

    Directory of Open Access Journals (Sweden)

    Mattei D

    2015-05-01

    Daniele Mattei,1 Regina Schweibold,1,2 Susanne A Wolf1 1Department of Cellular Neuroscience, Max-Delbrueck-Center for Molecular Medicine, Berlin, Germany; 2Department of Neurosurgery, Helios Clinics, Berlin, Germany Abstract: The neurodevelopmental hypothesis of schizophrenia posits that schizophrenia is a psychopathological condition resulting from aberrations in neurodevelopmental processes caused by a combination of environmental and genetic factors, which proceed long before the onset of clinical symptoms. Many studies discuss an immunological component in the onset and progression of schizophrenia. Here we review studies utilizing animal models of schizophrenia with manipulations of genetic, pharmacologic, and immunological origin. We focus on the immunological component to bridge the studies in terms of evaluation and treatment options for negative, positive, and cognitive symptoms. Throughout the review we link certain aspects of each model to the situation in human schizophrenic patients. In conclusion, we suggest a combination of existing models to better represent the human situation. Moreover, we emphasize that animal models represent defined single or multiple symptoms or hallmarks of a given disease. Keywords: inflammation, schizophrenia, microglia, animal models

  15. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    Science.gov (United States)

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.; Tao, Yujie; Egolfopoulos, Fokion N.; Wang, Hai

    2016-01-01

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-C4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with C1-C2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-C4 hydrocarbons remains significant without fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate-controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel. PMID:27890938
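
    MUM-PCE propagates kinetic-parameter uncertainty through polynomial chaos expansions, but its net effect mirrors a standard linear-Gaussian update in which measurement constraints shrink the prior parameter covariance through the measurements' sensitivities. A deliberately simplified analogue (the sensitivities and variances below are invented, and this is not the actual MUM-PCE algorithm):

```python
import numpy as np

def constrained_cov(prior_cov, jac, meas_var):
    """Posterior parameter covariance after assimilating measurements with
    sensitivity (Jacobian) matrix jac and independent measurement variances."""
    r_inv = np.diag(1.0 / np.asarray(meas_var, dtype=float))
    return np.linalg.inv(np.linalg.inv(prior_cov) + jac.T @ r_inv @ jac)

prior = np.eye(2)             # two normalized rate parameters, unit prior variance
jac = np.array([[1.0, 0.5]])  # one flame-speed target's parameter sensitivities
post = constrained_cov(prior, jac, [0.1])
print(np.diag(post) <= np.diag(prior))  # → [ True  True]
```

    The abstract's key finding maps onto this picture: a target constrains only the parameters it is sensitive to, so methane flame speeds tighten shared alkane pathways while leaving the distinct rate-controlling steps of unsaturated fuels essentially at their prior uncertainty.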

  16. Estimating Utility

    DEFF Research Database (Denmark)

    Arndt, Channing; Simler, Kenneth R.

    2010-01-01

    A fundamental premise of absolute poverty lines is that they represent the same level of utility through time and space. Disturbingly, a series of recent studies in middle- and low-income economies show that even carefully derived poverty lines rarely satisfy this premise. This article proposes an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones.

  17. Methodology and results of the impacts of modeling electric utilities: a comparative evaluation of MEMM and REM

    International Nuclear Information System (INIS)

    1981-09-01

    This study compares two models of the US electric utility industry: the EIA's electric utility submodel in the Midterm Energy Market Model (MEMM) and the Baughman-Joskow Regionalized Electricity Model (REM). The method of comparison emphasizes reconciliation of differences in data common to both models and the performance of simulation experiments to evaluate the empirical significance of certain structural differences in the models. The major research goal was to contrast and compare the effects of alternative modeling structures and data assumptions on model results, and particularly to consider each model's approach to the impacts of generation technology and fuel use choices on electric utilities. The methodology was to run the REM model first without, and then with, a representation of the Power Plant and Industrial Fuel Use Act of 1978, assuming medium supply and demand curves and varying fuel prices. The models and data structures of the two models are described. The original 1978 data used in MEMM and REM are analyzed and compared. The computations and effects of different assumptions on fuel use decisions are discussed. The adjusted REM data required for the experiments are presented. Simulation results of the two models are compared. These results represent projections for 1985, 1990, and 1995 of: US power generation by plant type; amounts of each type of fuel used for power generation; average electricity prices; and the effects of additional or fewer nuclear and coal-fired plants. A significant result is that the REM model exhibits about 7 times as much gas and oil consumption in 1995 as the MEMM model. Continuing simulation experiments on MEMM are recommended to determine whether the input data to MEMM are reasonable and properly adjusted.

  18. Model plant Key Measurement Points

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    For IAEA safeguards, a Key Measurement Point is defined as a location where nuclear material appears in such a form that it may be measured to determine material flow or inventory. This presentation introduces the key measurement points and associated measurements for the model plant used in this training course.

  19. The Effect of Utilizing Organizational Culture Improvement Model of Patient Education on Coronary Artery Bypass Graft Patients' Anxiety and Satisfaction: Theory Testing.

    Science.gov (United States)

    Farahani, Mansoureh Ashghali; Ghaffari, Fatemeh; Norouzinezhad, Faezeh; Orak, Roohangiz Jamshidi

    2016-11-01

    Due to the increasing prevalence of arteriosclerosis and the mortality caused by this disease, Coronary Artery Bypass Graft (CABG) has become one of the most common surgical procedures. Patient education is recognized as an effective means of improving patient survival and treatment outcomes. However, failure to consider the different aspects of patient education has made this goal unattainable. The objective of this research was to determine the effect of utilizing the organizational culture improvement model of patient education on CABG patients' anxiety and satisfaction. The present study is a randomized controlled trial conducted on eighty CABG patients. The patients were selected from the CCU and Post-CCU wards of a hospital affiliated with Iran University of Medical Sciences in Tehran, Iran, during 2015. Spielberger's Anxiety Inventory and a Patients' Satisfaction Questionnaire were used to collect the required information. Levels of patient anxiety and satisfaction were measured before the intervention and at discharge. The intervention took place after preparing a programmed package based on the organizational culture improvement model for the following dimensions: effective communication, participatory decision-making, goal setting, planning, implementation and recording, supervision and control, and improvement of motivation. The data were analyzed using chi-square, independent t, and Mann-Whitney U tests, with a significance level of 0.05. SPSS version 18 was utilized for the analysis. The results revealed that mean scores of situational and personality anxiety decreased in both the control and experiment groups following the intervention, but the decrease was greater in the experiment group (p≤0.0001). In addition, the improvement in mean scores of patients' satisfaction with education was greater in the experiment group.

  20. Business Model Innovation for Local Energy Management: A Perspective from Swiss Utilities

    Energy Technology Data Exchange (ETDEWEB)

    Facchinetti, Emanuele, E-mail: emanuele.facchinetti@hslu.ch [Lucerne Competence Center for Energy Research, Lucerne University of Applied Science and Arts, Horw (Switzerland); Eid, Cherrelle [Faculty of Technology, Policy and Management, Delft University of Technology, Delft (Netherlands); Bollinger, Andrew [Urban Energy Systems Laboratory, EMPA, Dübendorf (Switzerland); Sulzer, Sabine [Lucerne Competence Center for Energy Research, Lucerne University of Applied Science and Arts, Horw (Switzerland)

    2016-08-04

    The successful deployment of the energy transition relies on a deep reorganization of the energy market. Business model innovation is recognized as a key driver of this process. This work contributes to this topic by providing potential local energy management (LEM) stakeholders and policy makers with a conceptual framework guiding LEM business model innovation. The main determinants characterizing LEM concepts and impacting their business model innovation are identified through literature reviews on distributed generation typologies and customer/investor preferences related to new business opportunities emerging with the energy transition. Afterwards, the relation between the identified determinants and the LEM business model solution space is analyzed based on semi-structured interviews with managers of Swiss utility companies. The collected managers' preferences serve as explorative indicators supporting the business model innovation process and provide insights to policy makers on challenges and opportunities related to LEM.

  1. Business Model Innovation for Local Energy Management: A Perspective from Swiss Utilities

    International Nuclear Information System (INIS)

    Facchinetti, Emanuele; Eid, Cherrelle; Bollinger, Andrew; Sulzer, Sabine

    2016-01-01

    The successful deployment of the energy transition relies on a deep reorganization of the energy market. Business model innovation is recognized as a key driver of this process. This work contributes to this topic by providing potential local energy management (LEM) stakeholders and policy makers with a conceptual framework guiding LEM business model innovation. The main determinants characterizing LEM concepts and impacting their business model innovation are identified through literature reviews on distributed generation typologies and customer/investor preferences related to new business opportunities emerging with the energy transition. Afterwards, the relation between the identified determinants and the LEM business model solution space is analyzed based on semi-structured interviews with managers of Swiss utility companies. The collected managers' preferences serve as explorative indicators supporting the business model innovation process and provide insights to policy makers on challenges and opportunities related to LEM.

  2. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Nikhar; Tom, Nathan

    2017-09-01

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.
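
The forecasting step described above can be sketched with a plain autoregressive model fitted by least squares. The synthetic wave record, AR order, and horizon below are assumptions for illustration; the Kalman-filter state estimation used in the preprint is omitted:

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): fit an AR model to
# a wave-excitation record and produce the short-horizon forecast a model
# predictive controller would consume. Signal parameters are assumed.
rng = np.random.default_rng(0)
t = np.arange(0, 200, 0.1)
# Synthetic irregular excitation: two components plus measurement noise
f = np.sin(0.8 * t) + 0.5 * np.sin(1.3 * t + 0.4) + 0.05 * rng.standard_normal(t.size)

p = 8  # AR order (assumed)
# Least-squares regression matrix: f[k] ~ sum_i a_i * f[k - i - 1]
X = np.column_stack([f[p - i - 1 : -i - 1] for i in range(p)])
y = f[p:]
a, *_ = np.linalg.lstsq(X, y, rcond=None)

def ar_forecast(history, coeffs, steps):
    """Iterate the AR recursion to forecast `steps` samples ahead."""
    buf = list(history[-len(coeffs):])
    out = []
    for _ in range(steps):
        nxt = np.dot(coeffs, buf[::-1])   # buf[-1] is the most recent sample
        out.append(nxt)
        buf = buf[1:] + [nxt]
    return np.array(out)

horizon = 20  # 2 s ahead at 0.1 s sampling
pred = ar_forecast(f[:-horizon], a, horizon)
rmse = np.sqrt(np.mean((pred - f[-horizon:]) ** 2))
```

In a full MPC loop the forecast would be refreshed each sample and fed into the finite-horizon objective; here only the prediction step is shown.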

  3. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
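
The Boolean viewshed mentioned above reduces, per target cell, to a line-of-sight test against the surface model. A minimal sketch (nearest-neighbour sampling of the sight line; the earth-curvature and refraction corrections of production viewshed tools are omitted, and all parameters are assumed):

```python
import numpy as np

def visible(dem, observer, target, obs_height=1.6):
    """True if `target` cell is visible from `observer` cell on grid `dem`."""
    (r0, c0), (r1, c1) = observer, target
    z0 = dem[r0, c0] + obs_height        # observer eye elevation
    n = max(abs(r1 - r0), abs(c1 - c0))  # samples along the sight line
    if n == 0:
        return True
    for k in range(1, n):
        fr = k / n
        r = round(r0 + fr * (r1 - r0))   # nearest cell on the line
        c = round(c0 + fr * (c1 - c0))
        # Elevation of the straight sight line at this fraction of the path
        line_z = z0 + fr * (dem[r1, c1] - z0)
        if dem[r, c] > line_z:           # terrain rises above the sight line
            return False
    return True

# Flat plane with a single ridge between observer and target
dem = np.zeros((5, 9))
dem[:, 4] = 10.0                          # ridge blocks the line of sight
blocked = visible(dem, (2, 0), (2, 8))    # False: ridge in the way
dem[:, 4] = 0.0
clear = visible(dem, (2, 0), (2, 8))      # True: unobstructed plane
```

An extended viewshed of the kind the article describes would, instead of returning a Boolean, record quantities such as the angle of the sight line above the local horizon for each target cell.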

  4. Does Methodological Guidance Produce Consistency? A Review of Methodological Consistency in Breast Cancer Utility Value Measurement in NICE Single Technology Appraisals.

    Science.gov (United States)

    Rose, Micah; Rice, Stephen; Craig, Dawn

    2017-07-05

    Since 2004, National Institute for Health and Care Excellence (NICE) methodological guidance for technology appraisals has emphasised a strong preference for using the validated EuroQol 5-Dimensions (EQ-5D) quality-of-life instrument, measuring patient health status from patients or carers, and using the general public's preference-based valuation of different health states when assessing health benefits in economic evaluations. The aim of this study was to review all NICE single technology appraisals (STAs) for breast cancer treatments to explore consistency in the use of utility scores in light of NICE methodological guidance. A review of all published breast cancer STAs was undertaken using all publicly available STA documents for each included assessment. Utility scores were assessed for consistency with NICE-preferred methods and original data sources. Furthermore, academic assessment group work undertaken during the STA process was examined to evaluate the emphasis of NICE-preferred quality-of-life measurement methods. Twelve breast cancer STAs were identified, and many STAs used evidence that did not follow NICE's preferred utility score measurement methods. Recent STA submissions show companies using EQ-5D and mapping. Academic assessment groups rarely emphasized NICE-preferred methods, and queries about preferred methods were rare. While there appears to be a trend in recent STA submissions towards following NICE methodological guidance, historically STA guidance in breast cancer has generally not used NICE's preferred methods. Future STAs in breast cancer and reviews of older guidance should ensure that utility measurement methods are consistent with the NICE reference case to help produce consistent, equitable decision making.

  5. Measurement of Function Post Hip Fracture: Testing a Comprehensive Measurement Model of Physical Function.

    Science.gov (United States)

    Resnick, Barbara; Gruber-Baldini, Ann L; Hicks, Gregory; Ostir, Glen; Klinedinst, N Jennifer; Orwig, Denise; Magaziner, Jay

    2016-07-01

    Measurement of physical function post hip fracture has been conceptualized using multiple different measures. This study tested a comprehensive measurement model of physical function. This was a descriptive secondary data analysis including 168 men and 171 women post hip fracture. Using structural equation modeling, a measurement model of physical function which included grip strength, activities of daily living, instrumental activities of daily living, and performance was tested for fit at 2 and 12 months post hip fracture, and among male and female participants. Validity of the measurement model of physical function was evaluated based on how well the model explained physical activity, exercise, and social activities post hip fracture. The measurement model of physical function fit the data. The amount of variance the model or individual factors of the model explained varied depending on the activity. Decisions about the ideal way in which to measure physical function should be based on outcomes considered and participants. The measurement model of physical function is a reliable and valid method to comprehensively measure physical function across the hip fracture recovery trajectory. © 2015 Association of Rehabilitation Nurses.

  6. A Proposed Conceptual Model to Measure Unwarranted Practice Variation

    National Research Council Canada - National Science Library

    Barr, Andrew M

    2007-01-01

    .... Employing a unit of analysis of the U.S. Army healthcare system and utilizing research by Wennberg and the Institute of Medicine, a model describing healthcare quality in terms of unwarranted practice variation and healthcare outcomes...

  7. Utility Maximization in Nonconvex Wireless Systems

    CERN Document Server

    Brehmer, Johannes

    2012-01-01

    This monograph formulates a framework for modeling and solving utility maximization problems in nonconvex wireless systems. First, a model for utility optimization in wireless systems is defined. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed. The development is based on a careful examination of the properties that are required for the application of each method. The focus is on problems whose initial formulation does not allow for a solution by standard convex methods. Solution approaches that take into account the nonconvexities inherent to wireless systems are discussed in detail. The monograph concludes with two case studies that demonstrate the application of the proposed framework to utility maximization in multi-antenna broadcast channels.

  8. Dynamic decision making without expected utility

    DEFF Research Database (Denmark)

    Nielsen, Thomas Dyhre; Jaffray, Jean-Yves

    2006-01-01

    Non-expected utility theories, such as rank dependent utility (RDU) theory, have been proposed as alternative models to EU theory in decision making under risk. These models do not share the separability property of expected utility theory. This implies that, in a decision tree, if the reduction...... maker’s discordant goals at the different decision nodes. Relative to the computations involved in the standard expected utility evaluation of a decision problem, the main computational increase is due to the identification of non-dominated strategies by linear programming. A simulation, using the rank...

  9. Modified Smith-predictor multirate control utilizing secondary process measurements

    Directory of Open Access Journals (Sweden)

    Rolf Ergon

    2007-01-01

    Full Text Available The Smith-predictor is a well-known control structure for industrial time delay systems, where the basic idea is to estimate the non-delayed process output by use of a process model, and to use this estimate in an inner feedback control loop combined with an outer feedback loop based on the delayed estimation error. The model used may be either mechanistic or identified from input-output data. The paper discusses improvements of the Smith-predictor for systems where also secondary process measurements without time delay are available as a basis for the primary output estimation. The estimator may then be identified also in the common case with primary outputs sampled at a lower rate than the secondary outputs. A simulation example demonstrates the feasibility and advantages of the suggested control structure.
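
The basic single-rate, perfect-model Smith-predictor loop described above can be sketched in discrete time. Plant and controller parameters below are illustrative assumptions, and the article's multirate secondary-measurement extension is not shown:

```python
import numpy as np

# Smith predictor for a first-order plant with input delay, perfect model:
# the controller acts on the undelayed model output plus the delayed
# model error, removing the delay from the feedback loop.
a, b, d = 0.9, 0.1, 10      # plant pole, gain, delay in samples (assumed)
Kp, Ki = 2.0, 0.2           # PI controller gains (assumed)

N = 200
r = 1.0                     # unit step setpoint
y = np.zeros(N)             # true (delayed) plant output
ym = np.zeros(N)            # model output without delay
u_hist = np.zeros(N + d)    # control history feeding the delay line
integ = 0.0

for k in range(N - 1):
    # Delayed model output for the outer correction loop
    ym_delayed = ym[k - d] if k >= d else 0.0
    # Feedback: undelayed model prediction + delayed model error
    fb = ym[k] + (y[k] - ym_delayed)
    e = r - fb
    integ += e
    u = Kp * e + Ki * integ
    u_hist[k] = u
    # The plant sees the control input delayed by d samples
    u_delayed = u_hist[k - d] if k >= d else 0.0
    y[k + 1] = a * y[k] + b * u_delayed
    ym[k + 1] = a * ym[k] + b * u

final = y[-1]               # settles at the setpoint despite the delay
```

With a perfect model the delayed model error is zero and the PI loop effectively controls the undelayed model; the article's contribution is estimating that undelayed output from fast secondary measurements when the model must be identified from multirate data.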

  10. Performance of a methane-fueled single-cell SOFC stack at various levels of fuel utilization

    International Nuclear Information System (INIS)

    Ahmed, K.; Bolden, R.; Ramprakash and Foger, K.

    1998-01-01

    Fuel-gas mixtures representing 10 to 85% utilization of a methane-steam mixture at S/C = 2 were fed to a single cell stack with a Ni-based anode at 875 °C. Cell voltage and power output were recorded at current densities of 50 to 350 mA/cm². The anode off-gas compositions at some of these conditions were measured using an on-line gas chromatograph and compared with the compositions predicted by a thermodynamic model based on the assumption of no carbon formation. Electrical losses were measured at a chosen current density at various levels of fuel utilization by the galvanostatic current-interruption technique. Cell voltage stability was monitored for up to 1000 h at two levels of fuel utilization. The stack performance was simulated using a mathematical model of the stack, and the simulations were compared with the stack test data. Copyright (1998) Australasian Ceramic Society
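
For context, fuel utilization ties the cell current to the fuel feed through simple Faraday bookkeeping: fully reformed and shifted, one mole of CH4 yields eight electrons. A back-of-the-envelope sketch (the cell area is an assumption, not a value from the record):

```python
# Fuel feed required for a given current density and target utilization.
F = 96485.0                 # Faraday constant [C/mol]
area = 100.0                # active cell area [cm^2] (assumed)
j = 0.250                   # current density [A/cm^2] (250 mA/cm^2)
utilization = 0.85          # target fuel utilization (85%)

current = j * area          # total cell current [A]
e_rate = current / F        # electrons drawn, as mol e-/s
# CH4 feed such that 85% of its 8 electrons per molecule are consumed
ch4_rate = e_rate / (8.0 * utilization)   # mol CH4 / s
```

Lower utilization at the same current simply means a proportionally larger fuel feed, which is how the test mixtures spanning 10 to 85% utilization would be set.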

  11. Real-time interferometric monitoring and measuring of photopolymerization based stereolithographic additive manufacturing process: sensor model and algorithm

    International Nuclear Information System (INIS)

    Zhao, X; Rosen, D W

    2017-01-01

    As additive manufacturing is poised for growth and innovation, it faces the barrier of a lack of in-process metrology and control, which hinders its advance into wider industry applications. Exposure controlled projection lithography (ECPL) is a layerless mask-projection stereolithographic additive manufacturing process in which parts are fabricated from photopolymers on a stationary transparent substrate. To improve the process accuracy with closed-loop control for ECPL, this paper develops an interferometric curing monitoring and measuring (ICM&M) method which addresses the sensor modeling and algorithm issues. A physical sensor model for ICM&M is derived from interference optics utilizing the concept of instantaneous frequency. The associated calibration procedure is outlined to ensure ICM&M measurement accuracy. To solve the sensor model, particularly in real time, an online evolutionary parameter estimation algorithm is developed that adopts moving-horizon exponentially weighted Fourier curve fitting and numerical integration. As a preliminary validation, simulated real-time measurement by offline analysis of a video of interferograms acquired in the ECPL process is presented. The agreement between the cured height estimated by ICM&M and that measured by microscope indicates that the measurement principle is promising as real-time metrology for global measurement and control of the ECPL process. (paper)
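
The underlying interferometric principle, one fringe per λ/(2n) of cured height, can be illustrated with a toy signal. The wavelength, refractive index, and growth profile below are assumptions for demonstration; the paper's real-time Fourier-fitting estimator is far more elaborate:

```python
import numpy as np

# Toy interferogram: intensity oscillates as the cured height grows, with
# phase = 4*pi*n*h/lambda, so each full fringe marks lambda/(2n) of height.
lam = 633e-9        # laser wavelength [m] (assumed HeNe)
n = 1.5             # refractive index of the resin (assumed)
t = np.linspace(0.0, 10.0, 5000)
h = 3e-6 * t / 10.0                     # height growing linearly to 3 um

phase = 4.0 * np.pi * n * h / lam
intensity = 1.0 + 0.8 * np.cos(phase)   # ideal noise-free interferogram

# Crude fringe counting: each interior local maximum marks 2*pi more phase
interior = intensity[1:-1]
peaks = np.sum((interior > intensity[:-2]) & (interior > intensity[2:]))
height_est = peaks * lam / (2.0 * n)    # one fringe = lam/(2n) of height
```

Fringe counting resolves height only to the nearest fringe; the instantaneous-frequency formulation in the paper recovers the continuous phase, and hence the cure height and rate, between fringes.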

  12. Risk measurement with equivalent utility principles

    NARCIS (Netherlands)

    Denuit, M.; Dhaene, J.; Goovaerts, M.; Kaas, R.; Laeven, R.

    2006-01-01

    Risk measures have been studied for several decades in the actuarial literature, where they appeared under the guise of premium calculation principles. Risk measures and properties that risk measures should satisfy have recently received considerable attention in the financial mathematics

  13. Measuring Collective Efficacy: A Multilevel Measurement Model for Nested Data

    Science.gov (United States)

    Matsueda, Ross L.; Drakulich, Kevin M.

    2016-01-01

    This article specifies a multilevel measurement model for survey response when data are nested. The model includes a test-retest model of reliability, a confirmatory factor model of inter-item reliability with item-specific bias effects, an individual-level model of the biasing effects due to respondent characteristics, and a neighborhood-level…

  14. Inter-utility trade review

    International Nuclear Information System (INIS)

    Warnes, E.M.; Vaahedi, E.

    1991-01-01

    The National Energy Board was requested by the Minister of Energy, Mines and Resources to identify possible measures to improve cooperation among Canadian electrical utilities and to enhance access for buyers and sellers of electricity to available transmission capacity through intervening systems for wheeling purposes. To identify measures to improve cooperation, a questionnaire was sent to electric utilities and other interested parties on the present extent of, and future possibilities for, inter-utility cooperation. The questionnaire and its results are presented. It was found that a significant amount of inter-utility cooperation already exists in Canada. Such cooperation generally involves interchanges of economy energy, non-economic capacity and energy, coordinated operation, resource sharing, maintenance scheduling, emergency support, etc. There is a very limited degree of integrated generation expansion planning. Typically, these agreements are carried out under interconnection agreements negotiated on a bilateral basis. The highest current degree of cooperation exists under the auspices of the Alberta interconnected power system pool. Wheeling is limited and generally restricted to cases where the sender and receiver are the same entity or where power is wheeled to a utility purchasing it from the wheeler's system. 2 figs., 3 tabs.

  15. Utility applications and broadband networks

    Energy Technology Data Exchange (ETDEWEB)

    Chebra, R.; Taylor, P.

    2003-02-01

    A detailed analytical model of a cable network that would be capable of providing utilities with such services as automatic meter reading, on-line ability to remotely connect and disconnect commodity service, outage notification, tamper detection, direct utility-initiated load control, indirect user-prescribed load control, and user access to energy consumption information, is described. The paper provides an overview of the zones of focus that must be addressed -- market assessment, competitive analysis, product identification, economic model development, assessment of skill set requirements, performance monitoring and tracking, and various technical issues -- to identify any gaps in the organisation's ability to fully develop such a plan. Developers of the model field tested it in 1995 using some benchmarks that were available at that time, and found that the benefit afforded by direct labor savings was not sufficient to cover the capital expenditure of the advanced utility gateway connected to the cable network. However, since 1995 the unanticipated shift in the derived consumer value from a host of cable-based communications services has rendered these original projections irrelevant. Since national communications organizations concentrate on 'tier one' or at best 'tier two' cities (roughly corresponding to the NFL franchise cities and baseball farm team cities), the uncovered rural and suburban areas of the country create a significant digital divide within the population. The developers of the model contend that these unserviced areas provide utilities, especially municipal utilities, with an excellent opportunity to step into the gap and provide a full range of services that includes water, electricity and communications. The proposed model provides the foundation for utilities upon which to base their ultimate implementation decisions.

  16. Predictive Utility of Personality Disorder in Depression: Comparison of Outcomes and Taxonomic Approach.

    Science.gov (United States)

    Newton-Howes, Giles; Mulder, Roger; Ellis, Pete M; Boden, Joseph M; Joyce, Peter

    2017-09-19

    There is debate around the best model for diagnosing personality disorder, both in terms of its relationship to the empirical data and clinical utility. Four randomized controlled trials examining various treatments for depression were analyzed at an individual patient level. Three different approaches to the diagnosis of personality disorder were analyzed in these patients. A total of 578 depressed patients were included in the analysis. Personality disorder, however measured, was of little predictive utility in the short term but added significantly to predictive modelling of medium-term outcomes, accounting for more than twice as much of the variance in social functioning outcome as depression psychopathology. Personality disorder assessment is of predictive utility with longer timeframes and when considering social outcomes as opposed to symptom counts. This utility is sufficiently great that there appears to be value in assessing personality; however, no particular approach outperforms any other.

  17. Business model innovation for Local Energy Management: a perspective from Swiss utilities

    Directory of Open Access Journals (Sweden)

    Emanuele Facchinetti

    2016-08-01

    Full Text Available The successful deployment of the energy transition relies on a deep reorganization of the energy market. Business model innovation is recognized as a key driver of this process. This work contributes to this topic by providing potential Local Energy Management stakeholders and policy makers with a conceptual framework guiding Local Energy Management business model innovation. The main determinants characterizing Local Energy Management concepts and impacting its business model innovation are identified through literature reviews on distributed generation typologies and customer/investor preferences related to new business opportunities emerging with the energy transition. Afterwards, the relation between the identified determinants and the Local Energy Management business model solution space is analyzed based on semi-structured interviews with managers of Swiss utility companies. The collected managers' preferences serve as explorative indicators supporting the business model innovation process and provide insights to policy makers on challenges and opportunities related to Local Energy Management.

  18. Influence of organizational characteristics and context on research utilization.

    Science.gov (United States)

    Cummings, Greta G; Estabrooks, Carole A; Midodzi, William K; Wallin, Lars; Hayduk, Leslie

    2007-01-01

    Despite three decades of empirical investigation into research utilization and a renewed emphasis on evidence-based medicine and evidence-based practice in the past decade, understanding of the factors influencing research uptake in nursing remains limited. There is, however, increased awareness that organizational influences are important. The aims were to develop and test a theoretical model of organizational influences that predict research utilization by nurses, and to assess the influence of varying degrees of context, based on the Promoting Action on Research Implementation in Health Services (PARIHS) framework, on research utilization and other variables. The study sample was drawn from a census of registered nurses working in acute care hospitals in Alberta, Canada, accessed through their professional licensing body (n = 6,526 nurses; 52.8% response rate). Three variables that measured PARIHS dimensions of context (culture, leadership, and evaluation) were used to sort cases into one of four mutually exclusive data sets that reflected less positive to more positive context. A theoretical model of hospital- and unit-level influences on research utilization was then developed and tested using structural equation modeling, with 300 cases randomly selected from each of the four data sets. Model test results for the four context data sets were: χ² = 124.5 (df = 80), χ² = 144.2, χ² = 157.3 (df = 80), and χ² = 146.0 (df = 80). Nurses working in contexts with more positive culture, leadership, and evaluation also reported significantly more research utilization, staff development, and lower rates of patient and staff adverse events than did nurses working in less positive contexts (i.e., those that lacked positive culture, leadership, or evaluation). The findings highlight the combined importance of culture, leadership, and evaluation to increase research utilization and improve patient safety.
The findings may serve to strengthen the PARIHS framework and to suggest that, although

  19. Market research for electric utilities

    International Nuclear Information System (INIS)

    Shippee, G.

    1999-01-01

    Marketing research is increasing in importance as utilities become more marketing oriented. Marketing research managers need to maintain autonomy from the marketing director or ad agency and make sure their work is relevant to the utility's operation. This article will outline a model marketing research program for an electric utility. While a utility may not conduct each and every type of research described, the programs presented offer a smorgasbord of activities which successful electric utility marketers often use or have access to.

  20. Comparison between the Health Belief Model and Subjective Expected Utility Theory: predicting incontinence prevention behaviour in post-partum women.

    Science.gov (United States)

    Dolman, M; Chase, J

    1996-08-01

    A small-scale study was undertaken to test the relative predictive power of the Health Belief Model and Subjective Expected Utility Theory for the uptake of a behaviour (pelvic floor exercises) to reduce post-partum urinary incontinence in primigravida females. A structured questionnaire was used to gather data relevant to both models from a sample of antenatal and postnatal primigravida women. Questions examined the perceived probability of becoming incontinent, the perceived (dis)utility of incontinence, the perceived probability of pelvic floor exercises preventing future urinary incontinence, the costs and benefits of performing pelvic floor exercises, and sources of information and knowledge about incontinence. Multiple regression analysis focused on whether or not respondents intended to perform pelvic floor exercises and the factors influencing their decisions. Aggregated data were analysed to compare the Health Belief Model and Subjective Expected Utility Theory directly.
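
The Subjective Expected Utility side of the comparison reduces to weighting each outcome's utility by its perceived probability and ranking the actions. A minimal sketch, with all probabilities and (dis)utilities invented for illustration rather than taken from the study:

```python
def seu(outcomes):
    """Subjective expected utility: sum of probability * utility over outcomes."""
    return sum(p * u for p, u in outcomes)

# Outcome utilities on a 0-1 scale: continent = 1.0, incontinent = 0.4
# (assumed), with an assumed effort cost for performing the exercises.
p_incont_no_ex = 0.30   # perceived probability of incontinence, no exercises
p_incont_ex = 0.10      # perceived probability with exercises
effort_cost = 0.02

seu_no_exercise = seu([(1 - p_incont_no_ex, 1.0), (p_incont_no_ex, 0.4)])
seu_exercise = seu([(1 - p_incont_ex, 1.0), (p_incont_ex, 0.4)]) - effort_cost

# The theory predicts uptake when SEU(exercise) > SEU(no exercise)
predicts_uptake = seu_exercise > seu_no_exercise
```

The questionnaire items listed in the abstract map directly onto these inputs: perceived probabilities, the (dis)utility of incontinence, and the costs of performing the exercises.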

  1. Symmetry evaluation for an interferometric fiber optic gyro coil utilizing a bidirectional distributed polarization measurement system.

    Science.gov (United States)

    Peng, Feng; Li, Chuang; Yang, Jun; Hou, Chengcheng; Zhang, Haoliang; Yu, Zhangjun; Yuan, Yonggui; Li, Hanyang; Yuan, Libo

    2017-07-10

    We propose a dual-channel measurement system for evaluating the optical path symmetry of an interferometric fiber optic gyro (IFOG) coil. Utilizing a bidirectional distributed polarization measurement system, the forward and backward transmission performances of an IFOG coil are characterized simultaneously by just a one-time measurement. The simple but practical configuration is composed of a bidirectional Mach-Zehnder interferometer and multichannel transmission devices connected to the IFOG coil under test. The static and dynamic temperature results of the IFOG coil reveal that its polarization-related symmetric properties can be effectively obtained with high accuracy. The optical path symmetry investigation is highly beneficial in monitoring and improving the winding technology of an IFOG coil and reducing the nonreciprocal effect of an IFOG.

  2. Evaluation of remedial alternative of a LNAPL plume utilizing groundwater modeling

    International Nuclear Information System (INIS)

    Johnson, T.; Way, S.; Powell, G.

    1997-01-01

    The TIMES model was utilized to evaluate remedial options for a large LNAPL spill that was impacting the North Platte River in Glenrock, Wyoming. LNAPL was found discharging into the river from the adjoining alluvial aquifer. Subsequent investigations discovered an 18 hectare plume extending across the alluvium and into a sandstone bedrock outcrop to the south of the river. The TIMES model was used to estimate the LNAPL volume and to evaluate options for optimizing LNAPL recovery. Data collected from recovery and monitoring wells were used for model calibration. A LNAPL volume of 5.5 million L was estimated, over 3.0 million L of which is in the sandstone bedrock. An existing product recovery system was evaluated for its effectiveness. Three alternative recovery scenarios were also evaluated to aid in selecting the most cost-effective and efficient recovery system for the site. An active wellfield hydraulically upgradient of the existing recovery system was selected as most appropriate to augment the existing system in recovering LNAPL efficiently.

  3. Utility of Social Modeling in Assessment of a State’s Propensity for Nuclear Proliferation

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Garill A.; Brothers, Alan J.; Whitney, Paul D.; Dalton, Angela C.; Olson, Jarrod; White, Amanda M.; Cooley, Scott K.; Youchak, Paul M.; Stafford, Samuel V.

    2011-06-01

    This report is the third and final report in a set of three documenting research for the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms, and Modeling program, which investigates how social modeling can be used to improve proliferation assessment for informing nuclear security, policy, safeguards, design of nuclear systems, and research decisions. Social modeling has not been used to any significant extent in proliferation studies. This report focuses on the utility of social modeling as applied to the assessment of a State's propensity to develop a nuclear weapons program.

  4. Market research for electric utilities

    Energy Technology Data Exchange (ETDEWEB)

    Shippee, G.

    1999-12-01

    Marketing research is increasing in importance as utilities become more marketing oriented. Marketing research managers need to maintain autonomy from the marketing director or ad agency and make sure their work is relevant to the utility's operation. This article will outline a model marketing research program for an electric utility. While a utility may not conduct each and every type of research described, the programs presented offer a smorgasbord of activities which successful electric utility marketers often use or have access to.

  5. Measurement error models with interactions

    Science.gov (United States)

    Midthune, Douglas; Carroll, Raymond J.; Freedman, Laurence S.; Kipnis, Victor

    2016-01-01

    An important use of measurement error models is to correct regression models for bias due to covariate measurement error. Most measurement error models assume that the observed error-prone covariate (W) is a linear function of the unobserved true covariate (X) plus other covariates (Z) in the regression model. In this paper, we consider models for W that include interactions between X and Z. We derive the conditional distribution of
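The bias ("attenuation") that such models correct for can be illustrated with a small simulation; the coefficients and the form of the error model below are invented for illustration, not taken from the paper:

```python
import random

random.seed(0)
n = 100_000

# True covariate X, binary covariate Z, outcome Y = 1 + 0.5*X + 0.3*Z + noise.
X = [random.gauss(0, 1) for _ in range(n)]
Z = [random.random() < 0.5 for _ in range(n)]
Y = [1.0 + 0.5 * x + 0.3 * z + random.gauss(0, 0.5) for x, z in zip(X, Z)]

# Observed surrogate W: linear in X plus an X*Z interaction plus error
# (all coefficients are made up for this sketch).
W = [0.1 + 0.9 * x + 0.2 * x * z + random.gauss(0, 0.8) for x, z in zip(X, Z)]

# Naive simple regression of Y on W attenuates the true X effect of 0.5.
mw = sum(W) / n
my = sum(Y) / n
cov_wy = sum((w - mw) * (y - my) for w, y in zip(W, Y)) / n
var_w = sum((w - mw) ** 2 for w in W) / n
slope_naive = cov_wy / var_w
print(round(slope_naive, 3))  # well below the true 0.5
```

With the interaction present, the degree of attenuation itself depends on Z, which is why the conditional distribution of W given X and Z matters for the correction.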

  6. Double-label autoradiographic deoxyglucose method for sequential measurement of regional cerebral glucose utilization

    Energy Technology Data Exchange (ETDEWEB)

    Redies, C; Diksic, M; Evans, A C; Gjedde, A; Yamamoto, Y L

    1987-08-01

    A new double-label autoradiographic glucose analog method for the sequential measurement of altered regional cerebral metabolic rates for glucose in the same animal is presented. This method is based on the sequential injection of two boluses of glucose tracer labeled with two different isotopes (short-lived ¹⁸F and long-lived ³H, respectively). An operational equation is derived which allows the determination of glucose utilization for the time period before the injection of the second tracer; this equation corrects for accumulation and loss of the first tracer from the metabolic pool occurring after the injection of the second tracer. An error analysis of this operational equation is performed. The double-label deoxyglucose method is validated in the primary somatosensory ("barrel") cortex of the anesthetized rat. Two different rows of whiskers were stimulated sequentially in each rat; the two periods of stimulation were each preceded by an injection of glucose tracer. After decapitation, dried brain slices were first exposed, in direct contact, to standard X-ray film and then to uncoated, "tritium-sensitive" film. Results show that the double-label deoxyglucose method proposed in this paper allows the quantification and complete separation of glucose utilization patterns elicited by two different stimulations sequentially applied in the same animal.

  7. Utilization of building information modeling in infrastructure’s design and construction

    Science.gov (United States)

    Zak, Josef; Macadam, Helen

    2017-09-01

    Building Information Modeling (BIM) is a concept that has gained its place in the design, construction and maintenance of buildings in the Czech Republic during recent years. This paper describes the usage, applications and potential benefits and disadvantages connected with the implementation of BIM principles in the preparation and construction of infrastructure projects. Part of the paper describes the status of BIM implementation in the Czech Republic, and there is a review of several virtual design and construction practices in the Czech Republic. Examples of best practice are presented from current infrastructure projects. The paper further summarizes experiences with new technologies gained from the application of BIM-related workflows. The focus is on BIM model utilization for machine control systems on site, quality assurance, quality management and construction management.

  8. Clinical Utility and Safety of a Model-Based Patient-Tailored Dose of Vancomycin in Neonates.

    Science.gov (United States)

    Leroux, Stéphanie; Jacqz-Aigrain, Evelyne; Biran, Valérie; Lopez, Emmanuel; Madeleneau, Doriane; Wallon, Camille; Zana-Taïeb, Elodie; Virlouvet, Anne-Laure; Rioualen, Stéphane; Zhao, Wei

    2016-04-01

    Pharmacokinetic modeling has often been applied to evaluate vancomycin pharmacokinetics in neonates. However, clinical application of the model-based personalized vancomycin therapy is still limited. The objective of the present study was to evaluate the clinical utility and safety of a model-based patient-tailored dose of vancomycin in neonates. A model-based vancomycin dosing calculator, developed from a population pharmacokinetic study, has been integrated into the routine clinical care in 3 neonatal intensive care units (Robert Debré, Cochin Port Royal, and Clocheville hospitals) between 2012 and 2014. The target attainment rate, defined as the percentage of patients with a first therapeutic drug monitoring serum vancomycin concentration achieving the target window of 15 to 25 mg/liter, was selected as an endpoint for evaluating the clinical utility. The safety evaluation was focused on nephrotoxicity. The clinical application of the model-based patient-tailored dose of vancomycin has been demonstrated in 190 neonates. The mean (standard deviation) gestational and postnatal ages of the study population were 31.1 (4.9) weeks and 16.7 (21.7) days, respectively. The target attainment rate increased from 41% to 72% without any case of vancomycin-related nephrotoxicity. This proof-of-concept study provides evidence for integrating model-based antimicrobial therapy in neonatal routine care. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
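The target-attainment endpoint described above is straightforward to compute; a sketch with invented concentration values:

```python
# Share of first therapeutic-drug-monitoring vancomycin concentrations
# that fall inside the 15-25 mg/liter target window.
# The concentrations below are invented for illustration.
concentrations = [12.1, 18.4, 22.0, 27.3, 16.5, 19.8, 14.9, 24.1, 20.2, 11.0]

in_window = [c for c in concentrations if 15.0 <= c <= 25.0]
attainment = len(in_window) / len(concentrations)
print(f"target attainment: {attainment:.0%}")
```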

  9. On the use of prior information in modelling metabolic utilization of energy in growing pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Jørgensen, Henry; Fernández, José Adalberto

    2011-01-01

    Construction of models that provide a realistic representation of metabolic utilization of energy in growing animals tends to be over-parameterized because data generated from individual metabolic studies are often sparse. In the Bayesian framework prior information can enter the data analysis......, PD and LD) made on a given pig at a given time followed a multivariate normal distribution. Two different equation systems were adopted from Strathe et al. (2010), generating the expected values in the multivariate normal distribution. Non-informative prior distributions were assigned for all model......, kp and kf, respectively. Utilizing both sets of priors showed that the maintenance component was sensitive to the statement of prior belief and, hence, that the estimate of 0.91 MJ kg^-0.60 d^-1 (95% CI: 0.78; 1.09) should be interpreted with caution. It was shown that boars were superior in depositing...

  10. Automated statistical modeling of analytical measurement systems

    International Nuclear Information System (INIS)

    Jacobson, J.J.

    1992-01-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing Materials and American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily.
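As a sketch of the idea (known levels and measured values below are invented), the bias and precision of a measurement system can be estimated from repeated analyses of control standards:

```python
import statistics

# Control standards: known value -> repeated measured values.
# All numbers are fabricated for illustration only.
standards = {
    10.0: [10.2, 10.1, 10.3, 10.0],
    50.0: [50.9, 51.1, 50.7, 51.0],
    100.0: [101.8, 102.1, 101.9, 102.2],
}

bias_sd = {}
for known, measured in standards.items():
    bias = statistics.mean(measured) - known   # systematic error at this level
    precision = statistics.stdev(measured)     # random error at this level
    bias_sd[known] = (bias, precision)
    print(f"level {known}: bias={bias:+.3f}, sd={precision:.3f}")
```

A full statistical model would then fit, e.g., bias as a function of the known level, so that routine samples can be corrected and the system flagged when it drifts outside control limits.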

  11. Sizing Up the Milky Way: A Bayesian Mixture Model Meta-analysis of Photometric Scale Length Measurements

    Science.gov (United States)

    Licquia, Timothy C.; Newman, Jeffrey A.

    2016-11-01

    The exponential scale length (L_d) of the Milky Way's (MW's) disk is a critical parameter for describing the global physical size of our Galaxy, important both for interpreting other Galactic measurements and for helping us to understand how our Galaxy fits into extragalactic contexts. Unfortunately, current estimates span a wide range of values and are often statistically incompatible with one another. Here, we perform a Bayesian meta-analysis to determine an improved, aggregate estimate for L_d, utilizing a mixture-model approach to account for the possibility that any one measurement has not properly accounted for all statistical or systematic errors. Within this machinery, we explore a variety of ways of modeling the nature of problematic measurements, and then employ a Bayesian model averaging technique to derive net posterior distributions that incorporate any model-selection uncertainty. Our meta-analysis combines 29 different (15 visible and 14 infrared) photometric measurements of L_d available in the literature; these involve a broad assortment of observational data sets, MW models and assumptions, and methodologies, all tabulated herein. Analyzing the visible and infrared measurements separately yields estimates for L_d of 2.71 (+0.22/−0.20) kpc and 2.51 (+0.15/−0.13) kpc, respectively, whereas considering them all combined yields 2.64 ± 0.13 kpc. The ratio between the visible and infrared scale lengths determined here is very similar to that measured in external spiral galaxies. We use these results to update the model of the Galactic disk from our previous work, constraining its stellar mass to be 4.8 (+1.5/−1.1) × 10^10 M_⊙, and the MW's total stellar mass to be 5.7 (+1.5/−1.1) × 10^10 M_⊙.
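Leaving aside the mixture component that down-weights problematic measurements, the core of such a meta-analysis is an inverse-variance weighted combination; a minimal sketch with made-up (value, sigma) pairs:

```python
import math

# (value, sigma) pairs standing in for published scale-length
# estimates in kpc; the numbers are illustrative, not the paper's data.
measurements = [(2.5, 0.3), (2.8, 0.4), (2.6, 0.2)]

weights = [1.0 / s ** 2 for _, s in measurements]
combined = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
sigma = math.sqrt(1.0 / sum(weights))
print(f"L_d = {combined:.2f} +/- {sigma:.2f} kpc")
```

The paper's mixture model goes further by letting each measurement be "problematic" with some probability, which inflates its effective error bar instead of trusting the quoted sigma.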

  12. Economic analysis of open space box model utilization in spacecraft

    Science.gov (United States)

    Mohammad, Atif F.; Straub, Jeremy

    2015-05-01

    It is well known that the amount of stored data about space grows larger every day. The utilization of Big Data and related tools to perform ETL (Extract, Transform and Load) applications will soon be pervasive in the space sciences. We have entered a crucial time when using Big Data can be the difference (for terrestrial applications) between organizations underperforming and outperforming their peers. The same is true for NASA and other space agencies, as well as for individual missions and the highly competitive process of mission data analysis and publication. In most industries, established competitors and new entrants alike will use data-driven approaches to revolutionize and capture the value of Big Data archives. The Open Space Box Model is poised to take the proverbial "giant leap", as it provides autonomic data processing and communications for spacecraft. Economic value generated from such data processing can be found in terrestrial organizations in every sector, such as healthcare and retail. Retailers, for example, perform research on Big Data by utilizing sensor-driven embedded data on products within their stores and warehouses to determine how these products are actually used in the real world.

  13. Formal Definition of Measures for BPMN Models

    Science.gov (United States)

    Reynoso, Luis; Rolón, Elvira; Genero, Marcela; García, Félix; Ruiz, Francisco; Piattini, Mario

    Business process models are currently attaining more relevance, and more attention is therefore being paid to their quality. This situation led us to define a set of measures for the understandability of BPMN models, which is shown in a previous work. We focus on understandability since a model must be well understood before any changes are made to it. These measures were originally informally defined in natural language. As is well known, natural language is ambiguous and may lead to misunderstandings and a misinterpretation of the concepts captured by a measure and the way in which the measure value is obtained. This has motivated us to provide the formal definition of the proposed measures using OCL (Object Constraint Language) upon the BPMN (Business Process Modeling Notation) metamodel presented in this paper. The main advantages and lessons learned (which were obtained both from the current work and from previous works carried out in relation to the formal definition of other measures) are also summarized.

  14. Channel Measurements and Modeling at 6 GHz in the Tunnel Environments for 5G Wireless Systems

    Directory of Open Access Journals (Sweden)

    Shuang-de Li

    2017-01-01

    Full Text Available Propagation measurements of wireless channels performed in tunnel environments at 6 GHz are presented in this paper. Propagation characteristics are simulated and analyzed based on the method of shooting and bouncing ray tracing/image (SBR/IM). A good agreement is achieved between the measured and simulated results, which validates the SBR/IM method. The measured and simulated results are analyzed in terms of path loss models, received power, root mean square (RMS) delay spread, Ricean K-factor, and angle of arrival (AOA). The omnidirectional path loss models are characterized based on the close-in (CI) free-space reference distance model and the alpha-beta-gamma (ABG) model. Path loss exponents (PLEs) are 1.50–1.74 in line-of-sight (LOS) scenarios and 2.18–2.20 in non-line-of-sight (NLOS) scenarios. Results show that the CI model with a reference distance of 1 m provides more accuracy and stability in tunnel scenarios. The RMS delay spread values vary between 2.77 ns and 18.76 ns. In particular, the Poisson distribution best fits the measured RMS delay spreads for LOS scenarios and the Gaussian distribution best fits those for NLOS scenarios. Moreover, the normal distribution provides a good fit to the Ricean K-factor. The analysis of these channel measurements and simulations may be utilized in the design of wireless communications for future 5G radio systems at 6 GHz.
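The CI model referenced above has a simple closed form, PL(d) = FSPL(f, 1 m) + 10·n·log10(d/1 m), with PLE n; a sketch using a PLE within the LOS range reported in the abstract:

```python
import math

def ci_path_loss_db(f_hz, d_m, ple):
    """Close-in (CI) free-space reference distance path loss model
    with a 1 m reference distance (shadow fading term omitted)."""
    c = 3e8  # speed of light, m/s
    fspl_1m = 20 * math.log10(4 * math.pi * f_hz / c)  # free-space loss at 1 m
    return fspl_1m + 10 * ple * math.log10(d_m)

# 6 GHz link at 100 m with a LOS PLE of 1.6 (within the 1.50-1.74
# range reported above; the distance is an arbitrary example).
print(round(ci_path_loss_db(6e9, 100.0, 1.6), 1))
```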

  15. Dispersion modeling of accidental releases of toxic gases - Comparison of the models and their utility for the fire brigades.

    Science.gov (United States)

    Stenzel, S.; Baumann-Stanzer, K.

    2009-04-01

    In the case of an accidental release of hazardous gases into the atmosphere, emergency responders need a reliable and fast tool to assess the possible consequences and apply the optimal countermeasures. For hazard prediction and simulation of the hazard zones, a number of air dispersion models are available. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for displaying the results; they are easy to use and can operate fast and effectively during stress situations. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. There are also possibilities for direct coupling of the models to automatic meteorological stations, in order to avoid uncertainties in the model output due to insufficient or incorrect meteorological data. Another key problem in coping with accidental toxic releases is the relatively wide spectrum of regulations and guideline values, such as IDLH, ERPG, AEGL and MAK, and the different criteria for their application. Since the particular emergency responders and organizations require different regulations and values for their purposes, it is quite difficult to predict the individual hazard areas. A number of research studies and investigations have addressed this problem; in any case, the final decision is up to the authorities.
The research project RETOMOD (reference scenarios calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Vienna fire brigade, OMV Refining & Marketing GmbH and

  16. Utility-based early modulation of processing distracting stimulus information.

    Science.gov (United States)

    Wendt, Mike; Luna-Rodriguez, Aquiles; Jacobsen, Thomas

    2014-12-10

    Humans are selective information processors who efficiently prevent goal-inappropriate stimulus information to gain control over their actions. Nonetheless, stimuli, which are both unnecessary for solving a current task and liable to cue an incorrect response (i.e., "distractors"), frequently modulate task performance, even when consistently paired with a physical feature that makes them easily discernible from target stimuli. Current models of cognitive control assume adjustment of the processing of distractor information based on the overall distractor utility (e.g., predictive value regarding the appropriate response, likelihood to elicit conflict with target processing). Although studies on distractor interference have supported the notion of utility-based processing adjustment, previous evidence is inconclusive regarding the specificity of this adjustment for distractor information and the stage(s) of processing affected. To assess the processing of distractors during sensory-perceptual phases we applied EEG recording in a stimulus identification task, involving successive distractor-target presentation, and manipulated the overall distractor utility. Behavioral measures replicated previously found utility modulations of distractor interference. Crucially, distractor-evoked visual potentials (i.e., posterior N1) were more pronounced in high-utility than low-utility conditions. This effect generalized to distractors unrelated to the utility manipulation, providing evidence for item-unspecific adjustment of early distractor processing to the experienced utility of distractor information. Copyright © 2014 the authors.

  17. Probabilistic mapping of descriptive health status responses onto health state utilities using Bayesian networks: an empirical analysis converting SF-12 into EQ-5D utility index in a national US sample.

    Science.gov (United States)

    Le, Quang A; Doctor, Jason N

    2011-05-01

    As quality-adjusted life years have become the standard metric in health economic evaluations, mapping health-profile or disease-specific measures onto preference-based measures to obtain quality-adjusted life years has become a solution when health utilities are not directly available. However, current mapping methods are limited due to their predictive validity, reliability, and/or other methodological issues. We employ probability theory together with a graphical model, called a Bayesian network, to convert health-profile measures into preference-based measures and to compare the results to those estimated with current mapping methods. A sample of 19,678 adults who completed both the 12-item Short Form Health Survey (SF-12v2) and EuroQoL 5D (EQ-5D) questionnaires from the 2003 Medical Expenditure Panel Survey was split into training and validation sets. Bayesian networks were constructed to explore the probabilistic relationships between each EQ-5D domain and 12 items of the SF-12v2. The EQ-5D utility scores were estimated on the basis of the predicted probability of each response level of the 5 EQ-5D domains obtained from the Bayesian inference process using the following methods: Monte Carlo simulation, expected utility, and most-likely probability. Results were then compared with current mapping methods including multinomial logistic regression, ordinary least squares, and censored least absolute deviations. The Bayesian networks consistently outperformed other mapping models in the overall sample (mean absolute error=0.077, mean square error=0.013, and R overall=0.802), in different age groups, number of chronic conditions, and ranges of the EQ-5D index. Bayesian networks provide a new robust and natural approach to map health status responses into health utility measures for health economic evaluations.
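The expected-utility step of such a mapping can be sketched as follows; the decrements and predicted probabilities are invented, and a real EQ-5D tariff includes interaction terms this simplification omits:

```python
# Invented utility decrements for levels 1/2/3 of each EQ-5D domain
# (level 1 = no problems, so zero decrement).
decrements = {
    "mobility":         [0.0, 0.069, 0.314],
    "self_care":        [0.0, 0.104, 0.214],
    "usual_activities": [0.0, 0.036, 0.094],
    "pain_discomfort":  [0.0, 0.123, 0.386],
    "anxiety":          [0.0, 0.071, 0.236],
}

# Predicted P(level) per domain for one respondent, e.g. from the
# Bayesian network's inference step (made-up numbers).
probs = {
    "mobility":         [0.7, 0.25, 0.05],
    "self_care":        [0.9, 0.08, 0.02],
    "usual_activities": [0.6, 0.30, 0.10],
    "pain_discomfort":  [0.5, 0.40, 0.10],
    "anxiety":          [0.8, 0.15, 0.05],
}

# Expected utility = 1 minus the probability-weighted decrements.
expected_utility = 1.0 - sum(
    p * d
    for domain in decrements
    for p, d in zip(probs[domain], decrements[domain])
)
print(round(expected_utility, 3))
```

The Monte Carlo and most-likely-probability variants mentioned in the abstract differ only in how the predicted probabilities are turned into a single index value.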

  18. About the parametrizations utilized to perform magnetic moments measurements using the transient field technique

    Energy Technology Data Exchange (ETDEWEB)

    Gómez, A. M., E-mail: amgomezl-1@uqvirtual.edu.co [Programa de Física, Universidad del Quindío (Colombia); Torres, D. A., E-mail: datorresg@unal.edu.co [Physics Department, Universidad Nacional de Colombia, Bogotá (Colombia)

    2016-07-07

    The experimental study of nuclear magnetic moments using the Transient Field technique makes use of spin-orbit hyperfine interactions to generate strong magnetic fields, above the kilo-Tesla regime, capable of creating a precession of the nuclear spin. A theoretical description of such magnetic fields is still an open research problem, and the use of parametrizations is still a common way to address the lack of theoretical information. In this contribution, a review of the main parametrizations utilized in measurements of nuclear magnetic moments will be presented, and the challenges of creating a theoretical description from first principles will be discussed.

  19. A Steam Utility Network Model for the Evaluation of Heat Integration Retrofits – A Case Study of an Oil Refinery

    Directory of Open Access Journals (Sweden)

    Sofie Marton

    2017-12-01

    Full Text Available This paper presents a real industrial example in which the steam utility network of a refinery is modelled in order to evaluate potential Heat Integration retrofits proposed for the site. A refinery, typically, has flexibility to optimize the operating strategy for the steam system depending on the operation of the main processes. This paper presents a few examples of Heat Integration retrofit measures from a case study of a large oil refinery. In order to evaluate expected changes in fuel and electricity imports to the refinery after implementation of the proposed retrofits, a steam system model has been developed. The steam system model has been tested and validated with steady state data from three different operating scenarios and can be used to evaluate how changes to steam balances at different pressure levels would affect overall steam balances, generation of shaft power in turbines, and the consumption of fuel gas.
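A steam system model of this kind is ultimately built on mass and energy balances per pressure header; a toy sketch with invented flows:

```python
# Toy balance for a single steam header (flows in t/h; all numbers
# invented). A surplus at a header can be let down through a turbine,
# generating shaft power before reaching the lower-pressure level.
producers = {"boiler": 120.0, "waste_heat_boiler": 35.0}
consumers = {"process_unit_A": 80.0, "process_unit_B": 45.0}

surplus = sum(producers.values()) - sum(consumers.values())

# Assumed specific work extraction of 0.06 MWh per tonne of steam
# through the letdown turbine (an illustrative figure, not site data).
shaft_power_mw = surplus * 0.06
print(f"surplus: {surplus} t/h, turbine output: {shaft_power_mw:.2f} MW")
```

A retrofit that changes a heat load shifts one of these flows, and the model propagates the change through the header balances to fuel and electricity imports.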

  20. Using DORIS measurements for ionosphere modeling

    Science.gov (United States)

    Dettmering, Denise; Schmidt, Michael; Limberger, Marco

    2013-04-01

    Nowadays, most of the ionosphere models used in geodesy are based on terrestrial GNSS measurements and describe the Vertical Total Electron Content (VTEC) depending on longitude, latitude, and time. Since modeling the height distribution of the electrons is difficult due to the measurement geometry, the VTEC maps are based on the assumption of a single-layer ionosphere. Moreover, the accuracy of the VTEC maps differs between regions of the Earth, because the GNSS stations are unevenly distributed over the globe and some regions (especially the ocean areas) are not very well covered by observations. To overcome the unsatisfying measurement geometry of the terrestrial GNSS measurements and to take advantage of the different sensitivities of other space-geodetic observation techniques, we work on the development of multi-dimensional models of the ionosphere from the combination of modern space-geodetic satellite techniques. Our approach consists of a given background model and an unknown correction part expanded in terms of B-spline functions. Different space-geodetic measurements are used to estimate the unknown model coefficients. In order to take into account the different accuracy levels of the observations, a Variance Component Estimation (VCE) is applied. We have already proven the usefulness of radio occultation data from space-borne GPS receivers and of two-frequency altimetry data. Currently, we test the capability of DORIS observations to derive ionospheric parameters such as VTEC. Although DORIS was primarily designed for precise orbit computation of satellites, it can be used as a tool to study the Earth's ionosphere. The DORIS ground beacons are almost globally distributed and the system is on board various Low Earth Orbiters (LEO) with different orbit heights, such as Jason-2, Cryosat-2, and HY-2. The latest generation of DORIS receivers directly provides phase measurements on two frequencies. In this contribution, we test the DORIS
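The single-layer assumption mentioned above converts slant TEC to VTEC with a thin-shell mapping function; a sketch (the shell height and observation values are illustrative):

```python
import math

def vtec_from_stec(stec_tecu, elev_deg, shell_height_km=450.0):
    """Convert slant TEC to vertical TEC under the single-layer
    (thin-shell) ionosphere assumption."""
    R = 6371.0                                  # mean Earth radius, km
    z = math.radians(90.0 - elev_deg)           # zenith angle at the receiver
    sin_zp = R / (R + shell_height_km) * math.sin(z)  # zenith angle at the shell
    mapping = 1.0 / math.cos(math.asin(sin_zp))       # obliquity (mapping) factor
    return stec_tecu / mapping

# 60 TECU of slant TEC observed at 30 degrees elevation
print(round(vtec_from_stec(60.0, 30.0), 1))
```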

  1. A stochastic model for quantum measurement

    International Nuclear Information System (INIS)

    Budiyono, Agung

    2013-01-01

    We develop a statistical model of microscopic stochastic deviation from classical mechanics based on a stochastic process with a transition probability that is assumed to be given by an exponential distribution of infinitesimal stationary action. We apply the statistical model to stochastically modify a classical mechanical model for the measurement of physical quantities reproducing the prediction of quantum mechanics. The system+apparatus always has a definite configuration at all times, as in classical mechanics, fluctuating randomly following a continuous trajectory. On the other hand, the wavefunction and quantum mechanical Hermitian operator corresponding to the physical quantity arise formally as artificial mathematical constructs. During a single measurement, the wavefunction of the whole system+apparatus evolves according to a Schrödinger equation and the configuration of the apparatus acts as the pointer of the measurement so that there is no wavefunction collapse. We will also show that while the outcome of each single measurement event does not reveal the actual value of the physical quantity prior to measurement, its average in an ensemble of identical measurements is equal to the average of the actual value of the physical quantity prior to measurement over the distribution of the configuration of the system. (paper)

  2. Changes in fibrinogen availability and utilization in an animal model of traumatic coagulopathy

    DEFF Research Database (Denmark)

    Hagemo, Jostein S; Jørgensen, Jørgen; Ostrowski, Sisse R

    2013-01-01

    Impaired haemostasis following shock and tissue trauma is frequently detected in the trauma setting. These changes occur early, and are associated with increased mortality. The mechanism behind trauma-induced coagulopathy (TIC) is not clear. Several studies highlight the crucial role of fibrinogen...... in posttraumatic haemorrhage. This study explores the coagulation changes in a swine model of early TIC, with emphasis on fibrinogen levels and utilization of fibrinogen....

  3. Models Used for Measuring Customer Engagement

    Directory of Open Access Journals (Sweden)

    Mihai TICHINDELEAN

    2013-12-01

    Full Text Available The purpose of the paper is to define and measure customer engagement as a forming element of relationship marketing theory. In the first part of the paper, the authors review the marketing literature regarding the concept of customer engagement and summarize the main models for measuring it. One probability model (the Pareto/NBD model) and one parametric model (the RFM model), specific to the customer acquisition phase, are theoretically detailed. The second part of the paper is an application of the RFM model; the authors demonstrate that there is no statistically significant variation within the clusters formed on two different data sets (training and test set) if the cluster centroids of the training set are used as initial cluster centroids for the second test set.
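The validation idea described, holding training-set centroids fixed when assigning the test set, can be sketched as follows (the data, k, and the plain k-means implementation are all illustrative):

```python
import random

random.seed(1)

def rfm_vector():
    # (recency in days, frequency, monetary value) for one synthetic customer
    return [random.uniform(0, 365), random.uniform(1, 50), random.uniform(10, 1000)]

train = [rfm_vector() for _ in range(200)]
test = [rfm_vector() for _ in range(200)]

def nearest(point, centroids):
    # index of the centroid with the smallest squared Euclidean distance
    return min(range(len(centroids)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(point, centroids[i])))

def kmeans(points, k=3, iters=20):
    centroids = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[nearest(p, centroids)].append(p)
        centroids = [
            [sum(col) / len(g) for col in zip(*g)] if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids

# Fit centroids on the training set, then assign the test set to the
# SAME centroids, as in the paper's comparison of the two data sets.
centroids = kmeans(train)
test_labels = [nearest(p, centroids) for p in test]
print({c: test_labels.count(c) for c in range(3)})
```

In practice the RFM values would first be scaled (recency, frequency, and monetary value live on very different ranges), which this sketch omits.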

  4. Using Technical Performance Measures

    Science.gov (United States)

    Garrett, Christopher J.; Levack, Daniel J. H.; Rhodes, Russel E.

    2011-01-01

    All programs have requirements. For these requirements to be met, there must be a means of measurement. A Technical Performance Measure (TPM) is defined to produce a measured quantity that can be compared to the requirement. In practice, the TPM is often expressed as a maximum or minimum and a goal. Example TPMs for a rocket program are: vacuum or sea level specific impulse (Isp), weight, reliability (often expressed as a failure rate), schedule, operability (turn-around time), design and development cost, production cost, and operating cost. Program status is evaluated by comparing the TPMs against specified values of the requirements. During the program many design decisions are made and most of them affect some or all of the TPMs. Often, the same design decision changes some TPMs favorably while affecting other TPMs unfavorably. The problem then becomes how to compare the effects of a design decision on different TPMs. How much failure rate is one second of specific impulse worth? How many days of schedule is one pound of weight worth? In other words, how to compare dissimilar quantities in order to trade and manage the TPMs to meet all requirements. One method that has been used successfully and has a mathematical basis is Utility Analysis. Utility Analysis enables quantitative comparison among dissimilar attributes. It uses a mathematical model that maps decision maker preferences over the tradeable range of each attribute. It is capable of modeling both independent and dependent attributes. Utility Analysis is well supported in the literature on Decision Theory. It has been used at Pratt & Whitney Rocketdyne for internal programs and for contracted work such as the J-2X rocket engine program. This paper describes the construction of TPMs and describes Utility Analysis. It then discusses the use of TPMs in design trades and to manage margin during a program using Utility Analysis.
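A minimal sketch of the utility-analysis idea, assuming independent attributes and linear single-attribute utility curves (the TPM values, ranges, and weights are invented, not program data):

```python
# Each TPM value is mapped onto a [0, 1] utility and combined with
# weights elicited from the decision maker, so dissimilar quantities
# (seconds of Isp, pounds of weight, failure rate) become comparable.

def linear_utility(value, worst, best):
    """Map a TPM value onto [0, 1]: worst -> 0, best -> 1 (clamped)."""
    u = (value - worst) / (best - worst)
    return max(0.0, min(1.0, u))

# name: (current value, worst acceptable, best achievable, weight)
tpms = {
    "isp_s":        (452.0,  440.0,  465.0, 0.4),   # higher is better
    "weight_kg":    (2400.0, 2600.0, 2200.0, 0.3),  # lower is better
    "failure_rate": (0.004,  0.010,  0.001, 0.3),   # lower is better
}

total_utility = sum(w * linear_utility(v, worst, best)
                    for v, worst, best, w in tpms.values())
print(round(total_utility, 3))
```

A design trade is then evaluated by recomputing the total utility with the changed TPM values; real applications typically use nonlinear single-attribute curves and may model dependence between attributes.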

  5. Recent Progress in Understanding Natural-Hazards-Generated TEC Perturbations: Measurements and Modeling Results

    Science.gov (United States)

    Komjathy, A.; Yang, Y. M.; Meng, X.; Verkhoglyadova, O. P.; Mannucci, A. J.; Langley, R. B.

    2015-12-01

    Natural hazards, including earthquakes, volcanic eruptions, and tsunamis, have been significant threats to humans throughout recorded history. The Global Positioning System satellites have become primary sensors to measure signatures associated with such natural hazards. These signatures typically include GPS-derived seismic deformation measurements, co-seismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure and monitor post-seismic ionospheric disturbances caused by earthquakes, volcanic eruptions, and tsunamis. Research at the University of New Brunswick (UNB) laid the foundations to model the three-dimensional ionosphere at NASA's Jet Propulsion Laboratory by ingesting ground- and space-based GPS measurements into the state-of-the-art Global Assimilative Ionosphere Modeling (GAIM) software. As an outcome of the UNB and NASA research, new and innovative GPS applications have been invented, including the use of ionospheric measurements to detect tiny fluctuations in the GPS signals between the spacecraft and GPS receivers caused by natural hazards occurring on or near the Earth's surface. We will show examples of early detection of natural-hazard-generated ionospheric signatures using ground-based and space-borne GPS receivers. We will also discuss recent results from the U.S. Real-time Earthquake Analysis for Disaster Mitigation Network (READI) exercises utilizing our algorithms. By studying the propagation properties of ionospheric perturbations generated by natural hazards along with applying sophisticated first-principles physics-based modeling, we are on track to develop new technologies that can potentially save human lives and minimize property damage. It is also expected that ionospheric monitoring of TEC perturbations might become an integral part of existing natural hazards warning systems.

  6. Utilization of Short Simulations for Tuning a High-Resolution Climate Model

    Science.gov (United States)

    Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.

    2016-12-01

    Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning. Such short tests have been found to be effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that they are also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "hone in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices that reduce large model biases dramatically improves the turnaround time for tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter to verify the results in a climate context.

  7. Polydimethylsiloxane-air partition ratios for semi-volatile organic compounds by GC-based measurement and COSMO-RS estimation: Rapid measurements and accurate modelling.

    Science.gov (United States)

    Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M

    2016-08-01

    Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of PDMS, values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of KPDMS-Air,Exp at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs), and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air,Exp and estimates made using the pp-LFER model (log KPDMS-Air,pp-LFER) and the COSMOtherm program (log KPDMS-Air,COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, confirming the anticipated better performance of the pp-LFER model. Calculations made using measured KPDMS-Air,Exp values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air. Copyright © 2016. Published by Elsevier Ltd.
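    The spread of time-to-equilibrium figures quoted above can be reproduced in order of magnitude with a first-order uptake sketch. The air-side mass-transfer coefficient below is an assumed placeholder, not a value from the study; only the 0.1 cm film thickness follows the abstract:

```python
import math

# Back-of-envelope time to reach a given fraction of equilibrium for a PDMS
# passive sampler, assuming air-side-controlled, first-order uptake with
# time constant tau = K * film_thickness / k_air.

def time_to_fraction(k_pdms_air, thickness_m, k_air_m_per_h, fraction):
    """Hours to reach `fraction` of equilibrium capacity (0 < fraction < 1)."""
    tau_h = k_pdms_air * thickness_m / k_air_m_per_h  # first-order time constant
    return -tau_h * math.log(1.0 - fraction)

k_air = 0.1        # m/h, assumed air-side mass-transfer coefficient
thickness = 1e-3   # m (0.1 cm PDMS film)
for log_k in (3, 6, 9):  # spanning volatile to very involatile SVOCs
    t25_h = time_to_fraction(10.0 ** log_k, thickness, k_air, 0.25)
    print(f"log K = {log_k}: t25 ≈ {t25_h / 24:.1f} days")
```

    With these assumed values the 25%-of-equilibrium time runs from hours for log K = 3 to centuries for log K = 9, consistent with the days-to-centuries range reported for α-HCH through TTBPP.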

  8. A laboratory-calibrated model of coho salmon growth with utility for ecological analyses

    Science.gov (United States)

    Manhard, Christopher V.; Som, Nicholas A.; Perry, Russell W.; Plumb, John M.

    2018-01-01

    We conducted a meta-analysis of laboratory- and hatchery-based growth data to estimate broadly applicable parameters of mass- and temperature-dependent growth of juvenile coho salmon (Oncorhynchus kisutch). Following studies of other salmonid species, we incorporated the Ratkowsky growth model into an allometric model and fit this model to growth observations from eight studies spanning ten different populations. To account for changes in growth patterns with food availability, we reparameterized the Ratkowsky model to scale several of its parameters relative to ration. The resulting model was robust across a wide range of ration allocations and experimental conditions, accounting for 99% of the variation in final body mass. We fit this model to growth data from coho salmon inhabiting tributaries and constructed ponds in the Klamath Basin by estimating habitat-specific indices of food availability. The model produced evidence that constructed ponds provided higher food availability than natural tributaries. Because of their simplicity (only mass and temperature are required as inputs) and robustness, ration-varying Ratkowsky models have utility as an ecological tool for capturing growth in freshwater fish populations.
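    A minimal sketch of the modeling approach, assuming the common square-root (Ratkowsky) temperature response inside an allometric daily growth step; all parameter values are hypothetical placeholders rather than the fitted coho salmon estimates:

```python
import math

# Ratkowsky square-root temperature response embedded in an allometric
# growth step. Parameters (d, g, t_lower, t_upper, b) are hypothetical.

def ratkowsky_rate(temp_c, d=0.02, g=0.1, t_lower=1.0, t_upper=25.0):
    """Daily growth coefficient; zero outside the (t_lower, t_upper) range."""
    if temp_c <= t_lower or temp_c >= t_upper:
        return 0.0
    root = d * (temp_c - t_lower) * (1.0 - math.exp(g * (temp_c - t_upper)))
    return root * root

def grow(mass_g, temp_c, days, b=0.31):
    """Step allometric growth dM/dt = rate(T) * M**(1 - b) one day at a time."""
    for _ in range(days):
        mass_g += ratkowsky_rate(temp_c) * mass_g ** (1.0 - b)
    return mass_g

# A hypothetical 1 g juvenile over 30 days at two stream temperatures.
print(grow(1.0, 12.0, 30), grow(1.0, 5.0, 30))
```

    Only mass and temperature are needed as inputs, which is the simplicity the authors highlight; ration dependence would enter by scaling the Ratkowsky parameters as described in the abstract.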

  9. The Effect of Geographic Units of Analysis on Measuring Geographic Variation in Medical Services Utilization

    Directory of Open Access Journals (Sweden)

    Agnus M. Kim

    2016-07-01

    Objectives: We aimed to evaluate the effect of geographic units of analysis on measuring geographic variation in medical services utilization. For this purpose, we compared geographic variation in the rates of eight major procedures between administrative units (districts) and new areal units organized based on the actual health care use of the population in Korea. Methods: To compare geographic variation across geographic units of analysis, we calculated the age–sex standardized rates of eight major procedures (coronary artery bypass graft surgery, percutaneous transluminal coronary angioplasty, surgery after hip fracture, knee-replacement surgery, caesarean section, hysterectomy, computed tomography scan, and magnetic resonance imaging scan) from the National Health Insurance database in Korea for 2013. Using the coefficient of variation, the extremal quotient, and the systematic component of variation, we measured geographic variation for these eight procedures in districts and new areal units. Results: Compared with districts, new areal units showed a reduction in geographic variation. Extremal quotients and inter-decile ratios for the eight procedures were lower in new areal units. While the coefficient of variation was lower for most procedures in new areal units, the pattern of change in the systematic component of variation between districts and new areal units differed among procedures. Conclusions: Geographic variation in medical service utilization can vary according to the geographic unit of analysis. To determine how geographic characteristics such as population size and the number of geographic units affect geographic variation, further studies are needed.
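    The three variation statistics named in the abstract can be computed as follows; the area-level data are hypothetical, and the SCV follows the McPherson-style formulation (other variants exist):

```python
import math

# Small-area variation statistics over hypothetical area-level data.
# O is observed events per area; E is the age-sex expected events.

def coefficient_of_variation(rates):
    mean = sum(rates) / len(rates)
    var = sum((r - mean) ** 2 for r in rates) / len(rates)
    return math.sqrt(var) / mean

def extremal_quotient(rates):
    return max(rates) / min(rates)

def systematic_component_of_variation(observed, expected):
    k = len(observed)
    return sum(((o - e) / e) ** 2 - 1.0 / e
               for o, e in zip(observed, expected)) / k

rates = [4.1, 5.0, 6.3, 3.8, 7.2]    # hypothetical standardized rates
observed = [41, 50, 63, 38, 72]      # hypothetical event counts
expected = [50.0, 50.0, 55.0, 45.0, 52.0]
print(coefficient_of_variation(rates))
print(extremal_quotient(rates))
print(systematic_component_of_variation(observed, expected))
```

    The SCV subtracts the 1/E term so that pure Poisson noise in small areas is not mistaken for systematic variation, which is why its pattern can differ from the CV across procedures.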

  10. Metabolic Engineering for Substrate Co-utilization

    Science.gov (United States)

    Gawand, Pratish

    Production of biofuels and bio-based chemicals is being increasingly pursued by the chemical industry to reduce its dependence on petroleum. Lignocellulosic biomass (LCB) is an abundant source of sugars that can be used for producing biofuels and bio-based chemicals using fermentation. Hydrolysis of LCB results in a mixture of sugars mainly composed of glucose and xylose. Fermentation of such a sugar mixture presents multiple technical challenges at industrial scale. Most industrial microorganisms utilize sugars in a sequential manner due to the regulatory phenomenon of carbon catabolite repression (CCR). Due to sequential utilization of sugars, LCB-based fermentation processes suffer from low productivity and complicated operation. Performance of fermentation processes can be improved by metabolic engineering of microorganisms to obtain superior characteristics such as high product yield. With increased computational power and the availability of complete genomes of microorganisms, use of model-based metabolic engineering is now common practice. The problem of sequential sugar utilization, however, is a regulatory problem, and metabolic models have never been used to solve such regulatory problems. The focus of this thesis is to use model-guided metabolic engineering to construct industrial strains capable of co-utilizing sugars. First, we develop a novel bilevel optimization algorithm, SimUp, that uses metabolic models to identify reaction deletion strategies to force co-utilization of two sugars. We then use SimUp to identify reaction deletion strategies to force glucose-xylose co-utilization in Escherichia coli. To validate SimUp predictions, we construct three mutants with multiple gene knockouts and test them for glucose-xylose utilization characteristics. Two mutants, designated as LMSE2 and LMSE5, are shown to co-utilize glucose and xylose in agreement with SimUp predictions.
To understand the molecular mechanism involved in glucose-xylose co-utilization of the

  11. Elastic Model Transitions: a Hybrid Approach Utilizing Quadratic Inequality Constrained Least Squares (LSQI) and Direct Shape Mapping (DSM)

    Science.gov (United States)

    Jurenko, Robert J.; Bush, T. Jason; Ottander, John A.

    2014-01-01

    A method for transitioning linear time invariant (LTI) models in time-varying simulation is proposed that utilizes both quadratic inequality constrained least squares (LSQI) and Direct Shape Mapping (DSM) algorithms to determine physical displacements. This approach is applicable to the simulation of the elastic behavior of launch vehicles and other structures that utilize multiple LTI finite element model (FEM) derived mode sets that are propagated throughout time. The time invariant nature of the elastic data for discrete segments of the launch vehicle trajectory presents the problem of how to properly transition between models while preserving motion across the transition. In addition, energy may vary between flex models when using a truncated mode set. The LSQI-DSM algorithm can accommodate significant changes in energy between FEM models and carries elastic motion across FEM model transitions. Compared with previous approaches, the LSQI-DSM algorithm shows improvements ranging from a significant reduction to a complete removal of transients across FEM model transitions, as well as maintaining elastic motion from the prior state.
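    The LSQI subproblem, minimizing ||Ax − b|| subject to a bound on ||x||, can be sketched with a Tikhonov-parameter bisection. This is a generic illustration of quadratic-inequality-constrained least squares, not the paper's LSQI-DSM implementation:

```python
import numpy as np

# Minimal LSQI sketch: minimize ||A x - b||_2 subject to ||x||_2 <= alpha.
# If the plain least-squares solution already satisfies the bound it is
# returned; otherwise the Tikhonov parameter lam is found by bisection so
# the constraint holds with equality.

def lsqi(A, b, alpha, tol=1e-10):
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    if np.linalg.norm(x) <= alpha:
        return x
    AtA, Atb = A.T @ A, A.T @ b
    eye = np.eye(A.shape[1])
    lo, hi = 0.0, 1.0
    # Grow hi until the regularized solution falls inside the ball.
    while np.linalg.norm(np.linalg.solve(AtA + hi * eye, Atb)) > alpha:
        hi *= 2.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        x = np.linalg.solve(AtA + lam * eye, Atb)
        if np.linalg.norm(x) > alpha:
            lo = lam
        else:
            hi = lam
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])
x = lsqi(A, b, alpha=1.0)
print(x, np.linalg.norm(x))  # norm is held at the bound alpha
```

    Monotonicity of ||x(λ)|| in λ is what makes the bisection valid; in the transition application the bound would encode an energy-like limit on the mapped displacements.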

  12. Robust estimation of partially linear models for longitudinal data with dropouts and measurement error.

    Science.gov (United States)

    Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing

    2016-12-20

    Outliers, measurement error, and missing data are commonly seen in longitudinal data because of the data collection process. However, no existing method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing existing standard generalized estimating equation algorithms. Comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.

  13. The development of multi-objective optimization model for excess bagasse utilization: A case study for Thailand

    International Nuclear Information System (INIS)

    Buddadee, Bancha; Wirojanagud, Wanpen; Watts, Daniel J.; Pitakaso, Rapeepan

    2008-01-01

    In this paper, a multi-objective optimization model is proposed as a tool to assist in deciding on the proper utilization scheme for excess bagasse produced in the sugarcane industry. Two major scenarios for excess bagasse utilization are considered in the optimization. The first scenario is the typical situation, in which excess bagasse is used for onsite electricity production. In the second scenario, excess bagasse is processed for offsite ethanol production. The ethanol is then blended with 91-octane gasoline at 10% and 90% by volume, respectively, and the mixture is used as an alternative fuel for gasoline vehicles in Thailand. The model proposed in this paper, called 'Environmental System Optimization', comprises the life cycle impact assessment of global warming potential (GWP) and the associated cost, followed by the multi-objective optimization, which facilitates finding the optimal proportion of excess bagasse processed in each scenario. Basic mathematical expressions for the GWP and cost of the entire process of excess bagasse utilization are taken into account in the model formulation and optimization. The outcome of this study is a methodology for decision-making concerning the excess bagasse available in Thailand in view of GWP and economic effects. A demonstration example is presented to illustrate the advantage of the methodology, which may be used by policy makers. The methodology successfully satisfies both environmental and economic objectives over the whole life cycle of the system. It is shown in the demonstration example that the first scenario results in a positive GWP while the second scenario results in a negative GWP. The combination of the two scenarios results in a positive or negative GWP depending on the weighting given to each objective. The economic results of all scenarios show satisfactory outcomes.
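    The weighted-sum flavor of this multi-objective decision can be sketched as follows; the per-tonne GWP and cost coefficients are hypothetical, chosen only so that the GWP signs match the scenario behavior reported above:

```python
# Weighted-sum sketch of the two-scenario trade-off: a fraction x of excess
# bagasse goes to onsite electricity (scenario 1) and the rest to offsite
# ethanol (scenario 2). All per-tonne coefficients are hypothetical.

GWP_PER_TONNE = {"electricity": 40.0, "ethanol": -25.0}  # kg CO2-eq
COST_PER_TONNE = {"electricity": 12.0, "ethanol": 30.0}  # arbitrary currency units

def objectives(x):
    """x = fraction of bagasse sent to electricity, in [0, 1]."""
    gwp = x * GWP_PER_TONNE["electricity"] + (1 - x) * GWP_PER_TONNE["ethanol"]
    cost = x * COST_PER_TONNE["electricity"] + (1 - x) * COST_PER_TONNE["ethanol"]
    return gwp, cost

def best_split(w_gwp, w_cost, steps=1000):
    """Minimize the weighted sum of GWP and cost over the split fraction x."""
    candidates = [i / steps for i in range(steps + 1)]
    return min(candidates,
               key=lambda x: w_gwp * objectives(x)[0] + w_cost * objectives(x)[1])

print(best_split(0.9, 0.1))  # GWP-dominated weighting favors ethanol (x = 0)
print(best_split(0.1, 0.9))  # cost-dominated weighting favors electricity (x = 1)
```

    Because both objectives here are linear in x, the optimum sits at a corner; the sign flip of the combined GWP as the weights change mirrors the behavior described in the demonstration example.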

  14. Rolling Resistance Measurement and Model Development

    DEFF Research Database (Denmark)

    Andersen, Lasse Grinderslev; Larsen, Jesper; Fraser, Elsje Sophia

    2015-01-01

    There is an increased focus worldwide on understanding and modeling rolling resistance because reducing rolling resistance by just a few percent will lead to substantial energy savings. This paper reviews the state of the art of rolling resistance research, focusing on measurement techniques, surface and texture modeling, contact models, tire models, and macro-modeling of rolling resistance.

  15. Measuring corporate social responsibility using composite indices: Mission impossible? The case of the electricity utility industry

    Directory of Open Access Journals (Sweden)

    Juan Diego Paredes-Gazquez

    2016-01-01

    Corporate social responsibility is a multidimensional concept that is often measured using diverse indicators. Composite indices can aggregate these single indicators into one measurement. This article aims to identify the key challenges in constructing a composite index for measuring corporate social responsibility. The process is illustrated by the construction of a composite index for measuring social outcomes in the electricity utility industry. The sample consisted of seventy-four companies from twenty-three different countries and one special administrative region operating in the industry in 2011. The findings show that (1) the unavailability of information about corporate social responsibility, (2) the particular characteristics of this information, and (3) the weighting of indicators are the main obstacles when constructing the composite index. We highlight that an effective composite index should have a clear objective, a solid theoretical background, and a robust structure. In a practical sense, researchers should reconsider how they use composite indices to measure corporate social responsibility, as more transparency and stringency are needed when constructing these tools.
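    A minimal sketch of the aggregation step such an index performs: min-max normalize each single indicator, then form a weighted sum. Indicator names, company scores, and weights are hypothetical:

```python
# Composite-index construction sketch: normalize single indicators to [0, 1],
# then aggregate with explicit weights. All data below are hypothetical.

def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(indicators, weights):
    """indicators: dict name -> list of raw company scores; weights sum to 1."""
    normalized = {k: min_max_normalize(v) for k, v in indicators.items()}
    n = len(next(iter(indicators.values())))
    return [sum(weights[k] * normalized[k][i] for k in indicators) for i in range(n)]

indicators = {  # three hypothetical CSR indicators across four firms
    "community_spend": [1.0, 4.0, 2.5, 3.0],
    "safety_incidents_inv": [0.9, 0.2, 0.5, 0.8],  # already inverted: higher = better
    "access_programs": [2, 5, 3, 4],
}
weights = {"community_spend": 0.4, "safety_incidents_inv": 0.3, "access_programs": 0.3}
print(composite_index(indicators, weights))
```

    The weighting step is exactly the third obstacle the article identifies: the ranking of firms can change if the weights dict is revised, which is why transparency about it matters.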

  16. Utilization of arterial blood gas measurements in a large tertiary care hospital.

    Science.gov (United States)

    Melanson, Stacy E F; Szymanski, Trevor; Rogers, Selwyn O; Jarolim, Petr; Frendl, Gyorgy; Rawn, James D; Cooper, Zara; Ferrigno, Massimo

    2007-04-01

    We describe the patterns of utilization of arterial blood gas (ABG) tests in a large tertiary care hospital. To our knowledge, no hospital-wide analysis of ABG test utilization has been published. We analyzed 491 ABG tests performed during 24 two-hour intervals, representative of different staff shifts throughout the 7-day week. The clinician ordering each ABG test was asked to fill out a utilization survey. The most common reasons for requesting an ABG test were changes in ventilator settings (27.6%), respiratory events (26.4%), and routine (25.7%). Of the results, approximately 79% were expected, and a change in patient management (eg, a change in ventilator settings) occurred in 42% of cases. Many ABG tests were ordered as part of a clinical routine or to monitor parameters that can be assessed clinically or through less invasive testing. Implementation of practice guidelines may prove useful in controlling test utilization and in decreasing costs.

  17. Modeling a Packed Bed Reactor Utilizing the Sabatier Process

    Science.gov (United States)

    Shah, Malay G.; Meier, Anne J.; Hintze, Paul E.

    2017-01-01

    A numerical model is being developed using Python which characterizes the conversion and temperature profiles of a packed bed reactor (PBR) that utilizes the Sabatier process; the reaction produces methane and water from carbon dioxide and hydrogen. While the specific kinetics of the Sabatier reaction on the Ru/Al2O3 catalyst pellets are unknown, an empirical reaction rate equation [1] is used for the overall reaction. As this reaction is highly exothermic, proper thermal control is of the utmost importance to ensure maximum conversion and to avoid reactor runaway. It is therefore necessary to determine what wall temperature profile will ensure safe and efficient operation of the reactor. This wall temperature will be maintained by active thermal controls on the outer surface of the reactor. Two cylindrical PBRs are currently being tested experimentally and will be used for validation of the Python model. They are similar in design except that one of them is larger and incorporates a preheat loop by feeding the reactant gas through a pipe along the center of the catalyst bed. The added complexity of modeling the preheat pipe of the larger reactor is yet to be implemented and validated; preliminary validation is done using the smaller PBR with no reactant preheating. When mapping experimental values of the wall temperature from the smaller PBR into the Python model, a good approximation of the total conversion and temperature profile has been achieved. A separate CFD model incorporates more complex three-dimensional effects by including the solid catalyst pellets within the domain. The goal is to improve the Python model to the point where the results for other reactor geometries can be reasonably predicted relatively quickly when compared to the much more computationally expensive CFD approach. Once a reactor size is narrowed down using the Python approach, CFD will be used to generate a more thorough prediction of the reactor's performance.
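    The outlet conversion of such a reactor can be sketched in Python with a one-dimensional, isothermal plug-flow balance. The Arrhenius parameters below are hypothetical placeholders, not the empirical Sabatier kinetics used by the actual model, and the real model additionally solves an energy balance:

```python
import math

R = 8.314  # J/(mol K)

def k_eff(temp_k, a=5.0e6, ea=7.0e4):
    """Hypothetical Arrhenius effective first-order rate constant, 1/s."""
    return a * math.exp(-ea / (R * temp_k))

def conversion_profile(length_m, u_gas, temp_k, steps=1000):
    """Explicit-Euler integration of dX/dz = k_eff(T) * (1 - X) / u_gas;
    returns CO2 conversion X at the bed outlet."""
    dz, x = length_m / steps, 0.0
    for _ in range(steps):
        x += k_eff(temp_k) * (1.0 - x) / u_gas * dz
    return x

for t in (500.0, 600.0, 700.0):  # candidate wall-controlled bed temperatures, K
    print(f"T = {t:.0f} K: outlet conversion = {conversion_profile(0.3, 0.5, t):.3f}")
```

    Even this sketch shows why the wall temperature profile is the key control variable: conversion is strongly temperature-dependent, while in the real exothermic reactor too high a temperature also risks runaway.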

  18. Factor Structure, Reliability and Measurement Invariance of the Alberta Context Tool and the Conceptual Research Utilization Scale, for German Residential Long Term Care

    Science.gov (United States)

    Hoben, Matthias; Estabrooks, Carole A.; Squires, Janet E.; Behrens, Johann

    2016-01-01

    We translated the Canadian residential long term care versions of the Alberta Context Tool (ACT) and the Conceptual Research Utilization (CRU) Scale into German, to study the association between organizational context factors and research utilization in German nursing homes. The rigorous translation process was based on best practice guidelines for tool translation, and we previously published methods and results of this process in two papers. Both instruments are self-report questionnaires used with care providers working in nursing homes. The aim of this study was to assess the factor structure, reliability, and measurement invariance (MI) between care provider groups responding to these instruments. In a stratified random sample of 38 nursing homes in one German region (Metropolregion Rhein-Neckar), we collected questionnaires from 273 care aides, 196 regulated nurses, 152 allied health providers, 6 quality improvement specialists, 129 clinical leaders, and 65 nursing students. The factor structure was assessed using confirmatory factor models. The first model included all 10 ACT concepts. We also decided a priori to run two separate models for the scale-based and the count-based ACT concepts as suggested by the instrument developers. The fourth model included the five CRU Scale items. Reliability scores were calculated based on the parameters of the best-fitting factor models. Multiple-group confirmatory factor models were used to assess MI between provider groups. Rather than the hypothesized ten-factor structure of the ACT, confirmatory factor models suggested 13 factors. The one-factor solution of the CRU Scale was confirmed. The reliability was acceptable (>0.7 in the entire sample and in all provider groups) for 10 of 13 ACT concepts, and high (0.90–0.96) for the CRU Scale. We could demonstrate partial strong MI for both ACT models and partial strict MI for the CRU Scale. 
Our results suggest that the scores of the German ACT and the CRU Scale for nursing

  19. Utilization and cost of a new model of care for managing acute knee injuries: the Calgary acute knee injury clinic

    Directory of Open Access Journals (Sweden)

    Lau Breda HF

    2012-12-01

    Background: Musculoskeletal disorders (MSDs) affect a large proportion of the Canadian population and present a huge problem that continues to strain primary healthcare resources. Currently, the Canadian healthcare system depicts a clinical care pathway for MSDs that is inefficient and ineffective. Therefore, a new inter-disciplinary team-based model of care for managing acute knee injuries was developed in Calgary, Alberta, Canada: the Calgary Acute Knee Injury Clinic (C-AKIC). The goal of this paper is to evaluate and report on the appropriateness, efficiency, and effectiveness of the C-AKIC through healthcare utilization and costs associated with acute knee injuries. Methods: This quasi-experimental study measured and evaluated cost and utilization associated with specific healthcare services for patients presenting with acute knee injuries. The goal was to compare patients receiving care from two clinical care pathways: the existing pathway (comparison group) and a new model, the C-AKIC (experimental group). This was accomplished through the use of a Healthcare Access and Patient Satisfaction Questionnaire (HAPSQ). Results: Data from 138 questionnaires were analyzed in the experimental group and 136 in the comparison group. A post-hoc analysis determined that both groups were statistically similar in socio-demographic characteristics. With respect to utilization, patients receiving care through the C-AKIC used significantly fewer resources. Overall, patients receiving care through the C-AKIC incurred 37% of the cost incurred by patients with knee injuries in the comparison group, a significant difference. The total aggregate average cost for the C-AKIC group was $2,549.59 compared to $6,954.33 for the comparison group. Conclusions: The Calgary Acute Knee Injury Clinic was able to manage and treat knee-injured patients for less cost than the existing state of healthcare delivery.

  20. Assessment of Retrofitting Measures for a Large Historic Research Facility Using a Building Energy Simulation Model

    Directory of Open Access Journals (Sweden)

    Young Tae Chae

    2016-06-01

    A calibrated building simulation model was developed to assess the energy performance of a large historic research building. The complexity of space functions and operational conditions, combined with the limited availability of energy meters, makes it hard to understand end-use energy consumption in detail and to identify appropriate retrofitting options for reducing energy consumption and greenhouse gas (GHG) emissions. An energy simulation model was developed to study energy usage patterns not only at the building level, but also for internal thermal zones and system operations. The model was validated using site measurements of energy usage and a detailed audit of the internal load conditions, system operation, and space programs to minimize the discrepancy between the documented status and actual operational conditions. Based on the results of the calibrated model and end-use energy consumption, the study proposed potential energy conservation measures (ECMs) for the building envelope, HVAC system operation, and system replacement. It also evaluated each ECM from the perspective of both energy and utility cost saving potential to support retrofit decision-making. The study shows that the energy consumption of the building was highly dominated by the thermal requirements of laboratory spaces. Among the ECMs, the demand management option of overriding the setpoint temperature is the most cost-effective measure.

  1. National Utility Rate Database: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ong, S.; McKeel, R.

    2012-08-01

    When modeling solar energy technologies and other distributed energy systems, using high-quality, comprehensive electricity rate data is essential. The National Renewable Energy Laboratory (NREL) developed a utility rate platform for entering, storing, updating, and accessing a large collection of utility rates from around the United States. This utility rate platform lives on the Open Energy Information (OpenEI) website, OpenEI.org, allowing the data to be accessed from a web browser or programmatically via an application programming interface (API). The semantic-based utility rate platform currently holds records of 1,885 utility rates and covers over 85% of the electricity consumption in the United States.

  2. FAILING YET AGAIN TO IMPRESS: RECRUITMENT UTILITY ANALYSIS - AN INNOVATION IMPLEMENTATION

    OpenAIRE

    James, Theresa

    2010-01-01

    The research area of recruitment utility analysis (RUA) models has been somewhat unexplored for decades, and has previously been reduced to simplified mathematical formulas measuring value only in dollar terms. The need for more dynamic models and theories in the area has been voiced numerous times, yet little has been done. The purpose of this study was to highlight this need in order to encourage further research, and to examine the managerial perspective on RUA from a semi-explorative perspective.

  3. A Study of How the Watts-Strogatz Model Relates to an Economic System’s Utility

    OpenAIRE

    Lunhan Luo; Jianan Fang

    2014-01-01

    The Watts-Strogatz model is a principal mechanism for constructing small-world networks. It is widely used in simulations of systems with small-world features, including economic systems. Formally, the model has a parameter set of three variables representing group size, number of neighbors, and rewiring probability. This paper discusses how the parameter set relates to economic system performance, measured as the utility growth rate. In conclusion, it is found that, regardless of the group size...
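    The construction underlying the paper can be sketched from scratch with the three parameters it names (group size n, neighbors k, rewiring probability p), assuming undirected edges, even k, and the standard one-endpoint rewiring rule:

```python
import random

# Self-contained Watts-Strogatz construction: start from a ring lattice of
# n nodes each joined to its k nearest neighbors, then rewire each lattice
# edge with probability p, avoiding self-loops and duplicate edges.

def watts_strogatz(n, k, p, seed=0):
    """Return the graph as a set of frozenset edges; k even, k << n."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):                 # ring lattice
        for j in range(1, k // 2 + 1):
            edges.add(frozenset((i, (i + j) % n)))
    for i in range(n):                 # rewiring pass over lattice edges
        for j in range(1, k // 2 + 1):
            old = frozenset((i, (i + j) % n))
            if old in edges and rng.random() < p:
                candidates = [w for w in range(n)
                              if w != i and frozenset((i, w)) not in edges]
                if candidates:
                    edges.remove(old)
                    edges.add(frozenset((i, rng.choice(candidates))))
    return edges

g = watts_strogatz(n=20, k=4, p=0.1)
print(len(g))  # rewiring preserves the edge count: n * k / 2 = 40
```

    Sweeping p from 0 toward 1 moves the graph from a regular lattice toward a random graph; the small-world regime at intermediate p is what makes the model attractive for simulating economic interaction structures.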

  4. Utility of a human-mouse xenograft model and in vivo near-infrared fluorescent imaging for studying wound healing.

    Science.gov (United States)

    Shanmugam, Victoria K; Tassi, Elena; Schmidt, Marcel O; McNish, Sean; Baker, Stephen; Attinger, Christopher; Wang, Hong; Shara, Nawar; Wellstein, Anton

    2015-12-01

    To study the complex cellular interactions involved in wound healing, it is essential to have an animal model that adequately mimics the human wound microenvironment. Currently available murine models are limited because wound contraction introduces bias into wound surface area measurements. The purpose of this study was to demonstrate the utility of a human-mouse xenograft model for studying human wound healing. Normal human skin was harvested from elective abdominoplasty surgery, xenografted onto athymic nude (nu/nu) mice, and allowed to engraft for 3 months. The graft was then wounded using a 2-mm punch biopsy. Wounds were harvested on sequential days to allow tissue-based markers of wound healing to be followed sequentially. On the day of wound harvest, mice were injected with XenoLight RediJect cyclooxygenase-2 (COX-2) probe and imaged according to package instructions. Immunohistochemistry confirms that this human-mouse xenograft model is effective for studying human wound healing in vivo. Additionally, in vivo fluorescent imaging for inducible COX-2 demonstrated upregulation from baseline to day 4 (P = 0.03) with return to baseline levels by day 10, paralleling the reepithelialisation of the wound. This human-mouse xenograft model, combined with in vivo fluorescent imaging, provides a useful mechanism for studying molecular pathways of human wound healing. © 2013 The Authors. International Wound Journal © 2013 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  5. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Measurement control program at model facility

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    A measurement control program for the model plant is described. The discussion includes the technical basis for such a program, the application of measurement control principles to each measurement, and the use of special experiments to estimate measurement error parameters for difficult-to-measure materials. The discussion also describes the statistical aspects of the program, and the documentation procedures used to record, maintain, and process the basic data

  7. The headache under-response to treatment (HURT) questionnaire, an outcome measure to guide follow-up in primary care: development, psychometric evaluation and assessment of utility.

    Science.gov (United States)

    Steiner, T J; Buse, D C; Al Jumah, M; Westergaard, M L; Jensen, R H; Reed, M L; Prilipko, L; Mennini, F S; Láinez, M J A; Ravishankar, K; Sakai, F; Yu, S-Y; Fontebasso, M; Al Khathami, A; MacGregor, E A; Antonaci, F; Tassorelli, C; Lipton, R B

    2018-02-14

    Headache disorders are both common and burdensome but, given the many people affected, provision of health care to all is challenging. Structured headache services based in primary care are the most efficient, equitable and cost-effective solution but place responsibility for managing most patients on health-care providers with limited training in headache care. The development of practical management aids for primary care is therefore a purpose of the Global Campaign against Headache. This manuscript presents an outcome measure, the Headache Under-Response to Treatment (HURT) questionnaire, describing its purpose, development, psychometric evaluation and assessment for clinical utility. The objective was a simple-to-use instrument that would both assess outcome and provide guidance to improving outcome, having utility across the range of headache disorders, across clinical settings and across countries and cultures. After literature review, an expert consensus group drawn from all six world regions formulated HURT through item development and item reduction using item-response theory. Using the American Migraine Prevalence and Prevention Study's general-population respondent panel, two mailed surveys assessed the psychometric properties of HURT, comparing it with other instruments as external validators. Reliability was assessed in patients in two culturally-contrasting clinical settings: headache specialist centres in Europe (n = 159) and primary-care centres in Saudi Arabia (n = 40). Clinical utility was assessed in similar settings (Europe n = 201; Saudi Arabia n = 342). The final instrument, an 8-item self-administered questionnaire, addressed headache frequency, disability, medication use and effect, patients' perceptions of headache "control" and their understanding of their diagnoses. Psychometric evaluation revealed a two-factor model (headache frequency, disability and medication use; and medication efficacy and headache control), with

  8. Allocating provider resources to diagnose and treat restless legs syndrome: a cost-utility analysis.

    Science.gov (United States)

    Padula, William V; Phelps, Charles E; Moran, Dane; Earley, Christopher

    2017-10-01

    Restless legs syndrome (RLS) is a neurological disorder that is frequently misdiagnosed, resulting in delays in proper treatment. The objective of this study was to analyze the cost-utility of training primary care providers (PCP) in early and accurate diagnosis of RLS. We used a Markov model to compare two strategies: one where PCPs received training to diagnose RLS (informed care) and one where PCPs did not receive training (standard care). This analysis was conducted from the US societal and health sector perspectives over one-year, five-year, and lifetime (50-year) horizons. Costs were adjusted to 2016 USD, utilities measured as quality-adjusted life-years (QALYs), and both measures were discounted annually at 3%. Cost, utilities, and probabilities for the model were obtained through a comprehensive review of literature. An incremental cost-effectiveness ratio (ICER) was calculated to interpret our findings at a willingness-to-pay threshold of $100,000/QALY. Univariate and multivariate analyses were conducted to test model uncertainty, in addition to calculating the expected value of perfect information. Providing training to PCPs to correctly diagnose RLS was cost-effective since it cost $2021 more and gained 0.44 QALYs per patient over the course of a lifetime, resulting in an ICER of $4593/QALY. The model was sensitive to the utility for treated and untreated RLS. The probabilistic sensitivity analysis revealed that at $100,000/QALY, informed care had a 65.5% probability of being cost-effective. A program to train PCPs to better diagnose RLS appears to be a cost-effective strategy for improving outcomes for RLS patients. Copyright © 2017 Elsevier B.V. All rights reserved.
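The cost-utility machinery in this abstract (annual 3% discounting of costs and QALYs, then an ICER judged against a willingness-to-pay threshold) can be sketched generically. The cost and utility streams below are invented placeholders, not the study's inputs.

```python
def discounted_total(annual_values, rate=0.03):
    """Sum a stream of annual values discounted at a fixed annual rate."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(annual_values))

def icer(d_cost, d_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return d_cost / d_qaly

# Illustrative (not the paper's) per-patient streams over a 3-year horizon.
informed = {"costs": [1200, 900, 900], "qalys": [0.85, 0.85, 0.85]}
standard = {"costs": [800, 1000, 1000], "qalys": [0.80, 0.80, 0.80]}

d_cost = discounted_total(informed["costs"]) - discounted_total(standard["costs"])
d_qaly = discounted_total(informed["qalys"]) - discounted_total(standard["qalys"])
ratio = icer(d_cost, d_qaly)
print(round(ratio))  # cost per QALY gained; compare to a $100,000/QALY threshold
```

A strategy is called cost-effective when this ratio falls below the chosen willingness-to-pay threshold, as the paper's $4593/QALY does.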

  9. A choice modelling analysis on the similarity between distribution utilities' and industrial customers' price and quality preferences

    International Nuclear Information System (INIS)

    Soederberg, Magnus

    2008-01-01

    The Swedish Electricity Act states that electricity distribution must comply with both price and quality requirements. In order to maintain efficient regulation it is necessary first to define quality attributes and second to determine a customer's priorities concerning price and quality attributes. If distribution utilities gain an understanding of customer preferences and incentives for reporting them, the regulator can save a lot of time by surveying them rather than their customers. This study applies a choice modelling methodology where utilities and industrial customers are asked to evaluate the same twelve choice situations in which price and four specific quality attributes are varied. The preferences expressed by the utilities, and estimated by a random parameter logit, correspond quite well with the preferences expressed by the largest industrial customers. The preferences expressed by the utilities are reasonably homogeneous in relation to forms of association (private limited, public and trading partnership). If the regulator acts according to the preferences expressed by the utilities, smaller industrial customers will have to pay for quality they have not asked for. (author)

  10. Different initiatives across Europe to enhance losartan utilization post generics: impact and implications

    Science.gov (United States)

    Moon, James C.; Godman, Brian; Petzold, Max; Alvarez-Madrazo, Samantha; Bennett, Kathleen; Bishop, Iain; Bucsics, Anna; Hesse, Ulrik; Martin, Andrew; Simoens, Steven; Zara, Corinne; Malmström, Rickard E.

    2014-01-01

    Introduction: There is an urgent need for health authorities across Europe to fully realize potential savings from increased use of generics to sustain their healthcare systems. A variety of strategies were used across Europe following the availability of generic losartan, the first angiotensin receptor blocker (ARB) to be approved and marketed, to enhance its prescribing vs. single-sourced drugs in the class. Demand-side strategies ranged from 100% co-payment for single-sourced ARBs in Denmark to no specific measures. We hypothesized this heterogeneity of approaches would provide opportunities to explore prescribing in a class following patent expiry. Objective: Contrast the impact of the different approaches among European countries and regions to the availability of generic losartan to provide future guidance. Methodology: Retrospective segmented regression analyses applying linear random coefficient models with country specific intercepts and slopes were used to assess the impact of the various initiatives across Europe following the availability of generic losartan. Utilization measured in defined daily doses (DDDs). Price reductions for generic losartan were also measured. Results: Utilization of losartan was over 90% of all ARBs in Denmark by the study end. Multiple measures in Sweden and one English primary care group also appreciably enhanced losartan utilization. Losartan utilization actually fell in some countries with no specific demand-side measures. Considerable differences were seen in the prices of generic losartan. Conclusion: Delisting single-sourced ARBs produced the greatest increase in losartan utilization. Overall, multiple demand-side measures are needed to change physician prescribing habits to fully realize savings from generics. There is no apparent “spill over” effect from one class to another to influence future prescribing patterns even if these are closely related. PMID:25339902
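A minimal version of the segmented (interrupted time-series) regression used in this kind of analysis can be written as an ordinary least-squares fit with level-change and slope-change terms. The utilization series below is synthetic, and the random-coefficient, multi-country structure of the actual study is omitted.

```python
import numpy as np

# Illustrative monthly utilization series (DDDs); generic losartan enters at month 12.
t = np.arange(24)
intervention = (t >= 12).astype(float)
time_after = np.where(t >= 12, t - 12, 0.0)
# Synthetic data: mild baseline trend, then a level jump and a steeper post-entry trend.
y = 100 + 0.5 * t + 30 * intervention + 2.0 * time_after

# Design matrix of the segmented model: intercept, baseline trend, level change, slope change.
X = np.column_stack([np.ones_like(t, dtype=float), t, intervention, time_after])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, slope_change = beta[2], beta[3]
print(round(level_change, 1), round(slope_change, 1))  # recovers 30.0 and 2.0
```

The estimated level and slope changes are what quantify the impact of a demand-side measure such as delisting single-sourced ARBs.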

  11. Surplus thermal energy model of greenhouses and coefficient analysis for effective utilization

    Directory of Open Access Journals (Sweden)

    Seung-Hwan Yang

    2016-03-01

    If a greenhouse in the temperate and subtropical regions is maintained in a closed condition, the indoor temperature commonly exceeds that required for optimal plant growth, even in the cold season. This study considered this excess energy as surplus thermal energy (STE), which can be recovered, stored and used when heating is necessary. To use the STE economically and effectively, the amount of STE must be estimated before designing a utilization system. Therefore, this study proposed an STE model using energy balance equations for the three steps of the STE generation process. The coefficients in the model were determined by the results of previous research and experiments using the test greenhouse. The proposed STE model produced monthly errors of 17.9%, 10.4% and 7.4% for December, January and February, respectively. Furthermore, the effects of the coefficients on the model accuracy were revealed by the estimation error assessment and linear regression analysis through fixing dynamic coefficients. A sensitivity analysis of the model coefficients indicated that the coefficients have to be determined carefully. This study also provides effective ways to increase the amount of STE.

  12. Surplus thermal energy model of greenhouses and coefficient analysis for effective utilization

    Energy Technology Data Exchange (ETDEWEB)

    Yang, S.H.; Son, J.E.; Lee, S.D.; Cho, S.I.; Ashtiani-Araghi, A.; Rhee, J.Y.

    2016-11-01

    If a greenhouse in the temperate and subtropical regions is maintained in a closed condition, the indoor temperature commonly exceeds that required for optimal plant growth, even in the cold season. This study considered this excess energy as surplus thermal energy (STE), which can be recovered, stored and used when heating is necessary. To use the STE economically and effectively, the amount of STE must be estimated before designing a utilization system. Therefore, this study proposed an STE model using energy balance equations for the three steps of the STE generation process. The coefficients in the model were determined by the results of previous research and experiments using the test greenhouse. The proposed STE model produced monthly errors of 17.9%, 10.4% and 7.4% for December, January and February, respectively. Furthermore, the effects of the coefficients on the model accuracy were revealed by the estimation error assessment and linear regression analysis through fixing dynamic coefficients. A sensitivity analysis of the model coefficients indicated that the coefficients have to be determined carefully. This study also provides effective ways to increase the amount of STE. (Author)
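The surplus idea itself (energy left over after covering losses in a closed greenhouse, clipped at zero) reduces to a simple balance per time step. The paper's three-step model and fitted coefficients are not reproduced here; the sketch below only shows the clipped balance with made-up hourly numbers.

```python
def surplus_thermal_energy(solar_gain, conduction_loss, ventilation_loss):
    """Hourly surplus (J): energy left after covering losses, never negative."""
    return max(0.0, solar_gain - conduction_loss - ventilation_loss)

# Three illustrative hours: (solar gain, conduction loss, ventilation loss) in joules.
hours = [(3.6e6, 1.2e6, 0.4e6), (0.5e6, 1.5e6, 0.3e6), (2.0e6, 1.0e6, 0.5e6)]
total = sum(surplus_thermal_energy(*h) for h in hours)
print(total)  # 2.5e6 J recoverable over the three hours
```

Summing such hourly surpluses over a month is what a designer would compare against heating demand when sizing a recovery and storage system.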

  13. Analysis and Characterization of Damage and Failure Utilizing a Generalized Composite Material Model Suitable for Use in Impact Problems

    Science.gov (United States)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Khaled, Bilal; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther

    2016-01-01

    A material model which incorporates several key capabilities which have been identified by the aerospace community as lacking in state-of-the-art composite impact models is under development. In particular, a next generation composite impact material model, jointly developed by the FAA and NASA, is being implemented into the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage, and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure as opposed to specifying discrete input parameters (such as modulus and strength). The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is utilized to allow for the uncoupling of the deformation and damage analyses. In the damage model, a semi-coupled approach is employed where the overall damage in a particular coordinate direction is assumed to be a multiplicative combination of the damage in that direction resulting from the applied loads in the various coordinate directions. Due to the fact that the plasticity and damage models are uncoupled, test procedures and methods to both characterize the damage model and to convert the material stress-strain curves from the true (damaged) stress space to the effective (undamaged) stress space have been developed. A methodology has been developed to input the experimentally determined composite failure surface in a tabulated manner. An analytical approach is then utilized to track how close the current stress state is to the failure surface.

  14. Measurement-based harmonic current modeling of mobile storage for power quality study in the distribution system

    Directory of Open Access Journals (Sweden)

    Wenge Christoph

    2017-12-01

    Electric vehicles (EVs) can be utilized as mobile storages in a power system. The use of battery chargers can cause current harmonics in the supplied AC system. In order to analyze the impact of different EVs with regard to their number and their emission of current harmonics, a generic harmonic current model of EV types was built and implemented in the power system simulation tool PSS®NETOMAC. Based on measurement data for different types of EVs, three standardized harmonic EV models were developed and parametrized. Further, the identified harmonic models were used in load-flow computations for a modeled German power distribution system. As a benchmark, a case scenario was studied assuming a high market penetration of EVs in the year 2030 for Germany. The impact of EV charging on the power distribution system was analyzed and evaluated against applicable power quality standards.
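One standard way to summarize the harmonic emission of a charger model is total harmonic distortion (THD) computed over the RMS magnitudes of its current harmonics. The spectrum below is an assumed example, not measured data from the paper.

```python
import math

def thd(harmonic_rms):
    """Total harmonic distortion from RMS magnitudes {order: amps}; order 1 is the fundamental."""
    fundamental = harmonic_rms[1]
    return math.sqrt(sum(a * a for n, a in harmonic_rms.items() if n > 1)) / fundamental

# Illustrative EV charger current spectrum (A, RMS) - values are assumptions.
charger = {1: 16.0, 3: 2.4, 5: 1.6, 7: 0.8}
print(round(thd(charger) * 100, 1))  # THD as a percentage of the fundamental
```

Power quality standards typically bound both the THD and the individual harmonic magnitudes, which is why a per-order model is needed for the load-flow study.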

  15. Varying coefficients model with measurement error.

    Science.gov (United States)

    Li, Liang; Greene, Tom

    2008-06-01

    We propose a semiparametric partially varying coefficient model to study the relationship between serum creatinine concentration and the glomerular filtration rate (GFR) among kidney donors and patients with chronic kidney disease. A regression model is used to relate serum creatinine to GFR and demographic factors in which coefficient of GFR is expressed as a function of age to allow its effect to be age dependent. GFR measurements obtained from the clearance of a radioactively labeled isotope are assumed to be a surrogate for the true GFR, with the relationship between measured and true GFR expressed using an additive error model. We use locally corrected score equations to estimate parameters and coefficient functions, and propose an expected generalized cross-validation (EGCV) method to select the kernel bandwidth. The performance of the proposed methods, which avoid distributional assumptions on the true GFR and residuals, is investigated by simulation. Accounting for measurement error using the proposed model reduced apparent inconsistencies in the relationship between serum creatinine and GFR among different clinical data sets derived from kidney donor and chronic kidney disease source populations.
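The core problem the authors address, a surrogate measured with additive error attenuating the regression coefficient, can be demonstrated with a small simulation: the naive slope on the error-prone surrogate shrinks toward zero by the classical factor 1/(1 + sigma_u^2/sigma_x^2). This is a generic illustration, not the paper's locally corrected score method.

```python
import random

random.seed(1)
n = 20000
beta_true = 2.0
sigma_u = 1.0                                        # measurement-error SD on the surrogate
x = [random.gauss(0, 1) for _ in range(n)]           # true predictor (unit variance)
w = [xi + random.gauss(0, sigma_u) for xi in x]      # error-prone surrogate (e.g. measured GFR)
y = [beta_true * xi + random.gauss(0, 0.5) for xi in x]

def slope(a, b):
    """Ordinary least-squares slope of b on a."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var = sum((ai - ma) ** 2 for ai in a)
    return cov / var

naive = slope(w, y)                          # attenuated towards zero
expected = beta_true / (1 + sigma_u ** 2)    # classical attenuation with unit-variance x
print(round(naive, 2), round(expected, 2))
```

The simulated naive slope lands near 1.0 rather than the true 2.0, which is exactly the kind of bias a corrected-score or instrumental-variable estimator is designed to remove.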

  16. Preferences Under Uncertainty and the Deficiencies of the Expected Utility Model

    OpenAIRE

    Murat Tasdemir

    2007-01-01

    In economics, the prevailing framework for explaining preferences under uncertainty is Expected Utility theory. Despite its widespread use, Expected Utility theory is not free from problems. Experimental and empirical work shows that, in real life, the choices of individuals among risky alternatives conflict with the axioms of Expected Utility theory. This study, in the light of experimental studies, investigates the problems with Expected Utility theory regarding the individua...
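The framework under discussion evaluates a lottery as the probability-weighted sum of the utilities of its outcomes. A minimal sketch with Allais-style lotteries and an illustrative concave (risk-averse) utility function:

```python
def expected_utility(lottery, u):
    """Expected utility of a lottery given as [(probability, payoff), ...]."""
    return sum(p * u(x) for p, x in lottery)

u = lambda x: x ** 0.5   # an illustrative concave utility; any increasing u(x) works

safe  = [(1.00, 2400)]                            # a sure payoff
risky = [(0.33, 2500), (0.66, 2400), (0.01, 0)]   # an Allais-style gamble

eu_safe = expected_utility(safe, u)
eu_risky = expected_utility(risky, u)
print(eu_safe > eu_risky)  # the risk-averse agent prefers the sure payoff
```

The experimental violations the study surveys arise because observed choices over pairs of such lotteries cannot be rationalized by any single utility function u under this formula.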

  17. Practical strategies of wind energy utilization for uninhabited aerial vehicles in loiter flights

    Science.gov (United States)

    Singhania, Hong Yang

    Uninhabited Aerial Vehicles (UAVs) are becoming increasingly attractive for missions where human presence is undesirable or impossible. Agile maneuvers and long endurance are among the most desired advantages of UAVs over aircraft that have human pilots onboard. Past studies suggest that the performance of UAVs may be considerably improved by utilizing natural resources, especially wind energy, during flights. The key challenge of exploiting wind energy in practical UAV operations lies in the availability of reliable and timely wind field information in the operational region. This thesis presents a practical onboard strategy that attempts to overcome this challenge, to enable UAVs to utilize wind energy effectively during flights, and therefore to enhance performance. We propose and explore a strategy that combines wind measurement and optimal trajectory planning onboard UAVs. During a cycle of a loiter flight, a UAV can take measurements of wind velocity components over the flight region, use these measurements to estimate the local wind field through a model-based approach, and then compute a flight trajectory for the next flight cycle with the objective of minimizing fuel use. As the UAV follows the planned trajectory, it continues to measure the wind components and repeats the process of updating the wind model with new estimates and planning optimal trajectories for the next flight cycle. Besides presenting an onboard trajectory-planning strategy of wind energy exploration, estimation, and utilization, this research also develops a semi-analytical linearized solution to the formulated nonlinear optimal control problem. Simulations and numerical results indicate that the fuel savings of trajectories generated using the proposed scheme depend on wind speed, wind estimation errors, rates of change in wind speed, and the wind model structure. For a given wind field, the magnitude of potential fuel savings is also contingent upon the UAV's performance capabilities.
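The estimate-then-plan loop described above hinges on fitting a local wind-field model to onboard samples collected during a loiter cycle. A toy version, assuming a linear field w(x, y) = w0 + g·[x, y] and an ordinary least-squares fit (the thesis's actual model structure may differ):

```python
import numpy as np

# Assumed "true" linear wind field: mean speed (m/s) plus a spatial gradient (1/s).
rng = np.random.default_rng(0)
true_w0 = 5.0
true_g = np.array([0.02, -0.01])

# Noisy wind-speed samples at positions visited along one loiter cycle.
positions = rng.uniform(-500, 500, size=(40, 2))
measured = true_w0 + positions @ true_g + rng.normal(0, 0.2, size=40)

# Least-squares estimate of [w0, gx, gy] from the onboard measurements.
A = np.column_stack([np.ones(len(positions)), positions])
est, *_ = np.linalg.lstsq(A, measured, rcond=None)
print(np.round(est, 3))  # should be close to [5.0, 0.02, -0.01]
```

The fitted field would then feed the trajectory optimizer for the next cycle, and the fit is refreshed as new measurements arrive.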

  18. Incentive-Based Primary Care: Cost and Utilization Analysis.

    Science.gov (United States)

    Hollander, Marcus J; Kadlec, Helena

    2015-01-01

    In its fee-for-service funding model for primary care, British Columbia, Canada, introduced incentive payments to general practitioners as pay for performance for providing enhanced, guidelines-based care to patients with chronic conditions. Evaluation of the program was conducted at the health care system level. To examine the impact of the incentive payments on annual health care costs and hospital utilization patterns in British Columbia. The study used Ministry of Health administrative data for Fiscal Year 2010-2011 for patients with diabetes, congestive heart failure, chronic obstructive pulmonary disease, and/or hypertension. In each disease group, cost and utilization were compared across patients who did, and did not, receive incentive-based care. Health care costs (eg, primary care, hospital) and utilization measures (eg, hospital days, readmissions). After controlling for patients' age, sex, service needs level, and continuity of care (defined as attachment to a general practice), the incentives reduced the net annual health care costs, in Canadian dollars, for patients with hypertension (by approximately Can$308 per patient), chronic obstructive pulmonary disease (by Can$496), and congestive heart failure (by Can$96), but not diabetes (incentives cost about Can$148 more per patient). The incentives were also associated with fewer hospital days, fewer admissions and readmissions, and shorter lengths of hospital stays for all 4 groups. Although the available literature on pay for performance shows mixed results, we showed that the funding model used in British Columbia using incentive payments for primary care might reduce health care costs and hospital utilization.

  19. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Science.gov (United States)

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models generally treat the CMM as a rigid body and require a detailed mapping of the CMM’s behavior. In this paper a new model type of error compensation is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The non-explained variability by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests are presented in Part II, where the experimental endorsement of the model is included. PMID:27690052

  20. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models generally treat the CMM as a rigid body and require a detailed mapping of the CMM’s behavior. In this paper a new model type of error compensation is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The non-explained variability by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests are presented in Part II, where the experimental endorsement of the model is included.
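The "vectorial composition of length error by axis" can be illustrated to first order: each axis contributes to the displayed length in proportion to its direction cosine. The sketch below rests on that first-order assumption only; it is not the paper's full model, which also carries the unexplained variability into the uncertainty budget.

```python
import math

def length_error(dx, dy, dz, ex, ey, ez):
    """First-order projection of per-axis length errors onto the measured direction.

    dx, dy, dz: axis displacements of the measured length (mm).
    ex, ey, ez: length errors accumulated along each axis (mm).
    """
    L = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Direction cosines (dx/L, dy/L, dz/L) weight each axis's contribution.
    return (dx * ex + dy * ey + dz * ez) / L

# A 300/400 mm in-plane measurement with assumed per-axis errors of 10 and 5 um.
e = length_error(300.0, 400.0, 0.0, 0.010, 0.005, 0.0)
print(round(e, 4))  # combined length error in mm
```

Feeding such per-axis contributions into the feature model (flatness, angle, roundness) is what lets the compensation ride along with normal CMM signal processing.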

  1. Measuring and modelling concurrency

    Science.gov (United States)

    Sawers, Larry

    2013-01-01

    This article explores three critical topics discussed in the recent debate over concurrency (overlapping sexual partnerships): measurement of the prevalence of concurrency, mathematical modelling of concurrency and HIV epidemic dynamics, and measuring the correlation between HIV and concurrency. The focus of the article is the concurrency hypothesis – the proposition that presumed high prevalence of concurrency explains sub-Saharan Africa's exceptionally high HIV prevalence. Recent surveys using improved questionnaire design show reported concurrency ranging from 0.8% to 7.6% in the region. Even after adjusting for plausible levels of reporting errors, appropriately parameterized sexual network models of HIV epidemics do not generate sustainable epidemic trajectories (avoid epidemic extinction) at levels of concurrency found in recent surveys in sub-Saharan Africa. Efforts to support the concurrency hypothesis with a statistical correlation between HIV incidence and concurrency prevalence are not yet successful. Two decades of efforts to find evidence in support of the concurrency hypothesis have failed to build a convincing case. PMID:23406964

  2. Utility of a patient-reported outcome in measuring functional impairment during autologous stem cell transplant in patients with multiple myeloma.

    Science.gov (United States)

    Shah, Nina; Shi, Qiuling; Giralt, Sergio; Williams, Loretta; Bashir, Qaiser; Qazilbash, Muzaffar; Champlin, Richard E; Cleeland, Charles S; Wang, Xin Shelley

    2018-04-01

    We aimed to determine the utility of a patient-reported outcome (PRO) as it relates to patient performed testing (PPT) for measuring functional status in multiple myeloma patients after autologous hematopoietic stem cell transplantation (auto-HCT). Symptom interference on walking (a PRO) was measured by the MD Anderson Symptom Inventory (MDASI). PPT was assessed via the 6-min walk test (6MWT). Mixed effects modeling was used to examine (1) the longitudinal relationship between the MDASI score and 6MWT distance and (2) the MDASI scores between patients who did or did not complete the 6MWT. Receiver operating characteristic (ROC) curve analysis was performed to quantify the construct validity of the PRO by differentiating performance status. Seventy-nine patients were included. Mean 6MWT distance significantly correlated with the MDASI walking interference score over the first month of auto-HCT (est = 6.09, p = 0.006). The completion rate was significantly higher for the MDASI than for the 6MWT at each time point. Patients who completed the 6MWT reported less interference on walking during the study period (est = 1.61) than those who did not. As patients with poorer functional status during therapy are less likely to complete PPT, this PRO may offer a more practical quantitative measure of functioning in patients undergoing auto-HCT.

  3. Advanced Performance Modeling with Combined Passive and Active Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Dovrolis, Constantine [Georgia Inst. of Technology, Atlanta, GA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-04-15

    To improve the efficiency of resource utilization and scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks as well as sharing resources with predictable performance for scientific collaborations, especially in data intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources by active measurement collection. Performance measurements collected by active probing is used judiciously for improving the accuracy of predictions.
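A minimal flavor of blending historical logs with live streaming measurements is exponential smoothing seeded from the historical baseline. The throughput numbers are invented, and the project's actual estimation framework is certainly richer than this sketch.

```python
def ewma_forecast(history, live, alpha=0.3):
    """Blend a historical baseline with live measurements via exponential smoothing.

    history: past throughput samples from transfer logs (passive, no extra load).
    live: recent samples from streaming probes (active, used sparingly).
    """
    estimate = sum(history) / len(history)   # start from the historical mean
    for sample in live:
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate

historical_gbps = [8.2, 7.9, 8.1, 8.0]   # from network activity logs (illustrative)
live_gbps = [6.5, 6.8, 6.6]              # from live peering-device measurements (illustrative)
print(round(ewma_forecast(historical_gbps, live_gbps), 2))
```

A scheduler can use such a prediction to pick transfer windows and share bandwidth with more predictable performance, as the project abstract describes.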

  4. In-depth Analysis of Pattern of Occupational Injuries and Utilization of Safety Measures among Workers of Railway Wagon Repair Workshop in Jhansi (U.P.).

    Science.gov (United States)

    Gupta, Shubhanshu; Malhotra, Anil K; Verma, Santosh K; Yadav, Rashmi

    2017-01-01

    Occupational injuries constitute a global health challenge, yet they receive comparatively modest scientific attention. The pattern of occupational injuries and the safety precautions taken by wagon repair workers are important health issues, especially in developing countries like India. To assess the pattern of occupational injuries and utilization of safety measures among railway wagon repair workshop workers in Jhansi (U.P.). Railway wagon repair workshop urban area, Jhansi (U.P.). Occupation-based cross-sectional study. A cross-sectional study was conducted among 309 workers of the railway workshop in Jhansi (U.P.) who were injured during the study period of 1 year from July 2015 to June 2016. Baseline characteristics, pattern of occupational injuries, safety measures, and their availability to and utilization by the participants were assessed using a pretested structured questionnaire. Data were collected and analyzed statistically using simple proportions and the Chi-square test. The majority of studied workers were aged between 38 and 47 years (n = 93, 30.6%), followed by 28-37 years (n = 79, 26%). Among the patterns of occupational injuries, laceration (28.7%) was most common, followed by abrasion/scratch (21%). Safety shoes and hats were utilized 100% by all workers. Many of them had more than 5 years of experience (n = 237, 78%). Age group, education level, and utilization of safety measures were significantly associated with the pattern of occupational injuries in univariate analysis. Utilization of safety measures is low among workers in the railway wagon repair workshop, which highlights the importance of strengthening safety regulatory services for this group of workers. Younger workers show a significant association with open wounds and surface wounds. As the education level of workers increases, the incidence of injuries decreases. Apart from shoes, hats, and gloves, regular utilization of other personal protective equipment was not seen.
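The univariate associations in such a study are typically tested with the Chi-square statistic, which for a 2x2 table is short to compute by hand. The counts below are made up for illustration, not the study's data.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # Expected counts under independence: row total * column total / grand total.
    expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
                [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    observed = [[a, b], [c, d]]
    return sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))

# Hypothetical counts: PPE use (rows: yes/no) vs. open-wound injury (columns: yes/no).
stat = chi_square_2x2([[30, 70], [60, 40]])
print(round(stat, 2))  # compare against the 3.84 critical value (df = 1, alpha = 0.05)
```

A statistic above the critical value leads to the kind of "significantly associated" conclusion reported in the abstract.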

  5. Measures of Quality in Business Process Modelling

    Directory of Open Access Journals (Sweden)

    Radek Hronza

    2015-06-01

Full Text Available Business process modelling and analysis is undoubtedly one of the most important parts of Applied (Business) Informatics. The quality of business process models (diagrams) is crucial for any purpose in this area. The goal of a process analyst’s work is to create generally understandable, explicit and error-free models. If a process is properly described, the created models can be used as an input into deep analysis and optimization. It can be assumed that properly designed business process models (similarly to correctly written algorithms) contain characteristics that can be mathematically described, and that it should therefore be possible to create a tool that helps process analysts design proper models. As part of this review, a systematic literature review was conducted in order to find and analyse measures of business process model design and quality. It was found that this area had already been the subject of research in the past: thirty-three suitable scientific publications and twenty-two quality measures were found. The analysed publications and existing quality measures do not reflect all important attributes of a business process model’s clarity, simplicity and completeness. It would therefore be appropriate to add new measures of quality.
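A minimal sketch of the kind of structural quality measures discussed above, using two metrics commonly proposed in the literature: model size and the coefficient of network connectivity (arcs per node). The tiny process model below is hypothetical:

```python
# Two simple structural measures of a business process model:
# size = number of nodes; CNC = coefficient of network connectivity
# (arcs / nodes). Nodes are tasks/events, edges are sequence flows.

def size(nodes):
    return len(nodes)

def cnc(nodes, edges):
    return len(edges) / len(nodes)

nodes = ["start", "check order", "approve", "reject", "end"]
edges = [("start", "check order"), ("check order", "approve"),
         ("check order", "reject"), ("approve", "end"), ("reject", "end")]

print(size(nodes))        # 5
print(cnc(nodes, edges))  # 1.0
```

Higher CNC generally indicates a denser, harder-to-read model; measures like these are among the candidates such reviews collect and compare.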

  6. Bacterial carbon utilization in vertical subsurface flow constructed wetlands.

    Science.gov (United States)

    Tietz, Alexandra; Langergraber, Günter; Watzinger, Andrea; Haberl, Raimund; Kirschner, Alexander K T

    2008-03-01

Subsurface vertical flow constructed wetlands with intermittent loading are considered state of the art and can comply with stringent effluent requirements. It is usually assumed that the microbial activity in the filter body of constructed wetlands responsible for the removal of carbon and nitrogen relies mainly on bacterially mediated transformations. However, little quantitative information is available on the distribution of bacterial biomass and production in the "black-box" constructed wetland. The spatial distribution of bacterial carbon utilization, based on bacterial ¹⁴C-leucine incorporation measurements, was investigated for the filter body of planted and unplanted indoor pilot-scale constructed wetlands, as well as for a planted outdoor constructed wetland. A simple mass-balance approach was applied to explain the bacterially catalysed organic matter degradation in this system by comparing estimated bacterial carbon utilization rates with simultaneously measured carbon reduction values. The pilot-scale constructed wetlands proved to be a suitable model system for investigating microbial carbon utilization in constructed wetlands. Under an ideal operating mode, the bulk of bacterial productivity occurred within the first 10 cm of the filter body. Plants seemed to have no significant influence on the productivity and biomass of bacteria, or on the removal of total organic carbon from the wastewater.
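The mass-balance step above converts leucine incorporation into bacterial carbon production; a commonly used conversion factor (assuming no isotope dilution) is 1.55 kg C per mol leucine. The incorporation rate below is hypothetical, and the unit choices are for illustration only:

```python
# Convert a 14C-leucine incorporation rate into bacterial carbon
# production using the commonly cited factor of 1.55 kg C per mol
# leucine (no isotope dilution assumed). Input in pmol leucine per
# gram of filter material per hour; output in ng C per g per h.

KG_C_PER_MOL_LEU = 1.55

def carbon_production(leu_pmol_per_g_per_h):
    mol_leu = leu_pmol_per_g_per_h * 1e-12   # pmol -> mol
    kg_c = mol_leu * KG_C_PER_MOL_LEU        # mol leucine -> kg C
    return kg_c * 1e12                       # kg -> ng

print(carbon_production(100.0))  # 155.0 ng C per g per h (approx.)
```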

  7. Use of mathematical modeling in nuclear measurements projects

    International Nuclear Information System (INIS)

    Toubon, H.; Menaa, N.; Mirolo, L.; Ducoux, X.; Khalil, R. A.; Chany, P.; Devita, A.

    2011-01-01

Mathematical modeling of nuclear measurement systems is not a new concept. The response of the measurement system is described using a pre-defined mathematical model that depends on a set of parameters. These parameters are determined using a limited set of experimental measurement points, e.g. efficiency curves, dose rates, etc. A model that agrees with the few experimental points is called an experimentally validated model. Once these models have been validated, mathematical interpolation is used to find the parameters of interest. Sometimes, when measurements are impractical or impossible, extrapolation is implemented, but with care. CANBERRA has been extensively using mathematical modeling for the design and calibration of large and sophisticated systems to create and optimize designs that would be prohibitively expensive with only experimental tools. The case studies presented here are primarily performed with MCNP, CANBERRA's MERCURAD/PASCALYS and ISOCS (In Situ Object Counting Software). For benchmarking purposes, both Monte Carlo and ray-tracing based codes are inter-compared to show model consistency and add a degree of reliability to modeling results. (authors)
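As a sketch of the interpolation step described above, the snippet below fits a hypothetical detector efficiency curve with a quadratic in log(energy) vs. log(efficiency), a common empirical form, and then interpolates between calibration points. All calibration values are invented, not taken from any CANBERRA system:

```python
import numpy as np

# Hypothetical calibration points: gamma energy in keV vs. absolute
# detection efficiency. A quadratic in log-log space is a common
# empirical model for such curves.
energy = np.array([122.0, 344.0, 662.0, 1173.0, 1332.0])
eff = np.array([0.012, 0.0065, 0.0040, 0.0026, 0.0024])

coeffs = np.polyfit(np.log(energy), np.log(eff), 2)

def efficiency(e_kev):
    """Interpolated efficiency from the validated model."""
    return np.exp(np.polyval(coeffs, np.log(e_kev)))

# Interpolate at an energy between calibration points
print(f"{efficiency(500.0):.4f}")
```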

  8. Measures to remove impediments to better utilization. Renewable energy sources

    International Nuclear Information System (INIS)

    Diekmann, J.; Eichelbroenner, M.; Langniss, O.

    1997-01-01

The utilization of renewable energy sources meets with a number of obstacles created in particular by economic framework conditions, regulatory provisions, lengthy administrative procedures, insufficient information, and in part also by the reluctance of bankers and utilities. This is why an action programme was put underway by the Forum fuer Zukunftsenergien, together with the Berlin-based DIW (German economic research institute) and the Stuttgart-based DLR (German aerospace research institute), financed from public funds of the Federal Ministry of Economics. Under this programme, almost 900 operators of systems for electricity generation from wind power, hydropower, biomass, ambient heat, solar thermal energy and photovoltaic conversion were interviewed. Based on the information obtained, the article identifies the existing impediments and proposes action for overcoming them. (orig.) [de

  9. Using satellite observations in performance evaluation for regulatory air quality modeling: Comparison with ground-level measurements

    Science.gov (United States)

    Odman, M. T.; Hu, Y.; Russell, A.; Chai, T.; Lee, P.; Shankar, U.; Boylan, J.

    2012-12-01

    retrievals. Evaluation results are assessed against recommended criteria and peer studies in the literature. Further analysis is conducted, based upon these assessments, to discover likely errors in model inputs and potential deficiencies in the model itself. Correlations as well as differences in input errors and model deficiencies revealed by ground-level measurements versus satellite observations are discussed. Additionally, sensitivity analyses are employed to investigate errors in emission-rate estimates using either ground-level measurements or satellite retrievals, and the results are compared against each other considering observational uncertainties. Recommendations are made for how to effectively utilize satellite retrievals in regulatory air quality modeling.

  10. Utilizing Multidimensional Measures of Race in Education Research: The Case of Teacher Perceptions.

    Science.gov (United States)

    Irizarry, Yasmiyn

    2015-10-01

    Education scholarship on race using quantitative data analysis consists largely of studies on the black-white dichotomy, and more recently, on the experiences of student within conventional racial/ethnic categories (white, Hispanic/Latina/o, Asian, black). Despite substantial shifts in the racial and ethnic composition of American children, studies continue to overlook the diverse racialized experiences for students of Asian and Latina/o descent, the racialization of immigration status, and the educational experiences of Native American students. This study provides one possible strategy for developing multidimensional measures of race using large-scale datasets and demonstrates the utility of multidimensional measures for examining educational inequality, using teacher perceptions of student behavior as a case in point. With data from the first grade wave of the Early Childhood Longitudinal Study, Kindergarten Cohort of 1998-1999, I examine differences in teacher ratings of Externalizing Problem Behaviors and Approaches to Learning across fourteen racialized subgroups at the intersections of race, ethnicity, and immigrant status. Results show substantial subgroup variation in teacher perceptions of problem and learning behaviors, while also highlighting key points of divergence and convergence within conventional racial/ethnic categories.

  11. Health care utilization

    DEFF Research Database (Denmark)

    Jacobsen, Christian Bøtcher; Andersen, Lotte Bøgh; Serritzlew, Søren

An important task in governing health services is to control costs. The literatures on both cost containment and supplier-induced demand focus on the effects of economic incentives on health care costs, but insights from these literatures have never been integrated. This paper asks how economic cost containment measures affect the utilization of health services, and how these measures interact with the number of patients per provider. Based on highly valid register data, this is investigated for 9,556 Danish physiotherapists between 2001 and 2008. We find that higher (relative) fees for a given service make health professionals provide more of this service to each patient, but that lower user payment (unexpectedly) does not necessarily mean higher total cost or a stronger association between the number of patients per supplier and the health care utilization. This implies that incentives...

  12. The energy-efficiency business - Energy utility strategies

    International Nuclear Information System (INIS)

    Loebbe, S.

    2009-01-01

This article takes a look at the energy-efficiency business and the advantages it offers. The author notes that energy-efficiency can contribute to savings in primary energy, minimise the economic impact of global warming, improve reliability of supply and protect the gross national product. The advantages of new products for the efficient use of energy are reviewed and the resulting advantages for power customers are noted. Also, possibilities for the positioning of electricity suppliers in the environmental niche are noted. The partial markets involved and estimates concerning the impact of energy-efficiency measures are reviewed. Climate protection, co-operation with energy agencies, consulting services and public relations aspects are also discussed. The prerequisites for successful marketing by the utilities are examined and new business models are discussed, along with the clear strategies needed. The development from an electricity utility to a system-competence partner is reviewed.

  13. Attofarad resolution capacitance-voltage measurement of nanometer scale field effect transistors utilizing ambient noise

    International Nuclear Information System (INIS)

    Gokirmak, Ali; Inaltekin, Hazer; Tiwari, Sandip

    2009-01-01

A high resolution capacitance-voltage (C-V) characterization technique, enabling direct measurement of electronic properties at the nanoscale in devices such as nanowire field effect transistors (FETs) through the use of random fluctuations, is described. The minimum noise level required for achieving sub-aF (10⁻¹⁸ F) resolution, the leveraging of stochastic resonance, and the effect of higher levels of noise are illustrated through simulations. The non-linear ΔC_gate-source/drain versus V_gate response of FETs is utilized to determine the inversion layer capacitance (C_inv) and carrier mobility. The technique is demonstrated by extracting the carrier concentration and effective electron mobility in a nanoscale Si FET with C_inv = 60 aF.
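A toy illustration of the noise-assisted principle behind techniques of this kind: a value smaller than a readout's quantization step is invisible in any single sample, but with sufficient added noise (dithering) the average of many quantized samples recovers it. The numbers below are illustrative, not the device's actual parameters:

```python
import numpy as np

# A sub-step value hidden by quantization becomes measurable once
# random noise is added and many samples are averaged (dithering).
rng = np.random.default_rng(7)
true_value = 0.3          # in units of one quantization step
step = 1.0

# Without noise, rounding always hides the sub-step value:
print(np.round(true_value / step) * step)  # 0.0

# With noise of ~0.8 steps, the mean of 100k quantized samples
# converges back toward the true value:
samples = np.round((true_value + rng.normal(0.0, 0.8, 100_000)) / step) * step
print(f"{samples.mean():.3f}")
```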

  14. Global Atmosphere Watch Workshop on Measurement-Model ...

    Science.gov (United States)

    The World Meteorological Organization’s (WMO) Global Atmosphere Watch (GAW) Programme coordinates high-quality observations of atmospheric composition from global to local scales with the aim to drive high-quality and high-impact science while co-producing a new generation of products and services. In line with this vision, GAW’s Scientific Advisory Group for Total Atmospheric Deposition (SAG-TAD) has a mandate to produce global maps of wet, dry and total atmospheric deposition for important atmospheric chemicals to enable research into biogeochemical cycles and assessments of ecosystem and human health effects. The most suitable scientific approach for this activity is the emerging technique of measurement-model fusion for total atmospheric deposition. This technique requires global-scale measurements of atmospheric trace gases, particles, precipitation composition and precipitation depth, as well as predictions of the same from global/regional chemical transport models. The fusion of measurement and model results requires data assimilation and mapping techniques. The objective of the GAW Workshop on Measurement-Model Fusion for Global Total Atmospheric Deposition (MMF-GTAD), an initiative of the SAG-TAD, was to review the state-of-the-science and explore the feasibility and methodology of producing, on a routine retrospective basis, global maps of atmospheric gas and aerosol concentrations as well as wet, dry and total deposition via measurement-model
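One simple instance of the measurement-model fusion idea described above is an inverse-variance (precision-weighted) combination of a model prediction and an observation at a single grid point; real MMF systems use full data assimilation and mapping, and all numbers below are illustrative:

```python
# Precision-weighted fusion of a model value and an observation:
# each input is weighted by the inverse of its error variance.

def fuse(model, obs, var_model, var_obs):
    w_m, w_o = 1.0 / var_model, 1.0 / var_obs
    return (w_m * model + w_o * obs) / (w_m + w_o)

# Model predicts 8.0 (e.g. kg/ha/yr wet deposition); a nearby
# measurement gives 6.0 and is trusted twice as much (half the variance),
# so the fused value lands closer to the observation.
print(fuse(8.0, 6.0, 2.0, 1.0))
```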

  15. Standard Model measurements with the ATLAS detector

    Directory of Open Access Journals (Sweden)

    Hassani Samira

    2015-01-01

    Full Text Available Various Standard Model measurements have been performed in proton-proton collisions at a centre-of-mass energy of √s = 7 and 8 TeV using the ATLAS detector at the Large Hadron Collider. A review of a selection of the latest results of electroweak measurements, W/Z production in association with jets, jet physics and soft QCD is given. Measurements are in general found to be well described by the Standard Model predictions.

  16. Measures for carbon dioxide problem and utilization of energy

    International Nuclear Information System (INIS)

    Kojima, Toshinori

    1992-01-01

Global environmental problems include water, the expansion of deserts, weather, tropical forests, wild animals, ocean pollution, nuclear waste contamination, acid rain, the ozone layer and so on, with population, food, energy, and resources as the problems surrounding them. It is clear that these problems originate in the development and consumption driven largely by developed countries and in the population problem of developing countries. In this report, the discharge of carbon dioxide, which causes the greenhouse effect, and its relation to energy are discussed. The topics described include the increase of carbon dioxide concentration; its release from fossil fuel; the destruction of forests; the balance of carbon on the earth; the development of new energy sources such as solar energy; the transport of new energy; secondary energy systems and the role of carbon dioxide; the transition to low-carbon fuels and the carbon reduction treatment of fuel; the utilization of unused energy and energy prices; the efficiency of energy utilization; improving the efficiency of energy conversion; energy conservation and breaking away from a culture of wasteful energy use; and the recovery, storage and use of discharged carbon dioxide. (K.I.)

  17. Business Process Modelling for Measuring Quality

    NARCIS (Netherlands)

    Heidari, F.; Loucopoulos, P.; Brazier, F.M.

    2013-01-01

    Business process modelling languages facilitate presentation, communication and analysis of business processes with different stakeholders. This paper proposes an approach that drives specification and measurement of quality requirements and in doing so relies on business process models as

  18. Statistical learning modeling method for space debris photometric measurement

    Science.gov (United States)

    Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen

    2016-03-01

Photometric measurement is an important way to identify space debris, but present methods of photometric measurement place many constraints on the star image and need complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, and the statistical information of star images is used to eliminate the measurement noise. First, the known stars on the star image are divided into training stars and testing stars. Then, the training stars are used to fit the parameters of the photometric measurement model by least squares, and the testing stars are used to calculate the measurement accuracy of the photometric measurement model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
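The train/test procedure described above can be sketched with an assumed linear photometric model (catalog magnitude as a linear function of instrumental magnitude); the star data below are synthetic and the model form is an assumption for illustration, not the paper's actual formulation:

```python
import numpy as np

# Fit catalog_mag ~ a * instrumental_mag + b on training stars,
# then check residuals on held-out testing stars.
rng = np.random.default_rng(0)
inst = rng.uniform(8.0, 14.0, 20)        # instrumental magnitudes
true_a, true_b = 1.0, 21.5               # assumed zero-point model
catalog = true_a * inst + true_b + rng.normal(0.0, 0.05, 20)

train, test = slice(0, 12), slice(12, 20)
A = np.vstack([inst[train], np.ones(12)]).T
(a, b), *_ = np.linalg.lstsq(A, catalog[train], rcond=None)

# Measurement accuracy estimated from the testing stars
residuals = catalog[test] - (a * inst[test] + b)
print(f"a={a:.3f} b={b:.2f} rms={residuals.std():.3f}")
```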

  19. Protein (multi-)location prediction: utilizing interdependencies via a generative model

    Science.gov (United States)

    Shatkay, Hagit

    2015-01-01

    Motivation: Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein’s function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins, however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. Results: We introduce a probabilistic generative model for protein localization, and develop a system based on it—which we call MDLoc—that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. Availability and implementation: MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. Contact: shatkay@udel.edu. PMID:26072505

  20. Protein (multi-)location prediction: utilizing interdependencies via a generative model.

    Science.gov (United States)

    Simha, Ramanuja; Briesemeister, Sebastian; Kohlbacher, Oliver; Shatkay, Hagit

    2015-06-15

    Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein's function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins, however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. We introduce a probabilistic generative model for protein localization, and develop a system based on it-which we call MDLoc-that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. © The Author 2015. Published by Oxford University Press.
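A toy illustration of why location inter-dependencies matter: when two locations co-occur more often than independence predicts, a joint (generative) model assigns the multi-location event a higher probability than independent per-location predictors would. The probabilities below are invented and unrelated to MDLoc or DBMLoc:

```python
# Joint distribution over two binary location labels (A, B).
# 1 = protein localizes there, 0 = it does not.
p_joint = {
    (0, 0): 0.50, (0, 1): 0.10,
    (1, 0): 0.10, (1, 1): 0.30,
}

# Marginal probabilities of each location
p_a = sum(p for (a, _), p in p_joint.items() if a == 1)  # 0.4
p_b = sum(p for (_, b), p in p_joint.items() if b == 1)  # 0.4

# Under independence the pair (1, 1) would get 0.4 * 0.4 = 0.16,
# but the joint model assigns 0.30 - nearly twice as likely.
print(p_a * p_b, p_joint[(1, 1)])
```

Capturing this kind of dependency (via Bayesian networks in MDLoc's case) is exactly what a per-location independent classifier cannot do.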

  1. Using measurements for evaluation of black carbon modeling

    Directory of Open Access Journals (Sweden)

    S. Gilardoni

    2011-01-01

Full Text Available The ever-increasing use of air quality and climate model assessments to underpin economic, public health, and environmental policy decisions makes effective model evaluation critical. This paper discusses the properties of black carbon, light attenuation and absorption observations that are key to a reliable evaluation of black carbon models, and compares parametric and nonparametric statistical tools for quantifying the agreement between models and observations. Black carbon concentrations are simulated with the TM5/M7 global model from July 2002 to June 2003 at four remote sites (Alert, Jungfraujoch, Mace Head, and Trinidad Head) and two regional background sites (Bondville and Ispra). Equivalent black carbon (EBC) concentrations are calculated using light attenuation measurements from January 2000 to December 2005. Seasonal trends in the measurements are determined by fitting sinusoidal functions, and the representativeness of the period simulated by the model is verified based on the scatter of the experimental values relative to the fit curves. When the resolution of the model grid is larger than 1° × 1°, it is recommended to verify that the measurement site is representative of the grid cell. For this purpose, equivalent black carbon measurements at Alert, Bondville and Trinidad Head are compared to light absorption and elemental carbon measurements performed at different sites inside the same model grid cells. Comparison of these equivalent black carbon and elemental carbon measurements indicates that uncertainties in black carbon optical properties can compromise the comparison between model and observations. During model evaluation it is important to examine the extent to which a model is able to simulate the variability in the observations over different integration periods, as this will help to identify the most appropriate timescales. The agreement between model and observation is accurately described by the overlap of
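The seasonal-trend step described above, fitting sinusoidal functions with a one-year period, can be sketched with a linear sine/cosine basis on synthetic monthly data (the concentration values are invented):

```python
import numpy as np

# Fit y = mean + s*sin(2*pi*t) + c*cos(2*pi*t) by least squares;
# amplitude and phase of the seasonal cycle follow from (s, c).
t = np.arange(72) / 12.0     # six years of monthly samples, in years
rng = np.random.default_rng(1)
ebc = 0.3 + 0.1 * np.sin(2 * np.pi * t + 0.5) + rng.normal(0.0, 0.02, t.size)

X = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t),
                     np.cos(2 * np.pi * t)])
mean, s, c = np.linalg.lstsq(X, ebc, rcond=None)[0]
amplitude = np.hypot(s, c)
print(f"mean={mean:.3f} amplitude={amplitude:.3f}")
```

The scatter of the data around this fit curve is what the paper uses to judge whether the modeled period is representative.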

  2. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

Full Text Available We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) We find that model color–magnitude diagrams cannot be reliably used to infer masses, as they do not accurately reproduce the colors of ultracool dwarfs of known mass. (2) Effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra, by 100–300 K. (3) For the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from L_bol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology) for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy are needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.
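Dynamical masses of the kind discussed above follow from Kepler's third law once an orbit is monitored: with the semi-major axis in AU and the period in years, the total system mass in solar masses is a³/P². The binary parameters below are hypothetical:

```python
# Kepler's third law in solar units:
# M_total [Msun] = a[AU]**3 / P[yr]**2

def total_mass_msun(a_au, p_yr):
    return a_au**3 / p_yr**2

# e.g. a hypothetical ultracool binary with a = 2.5 AU and P = 12 yr
m = total_mass_msun(2.5, 12.0)
print(f"{m:.3f} Msun")
```

For an ultracool pair this comes out around a tenth of a solar mass, i.e. firmly in the brown-dwarf/late-M regime, which is why orbit monitoring gives such a clean model test.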

  3. COMPLEAT (Community-Oriented Model for Planning Least-Cost Energy Alternatives and Technologies): A planning tool for publicly owned electric utilities. [Community-Oriented Model for Planning Least-Cost Energy Alternatives and Technologies (Compleat)

    Energy Technology Data Exchange (ETDEWEB)

    1990-09-01

COMPLEAT takes its name, as an acronym, from Community-Oriented Model for Planning Least-Cost Energy Alternatives and Technologies. It is an electric utility planning model designed for use principally by publicly owned electric utilities and agencies serving such utilities. As a model, COMPLEAT is significantly more full-featured and complex than called for in APPA's original plan and proposal to DOE. The additional complexity grew out of a series of discussions early in the development schedule, in which it became clear to APPA staff and advisors that the simplicity characterizing the original plan, while highly desirable in terms of utility applications, was not achievable if practical utility problems were to be addressed. The project teams settled on Energy 20/20, an existing model developed by Dr. George Backus of Policy Assessment Associates, as the best candidate for the kinds of modifications and extensions that would be required. The remainder of the project effort was devoted to designing specific input data files, output files, and user screens, and to writing and testing the computer programs that would properly implement the desired features around Energy 20/20 as a core program. This report presents, in outline form, the features and user interface of COMPLEAT.

  4. New pricing approaches for bundled payments: Leveraging clinical standards and regional variations to target avoidable utilization.

    Science.gov (United States)

    Hellsten, Erik; Chu, Scally; Crump, R Trafford; Yu, Kevin; Sutherland, Jason M

    2016-03-01

    Develop pricing models for bundled payments that draw inputs from clinician-defined best practice standards and benchmarks set from regional variations in utilization. Health care utilization and claims data for a cohort of incident Ontario ischemic and hemorrhagic stroke episodes. Episodes of care are created by linking incident stroke hospitalizations with subsequent health service utilization across multiple datasets. Costs are estimated for episodes of care and constituent service components using setting-specific case mix methodologies and provincial fee schedules. Costs are estimated for five areas of potentially avoidable utilization, derived from best practice standards set by an expert panel of stroke clinicians. Alternative approaches for setting normative prices for stroke episodes are developed using measures of potentially avoidable utilization and benchmarks established by the best performing regions. There are wide regional variations in the utilization of different health services within episodes of stroke care. Reconciling the best practice standards with regional utilization identifies significant amounts of potentially avoidable utilization. Normative pricing models for stroke episodes result in increasingly aggressive redistributions of funding. Bundled payment pilots to date have been based on the costs of historical service patterns, which effectively 'bake in' unwarranted and inefficient variations in utilization. This study demonstrates the feasibility of novel clinically informed episode pricing approaches that leverage these variations to target reductions in potentially avoidable utilization. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
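A minimal sketch of the normative pricing idea described above: instead of paying average historical cost, one component of the episode price is benchmarked against the best-performing regions (here, the lowest quartile of regional utilization). All figures are invented:

```python
import statistics

# Hypothetical regional variation in one episode component:
# average inpatient rehabilitation days per stroke episode, by region.
regional_rehab_days = [14.0, 9.0, 21.0, 12.0, 30.0, 11.0, 17.0, 8.0]
cost_per_day = 450.0
fixed_component = 12_000.0   # acute-care part of the episode price

# Historical price pays the mean; the normative price benchmarks the
# variable component at the first quartile of regional utilization.
historical_days = statistics.mean(regional_rehab_days)
benchmark_days = statistics.quantiles(regional_rehab_days, n=4)[0]

historical_price = fixed_component + cost_per_day * historical_days
normative_price = fixed_component + cost_per_day * benchmark_days
print(historical_price, normative_price)
```

The gap between the two prices is the funding redistribution targeted at potentially avoidable utilization; more aggressive benchmarks (e.g. the single best region) widen it.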

  5. Incorporating measurement error in n = 1 psychological autoregressive modeling

    Science.gov (United States)

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
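The bias discussed above can be demonstrated in a few lines: simulate an AR(1) process, add white measurement noise, and compare the naive lag-1 autocorrelation of the noisy series with the true autoregressive parameter (all parameter values are illustrative):

```python
import numpy as np

# True state: x_t = phi * x_{t-1} + innovation.
# Observed: y_t = x_t + white measurement noise.
rng = np.random.default_rng(42)
phi, n = 0.7, 20_000
innovations = rng.normal(0.0, 1.0, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + innovations[t]

y = x + rng.normal(0.0, 1.0, n)   # disregarded measurement error

def lag1(z):
    """Naive lag-1 autocorrelation estimate."""
    z = z - z.mean()
    return float(z[:-1] @ z[1:] / (z @ z))

print(f"true={phi} clean={lag1(x):.3f} noisy={lag1(y):.3f}")
```

The noisy estimate is attenuated toward zero by roughly the ratio of state variance to total variance, which is exactly the underestimation the AR+WN and ARMA models are designed to correct.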

  6. A gentle introduction to Rasch measurement models for metrologists

    International Nuclear Information System (INIS)

    Mari, Luca; Wilson, Mark

    2013-01-01

    The talk introduces the basics of Rasch models by systematically interpreting them in the conceptual and lexical framework of the International Vocabulary of Metrology, third edition (VIM3). An admittedly simple example of physical measurement highlights the analogies between physical transducers and tests, as they can be understood as measuring instruments of Rasch models and psychometrics in general. From the talk natural scientists and engineers might learn something of Rasch models, as a specifically relevant case of social measurement, and social scientists might re-interpret something of their knowledge of measurement in the light of the current physical measurement models
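For readers coming from physical metrology, the dichotomous Rasch model itself is compact: the probability that a person of ability θ succeeds on an item of difficulty b is exp(θ − b)/(1 + exp(θ − b)), so the item plays the role of a transducer whose response depends only on the difference θ − b:

```python
import math

# Dichotomous Rasch model: P(success) as a logistic function of
# ability minus difficulty.

def rasch_p(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_p(0.0, 0.0))            # 0.5: ability equals difficulty
print(round(rasch_p(2.0, 0.0), 3))  # 0.881
```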

  7. Coevaporation of Y, BaF2, and Cu utilizing a quadrupole mass spectrometer as a rate measuring probe

    International Nuclear Information System (INIS)

    Hudner, J.; Oestling, M.; Ohlsen, H.; Stolt, L.

    1991-01-01

An ultrahigh vacuum coevaporator equipped with three sources for the preparation of Y–BaF2–Cu–O thin films is described. Evaporation rates of Y, BaF2, and Cu were controlled using a quadrupole mass spectrometer operating in a multiplexed mode. To evaluate the method, depositions were performed using different source configurations and evaporation rates. Utilizing Rutherford backscattering spectrometry, absolute values of the actual evaporation rates were determined. It was observed that the mass-spectrometer sensitivity is highest for Y, followed by BaF2 (BaF⁺ is the measured ion) and Cu. A partial pressure of oxygen during evaporation of Y, BaF2, and Cu affected mainly the rate of Y. It is shown that the mass spectrometer can be utilized to precisely control the film composition.

  8. Utility and translatability of mathematical modeling, cell culture and small and large animal models in magnetic nanoparticle hyperthermia cancer treatment research

    Science.gov (United States)

    Hoopes, P. J.; Petryk, Alicia A.; Misra, Adwiteeya; Kastner, Elliot J.; Pearce, John A.; Ryan, Thomas P.

    2015-03-01

    For more than 50 years, hyperthermia-based cancer researchers have utilized mathematical models, cell culture studies and animal models to better understand, develop and validate potential new treatments. It has been, and remains, unclear how and to what degree these research techniques depend on, complement and, ultimately, translate accurately to a successful clinical treatment. In the past, when mathematical models have not proven accurate in a clinical treatment situation, the initiating quantitative scientists (engineers, mathematicians and physicists) have tended to believe the biomedical parameters provided to them were inaccurately determined or reported. In a similar manner, experienced biomedical scientists often tend to question the value of mathematical models and cell culture results since those data typically lack the level of biologic and medical variability and complexity that are essential to accurately study and predict complex diseases and subsequent treatments. Such quantitative and biomedical interdependence, variability, diversity and promise have never been greater than they are within magnetic nanoparticle hyperthermia cancer treatment. The use of hyperthermia to treat cancer is well studied and has utilized numerous delivery techniques, including microwaves, radio frequency, focused ultrasound, induction heating, infrared radiation, warmed perfusion liquids (combined with chemotherapy), and, recently, metallic nanoparticles (NP) activated by near infrared radiation (NIR) and alternating magnetic field (AMF) based platforms. The goal of this paper is to use proven concepts and current research to address the potential pathobiology, modeling and quantification of the effects of treatment as pertaining to the similarities and differences in energy delivered by known external delivery techniques and iron oxide nanoparticles.
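One standard quantity in hyperthermia treatment research of the kind described above is the thermal dose expressed as cumulative equivalent minutes at 43 °C: CEM43 = Σ R^(43−T)·Δt, with R = 0.5 at or above 43 °C and R = 0.25 below. The temperature trace below is hypothetical:

```python
# Cumulative equivalent minutes at 43 C (CEM43) from a sampled
# temperature-time trace. temps_c: temperatures in Celsius at each
# sample; dt_min: sampling interval in minutes.

def cem43(temps_c, dt_min):
    dose = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        dose += (r ** (43.0 - t)) * dt_min
    return dose

# 30 one-minute samples at a steady 44 C: each minute counts double
print(cem43([44.0] * 30, 1.0))  # 60.0 equivalent minutes
```

Measures like this are one bridge between the mathematical models, cell culture work and animal studies the paper discusses, since a dose computed from any heating modality can be compared on the same scale.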

  9. Measures and limits of models of fixation selection.

    Directory of Open Access Journals (Sweden)

    Niklas Wilming

    Full Text Available Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound of these measures, based on image-independent properties of fixation data and between-subject consistency, respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source Python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the information required for a well-founded judgment of the quality of any model of fixation selection, and should therefore be reported when a new model is introduced.
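
    The two measures singled out in this abstract can be sketched in a few lines. The saliency values and fixation densities below are hypothetical illustrations, not data from the study, and no small-sample correction is applied.

    ```python
    import math

    def auc(pos, neg):
        # Mann-Whitney formulation of the area under the ROC curve:
        # probability that a fixated location outscores a non-fixated one.
        wins = sum(1 for p in pos for n in neg if p > n)
        ties = sum(0.5 for p in pos for n in neg if p == n)
        return (wins + ties) / (len(pos) * len(neg))

    def kl_divergence(p, q, eps=1e-12):
        # KL-divergence between two discretised fixation densities;
        # eps guards against empty bins.
        zp, zq = sum(p), sum(q)
        return sum((pi / zp) * math.log((pi / zp + eps) / (qi / zq + eps))
                   for pi, qi in zip(p, q))

    # Model saliency at fixated vs. control locations (hypothetical values)
    print(auc([0.9, 0.7, 0.6], [0.5, 0.4, 0.2]))   # perfect separation -> 1.0
    print(kl_divergence([1, 2, 3], [3, 2, 1]))      # > 0 for differing densities
    ```

    The rank-based AUC form avoids constructing an explicit ROC curve, which is one reason it averages linearly over subjects.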

  10. Utilizing remote sensing data for modeling water and heat regimes of the Black Earth Region territory of the European Russia

    Science.gov (United States)

    Muzylev, Eugene; Startseva, Zoya; Uspensky, Alexander; Volkova, Elena; Uspensky, Sergey

    2014-05-01

    At present, physical-mathematical modeling of water and heat exchange between vegetation-covered land surfaces and the atmosphere is the most appropriate method to describe the peculiarities of water and heat regime formation over large territories. The developed model of such processes (Land Surface Model, LSM) is intended for calculating evaporation, transpiration by vegetation, soil water content and other water and heat regime characteristics, as well as the distributions of soil temperature and humidity with depth, utilizing satellite remote sensing data on the land surface and meteorological conditions. The model parameters and input variables are the soil and vegetation characteristics and the meteorological characteristics, correspondingly. Their values have been determined from ground-based observations or satellite-based measurements by the radiometers AVHRR/NOAA, MODIS/EOS Terra and Aqua, and SEVIRI/Meteosat-9, -10. The case study has been carried out for the part of the agricultural Central Black Earth region with coordinates 49.5 deg. - 54 deg. N, 31 deg. - 43 deg. E and a total area of 227,300 km2, located in the steppe-forest zone of European Russia, for the 2009-2012 vegetation seasons. From AVHRR data, estimates have been derived of three types of land surface temperature (LST): land surface skin temperature Tsg, air-foliage temperature Ta and effective radiation temperature Ts.eff, as well as emissivity E, normalized difference vegetation index NDVI, vegetation cover fraction B, leaf area index LAI, cloudiness and precipitation. From MODIS data, estimates of LST Tls, E, NDVI and LAI have been obtained. The SEVIRI data have been used to build estimates of Tls, Ta, E, LAI and precipitation. The previously developed method and technology for the above AVHRR-derived estimates have been improved and adapted to the study area. To check the reliability of the Ts.eff and Ta estimates for the named seasons, the error statistics of their definitions has been analyzed through

  11. Model of sustainable utilization of organic solids waste in Cundinamarca, Colombia

    Directory of Open Access Journals (Sweden)

    Solanyi Castañeda Torres

    2017-05-01

    Full Text Available Introduction: This article proposes a model for the utilization of organic solid waste for the department of Cundinamarca, responding to the need for a tool to support decision-making in the planning and management of organic solid waste. Objective: To develop an approximation of a conceptual, technical and mathematical optimization model to support decision-making in order to minimize environmental impacts. Materials and methods: A descriptive study was applied, since some fundamental characteristics of the homogeneous phenomenon under study are presented; the design is also considered quasi-experimental. The calculation of the model for plants of the department is based on three axes (environmental, economic and social) that are present in the general equation of optimization. Results: A model for harnessing organic solid waste through the biological treatment techniques of aerobic composting and vermiculture is obtained, optimizing the system with respect to the savings in greenhouse gas emissions released into the atmosphere and the reduction of the overall cost of final disposal of organic solid waste in sanitary landfills. Based on the economic principle of utility, which determines the environmental feasibility and sustainability of the department's organic solid waste treatment plants, organic fertilizers such as compost and humus capture carbon and nitrogen, reducing tons of CO2.

  12. Internet advertising effectiveness measurement model

    OpenAIRE

    Marcinkevičiūtė, Milda

    2007-01-01

    The research object of the master thesis is internet advertising effectiveness measurement. The goal of the work is, after making theoretical studies of internet advertising effectiveness measurement (theoretical articles, practical researches, etc.), to formulate the conceptual IAEM model and examine it empirically. The main tasks of the work are: to analyze internet advertising, its features, purposes, spread formats, functions, advantages and disadvantages; present the effectiveness of i...

  13. The Utility of the Prototype/Willingness Model in Predicting Alcohol Use among North American Indigenous Adolescents

    Science.gov (United States)

    Armenta, Brian E.; Hautala, Dane S.; Whitbeck, Les B.

    2015-01-01

    In the present study, we considered the utility of the prototype/willingness model in predicting alcohol use among North-American Indigenous adolescents. Specifically, using longitudinal data, we examined the associations among subjective drinking norms, positive drinker prototypes, drinking expectations (as a proxy of drinking willingness), and…

  14. Extending the Utility of the Parabolic Approximation in Medical Ultrasound Using Wide-Angle Diffraction Modeling.

    Science.gov (United States)

    Soneson, Joshua E

    2017-04-01

    Wide-angle parabolic models are commonly used in geophysics and underwater acoustics but have seen little application in medical ultrasound. Here, a wide-angle model for continuous-wave high-intensity ultrasound beams is derived, which approximates the diffraction process more accurately than the commonly used Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation without increasing implementation complexity or computing time. A method for preventing the high spatial frequencies often present in source boundary conditions from corrupting the solution is presented. Simulations of shallowly focused axisymmetric beams using both the wide-angle and standard parabolic models are compared to assess the accuracy with which they model diffraction effects. The wide-angle model proposed here offers improved focusing accuracy and less error throughout the computational domain than the standard parabolic model, offering a facile method for extending the utility of existing KZK codes.

  15. Modelling the effects of road traffic safety measures.

    Science.gov (United States)

    Lu, Meng

    2006-05-01

    A model is presented for assessing the effects of traffic safety measures, based on a breakdown of the process into the underlying components of traffic safety (risk and consequence) and five (speed- and conflict-related) variables that influence these components and are influenced by traffic safety measures. The relationships between measures, variables and components are modelled as coefficients. The focus is on probabilities rather than historical statistics, although in practice statistics may be needed to find values for the coefficients. The model may in general contribute to improving insight into the mechanisms linking traffic safety measures and their safety effects. More specifically, it allows comparative analysis of different types of measures by defining an effectiveness index based on the coefficients. This index can be used to estimate absolute effects of advanced driver assistance system (ADAS) related measures from absolute effects of substitutional (in terms of safety effects) infrastructure measures.
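
    The coefficient structure described in this abstract can be illustrated with a toy calculation; the variable names and coefficient values below are hypothetical, not taken from the paper.

    ```python
    # Hypothetical coefficients: how strongly a measure changes each variable,
    # and how strongly each variable changes a safety component (here: risk).
    measure_to_variable = {"speed_mean": -0.10, "conflict_rate": -0.05}
    variable_to_risk = {"speed_mean": 0.8, "conflict_rate": 0.6}

    def effectiveness_index(m2v, v2c):
        # Chain the coefficients through the shared variables and sum the paths:
        # overall effect of the measure on the safety component.
        return sum(m2v[v] * v2c[v] for v in m2v)

    print(effectiveness_index(measure_to_variable, variable_to_risk))  # -0.11
    ```

    A negative index here means the measure reduces risk; comparing indices across measures is what allows the substitution argument made in the abstract.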

  16. Analytic model comparing the cost utility of TVT versus duloxetine in women with urinary stress incontinence.

    Science.gov (United States)

    Jacklin, Paul; Duckett, Jonathan; Renganathan, Arasee

    2010-08-01

    The purpose of this study was to assess the cost utility of duloxetine versus tension-free vaginal tape (TVT) as a second-line treatment for urinary stress incontinence. A Markov model was used to compare the cost utility based on a 2-year follow-up period. Quality-adjusted life year (QALY) estimation was performed by assuming a disutility rate of 0.05. Under base-case assumptions, although duloxetine was a cheaper option, TVT gave a considerably higher QALY gain. When a longer follow-up period was considered, TVT had an incremental cost-effectiveness ratio (ICER) of £7,710 ($12,651) at 10 years. If the QALY gain from cure was 0.09, then the ICERs for duloxetine and TVT would both fall within the indicative National Institute for Health and Clinical Excellence willingness-to-pay threshold at 2 years, but TVT would be the cost-effective option, having extended dominance over duloxetine. This model suggests that TVT is a cost-effective treatment for stress incontinence.
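
    The ICER that drives this comparison is a simple ratio of incremental cost to incremental QALYs. The cost and QALY figures below are hypothetical placeholders, not the study's numbers.

    ```python
    def icer(cost_new, qaly_new, cost_cmp, qaly_cmp):
        # Incremental cost-effectiveness ratio: extra cost per extra QALY
        # when moving from the comparator to the new intervention.
        return (cost_new - cost_cmp) / (qaly_new - qaly_cmp)

    # Hypothetical: surgery costs more up front but yields a larger QALY gain.
    print(icer(cost_new=2500.0, qaly_new=1.90, cost_cmp=1400.0, qaly_cmp=1.76))
    ```

    An intervention is deemed cost-effective when this ratio falls below the decision-maker's willingness-to-pay threshold per QALY.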

  17. Predictors of Adolescent Health Care Utilization

    Science.gov (United States)

    Vingilis, Evelyn; Wade, Terrance; Seeley, Jane

    2007-01-01

    This study, using Andersen's health care utilization model, examined how predisposing characteristics, enabling resources, need, personal health practices, and psychological factors influence health care utilization using a nationally representative, longitudinal sample of Canadian adolescents. Second, this study examined whether this process…

  18. Utility maximization and mode of payment

    NARCIS (Netherlands)

    Koning, R.H.; Ridder, G.; Heijmans, R.D.H.; Pollock, D.S.G.; Satorra, A.

    2000-01-01

    The implications of stochastic utility maximization in a model of choice of payment are examined. Three types of compatibility with utility maximization are distinguished: global compatibility, local compatibility on an interval, and local compatibility on a finite set of points.

  19. Development of Nonlinear Flight Mechanical Model of High Aspect Ratio Light Utility Aircraft

    Science.gov (United States)

    Bahri, S.; Sasongko, R. A.

    2018-04-01

    The implementation of a Flight Control Law (FCL) for an Aircraft Electronic Flight Control System (EFCS) aims to reduce pilot workload while also enhancing control performance during missions that require long-endurance flight and high-accuracy maneuvers. In the development of the FCL, a quantitative representation of the aircraft dynamics is needed to describe the aircraft's dynamic characteristics and to serve as the basis of the FCL design. Hence, a 6-degree-of-freedom nonlinear model of a light utility aircraft's dynamics, also called the nonlinear Flight Mechanical Model (FMM), is constructed. This paper shows the construction of the FMM from its mathematical formulation, the architecture design of the FMM, the trimming process and simulations. The verification of the FMM is done by analyzing aircraft behaviour in selected trimmed conditions.

  20. VALUING BENEFITS FROM WATER QUALITY IMPROVEMENTS USING KUHN TUCKER MODEL - A COMPARATIVE ANALYSIS ON UTILITY FUNCTIONAL FORMS-

    Science.gov (United States)

    Okuyama, Tadahiro

    The Kuhn-Tucker model, which has been studied in recent years, is a benefit valuation technique using revealed-preference data; its distinguishing feature is that it treats various patterns of corner solutions flexibly. It is widely known that, in benefit calculations using revealed-preference data, the value of a benefit changes depending on the functional form. However, there are few studies examining the relationship between utility functions and benefit values in the Kuhn-Tucker model. The purpose of this study is to analyze the influence of the functional form on the value of a benefit. Six types of utility functions are employed for the benefit calculations. Data on recreational activity at 26 beaches of Miyagi Prefecture were employed. The calculation results indicated that the functional forms of Phaneuf and Siderelis (2003) and Whitehead et al. (2010) are useful for benefit calculations.

  1. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values themselves follow a normal distribution. This avoids irrational assumptions and oversimplifications in the process of parameter design and enriches the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which makes it convenient for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
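
    For a single normally distributed right-hand side, a chance constraint reduces to a deterministic quantile condition. The sketch below shows only that textbook reduction, with made-up numbers; it is far simpler than the bi-random formulation in the abstract.

    ```python
    from statistics import NormalDist

    def deterministic_bound(mean, std, alpha):
        # Chance constraint Pr(x <= B) >= alpha, with B ~ Normal(mean, std),
        # holds for every x up to the (1 - alpha)-quantile of B.
        return NormalDist(mean, std).inv_cdf(1 - alpha)

    # Hypothetical storage-capacity limit: mean 100 Mt, std 10 Mt, 95% reliability.
    print(deterministic_bound(100.0, 10.0, 0.95))  # ~83.55 Mt safely allocatable
    ```

    Raising the reliability level alpha pushes the deterministic bound further below the mean, which is the basic trade-off a chance-constrained CCUS allocation must manage.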

  2. Perceived utility of emotion: the structure and construct validity of the Perceived Affect Utility Scale in a cross-ethnic sample.

    Science.gov (United States)

    Chow, Philip I; Berenbaum, Howard

    2012-01-01

    This study introduces a new measure of the perceived utility of emotion, which is the degree to which emotions are perceived to be useful in achieving goals. In this study, we administered this new measure, the Perceived Affect Utility Scale (PAUSe), to a sample of 142 European American and 156 East Asian American college students. Confirmatory factor analyses provided support for a new, culturally informed parsing of emotion and for perceived utility of emotion to be distinguishable from ideal affect, a related but separate construct. Next, we explored the potential importance of perceived utility of emotion in cultural research. Through path analyses, we found that: (a) culturally relevant variables (e.g., independence) played a mediating role in the link between ethnic group and perceived utility of emotion; and (b) perceived utility of emotion played a mediating role in the link between culturally relevant variables and ideal affect. In particular, perceived utility of self-centered emotions (e.g., pride) was found to be associated with independence and ideal affect of those same emotions. In contrast, perceived utility of other-centered emotions (e.g., appreciation) was found to be associated with interdependence, dutifulness/self-discipline, and ideal affect of those same emotions. Implications for perceived utility of emotion in understanding cultural factors are discussed.

  3. Utility, games, and narratives

    OpenAIRE

    Fioretti, Guido

    2009-01-01

    This paper provides a general overview of theories and tools to model individual and collective decision-making. In particular, stress is laid on the interaction of several decision-makers. A substantial part of this paper is devoted to utility maximization and its application to collective decision-making, Game Theory. However, the pitfalls of utility maximization are thoroughly discussed, and the radically alternative approach of viewing decision-making as constructing narratives is pre...

  4. From Practice to Evidence in Child Welfare: Model Specification and Fidelity Measurement of Team Decisionmaking.

    Science.gov (United States)

    Bearman, Sarah Kate; Garland, Ann F; Schoenwald, Sonja K

    2014-04-01

    Fidelity measurement methods have traditionally been used to develop and evaluate the effects of psychosocial treatments and, more recently, their implementation in practice. The fidelity measurement process can also be used to operationally define and specify components of emerging but untested practices outside the realm of conventional treatment. Achieving optimal fidelity measurement effectiveness (scientific validity and reliability) and efficiency (feasibility and relevance in routine care contexts) is challenging. The purpose of this paper is to identify strategies to address these challenges in child welfare system practices. To illustrate the challenges, and operational steps to address them, we present a case example using the "Team Decisionmaking" (TDM; Annie E. Casey Foundation) intervention. This intervention has potential utility for decreasing initial entry into and time spent in foster care and increasing rates of reunification and relative care. While promising, the model requires rigorous research to refine knowledge regarding the relationship between intervention components and outcomes-research that requires fidelity measurement. The intent of this paper is to illustrate how potentially generalizable steps for developing effective and efficient fidelity measurement methods can be used to more clearly define and test the effects of child welfare system practices.

  5. Radio propagation measurement and channel modelling

    CERN Document Server

    Salous, Sana

    2013-01-01

    While there are numerous books describing modern wireless communication systems that contain overviews of radio propagation and radio channel modelling, there are none that contain detailed information on the design, implementation and calibration of radio channel measurement equipment, the planning of experiments and the in-depth analysis of measured data. The book begins with an explanation of the fundamentals of radio wave propagation and progresses through a series of topics, including the measurement of radio channel characteristics, radio channel sounders, measurement strategies

  6. Women's autonomy and maternal healthcare service utilization in Ethiopia.

    Science.gov (United States)

    Tiruneh, Fentanesh Nibret; Chuang, Kun-Yang; Chuang, Ying-Chih

    2017-11-13

    Most previous studies on healthcare service utilization in low-income countries have not used a multilevel study design to address the importance of community-level women's autonomy. We assessed whether women's autonomy, measured at both individual and community levels, is associated with maternal healthcare service utilization in Ethiopia. We analyzed data from the 2005 and 2011 Ethiopia Demographic and Health Surveys (N = 6058 and 7043, respectively), measuring women's decision-making power and permissive gender norms associated with wife beating. We used Spearman's correlation and the chi-squared test for bivariate analyses and constructed generalized estimating equation logistic regression models to analyze the associations between women's autonomy indicators and maternal healthcare service utilization, controlling for other socioeconomic characteristics. Our multivariate analysis showed that women living in communities with a higher percentage of opposing attitudes toward wife beating were more likely to use all three types of maternal healthcare services in 2011 (adjusted odds ratios = 1.21, 1.23, and 1.18 for four or more antenatal care visits, health facility delivery, and postnatal care visits, respectively). In 2005, the adjusted odds ratios were 1.16 and 1.17 for four or more antenatal care visits and health facility delivery, respectively. In 2011, the percentage of women in the community with high decision-making power was positively associated with the likelihood of four or more antenatal care visits (adjusted odds ratio = 1.14). The association of individual-level autonomy with maternal healthcare service utilization was less pronounced after we controlled for other individual-level and community-level characteristics. Our study shows that women's autonomy was positively associated with maternal healthcare service utilization in Ethiopia. We suggest that addressing women's empowerment in national policies and programs would be the optimal solution.
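
    The adjusted odds ratios reported above come from GEE logistic models; the unadjusted version of the statistic can be computed directly from a 2x2 table. The counts below are invented for illustration only.

    ```python
    def odds_ratio(exp_yes, exp_no, ctl_yes, ctl_no):
        # Odds of service use in the exposed group (e.g. a high-autonomy
        # community) divided by the odds in the comparison group.
        return (exp_yes / exp_no) / (ctl_yes / ctl_no)

    # Hypothetical counts: used / did not use antenatal care, by autonomy level.
    print(odds_ratio(exp_yes=30, exp_no=70, ctl_yes=20, ctl_no=80))
    ```

    An odds ratio above 1 indicates higher odds of service use in the exposed group; adjustment for covariates, as in the study's GEE models, can move this value substantially.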

  7. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  8. Validation of theoretical models through measured pavement response

    DEFF Research Database (Denmark)

    Ullidtz, Per

    1999-01-01

    mechanics was quite different from the measured stress, the peak theoretical value being only half of the measured value.On an instrumented pavement structure in the Danish Road Testing Machine, deflections were measured at the surface of the pavement under FWD loading. Different analytical models were...... then used to derive the elastic parameters of the pavement layeres, that would produce deflections matching the measured deflections. Stresses and strains were then calculated at the position of the gauges and compared to the measured values. It was found that all analytical models would predict the tensile...

  9. Modeling Water Utility Investments and Improving Regulatory Policies using Economic Optimisation in England and Wales

    Science.gov (United States)

    Padula, S.; Harou, J. J.

    2012-12-01

    Water utilities in England and Wales are regulated natural monopolies called 'water companies'. Water companies must obtain periodic regulatory approval for all investments (new supply infrastructure or demand management measures). Both water companies and their regulators use results from least economic cost capacity expansion optimisation models to develop or assess water supply investment plans. This presentation first describes the formulation of a flexible supply-demand planning capacity expansion model for water system planning. The model uses a mixed integer linear programming (MILP) formulation to choose the least-cost schedule of future supply schemes (reservoirs, desalination plants, etc.), demand management (DM) measures (leakage reduction, water efficiency and metering options), and bulk transfers. Decisions include what schemes to implement, when to do so, how to size schemes and how much to use each scheme during each year of an n-year planning horizon (typically 30 years). In addition to capital and operating (fixed and variable) costs, the estimated social and environmental costs of schemes are considered. Each proposed scheme is costed discretely at one or more capacities following regulatory guidelines. The model uses a node-link network structure: water demand nodes are connected to supply and demand management (DM) options (represented as nodes) or to other demand nodes (transfers). Yields from existing and proposed schemes are estimated separately using detailed water resource system simulation models evaluated over the historical period. The model simultaneously considers multiple demand scenarios to ensure demands are met at required reliability levels; use levels of each scheme are evaluated for each demand scenario and weighted by scenario likelihood so that operating costs are accurately evaluated. Multiple interdependency relationships between schemes (pre-requisites, mutual exclusivity, start dates, etc.) can be accounted for by
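
    The core of such a capacity-expansion model is choosing the cheapest scheme set that closes a supply-demand deficit. The exhaustive search below is a toy, single-period stand-in for the MILP described in the abstract; scheme names, capacities, and costs are invented.

    ```python
    from itertools import combinations

    # Hypothetical candidate schemes: (name, capacity in Ml/d, capital cost in £M)
    SCHEMES = [("reservoir", 50, 120.0), ("desalination", 40, 150.0),
               ("leakage_reduction", 15, 30.0), ("metering", 10, 25.0)]

    def least_cost_portfolio(deficit):
        # Try every subset of schemes; keep the cheapest one whose combined
        # capacity covers the deficit. Returns (cost, scheme names) or None.
        best = None
        for r in range(len(SCHEMES) + 1):
            for combo in combinations(SCHEMES, r):
                if sum(cap for _, cap, _ in combo) >= deficit:
                    cost = sum(c for _, _, c in combo)
                    if best is None or cost < best[0]:
                        best = (cost, [name for name, _, _ in combo])
        return best

    print(least_cost_portfolio(60))  # cheapest set covering a 60 Ml/d deficit
    ```

    A real planning model replaces this enumeration with MILP over many years and demand scenarios, and adds the interdependency constraints (pre-requisites, mutual exclusivity) the abstract mentions.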

  10. The comparison of environmental effects on michelson and fabry-perot interferometers utilized for the displacement measurement.

    Science.gov (United States)

    Wang, Yung-Cheng; Shyu, Lih-Horng; Chang, Chung-Ping

    2010-01-01

    The optical structure of general commercial interferometers, e.g., Michelson interferometers, is based on a non-common optical path. Such interferometers suffer from environmental effects because of the different phase changes induced in the different optical paths, and consequently the measurement precision is significantly influenced by tiny variations of the environmental conditions. Fabry-Perot interferometers, which feature common optical paths, are insensitive to environmental disturbances. This is advantageous for precision displacement measurements under ordinary environmental conditions. To verify and analyze this influence, displacement measurements with the two types of interferometers, i.e., a self-fabricated Fabry-Perot interferometer and a commercial Michelson interferometer, have been performed and compared under various environmental disturbance scenarios. Under several test conditions, the self-fabricated Fabry-Perot interferometer was clearly less sensitive to environmental disturbances than the commercial Michelson interferometer. Experimental results have shown that the errors induced by environmental disturbances in a Fabry-Perot interferometer are one fifth of those in a Michelson interferometer. This proves that an interferometer with a common optical path structure is much more independent of environmental disturbances than one with a non-common optical path structure, which is beneficial when interferometers are utilized for precision displacement measurements in ordinary measurement environments.

  11. Practical utilization of modeling and simulation in laboratory process waste assessments

    International Nuclear Information System (INIS)

    Lyttle, T.W.; Smith, D.M.; Weinrach, J.B.; Burns, M.L.

    1993-01-01

    At Los Alamos National Laboratory (LANL), facility waste streams tend to be small but highly diverse. Initial characterization of such waste streams is difficult, in part due to a lack of tools to assist the waste generators in completing such assessments. A methodology has been developed at LANL to allow process-knowledgeable field personnel to develop baseline waste generation assessments and to evaluate potential waste minimization technology. This process waste assessment (PWA) system is an application constructed within the Process Modeling System (PMS), an object-oriented, mass balance-based, discrete-event simulation built on the Common Lisp Object System (CLOS). Analytical capabilities supported within the PWA system include: complete mass balance specifications, historical characterization of selected waste streams, and generation of facility profiles for materials consumption, resource utilization and worker exposure. Anticipated development activities include provisions for a best available technologies (BAT) database and integration with the LANL facilities management Geographic Information System (GIS). The environments used to develop these assessment tools will be discussed, in addition to a review of initial implementation results.

  12. Mathematical model of radon activity measurements

    Energy Technology Data Exchange (ETDEWEB)

    Paschuk, Sergei A.; Correa, Janine N.; Kappke, Jaqueline; Zambianchi, Pedro, E-mail: sergei@utfpr.edu.br, E-mail: janine_nicolosi@hotmail.com [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil); Denyak, Valeriy, E-mail: denyak@gmail.com [Instituto de Pesquisa Pele Pequeno Principe, Curitiba, PR (Brazil)

    2015-07-01

    The present work describes a mathematical model that quantifies the time-dependent amount of {sup 222}Rn and {sup 220}Rn together, and their activities within an ionization chamber such as the AlphaGUARD, which is used to measure the activity concentration of Rn in soil gas. The differential equations take into account three main processes, namely: the injection of Rn into the cavity of the detector by the air pump, including the effect of the traveling time Rn takes to reach the chamber; the release of Rn by the air exiting the chamber; and the radioactive decay of Rn within the chamber. The developed code quantifies the activities of the {sup 222}Rn and {sup 220}Rn isotopes separately. Following the standard methodology for measuring Rn activity in soil gas, the air pump usually is turned off over a period of time in order to avoid the influx of Rn into the chamber. Since {sup 220}Rn has a short half-life of approximately 56 s, the model shows that after 7 minutes the activity concentration of this isotope is null. Consequently, the measured activity refers to {sup 222}Rn only. Furthermore, the model also addresses the activity of the {sup 220}Rn and {sup 222}Rn progeny, which, being metals, represent a potential risk of ionization chamber contamination that could increase the background of further measurements. Some preliminary comparison of experimental data and theoretical calculations is presented. The obtained transient and steady-state solutions could be used for planning of Rn in soil gas measurements as well as for accuracy assessment of obtained results together with efficiency evaluation of the chosen measurement procedure. (author)
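
    The claim that the 220Rn contribution vanishes after about 7 minutes follows directly from exponential decay. The sketch below checks just that, using only the half-lives (it ignores the pump, outflow, and progeny terms of the full model).

    ```python
    import math

    HALF_LIFE_RN220 = 56.0                # seconds, from the abstract
    HALF_LIFE_RN222 = 3.82 * 24 * 3600.0  # seconds (~3.82 days)

    def remaining_fraction(t, half_life):
        # Fraction of initial activity left after t seconds of pure decay
        # (pump off: no inflow, no outflow).
        return math.exp(-math.log(2.0) * t / half_life)

    # After 7 minutes (420 s) with the pump off:
    print(remaining_fraction(420.0, HALF_LIFE_RN220))  # ~0.0055: Rn-220 is gone
    print(remaining_fraction(420.0, HALF_LIFE_RN222))  # ~0.999: Rn-222 unchanged
    ```

    The 7-minute wait thus costs essentially no 222Rn signal while suppressing 220Rn by a factor of about 180, which is why the residual measured activity can be attributed to 222Rn alone.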

  13. [Comprehensive evaluation of county-level construction land intensive utility in Guangdong province: a case study for Zijin County].

    Science.gov (United States)

    Zhang, Jun-Ping; Hu, Yue-Ming; Tian, Yuan; Wang, Lu; Liu, Su-Ping

    2010-02-01

    Based on the weights and membership values of the evaluation indices, a measurement model of construction land intensive utility in Zijin County of Guangdong Province was established, and the basic principles of the greatest compatible class were adopted to classify the intensive utility levels of the construction land based on fuzzy recognition. Additionally, the intensive utility potential of construction land in Zijin County in 2005 was calculated by comparing the per capita construction land in towns, independent industrial and mining areas, and rural residential areas with the latest national land use standards for town planning launched in 2007. The predicted value of the model was 0.421, suggesting that construction land utility in Zijin County was still low-efficiency and extensive. Theoretically, the construction land area could be decreased by 555.69-2197.69 hm2, which means there is great potential in the intensive utility of construction land in the county.
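
    A model score like the 0.421 above is typically a weight-by-membership aggregate over the evaluation indices. The index names, weights, and membership values below are hypothetical; only the aggregation rule is shown.

    ```python
    def intensive_use_score(weights, memberships):
        # Weighted sum of fuzzy membership values for the evaluation indices;
        # weights are assumed to be normalised to sum to 1.
        assert abs(sum(weights) - 1.0) < 1e-9
        return sum(w * m for w, m in zip(weights, memberships))

    # Three hypothetical indices (e.g. land-use intensity, output per hectare,
    # plot ratio) with invented weights and membership values in [0, 1].
    print(intensive_use_score([0.4, 0.35, 0.25], [0.5, 0.3, 0.45]))  # 0.4175
    ```

    The resulting score is then mapped to a discrete intensity class; in the paper this classification step uses fuzzy recognition via the greatest compatible class rather than fixed thresholds.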

  14. Energy Utilization Evaluation of Carbon Performance in Public Projects by FAHP and Cloud Model

    Directory of Open Access Journals (Sweden)

    Lin Li

    2016-07-01

    With the low-carbon economy advocated all over the world, how to use energy reasonably and efficiently in public projects has become a major issue. It raises several open questions, including which method is more reasonable for evaluating the carbon performance of energy utilization in public projects when the evaluation information is fuzzy; whether an indicator system can be constructed; and which indicators have more impact on carbon performance. This article aims to solve these problems. We propose a new carbon performance evaluation system for energy utilization based on project processes (design, construction, and operation). The Fuzzy Analytic Hierarchy Process (FAHP) is used to aggregate the indicator weights, and a cloud model is incorporated when the indicator values are fuzzy. Finally, we apply our indicator system to a case study of the Xiangjiang River project in China, which demonstrates the applicability and efficiency of our method.

  15. SEE Action Guide for States: Evaluation, Measurement, and Verification Frameworks$-$Guidance for Energy Efficiency Portfolios Funded by Utility Customers

    Energy Technology Data Exchange (ETDEWEB)

    Li, Michael [Dept. of Energy (DOE), Washington DC (United States); Dietsch, Niko [US Environmental Protection Agency (EPA), Cincinnati, OH (United States)

    2018-01-01

    This guide describes frameworks for evaluation, measurement, and verification (EM&V) of utility customer-funded energy efficiency programs. The authors reviewed multiple frameworks across the United States and gathered input from experts to prepare this guide. It provides the reader with both the contents of an EM&V framework and the processes used to develop and update such frameworks.

  16. Anti-Money Laundry regulation and Crime: A two-period model of money-in-the-utility-function

    OpenAIRE

    Fanta, F; Mohsin, H

    2010-01-01

    The paper presents a two-period model with two types of money, i.e. dirty and clean (legal) money, in the utility function. Clean money is earned from working in the legal sector and dirty money from the illegal sector. Our two-period model reveals that an increase in the labor wage in the legal sector unambiguously decreases the labor hours allocated to the illegal sector by increasing the opportunity cost of illegal activities. However, the crime-reducing impact of anti-money laundering regulation and the probability of...

  17. Utility of silicone filtering for diffusive model CO2 sensors in field experiments

    Directory of Open Access Journals (Sweden)

    Shinjiro Ohkubo

    2013-05-01

    Installing a diffusive CO2 sensor in the soil is a direct and useful method to observe the time variation of gaseous CO2 concentration in soil; furthermore, it requires no bulky measurement system. A hydrophobic silicone filter prevents water infiltration, so a sensor whose detection element is covered with a silicone filter can be durable in the field even when inundated (e.g. farmland during snowmelt, wetland with a varying water level). The utility of a diffusive CO2 sensor covered with a silicone filter was examined in laboratory and field experiments. Applying the silicone filter delays the response to changes in ambient CO2 concentration, a result of its lower gas permeability compared with filters made of other conventionally used materials such as polytetrafluoroethylene. Theoretically, apart from the precision of the sensor itself, the diurnal variation of soil gas CO2 concentration is calculable with negligible error from a data series obtained with a silicone-covered sensor. The error is estimated at approximately 1% of the diurnal amplitude in most cases for a 10-min logging interval. Drastic changes, such as those during a rainfall event, cause a larger gap between calculated and real values. However, the proportion of this gap to the extent of the drastic increase was extremely small (0.43% for a 10-min logging interval). For accurate estimation, a smoothly varying data series must be prepared as input data. Using a moving average or applying a fitting curve can be useful when using a sensor or data logger with low resolution. Estimating the gas permeability coefficient, which can be done through laboratory experiments, is crucial for the calculation. This study revealed the possibility of evaluating the time variation of soil gas CO2 concentration by installing a silicone-covered diffusive sensor in an inundated field.
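The correction described above, recovering the ambient concentration from a lagged, silicone-filtered reading, amounts to inverting a first-order sensor response. A minimal sketch under that assumption (the rate constant k stands in for the filter's gas permeability; names and numbers are illustrative, not the paper's values):

```python
import math

def reconstruct_ambient(readings, dt, k):
    """Estimate ambient CO2 from lagged sensor readings, assuming the
    sensor obeys dCs/dt = k * (Camb - Cs):  Camb = Cs + (1/k) * dCs/dt."""
    est = []
    for i in range(1, len(readings)):
        dcdt = (readings[i] - readings[i - 1]) / dt  # backward difference
        est.append(readings[i] + dcdt / k)
    return est

# Synthetic check: a sensor lagging behind a step change to 1000 ppm
k, dt = 0.01, 60.0                # 1/s rate constant, 60 s logging interval
true_c = 1000.0
sensor = [true_c * (1 - math.exp(-k * dt * i)) for i in range(40)]
recovered = reconstruct_ambient(sensor, dt, k)
```

The reconstructed series approaches the true step value much faster than the raw sensor reading, which is the point of the correction; with real data a smoothed input series (moving average or fitted curve) would be used, as the abstract notes.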

  18. Measuring and modeling water imbibition into tuff

    International Nuclear Information System (INIS)

    Peters, R.R.; Klavetter, E.A.; George, J.T.; Gauthier, J.H.

    1986-01-01

    Yucca Mountain (Nevada) is being investigated as a potential site for a high-level-radioactive-waste repository. The site combines a partially saturated hydrologic system and a stratigraphy of fractured, welded and nonwelded tuffs. The long time scale for site hydrologic phenomena makes their direct measurement prohibitive. Also, modeling is difficult because the tuffs exhibit widely varying, and often highly nonlinear hydrologic properties. To increase a basic understanding of both the hydrologic properties of tuffs and the modeling of flow in partially saturated regimes, the following tasks were performed, and the results are reported: (1) Laboratory Experiment: Water imbibition into a cylinder of tuff (taken from Yucca Mountain drill core) was measured by immersing one end of a dry sample in water and noting its weight at various times. The flow of water was approximately one-dimensional, filling the sample from bottom to top. (2) Computer Simulation: The experiment was modeled using TOSPAC (a one-dimensional, finite-difference computer program for simulating water flow in partially saturated, fractured, layered media) with data currently considered for use in site-scale modeling of a repository in Yucca Mountain. The measurements and the results of the modeling are compared. Conclusions are drawn with respect to the accuracy of modeling transient flow in a partially saturated, porous medium using a one-dimensional model and currently available hydrologic-property data

  19. Pharmacokinetic/pharmacodynamic modeling of cardiac toxicity in human acute overdoses: utility and limitations.

    Science.gov (United States)

    Mégarbane, Bruno; Aslani, Arsia Amir; Deye, Nicolas; Baud, Frédéric J

    2008-05-01

    Hypotension, cardiac failure, QT interval prolongation, dysrhythmias, and conduction disturbances are common complications of overdoses with cardiotoxicants. Pharmacokinetic/pharmacodynamic (PK/PD) relationships are useful to assess diagnosis, prognosis, and treatment efficacy in acute poisonings. We review the utility and limits of PK/PD studies of cardiac toxicity, discussing various models, mainly those obtained in digitalis, cyanide, venlafaxine and citalopram poisonings. A sigmoidal E(max) model appears adequate to represent the PK/PD relationships in cardiotoxic poisonings. PK/PD correlations investigate the discrepancies between the time course of the effect magnitude and the evolving concentrations. They may help in understanding the mechanisms of both the occurrence and the disappearance of a cardiotoxic effect. When data are sparse, population-based PK/PD modeling using computer-intensive algorithms is helpful to estimate population mean values of PK parameters as well as their individual variability. Further PK/PD studies are needed in medical toxicology to allow understanding of the meaning of blood toxicant concentration in acute poisonings and thus improve management.
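The sigmoidal E(max) model mentioned above has a standard textbook form; the following is a generic sketch of that form, not the authors' fitted model (parameter names are illustrative):

```python
def sigmoid_emax(conc, emax, ec50, n=1.0, e0=0.0):
    """Sigmoidal Emax (Hill) PK/PD model: effect as a function of toxicant
    concentration. e0 is the baseline effect, emax the maximal effect,
    ec50 the concentration giving half-maximal effect, n the Hill slope."""
    return e0 + emax * conc ** n / (ec50 ** n + conc ** n)

# At conc == ec50 the effect is exactly half of emax (plus baseline):
print(sigmoid_emax(2.0, emax=100.0, ec50=2.0))
```

Fitting emax, ec50 and n to paired concentration/effect observations is what a PK/PD study does in practice; the functional form itself is this simple.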

  20. Model-based cartilage thickness measurement in the submillimeter range

    International Nuclear Information System (INIS)

    Streekstra, G. J.; Strackee, S. D.; Maas, M.; Wee, R. ter; Venema, H. W.

    2007-01-01

    Current methods of image-based thickness measurement in thin sheet structures utilize second derivative zero crossings to locate the layer boundaries. It is generally acknowledged that the nonzero width of the point spread function (PSF) limits the accuracy of this measurement procedure. We propose a model-based method that strongly reduces PSF-induced bias by incorporating the PSF into the thickness estimation method. We estimated the bias in thickness measurements in simulated thin sheet images as obtained from second derivative zero crossings. To gain insight into the range of sheet thickness where our method is expected to yield improved results, sheet thickness was varied between 0.15 and 1.2 mm with an assumed PSF as present in the high-resolution modes of current computed tomography (CT) scanners [full width at half maximum (FWHM) 0.5-0.8 mm]. Our model-based method was evaluated in practice by measuring layer thickness from CT images of a phantom mimicking two parallel cartilage layers in an arthrography procedure. CT arthrography images of cadaver wrists were also evaluated, and thickness estimates were compared to those obtained from high-resolution anatomical sections that served as a reference. The thickness estimates from the simulated images reveal that the method based on second derivative zero crossings shows considerable bias for layers in the submillimeter range. This bias is negligible for sheet thickness larger than 1 mm, where the size of the sheet is more than twice the FWHM of the PSF, but can be as large as 0.2 mm for a 0.5 mm sheet. The results of the phantom experiments show that the bias is effectively reduced by our method. The deviations from the true thickness, due to random fluctuations induced by quantum noise in the CT images, are of the order of 3% for a standard wrist imaging protocol. In the wrist, the submillimeter thickness estimates from the CT arthrography images correspond within 10% to those estimated from the anatomical sections.

  1. An econometric analysis of changes in arable land utilization using multinomial logit model in Pinggu district, Beijing, China.

    Science.gov (United States)

    Xu, Yueqing; McNamara, Paul; Wu, Yanfang; Dong, Yue

    2013-10-15

    Arable land in China has been decreasing as a result of rapid population growth and economic development as well as urban expansion, especially in developed regions around cities where quality farmland quickly disappears. This paper analyzed changes in arable land utilization during 1993-2008 in the Pinggu district, Beijing, China, developed a multinomial logit (MNL) model to determine the spatial driving factors influencing arable land-use change, and simulated arable land transition probabilities. Land-use maps, as well as socio-economic and geographical data, were used in the study. The results indicated that arable land decreased significantly between 1993 and 2008. Lost arable land shifted into orchard, forestland, settlement, and transportation land. Significant differences existed in arable land transitions among different landform areas. Slope, elevation, population density, urbanization rate, distance to settlements, and distance to roadways were strong drivers influencing arable land transition to other uses. The MNL model proved effective for predicting the transition probabilities from arable land to other land-use types, and thus can be used for scenario analysis to develop land-use policies and land-management measures in this metropolitan area. Copyright © 2013 Elsevier Ltd. All rights reserved.
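A multinomial logit model's transition probabilities reduce to a softmax over per-class linear utilities. A schematic sketch (the class set, covariates and coefficients below are hypothetical, not the fitted Pinggu model):

```python
import math

def mnl_probabilities(covariates, coefficients):
    """Multinomial logit transition probabilities: one coefficient vector
    per destination land-use class, with the reference class (arable
    stays arable) fixed at utility 0."""
    utilities = [sum(b * x for b, x in zip(beta, covariates))
                 for beta in coefficients]
    utilities.append(0.0)  # reference class
    m = max(utilities)     # subtract max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    s = sum(exps)
    return [e / s for e in exps]

# e.g. covariates = [slope, distance_to_road], three destination classes
probs = mnl_probabilities([0.2, 1.3], [[0.5, -0.1], [0.3, 0.2], [-0.4, 0.6]])
```

Each parcel's covariates yield a probability for every destination class, which is what makes the model usable for the scenario analysis the abstract describes.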

  2. Ionic Liquids for Utilization of Waste Heat from Distributed Power Generation Systems

    Energy Technology Data Exchange (ETDEWEB)

    Joan F. Brennecke; Mihir Sen; Edward J. Maginn; Samuel Paolucci; Mark A. Stadtherr; Peter T. Disser; Mike Zdyb

    2009-01-11

    The objective of this research project was the development of ionic liquids to capture and utilize waste heat from distributed power generation systems. Ionic Liquids (ILs) are organic salts that are liquid at room temperature and they have the potential to make fundamental and far-reaching changes in the way we use energy. In particular, the focus of this project was fundamental research on the potential use of IL/CO2 mixtures in absorption-refrigeration systems. Such systems can provide cooling by utilizing waste heat from various sources, including distributed power generation. The basic objectives of the research were to design and synthesize ILs appropriate for the task, to measure and model thermophysical properties and phase behavior of ILs and IL/CO2 mixtures, and to model the performance of IL/CO2 absorption-refrigeration systems.

  3. The determination of chromium-50 in human blood and its utilization for blood volume measurements

    International Nuclear Information System (INIS)

    Zeisler, R.; Young, I.

    1986-01-01

    Possible relationships between insufficient blood volume increases during pregnancy and infant mortality could be established with an adequate measurement procedure. An accurate and precise technique for blood volume measurements has been found in the isotope dilution technique using chromium-51 as a label for red blood cells. However, in a study involving pregnant women, only stable isotopes can be used for labeling. Stable chromium-50 can be determined in total blood samples before and after dilution experiments by neutron activation analysis (NAA) or mass spectrometry. However, both techniques may be affected by insufficient sensitivity and contamination problems at the inherently low natural chromium concentrations to be measured in the blood. NAA procedures involving irradiations with highly thermalized neutrons at a fluence rate of 2x10^13 n/(cm^2·s) and low background gamma spectrometry are applied to the analysis of total blood. Natural levels of chromium-50 in human and animal blood have been found to be <0.1 ng/mL; i.e., total chromium levels of <3 ng/mL. Based on the NAA procedure, a new approach to the blood volume measurement via chromium-50 isotope dilution has been developed which utilizes the ratio of the induced activities of chromium-51 to the iron-59 in three blood samples taken from each individual, namely blank, labeled and diluted labeled blood. (author)
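The dilution principle underlying the measurement is simple arithmetic. A sketch reduced to the basic mass balance (the record's actual procedure normalizes the induced 51Cr activity to 59Fe across blank, labeled and diluted samples, which is omitted here; function and variable names are mine):

```python
def blood_volume_ml(label_added_ng, conc_blank_ng_per_ml, conc_diluted_ng_per_ml):
    """Isotope dilution: the amount of 50Cr label added, divided by the
    resulting rise in blood 50Cr concentration, gives the dilution volume."""
    return label_added_ng / (conc_diluted_ng_per_ml - conc_blank_ng_per_ml)

# Adding 5000 ng of label that raises blood 50Cr from 0.1 to 1.1 ng/mL
# implies a 5000 mL blood volume:
print(blood_volume_ml(5000.0, 0.1, 1.1))
```

Subtracting the blank matters precisely because, as the record notes, natural 50Cr levels in blood are nonzero (though below 0.1 ng/mL).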

  4. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    Discrete choice models and in particular logit type models play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals...... estimates of the income effect it is of interest to investigate the magnitude of the estimation bias and if possible use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data...... that contains very detailed information about incomes. This gives a unique opportunity to learn about the magnitude and nature of the measurement error in income reported by the respondents in the Danish NTS compared to income from the administrative register (correct measure). We find that the classical...

  5. RTNS-II utilization plan

    Energy Technology Data Exchange (ETDEWEB)

    Zwilsky, Klaus M.

    1978-09-01

    This plan describes a general program for the effective utilization of this resource by the fusion materials community. Because its flux is low relative to levels expected in commercial fusion reactors, the RTNS-II is not expected to produce data of direct engineering significance (with some exceptions). Rather, it will be used chiefly to aid in the development of models of high energy neutron effects. Such models are needed in projecting engineering data obtained in high flux fission reactors to the fusion environment. Fission reactors, because of their relatively soft neutron spectra, cannot produce the high ratio of transmutations to displacements (except in an important special case) or the high energy recoil atoms appropriate to fusion reactors utilizing the D-T reaction.

  6. Automatic and quantitative measurement of collagen gel contraction using model-guided segmentation

    Science.gov (United States)

    Chen, Hsin-Chen; Yang, Tai-Hua; Thoreson, Andrew R.; Zhao, Chunfeng; Amadio, Peter C.; Sun, Yung-Nien; Su, Fong-Chin; An, Kai-Nan

    2013-08-01

    Quantitative measurement of collagen gel contraction plays a critical role in the field of tissue engineering because it provides spatial-temporal assessment (e.g., changes of gel area and diameter during the contraction process) reflecting the cell behavior and tissue material properties. So far the assessment of collagen gels relies on manual segmentation, which is time-consuming and suffers from serious intra- and inter-observer variability. In this study, we propose an automatic method combining various image processing techniques to resolve these problems. The proposed method first detects the maximal feasible contraction range of circular references (e.g., culture dish) and avoids the interference of irrelevant objects in the given image. Then, a three-step color conversion strategy is applied to normalize and enhance the contrast between the gel and background. We subsequently introduce a deformable circular model which utilizes regional intensity contrast and circular shape constraint to locate the gel boundary. An adaptive weighting scheme was employed to coordinate the model behavior, so that the proposed system can overcome variations of gel boundary appearances at different contraction stages. Two measurements of collagen gels (i.e., area and diameter) can readily be obtained based on the segmentation results. Experimental results, including 120 gel images for accuracy validation, showed high agreement between the proposed method and manual segmentation with an average dice similarity coefficient larger than 0.95. The results also demonstrated obvious improvement in gel contours obtained by the proposed method over two popular, generic segmentation methods.
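The dice similarity coefficient used above to validate the segmentation against manual outlines is straightforward to compute on binary masks; a minimal stand-alone sketch:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary segmentation masks (flattened
    sequences of 0/1): 2*|A intersect B| / (|A| + |B|)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

# Identical masks score 1.0; partial overlap scores proportionally lower.
print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))
```

A score above 0.95, as reported for the 120 gel images, indicates near-complete pixelwise agreement between automatic and manual gel boundaries.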

  7. Striatal dopamine transporter, regional cerebral blood flow and glucose utilization in MPTP-induced parkinson disease mice model

    International Nuclear Information System (INIS)

    Gao Yunchao; Wu Chunying; Xiang Jingde; Lin Xiangtong; Zhu Huiqing

    2005-01-01

    Objective: To explore the variation of regional cerebral blood flow (rCBF) and glucose utilization, as well as the neurotoxic effect on dopaminergic neurons, induced by the neurotoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). Methods: Eight-week old male C57BL/6 mice were given a total dose of 0-80 mg/kg MPTP intraperitoneally. Ten days later the mice were sacrificed for tyrosine hydroxylase (TH)-immunopositive cell counting in the substantia nigra using SP immunohistochemistry. In vivo autoradiography was employed to measure striatal dopamine transporter (DAT) loss, rCBF and glucose utilization in the striatum and thalamus. Results: The extents of DAT depletion and TH-immunopositive cell loss were positively correlated (r=0.998). Changes in rCBF were not significant (P>0.2), while glucose utilization was only slightly reduced in the caudate/putamen and thalamus, by 3.0% and 5.4%, in 80 mg/kg MPTP-treated mice (P<0.05). Conclusion: A significant dose-dependent relationship was present for MPTP-induced dopaminergic neuron loss; changes of rCBF in the caudate/putamen and thalamus were not significant, while glucose utilization was slightly decreased in the higher dose group. (authors)

  8. Solar radiation modeling and measurements for renewable energy applications: data and model quality

    International Nuclear Information System (INIS)

    Myers, Daryl R.

    2005-01-01

    Measurement and modeling of broadband and spectral terrestrial solar radiation is important for the evaluation and deployment of solar renewable energy systems. We discuss recent developments in the calibration of broadband solar radiometric instrumentation and improving broadband solar radiation measurement accuracy. An improved diffuse sky reference and radiometer calibration and characterization software for outdoor pyranometer calibrations are outlined. Several broadband solar radiation model approaches, including some developed at the National Renewable Energy Laboratory, for estimating direct beam, total hemispherical and diffuse sky radiation are briefly reviewed. The latter include the Bird clear sky model for global, direct beam, and diffuse terrestrial solar radiation; the Direct Insolation Simulation Code (DISC) for estimating direct beam radiation from global measurements; and the METSTAT (Meteorological and Statistical) and Climatological Solar Radiation (CSR) models that estimate solar radiation from meteorological data. We conclude that currently the best model uncertainties are representative of the uncertainty in measured data

  9. Solar radiation modeling and measurements for renewable energy applications: data and model quality

    Energy Technology Data Exchange (ETDEWEB)

    Myers, D.R. [National Renewable Energy Laboratory, Golden, CO (United States)

    2005-07-01

    Measurement and modeling of broadband and spectral terrestrial solar radiation is important for the evaluation and deployment of solar renewable energy systems. We discuss recent developments in the calibration of broadband solar radiometric instrumentation and improving broadband solar radiation measurement accuracy. An improved diffuse sky reference and radiometer calibration and characterization software for outdoor pyranometer calibrations are outlined. Several broadband solar radiation model approaches, including some developed at the National Renewable Energy Laboratory, for estimating direct beam, total hemispherical and diffuse sky radiation are briefly reviewed. The latter include the Bird clear sky model for global, direct beam, and diffuse terrestrial solar radiation; the Direct Insolation Simulation Code (DISC) for estimating direct beam radiation from global measurements; and the METSTAT (Meteorological and Statistical) and Climatological Solar Radiation (CSR) models that estimate solar radiation from meteorological data. We conclude that currently the best model uncertainties are representative of the uncertainty in measured data. (author)

  10. Urea utilization in growing lambs. 7

    International Nuclear Information System (INIS)

    Ulbrich, M.

    1989-01-01

    The utilization quotas of NPN and pure feed protein for body protein synthesis were calculated on the basis of N balance experiments with 15N-labelled urea, with the help of a 3-pool model concept and its mathematical treatment. In lambs weighing 13 kg the efficiency of amino acid and nucleic acid synthesis in the non-amino acid N pool was 64%. This results in total utilization quotas for NPN and pure protein in the ration of 40% and 60%, respectively. Lambs weighing 27 kg showed an efficiency of amino acid and nucleic acid synthesis of 77% in the non-AA N pool and of 60% in the AA N pool. The total utilization quota of NPN was 47% and that of pure protein 56%. The pure protein in the ration was thus about twice as well utilized for total protein synthesis, and for protein synthesis for crude protein retention, as the NPN compounds in the ration. (author)

  11. Models of Credit Risk Measurement

    OpenAIRE

    Hagiu Alina

    2011-01-01

    Credit risk is defined as the risk of financial loss caused by the failure of a counterparty. According to statistics, for financial institutions credit risk is much more important than market risk; reduced diversification of credit risk is the main cause of bank failures. Only recently did the banking industry begin to measure credit risk in the context of a portfolio, along with the development of risk management that started with value-at-risk (VaR) models. Once measured, credit risk can be diversif...

  12. Comparison of measured and modelled negative hydrogen ion densities at the ECR-discharge HOMER

    Science.gov (United States)

    Rauner, D.; Kurutz, U.; Fantz, U.

    2015-04-01

    As the negative hydrogen ion density nH- is a key parameter for the investigation of negative ion sources, its diagnostic quantification is essential in source development and operation as well as for fundamental research. By utilizing the photodetachment process of negative ions, generally two different diagnostic methods can be applied: via laser photodetachment, the density of negative ions is measured locally, but only relative to the electron density. To obtain absolute densities, the electron density has to be measured additionally, which introduces further uncertainties. Via cavity ring-down spectroscopy (CRDS), the absolute density of H- is measured directly, although LOS-averaged over the plasma length. At the ECR discharge HOMER, where H- is produced in the plasma volume, laser photodetachment is applied as the standard method to measure nH-. The additional application of CRDS provides the possibility to directly obtain absolute values of nH-, thereby successfully benchmarking the laser photodetachment system, as both diagnostics are in good agreement. In the investigated pressure range from 0.3 to 3 Pa, the measured negative hydrogen ion density shows a maximum at 1 to 1.5 Pa and an approximately linear response to increasing input microwave powers from 200 up to 500 W. Additionally, the volume production of negative ions is modelled 0-dimensionally by balancing H- production and destruction processes. The modelled densities are adapted to the absolute measurements of nH- via CRDS, allowing collisions of H- with hydrogen atoms (associative and non-associative detachment) to be identified as the dominant loss process of H- in the plasma volume at HOMER. Furthermore, the characteristic peak of nH- observed at 1 to 1.5 Pa is identified to be caused by a comparable behaviour of the electron density with varying pressure, as ne determines the volume production rate via dissociative electron attachment to vibrationally excited hydrogen molecules.
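The 0-dimensional balance described above equates the volume production rate with the sum of the loss channels at steady state. A schematic sketch (the rate coefficients and densities below are placeholders, not the HOMER values):

```python
def steady_state_nh(production_rate, loss_channels):
    """0-D particle balance for H-: steady-state density equals the volume
    production rate (m^-3 s^-1) divided by the total loss frequency (s^-1),
    summed over channels such as associative/non-associative detachment on
    H atoms, electron stripping, and mutual neutralization.
    loss_channels: iterable of (rate coefficient k in m^3/s, partner
    density n in m^-3) pairs."""
    total_loss_frequency = sum(k * n for k, n in loss_channels)
    return production_rate / total_loss_frequency

# Hypothetical numbers: one dominant channel and one minor channel
nh = steady_state_nh(1e21, [(1e-15, 1e19), (3e-14, 1e16)])
```

Comparing the relative sizes of the k*n terms is exactly how such a model identifies the dominant loss process, as done for H- + H detachment in the record.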

  13. Effect of Larval Density on Food Utilization Efficiency of Tenebrio molitor (Coleoptera: Tenebrionidae).

    Science.gov (United States)

    Morales-Ramos, Juan A; Rojas, M Guadalupe

    2015-10-01

    Crowding conditions of larvae may have a significant impact on the commercial production efficiency of some insects, such as Tenebrio molitor L. (Coleoptera: Tenebrionidae). Although larval densities are known to affect developmental time and growth in T. molitor, no reports were found on the effects of crowding on food utilization. The effect of larval density on food utilization efficiency of T. molitor larvae was studied by measuring the efficiency of ingested food conversion (ECI), the efficiency of digested food conversion (ECD), and mg of larval weight gain per gram of food consumed (LWGpFC) at increasing larval densities (12, 24, 36, 48, 50, 62, 74, and 96 larvae per dm(2)) over four consecutive 3-wk periods. Individual larval weight gain and food consumption were negatively impacted by larval density. Similarly, ECI, ECD, and LWGpFC were negatively impacted by larval density. Larval ageing, measured as four consecutive 3-wk periods, also significantly and independently impacted ECI, ECD, and LWGpFC negatively. General linear model analysis showed that age had a higher impact than density on food utilization parameters of T. molitor larvae. Larval growth was determined to be responsible for the age effects, as measurements of larval mass density (in grams of larvae per dm(2)) had a significant impact on food utilization parameters across ages and density treatments (in number of larvae per dm(2)). The importance of mass versus numbers per unit of area as measurements of larval density, and the implications of the negative effects of density on food utilization for insect biomass production, are discussed. Published by Oxford University Press on behalf of Entomological Society of America 2015. This work is written by US Government employees and is in the public domain in the US.
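The gravimetric indices named above have standard textbook definitions; a sketch using those formulas (variable names are mine, and the frass-based definition of "digested" is the conventional one, not stated in the abstract):

```python
def food_utilization(weight_gain_mg, food_ingested_mg, frass_mg):
    """Standard gravimetric indices: ECI = gain / ingested, ECD = gain /
    digested, where digested = ingested - frass; both as percentages."""
    digested = food_ingested_mg - frass_mg
    eci = 100.0 * weight_gain_mg / food_ingested_mg
    ecd = 100.0 * weight_gain_mg / digested
    return eci, ecd

# A larva gaining 20 mg after ingesting 100 mg and excreting 60 mg of frass:
eci, ecd = food_utilization(20.0, 100.0, 60.0)
```

Tracking how these ratios fall with stocking density is what quantifies the crowding penalty the study reports.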

  14. Smart Kinesthetic Measurement Model in Dance Composition

    OpenAIRE

    Triana, Dinny Devi

    2017-01-01

    This research aimed to discover an assessment model that could measure kinesthetic intelligence in arranging a dance from several related variables, both direct and indirect. The research method used was a qualitative method employing path analysis to determine the direct and indirect variables; therefore, the dominant variable supporting the measurement model of kinesthetic intelligence in arranging dance could be discovered. The population used was the students of the art ...

  15. A novel approach towards fatigue damage prognostics of composite materials utilizing SHM data and stochastic degradation modeling

    NARCIS (Netherlands)

    Loutas, T.; Eleftheroglou, N.

    2016-01-01

    A prognostic framework is proposed in order to estimate the remaining useful life of composite materials under fatigue loading, based on acoustic emission data and a sophisticated Non-Homogeneous Hidden Semi-Markov Model. Bayesian neural networks are also utilized as an alternative machine learning approach.

  16. Validation of the measurement model concept for error structure identification

    International Nuclear Information System (INIS)

    Shukla, Pavan K.; Orazem, Mark E.; Crisalle, Oscar D.

    2004-01-01

    The development of different forms of measurement models for impedance has allowed examination of key assumptions on which the use of such models to assess error structure is based. The stochastic error structures obtained using the transfer-function and Voigt measurement models were identical, even when non-stationary phenomena caused some of the data to be inconsistent with the Kramers-Kronig relations. The suitability of the measurement model for assessment of consistency with the Kramers-Kronig relations, however, was found to be more sensitive to the confidence interval for the parameter estimates than to the number of parameters in the model. A tighter confidence interval was obtained for the Voigt measurement model, which made the Voigt measurement model a more sensitive tool for identification of inconsistencies with the Kramers-Kronig relations.
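A Voigt measurement model is a series of parallel RC elements behind a solution resistance, so computing its impedance spectrum takes only a few lines. A generic sketch of that circuit (not the authors' fitting code; parameter values are illustrative):

```python
def voigt_impedance(omega, r0, elements):
    """Impedance of a Voigt measurement model at angular frequency omega:
    a series resistance r0 plus a chain of parallel (R, C) elements,
    each contributing R / (1 + j*omega*R*C)."""
    z = complex(r0, 0.0)
    for r, c in elements:
        z += r / (1 + 1j * omega * r * c)
    return z

# Two-element Voigt chain: at DC the resistances add; at high frequency
# the capacitors short out the RC elements, leaving only r0.
z_dc = voigt_impedance(0.0, 10.0, [(100.0, 1e-6), (50.0, 1e-5)])
z_hf = voigt_impedance(1e9, 10.0, [(100.0, 1e-6), (50.0, 1e-5)])
```

In measurement-model practice, elements are added until the fit stops improving; consistency with the Kramers-Kronig relations is then judged from the residual structure and the parameter confidence intervals, as the abstract discusses.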

  17. Recent advances in cross-cultural measurement in psychiatric epidemiology: utilizing 'what matters most' to identify culture-specific aspects of stigma.

    Science.gov (United States)

    Yang, Lawrence Hsin; Thornicroft, Graham; Alvarado, Ruben; Vega, Eduardo; Link, Bruce George

    2014-04-01

    While stigma measurement across cultures has assumed growing importance in psychiatric epidemiology, it is unknown to what extent concepts arising from culture have been incorporated. We utilize a formulation of culture, as the everyday interactions that 'matter most' to individuals within a cultural group, to identify culturally specific stigma dynamics relevant to measurement. A systematic literature review from January 1990 to September 2012 was conducted using PsycINFO, Medline and Google Scholar to identify articles studying: (i) mental health stigma-related concepts; (ii) ≥ 1 non-Western European cultural group. From 5292 abstracts, 196 empirical articles were located. The vast majority of studies (77%) utilized adaptations of existing Western-developed stigma measures for new cultural groups. Extremely few studies (2.0%) featured quantitative stigma measures derived within a non-Western European cultural group. A sizeable share (16.8%) of studies employed qualitative methods to identify culture-specific stigma processes. The 'what matters most' perspective identified cultural ideals of the everyday activities that comprise 'personhood': 'preserving lineage' among specific Asian groups, 'fighting hard to overcome problems and taking advantage of immigration opportunities' among specific Latino-American groups, and 'establishing trust among religious institutions due to institutional discrimination' among African-American groups. These essential cultural interactions shaped culture-specific stigma manifestations. Mixed-method studies (3.6%) corroborated these qualitative results. Quantitatively derived, culturally specific stigma measures were lacking. Further, the vast majority of qualitative studies on stigma were conducted without using stigma-specific frameworks. We propose the 'what matters most' approach to address this key issue in future research.

  18. Thermal Properties Measurement Report

    Energy Technology Data Exchange (ETDEWEB)

    Carmack, Jon [Idaho National Lab. (INL), Idaho Falls, ID (United States); Braase, Lori [Idaho National Lab. (INL), Idaho Falls, ID (United States); Papesch, Cynthia [Idaho National Lab. (INL), Idaho Falls, ID (United States); Hurley, David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Tonks, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gofryk, Krzysztof [Idaho National Lab. (INL), Idaho Falls, ID (United States); Harp, Jason [Idaho National Lab. (INL), Idaho Falls, ID (United States); Fielding, Randy [Idaho National Lab. (INL), Idaho Falls, ID (United States); Knight, Collin [Idaho National Lab. (INL), Idaho Falls, ID (United States); Meyer, Mitch [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-08-01

    The Thermal Properties Measurement Report summarizes the research, development, installation, and initial use of significant experimental thermal property characterization capabilities at the INL in FY 2015. These new capabilities were used to characterize a U3Si2 (a candidate accident-tolerant fuel) sample fabricated at the INL. The ability to perform measurements at various length scales is important and provides additional data that are not currently in the literature. However, the real value of the data will be in achieving a phenomenological understanding of thermal conductivity in fuels and its ties to predictive modeling. Thus, the MARMOT advanced modeling and simulation capability was utilized to illustrate how the microstructural data can be modeled and compared with bulk characterization data. A scientific method was established for thermal property measurement capability on irradiated nuclear fuel samples, which will be installed in the Irradiated Material Characterization Laboratory (IMCL).

  19. INVESTIGATION OF QUANTIFICATION OF FLOOD CONTROL AND WATER UTILIZATION EFFECT OF RAINFALL INFILTRATION FACILITY BY USING WATER BALANCE ANALYSIS MODEL

    OpenAIRE

    文, 勇起; BUN, Yuki

    2013-01-01

    In recent years, much flood damage and drought attributable to urbanization have occurred. At present, infiltration facilities are suggested as a solution to these problems. Against this background, the purpose of this study is to quantify the flood control and water utilization effects of rainfall infiltration facilities by using a water balance analysis model. Key Words: flood control, water utilization, rainfall infiltration facility

  20. Hierarchical Model for the Similarity Measurement of a Complex Holed-Region Entity Scene

    Directory of Open Access Journals (Sweden)

    Zhanlong Chen

    2017-11-01

    Full Text Available Complex multi-holed-region entity scenes (i.e., sets of random regions with holes) are common in spatial database systems, spatial query languages, and Geographic Information Systems (GIS). A multi-holed-region (a region with an arbitrary number of holes) is an abstraction of the real world that primarily represents geographic objects that have more than one interior boundary, such as areas that contain several lakes or lakes that contain islands. When the similarity of two complex holed-region entity scenes is measured, the number of regions in the scenes and the number of holes in the regions usually differ between the two scenes, which complicates the matching relationships of holed-regions and holes. The aim of this research is to develop several holed-region similarity metrics and propose a hierarchical model to comprehensively measure the similarity between two complex holed-region entity scenes. The procedure first divides a complex entity scene into three layers: a complex scene, a micro-spatial-scene, and a simple entity (hole). The relationships between adjacent layers are considered to be sets of relationships, and each level of similarity measurement is nested within the adjacent one. Next, entity matching is performed from top to bottom, while the similarity results are calculated from local to global. In addition, we utilize position graphs to describe the distribution of the holed-regions and subsequently describe the directions between the holes using a feature matrix. A case study that uses the Great Lakes in North America in 1986 and 2015 as experimental data illustrates the entire similarity measurement process between two complex holed-region entity scenes. The experimental results show that the hierarchical model accounts for the relationships of the different layers in the entire complex holed-region entity scene. The model can effectively calculate the similarity of complex holed-region entity scenes, even if the numbers of regions and holes differ between the scenes.

  1. Statistical properties of four effect-size measures for mediation models.

    Science.gov (United States)

    Miočević, Milica; O'Rourke, Holly P; MacKinnon, David P; Brown, Hendricks C

    2018-02-01

    This project examined the performance of classical and Bayesian estimators of four effect-size measures for the indirect effect in a single-mediator model and a two-mediator model. Compared to the proportion and ratio mediation effect sizes, standardized mediation effect-size measures were relatively unbiased and efficient in the single-mediator model and the two-mediator model. Percentile and bias-corrected bootstrap interval estimates of ab/s_Y and ab(s_X)/s_Y in the single-mediator model outperformed interval estimates of the proportion and ratio effect sizes in terms of power, Type I error rate, coverage, imbalance, and interval width. For the two-mediator model, standardized effect-size measures were superior to the proportion and ratio effect-size measures. Furthermore, it was found that Bayesian point and interval summaries of posterior distributions of standardized effect-size measures reduced excessive relative bias for certain parameter combinations. The standardized effect-size measures are the best effect-size measures for quantifying mediated effects.
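The partially standardized indirect effect ab/s_Y and its percentile bootstrap interval can be illustrated with a small simulation (a hedged sketch with made-up coefficients, not the study's code; the b-path regression below omits X for brevity, which is adequate only under complete mediation):

```python
import random
import statistics

random.seed(1)

# Simulated single-mediator model X -> M -> Y (coefficients are made up).
n = 500
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.5 * x + random.gauss(0, 1) for x in X]   # a-path = 0.5
Y = [0.4 * m + random.gauss(0, 1) for m in M]   # b-path = 0.4

def ols_slope(x, y):
    """Least-squares slope of y regressed on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def std_indirect(X, M, Y):
    """Partially standardized indirect effect ab / s_Y."""
    return ols_slope(X, M) * ols_slope(M, Y) / statistics.stdev(Y)

point = std_indirect(X, M, Y)

# Percentile bootstrap interval for the standardized indirect effect.
reps = 500
boots = []
for _ in range(reps):
    s = [random.randrange(n) for _ in range(n)]
    boots.append(std_indirect([X[i] for i in s],
                              [M[i] for i in s],
                              [Y[i] for i in s]))
boots.sort()
lo, hi = boots[int(0.025 * reps)], boots[int(0.975 * reps)]
print(point, lo, hi)
```

The bias-corrected variant adjusts the percentile cut points by the fraction of bootstrap replicates below the point estimate; the percentile form above is the simplest case.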

  2. Radiation budget measurement/model interface

    Science.gov (United States)

    Vonderhaar, T. H.; Ciesielski, P.; Randel, D.; Stevens, D.

    1983-01-01

    This final report includes research results from the period February, 1981 through November, 1982. Two new results combine to form the final portion of this work. They are the work by Hanna (1982) and Stevens to successfully test and demonstrate a low-order spectral climate model and the work by Ciesielski et al. (1983) to combine and test the new radiation budget results from NIMBUS-7 with earlier satellite measurements. Together, the two related activities set the stage for future research on radiation budget measurement/model interfacing. Such combination of results will lead to new applications of satellite data to climate problems. The objectives of this research under the present contract are therefore satisfied. Additional research reported herein includes the compilation and documentation of the radiation budget data set a Colorado State University and the definition of climate-related experiments suggested after lengthy analysis of the satellite radiation budget experiments.

  3. Measurement and modeling of two-phase flow parameters in scaled 8×8 BWR rod bundle

    Energy Technology Data Exchange (ETDEWEB)

    Yang, X.; Schlegel, J.P.; Liu, Y.; Paranjape, S.; Hibiki, T. [School of Nuclear Engineering, Purdue University, 400 Central Dr., West Lafayette, IN 47907-2017 (United States); Ishii, M., E-mail: ishii@purdue.edu [School of Nuclear Engineering, Purdue University, 400 Central Dr., West Lafayette, IN 47907-2017 (United States)

    2012-04-15

    Highlights: ► Grid spacers have a significant but not well understood effect on flow behavior and development. ► Two different length scales are present in rod bundles, which must be accounted for in modeling. ► An easy-to-implement empirical model has been developed for the two-phase friction multiplier. - Abstract: The behavior of reactor systems is predicted using advanced computational codes in order to determine the safety characteristics of the system during various accidents and to determine the performance characteristics of the reactor. These codes generally utilize the two-fluid model for predictions of two-phase flows, as this model is the most accurate and detailed model which is currently practical for predicting large-scale systems. One of the weaknesses of this approach however is the need to develop constitutive models for various quantities. Of specific interest are the models used in the prediction of void fraction and pressure drop across the rod bundle due to their importance in new Natural Circulation Boiling Water Reactor (NCBWR) designs, where these quantities determine the coolant flow rate through the core. To verify the performance of these models and expand the existing experimental database, data has been collected in an 8×8 rod bundle which is carefully scaled from actual BWR geometry and includes grid spacers to maintain rod spacing. While these spacer grids are 'generic', their inclusion does provide valuable data for analysis of the effect of grid spacers on the flow. In addition to pressure drop measurements the area-averaged void fraction has been measured by impedance void meters and local conductivity probes have been used to measure the local void fraction and interfacial area concentration in the bundle subchannels. Experimental conditions covered a wide range of flow rates and void fractions up to 80%.

  4. Assessing the utility of frequency dependent nudging for reducing biases in biogeochemical models

    Science.gov (United States)

    Lagman, Karl B.; Fennel, Katja; Thompson, Keith R.; Bianucci, Laura

    2014-09-01

    Bias errors, resulting from inaccurate boundary and forcing conditions, incorrect model parameterization, etc. are a common problem in environmental models including biogeochemical ocean models. While it is important to correct bias errors wherever possible, it is unlikely that any environmental model will ever be entirely free of such errors. Hence, methods for bias reduction are necessary. A widely used technique for online bias reduction is nudging, where simulated fields are continuously forced toward observations or a climatology. Nudging is robust and easy to implement, but suppresses high-frequency variability and introduces artificial phase shifts. As a solution to this problem Thompson et al. (2006) introduced frequency dependent nudging where nudging occurs only in prescribed frequency bands, typically centered on the mean and the annual cycle. They showed this method to be effective for eddy resolving ocean circulation models. Here we add a stability term to the previous form of frequency dependent nudging which makes the method more robust for non-linear biological models. Then we assess the utility of frequency dependent nudging for biological models by first applying the method to a simple predator-prey model and then to a 1D ocean biogeochemical model. In both cases we only nudge in two frequency bands centered on the mean and the annual cycle, and then assess how well the variability in higher frequency bands is recovered. We evaluate the effectiveness of frequency dependent nudging in comparison to conventional nudging and find significant improvements with the former.
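The idea of nudging only in prescribed frequency bands can be sketched offline: project the model-minus-observation misfit onto the mean and annual harmonics, and remove only those components, leaving higher-frequency variability untouched (an illustrative toy with made-up signals, not the Thompson et al. scheme, which applies the correction incrementally during model integration):

```python
import math

# Daily time axis over 3 idealized 360-day years.
days = list(range(1080))
w = 2 * math.pi / 360.0   # annual angular frequency

# "Observations": mean + annual cycle + a higher-frequency (30-day) signal.
obs = [10.0 + 3.0 * math.cos(w * t) + 1.0 * math.sin(12 * w * t) for t in days]

# Biased model: wrong mean and wrong annual amplitude, same high-freq signal.
model = [12.0 + 1.5 * math.cos(w * t) + 1.0 * math.sin(12 * w * t) for t in days]

# Project the misfit onto the mean and annual harmonics only.
misfit = [m - o for m, o in zip(model, obs)]
n = len(days)
c0 = sum(misfit) / n
cc = 2.0 * sum(d * math.cos(w * t) for d, t in zip(misfit, days)) / n
cs = 2.0 * sum(d * math.sin(w * t) for d, t in zip(misfit, days)) / n

# Frequency-dependent correction: remove only the mean + annual-band error,
# preserving variability in all other frequency bands.
nudged = [m - (c0 + cc * math.cos(w * t) + cs * math.sin(w * t))
          for m, t in zip(model, days)]

rmse_before = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
rmse_after = math.sqrt(sum((m - o) ** 2 for m, o in zip(nudged, obs)) / n)
print(rmse_before, rmse_after)
```

Because the bias here lives entirely in the mean and annual bands, the correction removes it completely while the 30-day signal passes through unchanged; conventional nudging toward a climatology would have damped that signal too.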

  5. 4M Overturned Pyramid (MOP) Model Utilization: Case Studies on Collision in Indonesian and Japanese Maritime Traffic Systems (MTS)

    OpenAIRE

    Wanginingastuti Mutmainnah; Masao Furusho

    2016-01-01

    The 4M Overturned Pyramid (MOP) model is a new model, proposed by the authors, to characterize maritime traffic systems (MTS); it adopts an epidemiological model that determines the causes of accidents, including not only active failures but also latent failures and barriers. This model is still being developed. One use of the MOP model is characterizing accidents in MTS, i.e., collisions in Indonesia and Japan, as described in this paper. The aim of this paper is to show the characteristics of ship collision accidents...

  6. A Developmental Examination of the Psychometric Properties and Predictive Utility of a Revised Psychological Self-Concept Measure for Preschool-Aged Children

    Science.gov (United States)

    Jia, Rongfang; Lang, Sarah N.; Schoppe-Sullivan, Sarah J.

    2015-01-01

    Accurate assessment of psychological self-concept in early childhood relies on the development of psychometrically sound instruments. From a developmental perspective, the current study revised an existing measure of young children's psychological self-concepts, the Child Self-View Questionnaire (CSVQ, Eder, 1990), and examined its psychometric properties using a sample of preschool-aged children assessed at approximately 4 years old with a follow-up at age 5 (N = 111). The item compositions of lower-order dimensions were revised, leading to improved internal consistency. Factor Analysis revealed three latent psychological self-concept factors (i.e., Sociability, Control, and Assurance) from the lower-order dimensions. Measurement invariance by gender was supported for Sociability and Assurance, not for Control. Test-retest reliability was supported by stability of the psychological self-concept measurement model during the preschool years, although some evidence of increasing differentiation was obtained. Validity of children's scores on the three latent psychological self-concept factors was tested by investigating their concurrent associations with teacher-reported behavioral adjustment on the Social Competence and Behavior Evaluation Scale – Short Form (SCBE-SF, LaFreniere & Dumas, 1996). Children who perceived themselves as higher in Sociability at 5 years old displayed less internalizing behavior and more social competence; boys who perceived themselves as higher in Control at age 4 exhibited lower externalizing behavior; children higher in Assurance had greater social competence at age 4, but displayed more externalizing behavior at age 5. Implications relevant to the utility of the revised psychological self-concept measure are discussed. PMID:26098231

  7. A developmental examination of the psychometric properties and predictive utility of a revised psychological self-concept measure for preschool-age children.

    Science.gov (United States)

    Jia, Rongfang; Lang, Sarah N; Schoppe-Sullivan, Sarah J

    2016-02-01

    Accurate assessment of psychological self-concept in early childhood relies on the development of psychometrically sound instruments. From a developmental perspective, the current study revised an existing measure of young children's psychological self-concepts, the Child Self-View Questionnaire (CSVQ; Eder, 1990), and examined its psychometric properties using a sample of preschool-age children assessed at approximately 4 years old with a follow-up at age 5 (N = 111). The item compositions of lower order dimensions were revised, leading to improved internal consistency. Factor analysis revealed 3 latent psychological self-concept factors (i.e., sociability, control, and assurance) from the lower order dimensions. Measurement invariance by gender was supported for sociability and assurance, not for control. Test-retest reliability was supported by stability of the psychological self-concept measurement model during the preschool years, although some evidence of increasing differentiation was obtained. Validity of children's scores on the 3 latent psychological self-concept factors was tested by investigating their concurrent associations with teacher-reported behavioral adjustment on the Social Competence and Behavior Evaluation Scale-Short Form (SCBE-SF; LaFreniere & Dumas, 1996). Children who perceived themselves as higher in sociability at 5 years old displayed less internalizing behavior and more social competence; boys who perceived themselves as higher in control at age 4 exhibited lower externalizing behavior; children higher in assurance had greater social competence at age 4, but displayed more externalizing behavior at age 5. Implications relevant to the utility of the revised psychological self-concept measure are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. Modeling returns volatility: Realized GARCH incorporating realized risk measure

    Science.gov (United States)

    Jiang, Wei; Ruan, Qingsong; Li, Jianfeng; Li, Ye

    2018-06-01

    This study applies realized GARCH models by introducing several risk measures of intraday returns into the measurement equation, to model the daily volatility of E-mini S&P 500 index futures returns. Besides using the conventional realized measures, realized volatility and realized kernel as our benchmarks, we also use generalized realized risk measures, realized absolute deviation, and two realized tail risk measures, realized value-at-risk and realized expected shortfall. The empirical results show that realized GARCH models using the generalized realized risk measures provide better volatility estimation for the in-sample and substantial improvement in volatility forecasting for the out-of-sample. In particular, the realized expected shortfall performs best for all of the alternative realized measures. Our empirical results reveal that future volatility may be more attributable to present losses (risk measures). The results are robust to different sample estimation windows.
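The realized measures named above can be computed directly from intraday returns. The sketch below uses simulated 5-minute returns; the scalings are common conventions and may differ in detail from the paper's definitions:

```python
import math
import random

random.seed(7)

# Simulated 5-minute intraday log-returns for one trading day (78 bars).
sigma = 0.01 / math.sqrt(78)          # ~1% daily volatility split across bars
r = [random.gauss(0, sigma) for _ in range(78)]

# Realized variance: sum of squared intraday returns.
rv = sum(x * x for x in r)

# Realized absolute deviation (one common scaling of summed absolute returns).
rad = math.sqrt(math.pi / 2) * sum(abs(x) for x in r) / math.sqrt(len(r))

# Realized tail-risk measures at the 5% level on the intraday returns.
alpha = 0.05
sr = sorted(r)
k = max(1, int(alpha * len(sr)))
rvar = -sr[k - 1]                     # realized value-at-risk (reported positive)
res = -sum(sr[:k]) / k                # realized expected shortfall
print(rv, rad, rvar, res)
```

In a realized GARCH setting, one of these daily series (rv, rad, rvar, or res) would then enter the measurement equation linking the realized measure to latent conditional variance.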

  9. Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.

    2014-12-01

    As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that with an increase in throughput, protein quantification estimation from the native measured peptides has become a computational task. A limitation to existing computationally-driven protein quantification methods is that most ignore protein variation, such as alternate splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses, such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that are outside the dominant pattern, or the existence of multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study from mouse plasma samples and demonstrate that BP-Quant achieves similar accuracy as the current state-of-the-art methods at proteoform identification with significantly better specificity. BP-Quant is available as MATLAB® and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.

  10. Feasibility study of environmentally friendly type coal utilization systems. Feasibility study of environmentally friendly type coal utilization systems in sectors except the coal industry in China; Kankyo chowagata sekitan riyo system kanosei chosa. Chugoku no sekitan kogyo igai no bumon ni okeru kankyo chowagata sekitan riyo system kanosei chosa

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    For the purpose of working out a comprehensive master plan for application of the coal utilization system, the paper surveyed and studied the coal utilization system in terms of environmental measures and efficiency improvement in the utilization of coal. As a result of discussions with NEDO and the National Planning Committee of China, Liaoning Province (representing the whole of China) and Shenyang City were selected as a model area and a model city for the survey and study. As energy conservation measures for the former, reinforcement and capacity increases of boilers, kilns, etc. and adoption of new, high-efficiency equipment are desirable. Also expected are reinforcement of combustion control and improvement of efficiency by using coal preparation, industrial-use coal briquettes, etc. Measures for the latter are the same as those for the whole of China. As SOx reduction measures for Liaoning Province, installation of dry-type desulfurization equipment and simple desulfurization equipment is desirable. As dust prevention measures, installation of electrostatic precipitators or high-performance bag filters is desirable. SOx reduction measures for Shenyang City are the same as those for the whole of China. SOx can be reduced by using low-sulfur prepared coal and industrial-use coal briquettes with added desulfurizing agent. 88 figs., 163 tabs.

  11. Integrating utilization-focused evaluation with business process modeling for clinical research improvement.

    Science.gov (United States)

    Kagan, Jonathan M; Rosas, Scott; Trochim, William M K

    2010-10-01

    New discoveries in basic science are creating extraordinary opportunities to design novel biomedical preventions and therapeutics for human disease. But the clinical evaluation of these new interventions is, in many instances, being hindered by a variety of legal, regulatory, policy and operational factors, few of which enhance research quality, the safety of study participants or research ethics. With the goal of helping increase the efficiency and effectiveness of clinical research, we have examined how the integration of utilization-focused evaluation with elements of business process modeling can reveal opportunities for systematic improvements in clinical research. Using data from the NIH global HIV/AIDS clinical trials networks, we analyzed the absolute and relative times required to traverse defined phases associated with specific activities within the clinical protocol lifecycle. Using simple median duration and Kaplan-Meier survival analysis, we show how such time-based analyses can provide a rationale for the prioritization of research process analysis and re-engineering, as well as a means for statistically assessing the impact of policy modifications, resource utilization, re-engineered processes and best practices. Successfully applied, this approach can help researchers be more efficient in capitalizing on new science to speed the development of improved interventions for human disease.
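A Kaplan-Meier estimate of protocol phase durations with right-censoring (protocols still in a phase at analysis time) can be sketched as follows; the durations are hypothetical, not the NIH network data:

```python
# Days spent in a protocol phase; censored rows are still in that phase.
durations = [30, 45, 45, 60, 75, 90, 90, 120, 150, 180]
observed  = [1,  1,  0,  1,  1,  0,  1,  1,   0,   1]   # 1 = completed, 0 = censored

def kaplan_meier(durations, observed):
    """Return (time, survival) pairs of the Kaplan-Meier estimate.
    At tied times, events are counted before censorings (the usual convention)."""
    data = sorted(zip(durations, observed))
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        events = sum(1 for d, e in data if d == t and e == 1)
        ties = sum(1 for d, _ in data if d == t)
        if events:
            surv *= (n_at_risk - events) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= ties
        i += ties
    return curve

curve = kaplan_meier(durations, observed)
# Median duration: first time the survival curve drops to 0.5 or below.
median = next((t for t, s in curve if s <= 0.5), None)
print(curve)
print(median)
```

Comparing such medians before and after a policy change gives the kind of statistical assessment of re-engineered processes the abstract describes.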

  12. Precision gravity measurement utilizing Accelerex vibrating beam accelerometer technology

    Science.gov (United States)

    Norling, Brian L.

    Tests run using Sundstrand vibrating beam accelerometers to sense microgravity are described. Lunar-solar tidal effects were used as a highly predictable signal which varies by approximately 200 billionths of the full-scale gravitation level. Test runs of 48-h duration were used to evaluate stability, resolution, and noise. Test results on the Accelerex accelerometer show accuracies suitable for precision applications such as gravity mapping and gravity density logging. The test results indicate that Accelerex technology, even with an instrument design and signal processing approach not optimized for microgravity measurement, can achieve 48-nano-g (1 sigma) or better accuracy over a 48-h period. This value includes contributions from instrument noise and random walk, combined bias and scale factor drift, and thermal modeling errors as well as external contributions from sampling noise, test equipment inaccuracies, electrical noise, and cultural noise induced acceleration.

  13. The Latent Class Model as a Measurement Model for Situational Judgment Tests

    Directory of Open Access Journals (Sweden)

    Frank Rijmen

    2011-11-01

    Full Text Available In a situational judgment test, it is often debatable what constitutes a correct answer to a situation. There is currently a multitude of scoring procedures. Establishing a measurement model can guide the selection of a scoring rule. It is argued that the latent class model is a good candidate for a measurement model. Two latent class models are applied to the Managing Emotions subtest of the Mayer, Salovey, Caruso Emotional Intelligence Test: a plain-vanilla latent class model, and a second-order latent class model that takes into account the clustering of several possible reactions within each hypothetical scenario of the situational judgment test. The results for both models indicated that there were three subgroups characterised by the degree to which differentiation occurred between possible reactions in terms of perceived effectiveness. Furthermore, the results for the second-order model indicated a moderate cluster effect.
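A plain-vanilla latent class model can be fit with a short EM loop. The sketch below uses simulated binary responses and two classes as a minimal assumption set (the subtest itself has multiple response options per scenario and the study found three classes):

```python
import random

random.seed(3)

# Simulated binary responses to 5 items from two well-separated latent classes.
p_true = [[0.9] * 5, [0.2] * 5]
data = []
for _ in range(600):
    c = 0 if random.random() < 0.5 else 1
    data.append([1 if random.random() < p_true[c][j] else 0 for j in range(5)])

K, J = 2, 5
pi = [0.5, 0.5]             # latent class proportions
p = [[0.7] * J, [0.3] * J]  # item-response probabilities per class

for _ in range(100):
    # E-step: posterior probability of each class for each respondent.
    post = []
    for x in data:
        lik = []
        for k in range(K):
            l = pi[k]
            for j in range(J):
                l *= p[k][j] if x[j] else 1.0 - p[k][j]
            lik.append(l)
        tot = sum(lik)
        post.append([l / tot for l in lik])
    # M-step: update class proportions and item probabilities.
    for k in range(K):
        nk = sum(w[k] for w in post)
        pi[k] = nk / len(data)
        for j in range(J):
            p[k][j] = sum(w[k] * x[j] for w, x in zip(post, data)) / nk

print(pi)
print([round(v, 2) for v in p[0]], [round(v, 2) for v in p[1]])
```

A second-order latent class model, as used in the article, would add a scenario-level latent variable on top of this structure to absorb the clustering of reactions within each hypothetical scenario.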

  14. Utilizing Chamber Data for Developing and Validating Climate Change Models

    Science.gov (United States)

    Monje, Oscar

    2012-01-01

    Controlled environment chambers (e.g., growth chambers, SPAR chambers, or open-top chambers) are useful for measuring plant ecosystem responses to climatic variables and CO2 that affect plant water relations. However, data from chambers were found to overestimate responses of C fluxes to CO2 enrichment. Chamber data may be confounded by numerous artifacts (e.g., sidelighting, edge effects, increased temperature and VPD), and this limits what can be measured accurately. Chambers can be used to measure canopy-level energy balance under controlled conditions, and plant transpiration responses to CO2 concentration can be elucidated. However, these measurements cannot be used directly in model development or validation. The response of stomatal conductance to CO2 will be the same as in the field, but the measured response must be recalculated in such a manner as to account for differences in aerodynamic conductance, temperature and VPD between the chamber and the field.

  15. Analysis on misconducts and inappropriate practices by Japan's Nuclear Power Utilities and Assessment of their corrective measures

    International Nuclear Information System (INIS)

    Torikai, Seishi; Ozawa, Michihiro; Kanegae, Naomichi; Tani, Masaaki; Miyakoshi, Naoki; Madarame, Haruki

    2010-01-01

    On March 30, 2007, Japan's electric utilities reported the results of a complete review of their power-generating units to the Nuclear and Industrial Safety Agency of the Ministry of Economy, Trade, and Industry (METI). The Ethics Committee of the Atomic Energy Society of Japan (AESJ) then recommended an assessment method to analyze the seriousness of the problems from multiple perspectives in order to support the public's understanding of the reported problems. Accordingly, the Ethics Committee conducted the assessment. The assessment considered each reported problem associated with nuclear power-generating units and the preventive measures completed between June 2007 and September 2008 (corrective measures continued beyond that period). The results were presented at the autumn conferences of AESJ in 2007 and 2008, and are discussed in this report. (author)

  16. The feasibility of utilizing remotely sensed data to assess and monitor oceanic gamefish

    Science.gov (United States)

    Savastano, K. J.; Leming, T. D.

    1975-01-01

    An investigation was conducted to establish the feasibility of utilizing remotely sensed data acquired from aircraft and satellite platforms to provide information concerning the distribution and abundance of oceanic gamefish. The data from the test area were jointly acquired by NASA, the Navy, the Air Force, NOAA/NMFS elements, and private and professional fishermen in the northeastern Gulf of Mexico. The data collected made it possible to identify fisheries-significant environmental parameters for white marlin. Prediction models, based on catch data and surface truth information, were developed and demonstrated a potential for significantly reducing search time by identifying areas that have a high probability of productivity. Three of the parameters utilized by the models, chlorophyll-a, sea surface temperature, and turbidity, were inferred from aircraft sensor data and were tested. Effective use of Skylab data was inhibited by cloud cover and delayed delivery. Initial efforts toward establishing the feasibility of utilizing remotely sensed data to assess and monitor the distribution of oceanic gamefish have successfully identified fisheries-significant oceanographic parameters and demonstrated the capability of remotely measuring most of the parameters.

  17. Application of a disease-specific mapping function to estimate utility gains with effective treatment of schizophrenia

    Directory of Open Access Journals (Sweden)

    Rupnow Marcia FT

    2005-09-01

    Full Text Available Abstract Background Most tools for estimating utilities use clinical trial data from general health status models, such as the 36-Item Short-Form Health Survey (SF-36. A disease-specific model may be more appropriate. The objective of this study was to apply a disease-specific utility mapping function for schizophrenia to data from a large, 1-year, open-label study of long-acting risperidone and to compare its performance with an SF-36-based utility mapping function. Methods Patients with schizophrenia or schizoaffective disorder by DSM-IV criteria received 25, 50, or 75 mg long-acting risperidone every 2 weeks for 12 months. The Positive and Negative Syndrome Scale (PANSS and SF-36 were used to assess efficacy and health-related quality of life. Movement disorder severity was measured using the Extrapyramidal Symptom Rating Scale (ESRS; data concerning other common adverse effects (orthostatic hypotension, weight gain were collected. Transforms were applied to estimate utilities. Results A total of 474 patients completed the study. Long-acting risperidone treatment was associated with a utility gain of 0.051 using the disease-specific function. The estimated gain using an SF-36-based mapping function was smaller: 0.0285. Estimates of gains were only weakly correlated (r = 0.2. Because of differences in scaling and variance, the requisite sample size for a randomized trial to confirm observed effects is much smaller for the disease-specific mapping function (156 versus 672 total subjects. Conclusion Application of a disease-specific mapping function was feasible. Differences in scaling and precision suggest the clinically based mapping function has greater power than the SF-36-based measure to detect differences in utility.
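The sample-size comparison in the abstract follows from the standard two-sample formula n = 2(z_{1-α/2} + z_{1-β})²σ²/Δ² per arm: a mapping function with better scaling and lower variance relative to the utility gain it must detect needs fewer subjects. The SDs below are assumptions for illustration only, not the study's reported values:

```python
import math

def n_per_group(delta, sd):
    """Subjects per arm to detect mean difference `delta` with SD `sd`,
    at two-sided alpha = 0.05 and 80% power (normal approximation)."""
    z_alpha, z_beta = 1.96, 0.8416
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

# Illustrative, assumed SDs: the disease-specific function detects a larger
# gain (0.051) with assumed lower variance than the SF-36-based one (0.0285).
n_disease_specific = n_per_group(0.051, sd=0.18)
n_sf36_based = n_per_group(0.0285, sd=0.21)
print(n_disease_specific, n_sf36_based)
```

The qualitative conclusion matches the abstract: the required trial size grows quickly as the detectable gain shrinks or the instrument's variance rises.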

  18. The Utility of the UTAUT Model in Explaining Mobile Learning Adoption in Higher Education in Guyana

    Science.gov (United States)

    Thomas, Troy Devon; Singh, Lenandlar; Gaffar, Kemuel

    2013-01-01

    In this paper, we compare the utility of modified versions of the unified theory of acceptance and use of technology (UTAUT) model in explaining mobile learning adoption in higher education in a developing country and evaluate the size and direction of the impacts of the UTAUT factors on behavioural intention to adopt mobile learning in higher…

  19. Modelling noninvasively measured cerebral signals during a hypoxemia challenge: steps towards individualised modelling.

    Directory of Open Access Journals (Sweden)

    Beth Jelfs

    Full Text Available Noninvasive approaches to measuring cerebral circulation and metabolism are crucial to furthering our understanding of brain function. These approaches also have considerable potential for clinical use "at the bedside". However, a highly nontrivial task and precondition if such methods are to be used routinely is the robust physiological interpretation of the data. In this paper, we explore the ability of a previously developed model of brain circulation and metabolism to explain and predict quantitatively the responses of physiological signals. The five signals, all measured noninvasively during hypoxemia in healthy volunteers, include four signals measured using near-infrared spectroscopy along with middle cerebral artery blood flow measured using transcranial Doppler flowmetry. We show that optimising the model using partial data from an individual can increase its predictive power, thus aiding the interpretation of NIRS signals in individuals. At the same time, such optimisation can also help refine model parametrisation and provide confidence intervals on model parameters. Discrepancies between model and data which persist despite model optimisation are used to flag up important questions concerning the underlying physiology, and the reliability and physiological meaning of the signals.

  20. The laboratory test utilization management toolbox.

    Science.gov (United States)

    Baird, Geoffrey

    2014-01-01

    Efficiently managing laboratory test utilization requires both ensuring adequate utilization of needed tests in some patients and discouraging superfluous tests in other patients. After the difficult clinical decision is made to define the patients that do and do not need a test, a wealth of interventions are available to the clinician and laboratorian to help guide appropriate utilization. These interventions are collectively referred to here as the utilization management toolbox. Experience has shown that some tools in the toolbox are weak and others are strong, and that tools are most effective when many are used simultaneously. While the outcomes of utilization management studies are not always as concrete as may be desired, the data available in the literature indicate that strong utilization management interventions are safe and effective measures to improve patient health and reduce waste in an era of increasing financial pressure.

  1. The validity, reliability, and utility of the iButton® for measurement of body temperature circadian rhythms in sleep/wake research.

    Science.gov (United States)

    Hasselberg, Michael J; McMahon, James; Parker, Kathy

    2013-01-01

    Changes in core body temperature due to heat transfer through the skin have a major influence on sleep regulation. Traditional measures of skin temperature are often complicated by extensive wiring and are not practical for use in normal living conditions. This review describes studies examining the reliability, validity and utility of the iButton®, a wireless peripheral thermometry device, in sleep/wake research. A review was conducted of English language literature on the iButton as a measure of circadian body temperature rhythms associated with the sleep/wake cycle. Seven studies of the iButton as a measure of human body temperature were included. The iButton was found to be a reliable and valid measure of body temperature. Its application to human skin was shown to be comfortable and tolerable with no significant adverse reactions. Distal skin temperatures were negatively correlated with sleep/wake activity, and the temperature gradient between the distal and proximal skin (DPG) was identified as an accurate physiological correlate of sleep propensity. Methodological issues included site of data logger placement, temperature masking factors, and temperature data analysis. The iButton is an inexpensive, wireless data logger that can be used to obtain a valid measurement of human skin temperature. It is a practical alternative to traditional measures of circadian rhythms in sleep/wake research. Further research is needed to determine the utility of the iButton in vulnerable populations, including those with neurodegenerative disorders and memory impairment and pediatric populations. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Health-related quality of life of cataract patients: cross-cultural comparisons of utility and psychometric measures.

    Science.gov (United States)

    Lee, Jae Eun; Fos, Peter J; Zuniga, Miguel A; Kastl, Peter R; Sung, Jung Hye

    2003-07-01

    This study was conducted to assess the presence and/or absence of cross-cultural differences or similarities between Korean and United States cataract patients. A systematic assessment was performed using utility and psychometric measures in the study population. A cross-sectional study design was used to compare preoperative outcome measures in cataract patients in Korea and the United States. Study subjects were selected using non-probabilistic methods and included 132 patients scheduled for cataract surgery in one eye. Subjects were adult cataract patients at Samsung and Kunyang General Hospital in Seoul, Korea, and Tulane University Hospital and Clinics in New Orleans, Louisiana. Preoperative utility was assessed using the verbal rating scale and standard reference gamble techniques. Current preoperative health status was assessed using the SF-36 and VF-14 surveys. Current preoperative Snellen visual acuity was used as a clinical measure of vision status. Korean patients were more likely to be younger (p = 0.001), less educated (p = 0.001), and to have worse Snellen visual acuity (p = 0.002) than United States patients. Multivariate analysis of variance (MANOVA) revealed that, in contrast to Korean patients, United States patients scored higher on general health, vitality, VF-14, and verbal rating for visual health. This higher scoring trend persisted after controlling for age, gender, education and Snellen visual acuity. The difference in health-related quality of life (HRQOL) between the two countries was quite clear, especially in the older and highly educated groups. Subjects in Korea and the United States were significantly different in quality of life, functional status and clinical outcomes. Subjects in the United States had more favorable health outcomes than those in Korea. These differences may be caused by multiple factors, including country-specific differences in economic status, health care system

  3. Parametric utility comparison of coal and nuclear electricity generation

    International Nuclear Information System (INIS)

    Maurer, K.M.

    1977-02-01

    The advantages and limitations of an explicit quantitative model for decision making are discussed. Several different quantitative models are presented, noting that the use of an expected utility maximization decision rule allows both the direct incorporation of multidimensional descriptions of the possible outcomes and consideration of risk-averse behavior. A broad class of utility functions, characterized by linear risk tolerance, was considered and extended to a multidimensional form. Choosing a multivariate risk-neutral extension, using constant absolute risk aversion utility functions for monetary effects and for increased mortality, the author indicated how the parameters of this utility function can be selected to represent the decision maker's preferences and suggested a reasonable range of values for the parameters. After describing an illustrative set of data on the risks inherent in coal-burning and nuclear electricity generation facilities, the author used the chosen utility model to compare the overall risks associated with each technology, observing the effect of variations in the utility parameters and in the risk distributions on the implied preferences.
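
    The constant absolute risk aversion (CARA) utility family described above can be illustrated numerically. The sketch below ranks two technologies whose monetary outcomes share the same mean but differ in spread, using u(x) = -exp(-a*x); the outcome distributions and the risk-aversion coefficient a are invented for illustration and are not the study's data.

```python
import math

def cara_utility(x, a):
    """Constant absolute risk aversion utility: u(x) = -exp(-a * x)."""
    return -math.exp(-a * x)

def expected_utility(outcomes, probs, a):
    """Expected CARA utility of a discrete outcome distribution."""
    return sum(p * cara_utility(x, a) for x, p in zip(outcomes, probs))

# Hypothetical net-benefit distributions (both have mean 100).
coal = ([80.0, 120.0], [0.5, 0.5])       # narrower spread
nuclear = ([20.0, 180.0], [0.5, 0.5])    # wider spread

a = 0.01  # assumed risk-aversion coefficient
eu_coal = expected_utility(*coal, a)
eu_nuclear = expected_utility(*nuclear, a)

# A risk-averse decision maker (a > 0) prefers the lower-spread option.
print(eu_coal > eu_nuclear)  # True
```

    With a > 0 the lower-spread option is preferred; as a approaches 0 the ranking reduces to comparing expected values, which is the risk-neutral case.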

  4. Climate change adaptation in regulated water utilities

    Science.gov (United States)

    Vicuna, S.; Melo, O.; Harou, J. J.; Characklis, G. W.; Ricalde, I.

    2017-12-01

    Concern about climate change impacts on water supply systems has grown in recent years. However, there are still few examples of pro-active interventions (e.g. infrastructure investment or policy changes) meant to address plausible future changes. Deep uncertainty associated with climate impacts, future demands, and regulatory constraints might explain why utility planning in a range of contexts doesn't explicitly consider climate change scenarios and potential adaptive responses. Given the importance of water supplies for economic development and the cost and longevity of many water infrastructure investments, large urban water supply systems could suffer from lack of pro-active climate change adaptation. Water utilities need to balance the potential for high regret stranded assets on the one side, with insufficient supplies leading to potentially severe socio-economic, political and environmental failures on the other, and need to deal with a range of interests and constraints. This work presents initial findings from a project looking at how cities in Chile, the US and the UK are developing regulatory frameworks that incorporate utility planning under uncertainty. Considering for example the city of Santiago, Chile, recent studies have shown that although high scarcity cost scenarios are plausible, pre-emptive investment to guard from possible water supply failures is still remote and not accommodated by current planning practice. A first goal of the project is to compare and contrast regulatory approaches to utility risks considering climate change adaptation measures. Subsequently we plan to develop and propose a custom approach for the city of Santiago based on lessons learned from other contexts. The methodological approach combines institutional assessment of water supply regulatory frameworks with simulation-based decision-making under uncertainty approaches. 
Here we present initial work comparing the regulatory frameworks in Chile, UK and USA evaluating

  5. Innovative practice model to optimize resource utilization and improve access to care for high-risk and BRCA+ patients.

    Science.gov (United States)

    Head, Linden; Nessim, Carolyn; Usher Boyd, Kirsty

    2017-02-01

    Bilateral prophylactic mastectomy (BPM) has demonstrated breast cancer risk reduction in high-risk/BRCA+ patients. However, priority of active cancers coupled with inefficient use of operating room (OR) resources presents challenges in offering BPM in a timely manner. To address these challenges, a rapid access prophylactic mastectomy and immediate reconstruction (RAPMIR) program was innovated. The purpose of this study was to evaluate RAPMIR with regard to access to care and efficiency. We retrospectively reviewed the cases of all high-risk/BRCA+ patients having had BPM between September 2012 and August 2014. Patients were divided into 2 groups: those managed through the traditional model and those managed through the RAPMIR model. RAPMIR leverages 2 concurrently running ORs with surgical oncology and plastic surgery moving between rooms to complete 3 combined BPMs with immediate reconstruction in addition to 1-2 independent cases each operative day. RAPMIR eligibility criteria included high-risk/BRCA+ status; BPM with immediate, implant-based reconstruction; and day surgery candidacy. Wait times, case volumes and patient throughput were measured and compared. There were 16 traditional patients and 13 RAPMIR patients. Mean wait time (days from referral to surgery) for RAPMIR was significantly shorter than for the traditional model (165.4 v. 309.2 d, p = 0.027). Daily patient throughput (4.3 v. 2.8), plastic surgery case volume (3.7 v. 1.6) and surgical oncology case volume (3.0 v. 2.2) were significantly greater in the RAPMIR model than the traditional model (p = 0.003, p < 0.001 and p = 0.015, respectively). A multidisciplinary model with optimized scheduling has the potential to improve access to care and optimize resource utilization.

  6. Modeling ramp-hold indentation measurements based on Kelvin-Voigt fractional derivative model

    Science.gov (United States)

    Zhang, Hongmei; zhe Zhang, Qing; Ruan, Litao; Duan, Junbo; Wan, Mingxi; Insana, Michael F.

    2018-03-01

    Interpretation of experimental data from micro- and nano-scale indentation testing is highly dependent on the constitutive model selected to relate measurements to mechanical properties. The Kelvin-Voigt fractional derivative model (KVFD) offers a compact set of viscoelastic features appropriate for characterizing soft biological materials. This paper provides a set of KVFD solutions for converting indentation testing data acquired for different geometries and scales into viscoelastic properties of soft materials. These solutions, which are mostly in closed form, apply to ramp-hold relaxation, load-unload and ramp-load creep-testing protocols. We report on applications of these model solutions to macro- and nano-indentation testing of hydrogels, gastric cancer cells and ex vivo breast tissue samples using an atomic force microscope (AFM). We also applied KVFD models to clinical ultrasonic breast data using a compression plate as required for elasticity imaging. Together the results show that KVFD models fit a broad range of experimental data with a correlation coefficient of typically R² > 0.99. For hydrogel samples, estimates of KVFD model parameters from test data using spherical indentation versus plate compression, as well as ramp relaxation versus load-unload compression, all agree within one standard deviation. Results from measurements made using macro- and nano-scale indentation agree in trend. For gastric cell and ex vivo breast tissue measurements, KVFD moduli are, respectively, 1/3-1/2 and 1/6 of the elasticity modulus found from the Sneddon model. In vivo breast tissue measurements yield model parameters consistent with literature results. The consistency of results found for a broad range of experimental parameters suggests the KVFD model is a reliable tool for exploring intrinsic features of the cell/tissue microenvironments.
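
    A minimal numerical sketch of the KVFD idea described above: the constitutive law σ(t) = E[ε(t) + τ^α D^α ε(t)] applied to a ramp-hold strain history, with the fractional derivative approximated by Grünwald-Letnikov weights. All parameter values (E, τ, α, ramp time, strain level) are invented for illustration and are not the fitted values reported in the study.

```python
import numpy as np

def gl_fractional_derivative(f, alpha, dt):
    """Grünwald-Letnikov approximation of the fractional derivative
    D^alpha f on a uniform time grid with step dt."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                 # recursive GL weights
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.array([np.dot(w[: i + 1], f[i::-1]) for i in range(n)])
    return out / dt**alpha

# Ramp-hold strain history: ramp to eps_max over t_ramp, then hold.
dt, t_end, t_ramp, eps_max = 0.01, 5.0, 1.0, 0.05
t = np.arange(0.0, t_end, dt)
eps = np.where(t < t_ramp, eps_max * t / t_ramp, eps_max)

# KVFD stress: sigma = E * (eps + tau^alpha * D^alpha eps)
E, tau, alpha = 10.0, 0.5, 0.3            # illustrative, not fitted, values
sigma = E * (eps + tau**alpha * gl_fractional_derivative(eps, alpha, dt))

peak = sigma[t <= t_ramp].max()           # stress near the end of the ramp
late = sigma[-1]                          # stress late in the hold
print(peak > late > E * eps_max)          # stress relaxes toward E*eps_max
```

    During the hold the fractional derivative of the strain history decays as a power law, so the stress relaxes toward the equilibrium value E·ε_max, which is the qualitative signature ramp-hold protocols exploit.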

  7. Impact of Predicting Health Care Utilization Via Web Search Behavior: A Data-Driven Analysis.

    Science.gov (United States)

    Agarwal, Vibhu; Zhang, Liangliang; Zhu, Josh; Fang, Shiyuan; Cheng, Tim; Hong, Chloe; Shah, Nigam H

    2016-09-21

    utilization score, served as a surrogate measure of the model's utility. We obtained the highest area under the curve (0.796) in medical visit prediction with our random forests model and daywise features. Ablating feature categories one at a time showed that the model performance worsened the most when location features were dropped. An online evaluation in which advertisements were served to users who had a high predicted probability of a future medical visit showed a 3.96% increase in the show conversion rate. Results from our experiments done in a research setting suggest that it is possible to accurately predict future patient visits from geotagged mobile search logs. Results from the offline and online experiments on the utility of health utilization predictions suggest that such prediction can have utility for health care providers.
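
    The prediction step summarized above (a classifier scored by area under the ROC curve) can be sketched with a random forest on synthetic data; the features, effect sizes, and sample size below are invented stand-ins for the paper's daywise mobile search features, not the actual study data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical daywise features (e.g. counts of health-related queries,
# location-derived signals); the names and structure are illustrative only.
X = rng.normal(size=(n, 5))
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1]    # visits depend on two features
y = (logits + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(auc > 0.7)  # clearly better than chance on this synthetic task
```

    Dropping an informative feature column before refitting would lower the AUC, which is the ablation logic the abstract describes for its location features.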

  8. Dental Care Utilization for Examination and Regional Deprivation

    Science.gov (United States)

    Kim, Cheol-Sin; Han, Sun-Young; Lee, Seung Eun; Kang, Jeong-Hee; Kim, Chul-Woung

    2015-01-01

    Objectives: Receiving proper dental care plays a significant role in maintaining good oral health. We investigated the relationship between regional deprivation and dental care utilization. Methods: Multilevel logistic regression was used to identify the relationship between the regional deprivation level and dental care utilization purpose, adjusting for individual-level variables, in adults aged 19+ in the 2008 Korean Community Health Survey (n=220 258). Results: Among Korean adults, 12.8% used dental care to undergo examination and 21.0% visited a dentist for other reasons. In the final model, regional deprivation level was associated with significant variations in dental care utilization for examination, but not with dental care utilization for other reasons. Conclusions: This study’s findings suggest that policy interventions should be considered to reduce regional variations in rates of dental care utilization for examination. PMID:26265665

  9. Non-parametric estimation of the individual's utility map

    OpenAIRE

    Noguchi, Takao; Sanborn, Adam N.; Stewart, Neil

    2013-01-01

    Models of risky choice have attracted much attention in behavioural economics. Previous research has repeatedly demonstrated that individuals' choices are not well explained by expected utility theory, and a number of alternative models have been examined using carefully selected sets of choice alternatives. The model performance however, can depend on which choice alternatives are being tested. Here we develop a non-parametric method for estimating the utility map over the wide range of choi...

  10. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    Science.gov (United States)

    Klumpar, D. M. (Principal Investigator)

    1982-01-01

    The feasibility of modeling magnetic fields due to certain electrical currents flowing in the Earth's ionosphere and magnetosphere was investigated. A method was devised to carry out forward modeling of the magnetic perturbations that arise from space currents. The procedure utilizes a linear current element representation of the distributed electrical currents. The finite thickness elements are combined into loops which are in turn combined into cells having their base in the ionosphere. In addition to the extensive field modeling, additional software was developed for the reduction and analysis of the MAGSAT data in terms of the external current effects. Direct comparisons between the models and the MAGSAT data are possible.

  11. Prediction of Adequate Prenatal Care Utilization Based on the Extended Parallel Process Model.

    Science.gov (United States)

    Hajian, Sepideh; Imani, Fatemeh; Riazi, Hedyeh; Salmani, Fatemeh

    2017-10-01

    Pregnancy complications are one of the major public health concerns. One of the main causes of preventable complications is the absence of or inadequate provision of prenatal care. The present study was conducted to investigate whether the Extended Parallel Process Model's constructs can predict the utilization of prenatal care services. The present longitudinal prospective study was conducted on 192 pregnant women selected through the multi-stage sampling of health facilities in Qeshm, Hormozgan province, from April to June 2015. Participants were followed up from the first half of pregnancy until their childbirth to assess adequate or inadequate/non-utilization of prenatal care services. Data were collected using the structured Risk Behavior Diagnosis Scale. The analysis of the data was carried out in SPSS-22 using one-way ANOVA, linear regression and logistic regression analysis. The level of significance was set at 0.05. In total, 178 pregnant women with a mean age of 25.31±5.42 years completed the study. Perceived self-efficacy (OR=25.23) predicted adequate utilization of prenatal care. Husband's occupation in the labor market (OR=0.43; P=0.02), unwanted pregnancy (OR=0.352), and caring for minors or the elderly at home (OR=0.35; P=0.045) were associated with lower odds of receiving prenatal care. The model showed that when the perceived efficacy of prenatal care services overcame the perceived threat, the likelihood of prenatal care usage increased. This study identified some modifiable factors associated with prenatal care usage by women, providing key targets for appropriate clinical interventions.

  12. Measuring Model Rocket Engine Thrust Curves

    Science.gov (United States)

    Penn, Kim; Slaton, William V.

    2010-01-01

    This paper describes a method and setup to quickly and easily measure a model rocket engine's thrust curve using a computer data logger and force probe. Horst describes using Vernier's LabPro and force probe to measure the rocket engine's thrust curve; however, the method of attaching the rocket to the force probe is not discussed. We show how a…

  13. Modelling of biomass utilization for energy purpose

    Energy Technology Data Exchange (ETDEWEB)

    Grzybek, Anna (ed.)

    2010-07-01

    the overall farm structure, the distribution of each farm's land across several separate subfields, overpopulation of villages and very high employment in agriculture (about 27% of all employees in the national economy work in agriculture). Farmers have a low education level. In towns, 34% of the population has secondary education; in rural areas, only 15-16%. Less than 2% of rural inhabitants have higher education. The structure of land use is as follows: arable land 11.5%, meadows and pastures 25.4%, forests 30.1%. Poland requires the implementation of technical and technological progress to intensify agricultural production. Competition for agricultural land arises from maintaining the current consumption level while allocating part of agricultural production to energy purposes. Agricultural land is going to be a key factor for biofuel production. This publication presents research results for the Project PL0073 'Modelling of energetical biomass utilization for energy purposes', financed from the Norwegian Financial Mechanism and the European Economic Area Financial Mechanism. The publication aims to bring the reader closer to the problems connected with the cultivation of energy plants and to dispel myths concerning them. Replacing fossil fuels with biomass for heat and electricity production could contribute significantly to reducing carbon dioxide emissions. Moreover, biomass crops and the use of biomass for energy purposes play an important role in diversifying agricultural production as rural areas are transformed. Widening agricultural production enables the creation of new jobs. Sustainable development is going to be the fundamental rule for the evolution of Polish agriculture in the long term. Energy use of biomass fits well within this evolution, especially at the local level. There are two facts. The first is that interest in energy crops in Poland has been

  15. Clinical Utility of Noninvasive Method to Measure Specific Gravity in the Pediatric Population.

    Science.gov (United States)

    Hall, Jeanine E; Huynh, Pauline P; Mody, Ameer P; Wang, Vincent J

    2018-04-01

    Clinicians rely on any combination of signs and symptoms, clinical scores, or invasive procedures to assess the hydration status in children. Noninvasive tests to evaluate for dehydration in the pediatric population are appealing. The objective of our study was to assess the utility of measuring the specific gravity of tears compared to the specific gravity of urine and the clinical assessment of dehydration. We conducted a prospective cohort convenience sample study in a pediatric emergency department at a tertiary care children's hospital. We approached parents/guardians of children aged 6 months to 4 years undergoing transurethral catheterization for evaluation of urinary tract infection for enrollment. We collected tears and urine for measurement of tear specific gravity (TSG) and urine specific gravity (USG), respectively. Treating physicians completed dehydration assessment forms to assess for hydration status. Among the 60 participants included, the mean TSG was 1.0183 (SD = 0.007); the mean USG was 1.0186 (SD = 0.0083). TSG and USG were positively correlated with each other (Pearson correlation = 0.423, p = 0.001). Clinical dehydration scores ranged from 0 to 3, with 87% assigned a score of 0 by physician assessment. Mean numbers of episodes of vomiting and diarrhea in a 24-hour period were 2.2 (SD = 3.9) and 1.5 (SD = 3.2), respectively. Sixty-two percent of parents reported decreased oral intake. TSG measurements yielded similar results compared with USG. Further studies are needed to determine if TSG can be used as a noninvasive method of dehydration assessment in children. Copyright © 2017 Elsevier Inc. All rights reserved.
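
    The headline statistic above is a Pearson correlation between paired tear and urine readings. A minimal sketch with simulated paired specific-gravity values centered on the reported means; the shared-hydration noise model is invented for illustration and is not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 60
# Simulated paired specific-gravity readings: a shared hydration signal
# plus independent measurement noise induces a positive correlation.
hydration = rng.normal(0.0, 0.005, n)
tsg = 1.0183 + hydration + rng.normal(0.0, 0.005, n)   # tear SG
usg = 1.0186 + hydration + rng.normal(0.0, 0.006, n)   # urine SG

r = np.corrcoef(tsg, usg)[0, 1]   # Pearson correlation coefficient
print(r > 0)  # positively correlated, as the study reports
```

    With larger independent noise relative to the shared signal, r shrinks toward zero, which is why a moderate r like the reported 0.423 still leaves room for disagreement between the two measures in individual patients.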

  16. Simple transmission Raman measurements using a single multivariate model for analysis of pharmaceutical samples contained in capsules of different colors.

    Science.gov (United States)

    Lee, Yeojin; Kim, Jaejin; Lee, Sanguk; Woo, Young-Ah; Chung, Hoeil

    2012-01-30

    Direct transmission Raman measurements for analysis of pharmaceuticals in capsules are advantageous since they can be used to determine active pharmaceutical ingredient (API) concentrations in a non-destructive manner and with much less fluorescence background interference from the capsules themselves compared to conventional back-scattering measurements. If a single calibration model such as developed from spectra simply collected in glass vials could be used to determine API concentrations of samples contained in capsules of different colors rather than constructing individual models for each capsule color, the utility of transmission measurements would be further enhanced. To evaluate the feasibility, transmission Raman spectra of binary mixtures of ambroxol and lactose were collected in a glass vial and a partial least squares (PLS) model for the determination of ambroxol concentration was developed. Then, the model was directly applied to determine ambroxol concentrations of samples contained in capsules of 4 different colors (blue, green, white and yellow). Although the prediction performance was slightly degraded when the samples were placed in blue or green capsules, due to the presence of weak fluorescence, accurate determination of ambroxol was generally achieved in all cases. The prediction accuracy was also investigated when the thickness of the capsule was varied. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Utilizing Three-Dimensional Printing Technology to Assess the Feasibility of High-Fidelity Synthetic Ventricular Septal Defect Models for Simulation in Medical Education.

    Science.gov (United States)

    Costello, John P; Olivieri, Laura J; Krieger, Axel; Thabit, Omar; Marshall, M Blair; Yoo, Shi-Joon; Kim, Peter C; Jonas, Richard A; Nath, Dilip S

    2014-07-01

    The current educational approach for teaching congenital heart disease (CHD) anatomy to students involves instructional tools and techniques that have significant limitations. This study sought to assess the feasibility of utilizing present-day three-dimensional (3D) printing technology to create high-fidelity synthetic heart models with ventricular septal defect (VSD) lesions and applying these models to a novel, simulation-based educational curriculum for premedical and medical students. Archived, de-identified magnetic resonance images of five common VSD subtypes were obtained. These cardiac images were then segmented and built into 3D computer-aided design models using Mimics Innovation Suite software. An Objet500 Connex 3D printer was subsequently utilized to print a high-fidelity heart model for each VSD subtype. Next, a simulation-based educational curriculum using these heart models was developed and implemented in the instruction of 29 premedical and medical students. Assessment of this curriculum was undertaken with Likert-type questionnaires. High-fidelity VSD models were successfully created utilizing magnetic resonance imaging data and 3D printing. Following instruction with these high-fidelity models, all students reported significant improvement in knowledge acquisition. This study demonstrated the feasibility of utilizing 3D printing technology to create high-fidelity heart models with complex intracardiac defects. Furthermore, this tool forms the foundation for an innovative, simulation-based educational approach to teach students about CHD and creates a novel opportunity to stimulate their interest in this field. © The Author(s) 2014.

  18. Utility of sonographic measurement of the common extensor tendon in patients with lateral epicondylitis.

    Science.gov (United States)

    Lee, Min Hee; Cha, Jang Gyu; Jin, Wook; Kim, Byung Sung; Park, Jai Soung; Lee, Hae Kyung; Hong, Hyun Sook

    2011-06-01

    The purpose of this article is to evaluate prospectively the utility of sonographic measurements of the common extensor tendon for diagnosing lateral epicondylitis. Forty-eight patients with documented lateral epicondylitis and 63 healthy volunteers were enrolled and underwent ultrasound of the elbow joint. The common extensor tendon overlying the bony landmark was scanned transversely, and the cross-section area and the maximum thickness were measured. Clinical examination was used as the reference standard in the diagnosis of lateral epicondylitis. Data from the patient and control groups were compared with established optimal diagnostic criteria for lateral epicondylitis using receiver operating characteristic curves. Qualitative evaluation with grayscale ultrasound was also performed on patients and healthy volunteers. The common extensor tendon was significantly thicker in patients with lateral epicondylitis than in control subjects. For qualitative evaluation with grayscale ultrasound, overall sensitivity, specificity, and accuracy values in the diagnosis of lateral epicondylitis were 76.5%, 76.2%, and 76.3%, respectively. The quantitative sonographic measurements had excellent diagnostic performance for lateral epicondylitis, as well as good or excellent interreader agreement. A common extensor tendon cross-section area greater than or equal to 32 mm² and a thickness of 4.2 mm correlated well with the presence of lateral epicondylitis. However, further prospective study is necessary to determine whether quantitative ultrasound with these cutoff values can improve the accuracy of the diagnosis of lateral epicondylitis.

  19. DISCERNING EXOPLANET MIGRATION MODELS USING SPIN-ORBIT MEASUREMENTS

    International Nuclear Information System (INIS)

    Morton, Timothy D.; Johnson, John Asher

    2011-01-01

    We investigate the current sample of exoplanet spin-orbit measurements to determine whether a dominant planet migration channel can be identified, and at what confidence. We use the predictions of Kozai migration plus tidal friction and planet-planet scattering as our misalignment models, and we allow for a fraction of intrinsically aligned systems, explainable by disk migration. Bayesian model comparison demonstrates that the current sample of 32 spin-orbit measurements strongly favors a two-mode migration scenario combining planet-planet scattering and disk migration over a single-mode Kozai migration scenario. Our analysis indicates that between 34% and 76% of close-in planets (95% confidence) migrated via planet-planet scattering. Separately analyzing the subsample of 12 stars with T_eff > 6250 K (which Winn et al. predict to be the only type of stars to maintain their primordial misalignments), we find that the data favor a single-mode scattering model over Kozai with 85% confidence. We also assess the number of additional hot-star spin-orbit measurements that will likely be necessary to provide a more confident model selection, finding that an additional 20-30 measurements have a >50% chance of resulting in a 95% confident model selection, if the current model selection is correct. While we test only the predictions of particular Kozai and scattering migration models in this work, our methods may be used to test the predictions of any other spin-orbit misaligning mechanism.
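
The inference sketched above (a fraction of systems from one migration mode, the rest aligned) can be illustrated with a toy grid posterior over the mixture fraction. The per-system likelihoods here are invented stand-ins, not the paper's actual model predictions:

```python
import math

def fraction_posterior(like_scatter, like_aligned, n_grid=101):
    """Flat-prior grid posterior over the fraction f of systems that
    migrated via scattering, in a two-mode mixture model."""
    grid = [i / (n_grid - 1) for i in range(n_grid)]
    unnorm = []
    for f in grid:
        loglike = sum(math.log(f * ls + (1 - f) * la)
                      for ls, la in zip(like_scatter, like_aligned))
        unnorm.append(math.exp(loglike))
    z = sum(unnorm)
    return grid, [u / z for u in unnorm]

# Invented per-system likelihoods under each migration mode.
like_scatter = [0.9] * 8 + [0.1] * 2   # 8 systems look misaligned
like_aligned = [0.1] * 8 + [0.9] * 2   # 2 look primordially aligned
grid, posterior = fraction_posterior(like_scatter, like_aligned)
f_mode = grid[posterior.index(max(posterior))]
```

A credible interval like the paper's 34%-76% range would come from accumulating this posterior until 95% of the probability mass is enclosed.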

  20. Women’s autonomy and maternal healthcare service utilization in Ethiopia

    Directory of Open Access Journals (Sweden)

    Fentanesh Nibret Tiruneh

    2017-11-01

    Abstract Background Most previous studies on healthcare service utilization in low-income countries have not used a multilevel study design to address the importance of community-level women's autonomy. We assessed whether women's autonomy, measured at both the individual and community levels, is associated with maternal healthcare service utilization in Ethiopia. Methods We analyzed data from the 2005 and 2011 Ethiopia Demographic and Health Surveys (N = 6058 and 7043, respectively) for measuring women's decision-making power and permissive gender norms associated with wife beating. We used Spearman's correlation and the chi-squared test for bivariate analyses and constructed generalized estimating equation logistic regression models to analyze the associations between women's autonomy indicators and maternal healthcare service utilization, with control for other socioeconomic characteristics. Results Our multivariate analysis showed that women living in communities with a higher percentage of opposing attitudes toward wife beating were more likely to use all three types of maternal healthcare services in 2011 (adjusted odds ratios = 1.21, 1.23, and 1.18 for four or more antenatal care visits, health facility delivery, and postnatal care visits, respectively). In 2005, the adjusted odds ratios were 1.16 and 1.17 for four or more antenatal care visits and health facility delivery, respectively. In 2011, the percentage of women in the community with high decision-making power was positively associated with the likelihood of four or more antenatal care visits (adjusted odds ratio = 1.14). The association of individual-level autonomy with maternal healthcare service utilization was less profound after we controlled for other individual-level and community-level characteristics. Conclusions Our study shows that women's autonomy was positively associated with maternal healthcare service utilization in Ethiopia. We suggest addressing woman
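
As a reading aid for the adjusted odds ratios (AORs) reported above: an AOR multiplies the baseline odds, not the baseline probability. A small sketch of the conversion, with an invented baseline service-use rate:

```python
def apply_odds_ratio(baseline_prob, odds_ratio):
    """Probability implied by scaling the baseline odds by odds_ratio."""
    odds = baseline_prob / (1.0 - baseline_prob) * odds_ratio
    return odds / (1.0 + odds)

# AOR = 1.21 for four or more antenatal visits (2011 survey), applied
# to a hypothetical 50% baseline use rate.
p_high_autonomy = apply_odds_ratio(baseline_prob=0.50, odds_ratio=1.21)
```

So an AOR of 1.21 corresponds to roughly a 5-percentage-point increase at a 50% baseline, with smaller absolute effects at more extreme baselines.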

  1. Utility market penetration assessment of fusion-fission hybrids

    International Nuclear Information System (INIS)

    Jensen, B.K.; Nour, N.E.; Piascik, T.M.

    1981-01-01

    The objective of this paper is to describe the utility generation expansion evaluation procedure and to present the results of a fusion-fission hybrid market penetration assessment in a model of a typical utility system. The analysis addresses the key factors and tradeoffs affecting the utility's evaluation of generation alternatives.

  2. Global and Regional Ecosystem Modeling: Databases of Model Drivers and Validation Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Olson, R.J.

    2002-03-19

    Understanding global-scale ecosystem responses to changing environmental conditions is important both as a scientific question and as the basis for making policy decisions. The confidence in regional models depends on how well the field data used to develop the model represent the region of interest, how well the environmental model driving variables (e.g., vegetation type, climate, and soils associated with a site used to parameterize ecosystem models) represent the region of interest, and how well regional model predictions agree with observed data for the region. To assess the accuracy of global model forecasts of terrestrial carbon cycling, two Ecosystem Model-Data Intercomparison (EMDI) workshops were held (December 1999 and April 2001). The workshops included 17 biogeochemical, satellite-driven, detailed process, and dynamic vegetation global model types. The approach was to run regional or global versions of the models for sites with net primary productivity (NPP) measurements (i.e., not fine-tuned for specific site conditions) and analyze the model-data differences. Extensive worldwide NPP data were assembled with model driver data, including vegetation, climate, and soils data, to perform the intercomparison. This report describes the compilation of NPP estimates for 2,523 sites and 5,164 0.5°-grid cells under the Global Primary Production Data Initiative (GPPDI) and the results of the EMDI review and outlier analysis that produced a refined set of NPP estimates and model driver data. The EMDI process resulted in 81 Class A sites, 933 Class B sites, and 3,855 Class C cells derived from the original synthesis of NPP measurements and associated driver data. Class A sites represent well-documented study sites that have complete aboveground and belowground NPP measurements. Class B sites represent more numerous "extensive" sites with less documentation and site-specific information available. Class C cells represent estimates of

  3. Multiattribute utility theory without expected utility foundations

    NARCIS (Netherlands)

    Wakker, P.P.; Miyamoto, J.

    1996-01-01

    Methods for determining the form of utilities are needed for the implementation of utility theory in specific decisions. An important step forward was achieved when utility theorists characterized useful parametric families of utilities, and simplifying decompositions of multiattribute utilities.

  4. Multiattribute Utility Theory without Expected Utility Foundations

    NARCIS (Netherlands)

    Stiggelbout, A.M.; Wakker, P.P.

    1995-01-01

    Methods for determining the form of utilities are needed for the implementation of utility theory in specific decisions. An important step forward was achieved when utility theorists characterized useful parametric families of utilities, and simplifying decompositions of multiattribute utilities.

  5. Cost utility analysis of endoscopic biliary stent in unresectable hilar cholangiocarcinoma: decision analytic modeling approach.

    Science.gov (United States)

    Sangchan, Apichat; Chaiyakunapruk, Nathorn; Supakankunti, Siripen; Pugkhem, Ake; Mairiang, Pisaln

    2014-01-01

    Endoscopic biliary drainage using metal and plastic stents in unresectable hilar cholangiocarcinoma (HCA) is widely used, but little is known about their cost-effectiveness. This study evaluated the cost-utility of endoscopic metal and plastic stent drainage in patients with unresectable complex (Bismuth type II-IV) HCA. A decision-analytic Markov model was used to evaluate the cost and quality-adjusted life years (QALYs) of endoscopic biliary drainage in unresectable HCA. Costs of treatment and utilities of each Markov state were retrieved from hospital charges and from unresectable HCA patients at a tertiary care hospital in Thailand, respectively. Transition probabilities were derived from the international literature. Base-case analyses and sensitivity analyses were performed. Under the base-case analysis, the metal stent is more effective but more expensive than the plastic stent. The incremental cost per additional QALY gained is 192,650 baht (US$ 6,318). From probabilistic sensitivity analysis, at willingness-to-pay thresholds of one and three times GDP per capita, or 158,000 baht (US$ 5,182) and 474,000 baht (US$ 15,546), the probability of the metal stent being cost-effective is 26.4% and 99.8%, respectively. Based on the WHO recommendation regarding cost-effectiveness threshold criteria, endoscopic metal stent drainage is cost-effective compared to plastic stents in unresectable complex HCA.
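
The headline number above is an incremental cost-effectiveness ratio (ICER). A hedged sketch of the comparison, with invented absolute costs and QALYs chosen only to reproduce the reported ratio (the study's Markov-model outputs are not given in the abstract):

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost per additional QALY gained."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Metal stent costs more but yields more QALYs than plastic (baht, QALYs;
# placeholder values).
ratio = icer(cost_new=250_060, qaly_new=0.60,
             cost_ref=173_000, qaly_ref=0.20)
cost_effective = ratio <= 474_000  # vs. 3x GDP-per-capita threshold (baht)
```

Comparing the ICER against a willingness-to-pay threshold is exactly the decision rule applied in the abstract's conclusion.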

  6. Statistical utility theory for comparison of nuclear versus fossil power plant alternatives

    International Nuclear Information System (INIS)

    Garribba, S.; Ovi, A.

    1977-01-01

    A statistical formulation of utility theory is developed for decision problems concerned with the choice among alternative strategies in electric energy production. Four alternatives are considered: nuclear power, fossil power, solar energy, and conservation policy. Attention is focused on a public electric utility thought of as a rational decision-maker. A framework for decisions is then suggested where the admissible strategies and their possible consequences represent the information available to the decision-maker. Once the objectives of the decision process are assessed, consequences can be quantified in terms of measures of effectiveness. Maximum expected utility is the criterion of choice among alternatives. Steps toward expected values are the evaluation of the multidimensional utility function and the assessment of subjective probabilities for consequences. In this respect, the multiplicative form of the utility function seems less restrictive than the additive form and almost as manageable to implement. Probabilities are expressed through subjective marginal probability density functions given at a discrete number of points. The final stage of the decision model is to establish the value of each strategy. To this scope, expected utilities are computed and scaled. The result is that nuclear power offers the best alternative. 8 figures, 9 tables, 32 references
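
The final ranking step described above (expected utility of each strategy under subjective probabilities) can be sketched in a few lines. All probabilities and utilities below are invented for illustration; only the maximum-expected-utility criterion is taken from the abstract:

```python
def expected_utility(probs, utilities):
    """Expected utility over a discrete set of consequences."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u for p, u in zip(probs, utilities))

# Each strategy: subjective probabilities over three consequence points
# and the (scaled) utility of each consequence.
strategies = {
    "nuclear":      expected_utility([0.2, 0.5, 0.3], [0.9, 0.8, 0.6]),
    "fossil":       expected_utility([0.3, 0.4, 0.3], [0.7, 0.6, 0.5]),
    "solar":        expected_utility([0.5, 0.3, 0.2], [0.8, 0.5, 0.3]),
    "conservation": expected_utility([0.4, 0.4, 0.2], [0.6, 0.5, 0.4]),
}
best = max(strategies, key=strategies.get)
```

With a multiattribute problem, each utility entry would itself come from a multiplicative or additive decomposition over the measures of effectiveness, as the abstract discusses.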

  7. Testing the utility of three social-cognitive models for predicting objective and self-report physical activity in adults with type 2 diabetes.

    Science.gov (United States)

    Plotnikoff, Ronald C; Lubans, David R; Penfold, Chris M; Courneya, Kerry S

    2014-05-01

    Theory-based interventions to promote physical activity (PA) are more effective than atheoretical approaches; however, the comparative utility of theoretical models is rarely tested in longitudinal designs with multiple time points. Further, there is limited research that has simultaneously tested social-cognitive models with self-report and objective PA measures. The primary aim of this study was to test the predictive ability of three theoretical models (social cognitive theory, theory of planned behaviour, and protection motivation theory) in explaining PA behaviour. Participants were adults with type 2 diabetes (n = 287, 53.8% males, mean age = 61.6 ± 11.8 years). Theoretical constructs across the three theories were tested to prospectively predict PA behaviour (objective and self-report) across three 6-month time intervals (baseline-6, 6-12, 12-18 months) using structural equation modelling. PA outcomes were steps/3 days (objective) and minutes of MET-weighted PA/week (self-report). The mean proportion of variance in PA explained by these models was 6.5% for objective PA and 8.8% for self-report PA. Direct pathways to PA outcomes were stronger for self-report compared with objective PA. These theories explained a small proportion of the variance in longitudinal PA studies. Theory development to guide interventions for increasing and maintaining PA in adults with type 2 diabetes requires further research with objective measures. Theory integration across social-cognitive models and the inclusion of ecological levels are recommended to further explain PA behaviour change in this population. Statement of contribution What is already known on this subject? Social-cognitive theories are able to explain partial variance for physical activity (PA) behaviour. What does this study add? The testing of three theories in a longitudinal design over 3, 6-month time intervals. The parallel use and comparison of both objective and self-report PA measures in testing these

  8. Developing a Measure of Therapist Adherence to Contingency Management: An Application of the Many-Facet Rasch Model.

    Science.gov (United States)

    Chapman, Jason E; Sheidow, Ashli J; Henggeler, Scott W; Halliday-Boykins, Colleen; Cunningham, Phillippe B

    2008-06-01

    A unique application of the Many-Facet Rasch Model (MFRM) is introduced as the preferred method for evaluating the psychometric properties of a measure of therapist adherence to Contingency Management (CM) treatment of adolescent substance use. The utility of psychometric methods based in Classical Test Theory was limited by complexities of the data, including: (a) ratings provided by multiple informants (i.e., youth, caregivers, and therapists), (b) data from separate research studies, (c) repeated measurements, (d) multiple versions of the questionnaire, and (e) missing data. Two dimensions of CM adherence were supported: adherence to Cognitive Behavioral components and adherence to Monitoring components. The rating scale performed differently for items in these subscales, and of 11 items evaluated, eight were found to perform well. The MFRM is presented as a highly flexible approach that can be used to overcome the limitations of traditional methods in the development of adherence measures for evidence-based practices.
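
The core idea of the Many-Facet Rasch Model is that the log-odds of a rating decompose additively across facets. A minimal dichotomous sketch (the study's polytomous rating-scale version adds category thresholds; all logit values here are invented):

```python
import math

def mfrm_probability(adherence, item_difficulty, rater_severity):
    """P(adherent rating) when log-odds = adherence - difficulty - severity."""
    logit = adherence - item_difficulty - rater_severity
    return 1.0 / (1.0 + math.exp(-logit))

# Facets that exactly offset the therapist's adherence give even odds.
p = mfrm_probability(adherence=1.0, item_difficulty=0.5, rater_severity=0.5)
```

This additive structure is what lets the model adjust a therapist's adherence estimate for which informant (youth, caregiver, therapist) provided the rating and which items were asked.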

  9. Physical model experiment for wave field measurements by means of laser Doppler vibrometer. Measurement of three components; Laser Doppler shindokei ni yoru butsuri model jikken. Hado sanseibun no kenshutsu

    Energy Technology Data Exchange (ETDEWEB)

    Nishizawa, O; Sato, T [Geological Survey of Japan, Tsukuba (Japan); Lei, X [DIA Consultant Co. Ltd., Tokyo (Japan)

    1997-05-27

    In this experiment, a beam incident from an oblique direction is reflected by a spherical lens toward the direction of incidence. When the surface of a matter is vibrated by elastic waves, the spherical lens comes into a translation motion that accompanies the vibration. It follows accordingly that the vibration on the surface of the matter may be detected by sensing the spherical lens travelling speed. Three components of the vibration may be determined if beams are focused at one spot from three directions. Detection of the S-wave component by LDV (laser Doppler vibrometer) discloses the complicated wave field in a heterogeneous material, and this physical model experiment may be utilized in various fields of study. For instance, information about problems that may surface in the field work may be collected beforehand in a physical model experiment for developing an S-wave-aided probing method. For the study of seismic wave propagation in a complicated three-dimensional ground structure, a numerical model is not enough, and a physical model experiment will be an effective method to fulfill the purpose. In the monitoring of cracks in a rock, again, not only elastic wave velocity but also waveform information collected from a physical model experiment should be fully utilized. 6 refs., 6 figs.

  10. A study of quality measures for protein threading models

    Directory of Open Access Journals (Sweden)

    Rychlewski Leszek

    2001-08-01

    Abstract Background Prediction of protein structures is one of the fundamental challenges in biology today. To fully understand how well different prediction methods perform, it is necessary to use measures that evaluate their performance. Every two years, starting in 1994, the CASP (Critical Assessment of protein Structure Prediction) process has been organized to evaluate the ability of different predictors to blindly predict the structure of proteins. To capture different features of the models, several measures have been developed during the CASP processes. However, these measures have not been examined in detail before. In an attempt to develop fully automatic measures that can be used in CASP, as well as in other types of benchmarking experiments, we have compared twenty-one measures. These measures include the measures used in CASP3 and CASP2 as well as measures introduced later. We have studied their ability to distinguish between the better and worse models submitted to CASP3 and the correlation between them. Results Using a small set of 1340 models for 23 different targets, we show that most measures correlate with each other. Most pairs of measures show a correlation coefficient of about 0.5. The correlation is slightly higher for measures of similar types. We found that a significant problem when developing automatic measures is how to deal with proteins of different length. Comparisons between different measures are also complicated, as many measures are dependent on the size of the target. We show that the manual assessment can be reproduced to about 70% using automatic measures. Alignment-independent measures detect slightly more of the models with the correct fold, while alignment-dependent measures agree better when selecting the best models for each target. Finally, we show that using automatic measures would, to a large extent, reproduce the assessors' ranking of the predictors at CASP3. Conclusions We show that given a
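
The pairwise comparison above boils down to correlating two quality measures scored over the same set of models. A self-contained sketch with invented scores (real inputs would be, e.g., per-model GDT and contact-based scores):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

measure_a = [0.82, 0.55, 0.30, 0.71, 0.45]   # e.g., alignment-dependent
measure_b = [0.78, 0.60, 0.35, 0.65, 0.52]   # e.g., alignment-independent
r = pearson(measure_a, measure_b)
```

A full measure-versus-measure study would compute such coefficients for all pairs across the 1340 models, which is how the "about 0.5" typical correlation above would be obtained.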

  11. Scale construction utilising the Rasch unidimensional measurement model: A measurement of adolescent attitudes towards abortion.

    Science.gov (United States)

    Hendriks, Jacqueline; Fyfe, Sue; Styles, Irene; Skinner, S Rachel; Merriman, Gareth

    2012-01-01

    Measurement scales seeking to quantify latent traits like attitudes, are often developed using traditional psychometric approaches. Application of the Rasch unidimensional measurement model may complement or replace these techniques, as the model can be used to construct scales and check their psychometric properties. If data fit the model, then a scale with invariant measurement properties, including interval-level scores, will have been developed. This paper highlights the unique properties of the Rasch model. Items developed to measure adolescent attitudes towards abortion are used to exemplify the process. Ten attitude and intention items relating to abortion were answered by 406 adolescents aged 12 to 19 years, as part of the "Teen Relationships Study". The sampling framework captured a range of sexual and pregnancy experiences. Items were assessed for fit to the Rasch model including checks for Differential Item Functioning (DIF) by gender, sexual experience or pregnancy experience. Rasch analysis of the original dataset initially demonstrated that some items did not fit the model. Rescoring of one item (B5) and removal of another (L31) resulted in fit, as shown by a non-significant item-trait interaction total chi-square and a mean log residual fit statistic for items of -0.05 (SD=1.43). No DIF existed for the revised scale. However, items did not distinguish as well amongst persons with the most intense attitudes as they did for other persons. A person separation index of 0.82 indicated good reliability. Application of the Rasch model produced a valid and reliable scale measuring adolescent attitudes towards abortion, with stable measurement properties. The Rasch process provided an extensive range of diagnostic information concerning item and person fit, enabling changes to be made to scale items. This example shows the value of the Rasch model in developing scales for both social science and health disciplines.

  12. Patron perception and utilization of an embedded librarian program.

    Science.gov (United States)

    Blake, Lindsay; Ballance, Darra; Davies, Kathy; Gaines, Julie K; Mears, Kim; Shipman, Peter; Connolly-Brown, Maryska; Burchfield, Vicki

    2016-07-01

    The study measured the perceived value of an academic library's embedded librarian service model. The study took place at the health sciences campuses of a research institution. A web-based survey was distributed that asked respondents a series of questions about their utilization of and satisfaction with embedded librarians and services. Over 58% of respondents reported being aware of their embedded librarians, and 95% of these were satisfied with provided services. The overall satisfaction with services was encouraging, but awareness of the embedded program was low, suggesting an overall need for marketing of services.

  13. Are American utilities sorry they went nuclear

    International Nuclear Information System (INIS)

    Dallaire, E.E.

    1981-01-01

    The future of nuclear power in the U.S. will be determined in large measure by what the electric utilities decide to do. Very few electric utilities are planning to build more nuclear power plants. Most utilities feel the 12 to 14 years it takes to build a nuclear plant is simply too long, especially when construction money must be borrowed at record interest rates. Also, there are too many uncertainties regarding what an acceptable plant design is. On the other hand, these utilities also strongly believe that the U.S. needs nuclear power and that it would be imprudent for a utility to put all its energy eggs in one basket, coal

  14. Subjective Probabilities for State-Dependent Continuous Utility

    NARCIS (Netherlands)

    P.P. Wakker (Peter)

    1987-01-01

    For the expected utility model with state-dependent utilities, Karni, Schmeidler and Vind (1983) have shown how to recover uniquely the involved subjective probabilities if the preferences, contingent on a hypothetical probability distribution over the state space, are known. This they

  15. Proceedings: 1993 fuel oil utilization workshop

    International Nuclear Information System (INIS)

    1994-08-01

    The primary objective of the Workshop was to utilize the experiences of utility personnel and continue the interchange of information related to fuel oil issues. Participants also identified technical problem areas in which EPRI might best direct its efforts in research and development of fuel oil utilization and to improve oil-fired steam generating systems' performance. Speakers presented specific fuel projects conducted at their particular utilities, important issues in the utilization of fuel oil, studies conducted or currently in the process of being completed, and information on current and future regulations for fuel utilization. Among the major topics addressed at the 1993 Fuel Oil Utilization Workshop were burner and ESP improvements for the reduction of particulate and NOx emissions, practical experience in utilization of low-API-gravity residual fuel oils, the use of models to predict the spread of oil spills on land, implementing OPA 90 preparedness and response strategies planning, a report on the annual Utility Oil Buyers Conference, the ASTM D-396 specification for No. 6 fuel oil, the utilization of Orimulsion® in utility boilers, recent progress on research addressing unburned carbon and opacity from oil-fired utility boilers, EPRI's hazardous air pollutant monitoring and implications for residual fuel oil, and the feasibility of toxic metals removal from residual fuel oils. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database

  16. A Quantitative Human Spacecraft Design Evaluation Model for Assessing Crew Accommodation and Utilization

    Science.gov (United States)

    Fanchiang, Christine

    Crew performance, including both accommodation and utilization factors, is an integral part of every human spaceflight mission from commercial space tourism, to the demanding journey to Mars and beyond. Spacecraft were historically built by engineers and technologists trying to adapt the vehicle into cutting edge rocketry with the assumption that the astronauts could be trained and will adapt to the design. By and large, that is still the current state of the art. It is recognized, however, that poor human-machine design integration can lead to catastrophic and deadly mishaps. The premise of this work relies on the idea that if an accurate predictive model exists to forecast crew performance issues as a result of spacecraft design and operations, it can help designers and managers make better decisions throughout the design process, and ensure that the crewmembers are well-integrated with the system from the very start. The result should be a high-quality, user-friendly spacecraft that optimizes the utilization of the crew while keeping them alive, healthy, and happy during the course of the mission. Therefore, the goal of this work was to develop an integrative framework to quantitatively evaluate a spacecraft design from the crew performance perspective. The approach presented here is done at a very fundamental level starting with identifying and defining basic terminology, and then builds up important axioms of human spaceflight that lay the foundation for how such a framework can be developed. With the framework established, a methodology for characterizing the outcome using a mathematical model was developed by pulling from existing metrics and data collected on human performance in space. Representative test scenarios were run to show what information could be garnered and how it could be applied as a useful, understandable metric for future spacecraft design. 
While the model is the primary tangible product from this research, the more interesting outcome of

  17. Predictors of resource utilization in transsphenoidal surgery for Cushing disease.

    Science.gov (United States)

    Little, Andrew S; Chapple, Kristina

    2013-08-01

    The short-term cost associated with subspecialized surgical care is an increasingly important metric and economic concern. This study sought to determine factors associated with hospital charges in patients undergoing transsphenoidal surgery for Cushing disease in an effort to identify the drivers of resource utilization. The authors analyzed the Nationwide Inpatient Sample (NIS) hospital discharge database from 2007 to 2009 to determine factors that influenced hospital charges in patients who had undergone transsphenoidal surgery for Cushing disease. The NIS discharge database approximates a 20% sample of all inpatient admissions to nonfederal US hospitals. A multistep regression model was developed that adjusted for patient demographics, acuity measures, comorbidities, hospital characteristics, and complications. In 116 hospitals, 454 transsphenoidal operations were performed. The mean hospital charge was $48,272 ± $32,060. A multivariate regression model suggested that the primary driver of resource utilization was length of stay (LOS), followed by surgeon volume, hospital characteristics, and postoperative complications. A 1% increase in LOS increased hospital charges by 0.60%. Patient charges were 13% lower when performed by high-volume surgeons compared with low-volume surgeons and 22% lower in large hospitals compared with small hospitals. Hospital charges were 12% lower in cases with no postoperative neurological complications. The proposed model accounted for 46% of hospital charge variance. This analysis of hospital charges in transsphenoidal surgery for Cushing disease suggested that LOS, hospital characteristics, surgeon volume, and postoperative complications are important predictors of resource utilization. These findings may suggest opportunities for improvement.
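
The "1% increase in LOS increased hospital charges by 0.60%" finding above is the standard elasticity reading of a log-log regression coefficient. A small sketch of that interpretation, using the reported mean charge as the base (the functional form is the usual one, not taken verbatim from the paper):

```python
def predicted_charge(base_charge, los_ratio, elasticity=0.60):
    """Predicted charge after scaling length of stay by los_ratio,
    given a log-log coefficient (elasticity) on LOS."""
    return base_charge * los_ratio ** elasticity

new_charge = predicted_charge(48_272, los_ratio=1.01)  # 1% longer stay
pct_change = (new_charge / 48_272 - 1.0) * 100.0       # roughly 0.60%
```

The same form extrapolates to larger changes: doubling LOS would predict a 2**0.60 ≈ 1.52x charge increase, which is why LOS dominates the model's explained variance.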

  18. Utilization of excess wind power in electric vehicles

    International Nuclear Information System (INIS)

    Hennings, Wilfried; Mischinger, Stefan; Linssen, Jochen

    2013-01-01

    This article describes the assessment of future wind power utilization for charging electric vehicles (EVs) in Germany. The potential wind power production in the model years 2020 and 2030 is derived by extrapolating onshore wind power generation and offshore wind speeds measured in 2007 and 2010 to the installed onshore and offshore wind turbine capacities assumed for 2020 and 2030. The energy consumption of an assumed fleet of 1 million EVs in 2020 and 6 million in 2030 is assessed using detailed models of electric vehicles, real-world driving cycles, and car usage. It is shown that a substantial part of the charging demand of EVs can be met by otherwise unused wind power, depending on the amount of conventional power required for stabilizing the grid. The utilization of wind power is limited by the charging demand of the cars and by bottlenecks in the transmission grid. -- Highlights: •Wind power available for charging depends on minimum required conventional power (must-run). •With 20 GW must-run power, 50% of charging can be met by excess wind power. •Grid bottlenecks decrease charging met by wind power from 50% to 30%. •With zero must-run power, only very little wind power is available for charging
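
The must-run mechanism described above can be illustrated with a toy hourly balance: wind counts as "excess" only once it exceeds the load that remains after the conventional must-run floor. All series below are invented (MW per hour), not the study's data:

```python
def excess_wind_charging(wind, load, must_run, ev_demand):
    """Per-hour EV charging met by otherwise-curtailed wind power."""
    met = []
    for w, l in zip(wind, load):
        usable_for_load = max(0.0, l - must_run)  # wind the grid can absorb
        excess = max(0.0, w - usable_for_load)    # otherwise curtailed
        met.append(min(excess, ev_demand))
    return met

hourly = excess_wind_charging(wind=[30, 80, 60], load=[50, 50, 50],
                              must_run=20, ev_demand=40)
```

Raising `must_run` shrinks `usable_for_load` and therefore enlarges the excess available for charging, which is the sensitivity the highlights describe; transmission bottlenecks would appear as an extra per-hour cap on `excess`.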

  19. FY1995 research report on the survey of cryogenic energy utilization systems for environmentally friendly energy community project. Case studies of LNG cryogenic energy cascade-wise utilization; 1995 nendo kankyo chowagata energycommunity jigyo ni kakawaru reinetsu riyo system kento chosa hokokusho. LNG reinetsu no cascade teki riyo case study

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    Japan's imports of LNG (liquefied natural gas) have increased over the past 15 years from 13 million tons to 43 million tons, a high rate of 2 million tons a year. At present LNG is used only in power generation and the town gas business, and its cryogenic energy, which could be useful in various fields, goes unutilized. In this survey, factors impeding wider application of the cryogenic energy are investigated, methods for using the energy more widely and the mechanisms required therefor are studied, and the feasibility of utilizing the energy in a cascade-wise form under the environmentally friendly energy community project is discussed. Research was conducted and the results evaluated in a study of the comprehensive utilization of LNG cryogenic energy. The research covers the actualities and trends of LNG cryogenic energy utilization in Japan; the current status and prospects of the involvement of LNG bases with their neighboring industries and communities; technological measures for cryogenic energy utilization; technological measures related to low-temperature media and cold heat transportation systems; technological measures for the cascade-wise multidirectional utilization of cryogenic energy; and case studies on assumed local models. (NEDO)

  20. Present and possible utilization of PUSPATI reactor

    International Nuclear Information System (INIS)

    Gui Ah Auu.

    1983-01-01

    The utilization of the PUSPATI TRIGA Mark II Reactor (PTR) has increased reasonably well since its commissioning last year. PTR was used mainly for training of operators, neutron flux measurements, and neutron activation analysis. However, the present utilization data indicate that a further increase in PTR utilization, to include teaching and the usage of the beam ports, is desirable. Some possible areas of PTR application in the future, relevant to our needs, are also described in this paper. (author)